url stringlengths 62-66 | repository_url stringclasses 1 value | labels_url stringlengths 76-80 | comments_url stringlengths 71-75 | events_url stringlengths 69-73 | html_url stringlengths 50-56 | id int64 377M-2.15B | node_id stringlengths 18-32 | number int64 1-29.2k | title stringlengths 1-487 | user dict | labels list | state stringclasses 2 values | locked bool 2 classes | assignee dict | assignees list | comments sequence | created_at int64 1.54k-1.71k | updated_at int64 1.54k-1.71k | closed_at int64 1.54k-1.71k ⌀ | author_association stringclasses 4 values | active_lock_reason stringclasses 2 values | body stringlengths 0-234k ⌀ | reactions dict | timeline_url stringlengths 71-75 | state_reason stringclasses 3 values | draft bool 2 classes | pull_request dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/3512 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3512/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3512/comments | https://api.github.com/repos/huggingface/transformers/issues/3512/events | https://github.com/huggingface/transformers/issues/3512 | 589,768,101 | MDU6SXNzdWU1ODk3NjgxMDE= | 3,512 | how to get activation weights of a pretrained model? | {
"login": "jmamou",
"id": 19263306,
"node_id": "MDQ6VXNlcjE5MjYzMzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/19263306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmamou",
"html_url": "https://github.com/jmamou",
"followers_url": "https://api.github.com/users/jmamou/followers",
"following_url": "https://api.github.com/users/jmamou/following{/other_user}",
"gists_url": "https://api.github.com/users/jmamou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmamou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmamou/subscriptions",
"organizations_url": "https://api.github.com/users/jmamou/orgs",
"repos_url": "https://api.github.com/users/jmamou/repos",
"events_url": "https://api.github.com/users/jmamou/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmamou/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | CONTRIBUTOR | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO), where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
How can I get the activation weights of a pretrained model?
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
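If "activation weights" here means the intermediate activations, one option (a minimal sketch, not from this thread; it assumes a PyTorch BERT-style model and uses standard `torch` forward hooks) is:

```python
# Minimal sketch, not an official answer: capture intermediate activations
# of a pretrained model with PyTorch forward hooks. Module paths assume a
# BERT-style model; adapt the names for other architectures.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output[0]  # hidden states produced by this layer
    return hook

# Register a hook on every encoder layer.
for i, layer in enumerate(model.encoder.layer):
    layer.register_forward_hook(save_activation(f"encoder.layer.{i}"))

input_ids = tokenizer.encode("Hello world", return_tensors="pt")
with torch.no_grad():
    model(input_ids)

print(activations["encoder.layer.0"].shape)  # (batch, seq_len, hidden_size)
```

Alternatively, setting `output_hidden_states=True` in the model config makes the model return all hidden states directly.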
**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3512/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3511 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3511/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3511/comments | https://api.github.com/repos/huggingface/transformers/issues/3511/events | https://github.com/huggingface/transformers/pull/3511 | 589,765,030 | MDExOlB1bGxSZXF1ZXN0Mzk1MjMxNzQ1 | 3,511 | Update the NER TF script | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3511?src=pr&el=h1) Report\n> Merging [#3511](https://codecov.io/gh/huggingface/transformers/pull/3511?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/601ac5b1dc1438f00d09696588f2deb0f045ae3b&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3511?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3511 +/- ##\n==========================================\n+ Coverage 77.79% 77.80% +0.01% \n==========================================\n Files 100 100 \n Lines 17051 17051 \n==========================================\n+ Hits 13265 13267 +2 \n+ Misses 3786 3784 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3511?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/3511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `79.27% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.94% <0.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.32% <0.00%> (+0.17%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3511?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3511?src=pr&el=footer). Last update [601ac5b...572e04d](https://codecov.io/gh/huggingface/transformers/pull/3511?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,589 | 1,585 | CONTRIBUTOR | null | PR to remove the softmax and set the pad token label id to -1. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3511/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3511",
"html_url": "https://github.com/huggingface/transformers/pull/3511",
"diff_url": "https://github.com/huggingface/transformers/pull/3511.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3511.patch",
"merged_at": 1585576213000
} |
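The PR above (#3511) removes the softmax and sets the pad token label id to -1. A hypothetical sketch of that pattern, not the PR's actual diff: positions labeled with the pad label id are excluded from the loss, and raw logits go into a `from_logits` cross-entropy.

```python
# Hypothetical illustration of the idea behind #3511, not its actual code:
# mask out positions whose label is the pad label id (-1) and compute the
# cross-entropy on raw logits (no softmax in the model head).
import tensorflow as tf

pad_token_label_id = -1

def masked_token_loss(labels, logits):
    # labels: (batch, seq_len) int labels, -1 at padded positions
    # logits: (batch, seq_len, num_labels) raw scores
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True, reduction=tf.keras.losses.Reduction.NONE
    )
    active = tf.not_equal(labels, pad_token_label_id)
    active_labels = tf.boolean_mask(labels, active)
    active_logits = tf.boolean_mask(logits, active)
    return tf.reduce_mean(loss_fn(active_labels, active_logits))
```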
https://api.github.com/repos/huggingface/transformers/issues/3510 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3510/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3510/comments | https://api.github.com/repos/huggingface/transformers/issues/3510/events | https://github.com/huggingface/transformers/issues/3510 | 589,757,840 | MDU6SXNzdWU1ODk3NTc4NDA= | 3,510 | reproducing the performance of XLM-ROBERTA on MLQA dataset on the zh language | {
"login": "nooralahzadeh",
"id": 1093791,
"node_id": "MDQ6VXNlcjEwOTM3OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1093791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nooralahzadeh",
"html_url": "https://github.com/nooralahzadeh",
"followers_url": "https://api.github.com/users/nooralahzadeh/followers",
"following_url": "https://api.github.com/users/nooralahzadeh/following{/other_user}",
"gists_url": "https://api.github.com/users/nooralahzadeh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nooralahzadeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nooralahzadeh/subscriptions",
"organizations_url": "https://api.github.com/users/nooralahzadeh/orgs",
"repos_url": "https://api.github.com/users/nooralahzadeh/repos",
"events_url": "https://api.github.com/users/nooralahzadeh/events{/privacy}",
"received_events_url": "https://api.github.com/users/nooralahzadeh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Credit to people at Microsoft Asia: Ning Wu and Nan Dua\r\nTo achieve the best performance on the \"zh\" test sets, you just need to add \r\n\"final_text = tok_text\" after line 497 in squad_metrics.py (only for zh). Because there isn't space and subword in Chinese, so we don't need to execute the get_final_test() function.",
"This is very interesting! Thanks for letting us know @nooralahzadeh!",
"@LysandreJik Do you think the training model \"RobertaForQuestionAnswering\" also need to be updated for 'zh' lang. Because when I try to fine-tune the xlm-r on 'zh' language and evaluate on its test set, the results became very lower than not fine-tuning.\r\n",
"Just found the same problem, thanks bro!\r\n\r\n> Credit to people at Microsoft Asia: Ning Wu and Nan Dua\r\n> To achieve the best performance on the \"zh\" test sets, you just need to add\r\n> \"final_text = tok_text\" after line 497 in squad_metrics.py (only for zh). Because there isn't space and subword in Chinese, so we don't need to execute the get_final_test() function.\r\n\r\n",
"I used huggingface to train & predict then use https://github.com/facebookresearch/MLQA/blob/main/mlqa_evaluation_v1.py to calculate the scores. Seem to match the paper. "
] | 1,585 | 1,661 | 1,586 | NONE | null | # ❓ Questions & Help
## Details
<!-- Description of your issue -->
I have trouble reproducing the results of XLM-RoBERTa on the MLQA dataset for the Chinese language. The results for the other languages seem fine; however, on the zh test set I get very low scores: {'exact_match': 4.263188631496982, 'f1': 17.451059178461946}. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3510/timeline | completed | null | null |
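A hypothetical sketch of the fix described in the comments of issue 3510 above. The real change is a one-line edit near the `get_final_text()` call in `squad_metrics.py`; the fragment below is illustrative, and the `lang` check is not part of the original code.

```python
# Chinese text has no spaces or subword boundaries to restore, so the
# whitespace-realignment step that get_final_text() performs can be skipped.
if lang == "zh":  # illustrative flag, not in the original squad_metrics.py
    final_text = tok_text
else:
    final_text = get_final_text(tok_text, orig_text, do_lower_case, verbose_logging)
```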
https://api.github.com/repos/huggingface/transformers/issues/3509 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3509/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3509/comments | https://api.github.com/repos/huggingface/transformers/issues/3509/events | https://github.com/huggingface/transformers/pull/3509 | 589,737,967 | MDExOlB1bGxSZXF1ZXN0Mzk1MjEyMzI5 | 3,509 | Fix for continuing training | {
"login": "xeb",
"id": 7634,
"node_id": "MDQ6VXNlcjc2MzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7634?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xeb",
"html_url": "https://github.com/xeb",
"followers_url": "https://api.github.com/users/xeb/followers",
"following_url": "https://api.github.com/users/xeb/following{/other_user}",
"gists_url": "https://api.github.com/users/xeb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xeb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xeb/subscriptions",
"organizations_url": "https://api.github.com/users/xeb/orgs",
"repos_url": "https://api.github.com/users/xeb/repos",
"events_url": "https://api.github.com/users/xeb/events{/privacy}",
"received_events_url": "https://api.github.com/users/xeb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3509?src=pr&el=h1) Report\n> Merging [#3509](https://codecov.io/gh/huggingface/transformers/pull/3509?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/601ac5b1dc1438f00d09696588f2deb0f045ae3b&el=desc) will **not change** coverage by `%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3509?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3509 +/- ##\n=======================================\n Coverage 77.79% 77.79% \n=======================================\n Files 100 100 \n Lines 17051 17051 \n=======================================\n Hits 13265 13265 \n Misses 3786 3786 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3509?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3509?src=pr&el=footer). Last update [601ac5b...43fda01](https://codecov.io/gh/huggingface/transformers/pull/3509?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | If `args.should_continue` is true, then there is no way in the current example to reload a checkpoint since it is assumed it exists within the output_dir. Checking here fixes everything as expected. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3509/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3509/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3509",
"html_url": "https://github.com/huggingface/transformers/pull/3509",
"diff_url": "https://github.com/huggingface/transformers/pull/3509.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3509.patch",
"merged_at": 1585850828000
} |
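A hypothetical sketch of the pattern PR #3509 above addresses: verify that a `checkpoint-*` directory actually exists in `output_dir` before resuming, instead of assuming it does. Names below are illustrative, not the example script's actual code.

```python
import os
import re

def find_latest_checkpoint(output_dir):
    """Return the newest checkpoint-<step> directory in output_dir, or None."""
    if not os.path.isdir(output_dir):
        return None
    checkpoints = [
        d for d in os.listdir(output_dir)
        if re.fullmatch(r"checkpoint-\d+", d)
        and os.path.isdir(os.path.join(output_dir, d))
    ]
    if not checkpoints:
        return None
    latest = max(checkpoints, key=lambda d: int(d.split("-")[-1]))
    return os.path.join(output_dir, latest)
```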
https://api.github.com/repos/huggingface/transformers/issues/3508 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3508/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3508/comments | https://api.github.com/repos/huggingface/transformers/issues/3508/events | https://github.com/huggingface/transformers/issues/3508 | 589,731,690 | MDU6SXNzdWU1ODk3MzE2OTA= | 3,508 | [Bart] when output_paste=False BartForConditionalGeneration raises confusing error | {
"login": "manishiitg",
"id": 1370315,
"node_id": "MDQ6VXNlcjEzNzAzMTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1370315?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manishiitg",
"html_url": "https://github.com/manishiitg",
"followers_url": "https://api.github.com/users/manishiitg/followers",
"following_url": "https://api.github.com/users/manishiitg/following{/other_user}",
"gists_url": "https://api.github.com/users/manishiitg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manishiitg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manishiitg/subscriptions",
"organizations_url": "https://api.github.com/users/manishiitg/orgs",
"repos_url": "https://api.github.com/users/manishiitg/repos",
"events_url": "https://api.github.com/users/manishiitg/events{/privacy}",
"received_events_url": "https://api.github.com/users/manishiitg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1845609017,
"node_id": "MDU6TGFiZWwxODQ1NjA5MDE3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq",
"name": "seq2seq",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"@sshleifer I'm seeing this as well. It doesn't happen if `num_beams=1`. Might have to do with the recent generation and bart changes. Only started happening in the last week or so. ",
"Thanks for contributing!\r\n\r\nA few thoughts:\r\n\r\n1. if you pass `output_past=True` to `BartForConditionalGeneration.from_pretrained`, the code works.\r\n2. We only expect 'bart-large-xsum' and 'bart-large-cnn' to generate high quality summaries.\r\n3. The error message/traceback should be improved. Feel free to send a PR if you'd like.\r\n3. Thanks for copy pasting usable code, it made this really easy to verify :) I added \"```python\" at the beginning to prettify.\r\n\r\n\r\n### Working example\r\ncopy paste [LONG_BORING_TENNIS_ARTICLE](https://gist.github.com/sshleifer/8d9df1937fec07cf77266e222689e9a9)\r\n```python\r\nmodel_name = 'bart-large-mnli'\r\nfrom transformers import *\r\ntorch_device='cpu'\r\ntokenizer = BartTokenizer.from_pretrained(model_name)\r\nmodel = BartForConditionalGeneration.from_pretrained(model_name, output_past=True)\r\narticle_input_ids = tokenizer.batch_encode_plus([LONG_BORING_TENNIS_ARTICLE], return_tensors='pt', max_length=1024)['input_ids'].to(torch_device)\r\nsummary_ids = model.generate(article_input_ids,\r\n num_beams=4,\r\n length_penalty=2.0,\r\n max_length=100,\r\n early_stopping=True)\r\n\r\nprint([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids])\r\n```",
"This wasn't solved. I am using a trainer on BART and I have tried to use ```use_cache```, but it still doesn't work."
] | 1,585 | 1,613 | 1,586 | NONE | null | # 🐛 Bug
## Information
I am using BART.
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
Summarization
## To reproduce
Steps to reproduce the behavior:
```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
# LONG_BORING_TENNIS_ARTICLE: the long example article (defined elsewhere)
tokenizer = BartTokenizer.from_pretrained('bart-large-mnli')
model = BartForConditionalGeneration.from_pretrained('bart-large-mnli')
article_input_ids = tokenizer.batch_encode_plus([LONG_BORING_TENNIS_ARTICLE], return_tensors='pt', max_length=1024)['input_ids'].to(torch_device)
summary_ids = model.generate(article_input_ids,
num_beams=4,
length_penalty=2.0,
max_length=100,
early_stopping=True)
print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids])
```
I get the following error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-27-2df1c6607426> in <module>()
4 length_penalty=2.0,
5 max_length=100,
----> 6 early_stopping=True)
7
8 print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids])
3 frames
/usr/local/lib/python3.6/dist-packages/transformers/modeling_bart.py in _reorder_cache(past, beam_idx)
921 @staticmethod
922 def _reorder_cache(past, beam_idx):
--> 923 ((enc_out, enc_mask), decoder_cached_states) = past
924 reordered_past = []
925 for layer_past in decoder_cached_states:
ValueError: too many values to unpack (expected 2)
```
This works with bart-large-cnn but gives an error with other models. Why? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3508/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3507 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3507/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3507/comments | https://api.github.com/repos/huggingface/transformers/issues/3507/events | https://github.com/huggingface/transformers/pull/3507 | 589,697,789 | MDExOlB1bGxSZXF1ZXN0Mzk1MTgyOTc3 | 3,507 | [T5] Add training documenation | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3507?src=pr&el=h1) Report\n> Merging [#3507](https://codecov.io/gh/huggingface/transformers/pull/3507?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/601ac5b1dc1438f00d09696588f2deb0f045ae3b&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3507?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3507 +/- ##\n=======================================\n Coverage 77.79% 77.80% \n=======================================\n Files 100 100 \n Lines 17051 17051 \n=======================================\n+ Hits 13265 13266 +1 \n+ Misses 3786 3785 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3507?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3507/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `97.58% <ø> (ø)` | |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3507/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `81.29% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3507/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `94.98% <ø> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3507/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.94% <0.00%> (+0.13%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3507?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3507?src=pr&el=footer). Last update [601ac5b...ea7e6a8](https://codecov.io/gh/huggingface/transformers/pull/3507?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,585 | 1,585 | MEMBER | null | - Fixes T5 docstring regarding pretraining
- Adds a detailed description of how to process inputs and targets for T5 training | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3507/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3507",
"html_url": "https://github.com/huggingface/transformers/pull/3507",
"diff_url": "https://github.com/huggingface/transformers/pull/3507.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3507.patch",
"merged_at": 1585568155000
} |
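For context on PR #3507 above, a sketch of the input/target processing it documents, written against the transformers API of that era, where the loss argument was called `lm_labels` (later renamed `labels`) and the `</s>` EOS token had to be appended manually:

```python
# Sketch of T5 training input/target preparation in the #3507 era;
# lm_labels was the v2.x-era name for what later became labels.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer.encode(
    "translate English to German: The house is wonderful. </s>", return_tensors="pt"
)
lm_labels = tokenizer.encode("Das Haus ist wunderbar. </s>", return_tensors="pt")

outputs = model(input_ids=input_ids, lm_labels=lm_labels)
loss = outputs[0]  # first element is the LM loss when lm_labels is given
```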
https://api.github.com/repos/huggingface/transformers/issues/3506 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3506/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3506/comments | https://api.github.com/repos/huggingface/transformers/issues/3506/events | https://github.com/huggingface/transformers/issues/3506 | 589,696,985 | MDU6SXNzdWU1ODk2OTY5ODU= | 3,506 | No grad feature in model parameters | {
"login": "lalalapotter",
"id": 27332689,
"node_id": "MDQ6VXNlcjI3MzMyNjg5",
"avatar_url": "https://avatars.githubusercontent.com/u/27332689?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lalalapotter",
"html_url": "https://github.com/lalalapotter",
"followers_url": "https://api.github.com/users/lalalapotter/followers",
"following_url": "https://api.github.com/users/lalalapotter/following{/other_user}",
"gists_url": "https://api.github.com/users/lalalapotter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lalalapotter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lalalapotter/subscriptions",
"organizations_url": "https://api.github.com/users/lalalapotter/orgs",
"repos_url": "https://api.github.com/users/lalalapotter/repos",
"events_url": "https://api.github.com/users/lalalapotter/events{/privacy}",
"received_events_url": "https://api.github.com/users/lalalapotter/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is more of a general PyTorch question than a Transformers-question. Have you tried asking on StackOverflow or the PyTorch forums?"
] | 1,585 | 1,585 | 1,585 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO), where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
I wonder whether the parameters in `model.named_parameters()` have a `grad` attribute. If not, how can I add one?
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
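For reference, a minimal sketch (not from this thread): the entries of `model.named_parameters()` are `torch.nn.Parameter` objects that already carry a `.grad` attribute; it is `None` until a backward pass populates it.

```python
# Sketch: .grad exists on every parameter but stays None until backward().
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

input_ids = tokenizer.encode("Hello world", return_tensors="pt")
outputs = model(input_ids)
loss = outputs[0].sum()  # dummy scalar objective, for illustration only
loss.backward()

name, param = next(iter(model.named_parameters()))
print(name, param.requires_grad, param.grad.shape)
```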
**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3506/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3505 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3505/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3505/comments | https://api.github.com/repos/huggingface/transformers/issues/3505/events | https://github.com/huggingface/transformers/pull/3505 | 589,696,612 | MDExOlB1bGxSZXF1ZXN0Mzk1MTgyMTgy | 3,505 | Add clear description of how to train T5 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,585 | 1,651 | 1,585 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3505/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3505/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3505",
"html_url": "https://github.com/huggingface/transformers/pull/3505",
"diff_url": "https://github.com/huggingface/transformers/pull/3505.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3505.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3504 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3504/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3504/comments | https://api.github.com/repos/huggingface/transformers/issues/3504/events | https://github.com/huggingface/transformers/pull/3504 | 589,685,818 | MDExOlB1bGxSZXF1ZXN0Mzk1MTc0Mjcw | 3,504 | [Docs] Update usage doc regarding generate fn | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3504?src=pr&el=h1) Report\n> Merging [#3504](https://codecov.io/gh/huggingface/transformers/pull/3504?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/601ac5b1dc1438f00d09696588f2deb0f045ae3b&el=desc) will **decrease** coverage by `0.02%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3504?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3504 +/- ##\n==========================================\n- Coverage 77.79% 77.76% -0.03% \n==========================================\n Files 100 100 \n Lines 17051 17051 \n==========================================\n- Hits 13265 13260 -5 \n- Misses 3786 3791 +5 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3504?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3504/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.65% <0.00%> (-0.84%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3504/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.32% <0.00%> (+0.17%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3504?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3504?src=pr&el=footer). Last update [601ac5b...5779256](https://codecov.io/gh/huggingface/transformers/pull/3504?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,585 | 1,585 | MEMBER | null | Update `model.generate()` docs | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3504/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3504/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3504",
"html_url": "https://github.com/huggingface/transformers/pull/3504",
"diff_url": "https://github.com/huggingface/transformers/pull/3504.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3504.patch",
"merged_at": 1585661507000
} |
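For context on PR #3504 above, a sketch of the kind of `generate()` usage the updated docs cover; the model choice and parameter values are illustrative only:

```python
# Illustrative generate() call; values are chosen for the example, not
# taken from the PR itself.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("The transformers library", return_tensors="pt")
output_ids = model.generate(
    input_ids,
    max_length=40,
    num_beams=5,
    early_stopping=True,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```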
https://api.github.com/repos/huggingface/transformers/issues/3503 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3503/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3503/comments | https://api.github.com/repos/huggingface/transformers/issues/3503/events | https://github.com/huggingface/transformers/issues/3503 | 589,678,444 | MDU6SXNzdWU1ODk2Nzg0NDQ= | 3,503 | Distil-BART? | {
"login": "parker84",
"id": 12496987,
"node_id": "MDQ6VXNlcjEyNDk2OTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/12496987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parker84",
"html_url": "https://github.com/parker84",
"followers_url": "https://api.github.com/users/parker84/followers",
"following_url": "https://api.github.com/users/parker84/following{/other_user}",
"gists_url": "https://api.github.com/users/parker84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/parker84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parker84/subscriptions",
"organizations_url": "https://api.github.com/users/parker84/orgs",
"repos_url": "https://api.github.com/users/parker84/repos",
"events_url": "https://api.github.com/users/parker84/events{/privacy}",
"received_events_url": "https://api.github.com/users/parker84/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838876023,
"node_id": "MDU6TGFiZWwxODM4ODc2MDIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Distillation",
"name": "Distillation",
"color": "d4c5f9",
"default": false,
"description": "Related to model distillation"
},
{
"id": 1845609017,
"node_id": "MDU6TGFiZWwxODQ1NjA5MDE3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq",
"name": "seq2seq",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Interesting idea! What do you think @thomwolf ?",
"Hi, any update on this, even partial code?",
"I'm gunna take a crack this weekend, hopefully, By starting from the [distilbert example](https://github.com/huggingface/transformers/tree/master/examples/distillation) and modifying. I'll post a branch if I make meaningful progress.",
"Hi, just checking in to see if there's a branch already (couldn't find it). Thanks!",
"Yes, it is will be great! Any updates?\r\n",
"I'm gunna wait until the code is stable/reusable to release it, sorry for the change of plans.",
"As per https://twitter.com/sam_shleifer/status/1276160367853547522, it looks like distilBART has been released :)",
"https://huggingface.co/sshleifer/distilbart-cnn-12-6# the tokenizer by name sshleifer/distilbart-cnn-12-6 leads to an error, works with facebook/bart-cnn-large-tokenizer",
"I've faced the same issue with sshleifer/distilbart-cnn-12-6\n\n_________\n\nBest regards,\nVladislav Kozlenko\n[image: phone:] +380 685954166\n\n[image: skype:]\[email protected] <[email protected]>\n[image: position:] Software Engineer\n\n[image: SP Group]\nSoftware Planet Group\nCompany No: 9428594\n[image: phone:] +44 1483 80 24 23\n\n[image: location:] Ukraine, Cherkasy 18000\n[image: site:] softwareplanetgroup.com\n\n\nThis email and any files transmitted with it are confidential and intended\nsolely for the use of the individual or entity to whom they are addressed.\nIf you have received this email in error please notify the sender\nimmediately.\n\n\n\nOn Thu, 25 Jun 2020 at 20:42, Amanpreet Singh <[email protected]>\nwrote:\n\n> https://huggingface.co/sshleifer/distilbart-cnn-12-6# the tokenizer by\n> name sshleifer/distilbart-cnn-12-6 leads to an error, works with\n> facebook/bart-cnn-large-tokenizer\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/3503#issuecomment-649724552>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AJWNBQIUYRMVUXUT6B42V4LRYOD67ANCNFSM4LVX67LQ>\n> .\n>\n",
"I can't reproduce the error on master. If somebody can, it would be great if they could make a separate issue and I will try to resolve.\r\n\r\nAll the distilbart- tokenizers are identical to the is identical to the `facebook/bart-large-cnn` tokenizer, which is identical to the facebook/bart-cnn-xsum` tokenizer. @julien-c is there a fancy AWS way to synchronize/symlink them? ",
"I've tried several models and I'm getting the same error each time it\ncreates a tokenizer.\n[image: image.png]\n\n_________\n\nBest regards,\nVladislav Kozlenko\n[image: phone:] +380 685954166\n\n[image: skype:]\[email protected] <[email protected]>\n[image: position:] Software Engineer\n\n[image: SP Group]\nSoftware Planet Group\nCompany No: 9428594\n[image: phone:] +44 1483 80 24 23\n\n[image: location:] Ukraine, Cherkasy 18000\n[image: site:] softwareplanetgroup.com\n\n\nThis email and any files transmitted with it are confidential and intended\nsolely for the use of the individual or entity to whom they are addressed.\nIf you have received this email in error please notify the sender\nimmediately.\n\n\n\nOn Thu, 25 Jun 2020 at 21:34, Sam Shleifer <[email protected]> wrote:\n\n> I can't reproduce the error on master. If somebody can, it would be great\n> if they could make a separate issue and I will try to resolve.\n>\n> All the distilbart- tokenizers are identical to the is identical to the\n> facebook/bart-large-cnn tokenizer, which is identical to the\n> facebook/bart-cnn-xsum` tokenizer. @julien-c <https://github.com/julien-c>\n> is there a fancy AWS way to synchronize/symlink them?\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/3503#issuecomment-649748636>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AJWNBQKQZXUC4YAY7WWJQ7TRYOKCVANCNFSM4LVX67LQ>\n> .\n>\n",
"@vladislavkoz Please make a new issue with instructions to reproduce, following the issue template. Feel free to assign me.",
"Here is an issue https://github.com/huggingface/transformers/issues/5286\n\n_________\n\nBest regards,\nVladislav Kozlenko\n[image: phone:] +380 685954166\n\n[image: skype:]\[email protected] <[email protected]>\n[image: position:] Software Engineer\n\n[image: SP Group]\nSoftware Planet Group\nCompany No: 9428594\n[image: phone:] +44 1483 80 24 23\n\n[image: location:] Ukraine, Cherkasy 18000\n[image: site:] softwareplanetgroup.com\n\n\nThis email and any files transmitted with it are confidential and intended\nsolely for the use of the individual or entity to whom they are addressed.\nIf you have received this email in error please notify the sender\nimmediately.\n\n\n\nOn Thu, 25 Jun 2020 at 21:38, Vladislav Kozlenko <\[email protected]> wrote:\n\n> I've tried several models and I'm getting the same error each time it\n> creates a tokenizer.\n> [image: image.png]\n>\n> _________\n>\n> Best regards,\n> Vladislav Kozlenko\n> [image: phone:] +380 685954166\n>\n> [image: skype:]\n> [email protected] <[email protected]>\n> [image: position:] Software Engineer\n>\n> [image: SP Group]\n> Software Planet Group\n> Company No: 9428594\n> [image: phone:] +44 1483 80 24 23\n>\n> [image: location:] Ukraine, Cherkasy 18000\n> [image: site:] softwareplanetgroup.com\n>\n>\n> This email and any files transmitted with it are confidential and\n> intended solely for the use of the individual or entity to whom they are\n> addressed. If you have received this email in error please notify the\n> sender immediately.\n>\n>\n>\n> On Thu, 25 Jun 2020 at 21:34, Sam Shleifer <[email protected]>\n> wrote:\n>\n>> I can't reproduce the error on master. If somebody can, it would be great\n>> if they could make a separate issue and I will try to resolve.\n>>\n>> All the distilbart- tokenizers are identical to the is identical to the\n>> facebook/bart-large-cnn tokenizer, which is identical to the\n>> facebook/bart-cnn-xsum` tokenizer. @julien-c\n>> <https://github.com/julien-c> is there a fancy AWS way to\n>> synchronize/symlink them?\n>>\n>> —\n>> You are receiving this because you commented.\n>> Reply to this email directly, view it on GitHub\n>> <https://github.com/huggingface/transformers/issues/3503#issuecomment-649748636>,\n>> or unsubscribe\n>> <https://github.com/notifications/unsubscribe-auth/AJWNBQKQZXUC4YAY7WWJQ7TRYOKCVANCNFSM4LVX67LQ>\n>> .\n>>\n>\n",
"I didn't assign you. I Just read the message to late.\n\n_________\n\nBest regards,\nVladislav Kozlenko\n[image: phone:] +380 685954166\n\n[image: skype:]\[email protected] <[email protected]>\n[image: position:] Software Engineer\n\n[image: SP Group]\nSoftware Planet Group\nCompany No: 9428594\n[image: phone:] +44 1483 80 24 23\n\n[image: location:] Ukraine, Cherkasy 18000\n[image: site:] softwareplanetgroup.com\n\n\nThis email and any files transmitted with it are confidential and intended\nsolely for the use of the individual or entity to whom they are addressed.\nIf you have received this email in error please notify the sender\nimmediately.\n\n\n\nOn Thu, 25 Jun 2020 at 21:45, Vladislav Kozlenko <\[email protected]> wrote:\n\n> Here is an issue https://github.com/huggingface/transformers/issues/5286\n>\n> _________\n>\n> Best regards,\n> Vladislav Kozlenko\n> [image: phone:] +380 685954166\n>\n> [image: skype:]\n> [email protected] <[email protected]>\n> [image: position:] Software Engineer\n>\n> [image: SP Group]\n> Software Planet Group\n> Company No: 9428594\n> [image: phone:] +44 1483 80 24 23\n>\n> [image: location:] Ukraine, Cherkasy 18000\n> [image: site:] softwareplanetgroup.com\n>\n>\n> This email and any files transmitted with it are confidential and\n> intended solely for the use of the individual or entity to whom they are\n> addressed. If you have received this email in error please notify the\n> sender immediately.\n>\n>\n>\n> On Thu, 25 Jun 2020 at 21:38, Vladislav Kozlenko <\n> [email protected]> wrote:\n>\n>> I've tried several models and I'm getting the same error each time it\n>> creates a tokenizer.\n>> [image: image.png]\n>>\n>> _________\n>>\n>> Best regards,\n>> Vladislav Kozlenko\n>> [image: phone:] +380 685954166\n>>\n>> [image: skype:]\n>> [email protected] <[email protected]>\n>> [image: position:] Software Engineer\n>>\n>> [image: SP Group]\n>> Software Planet Group\n>> Company No: 9428594\n>> [image: phone:] +44 1483 80 24 23\n>>\n>> [image: location:] Ukraine, Cherkasy 18000\n>> [image: site:] softwareplanetgroup.com\n>>\n>>\n>> This email and any files transmitted with it are confidential and\n>> intended solely for the use of the individual or entity to whom they are\n>> addressed. If you have received this email in error please notify the\n>> sender immediately.\n>>\n>>\n>>\n>> On Thu, 25 Jun 2020 at 21:34, Sam Shleifer <[email protected]>\n>> wrote:\n>>\n>>> I can't reproduce the error on master. If somebody can, it would be\n>>> great if they could make a separate issue and I will try to resolve.\n>>>\n>>> All the distilbart- tokenizers are identical to the is identical to the\n>>> facebook/bart-large-cnn tokenizer, which is identical to the\n>>> facebook/bart-cnn-xsum` tokenizer. @julien-c\n>>> <https://github.com/julien-c> is there a fancy AWS way to\n>>> synchronize/symlink them?\n>>>\n>>> —\n>>> You are receiving this because you commented.\n>>> Reply to this email directly, view it on GitHub\n>>> <https://github.com/huggingface/transformers/issues/3503#issuecomment-649748636>,\n>>> or unsubscribe\n>>> <https://github.com/notifications/unsubscribe-auth/AJWNBQKQZXUC4YAY7WWJQ7TRYOKCVANCNFSM4LVX67LQ>\n>>> .\n>>>\n>>\n",
"@sshleifer ATM you need to duplicate the tokenizer files in each model if you want them to be loadable by the model hub, the inference API, etc.",
"I was able to create tokenizer only with 'distilbart-xsum-12-1' and\n'distilbart-xsum-9-6'. Then on the summarization step, I'm getting another\nerror. I've added a comment here\n<https://github.com/huggingface/transformers/issues/5286>\n\n_________\n\nBest regards,\nVladislav Kozlenko\n[image: phone:] +380 685954166\n\n[image: skype:]\[email protected] <[email protected]>\n[image: position:] Software Engineer\n\n[image: SP Group]\nSoftware Planet Group\nCompany No: 9428594\n[image: phone:] +44 1483 80 24 23\n\n[image: location:] Ukraine, Cherkasy 18000\n[image: site:] softwareplanetgroup.com\n\n\nThis email and any files transmitted with it are confidential and intended\nsolely for the use of the individual or entity to whom they are addressed.\nIf you have received this email in error please notify the sender\nimmediately.\n\n\n\nOn Fri, 26 Jun 2020 at 02:11, Julien Chaumond <[email protected]>\nwrote:\n\n> @sshleifer <https://github.com/sshleifer> ATM you need to duplicate the\n> tokenizer files in each model if you want them to be loadable by the model\n> hub, the inference API, etc.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/3503#issuecomment-649862320>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AJWNBQMZYWYTBBAM5PXRRHDRYPKTPANCNFSM4LVX67LQ>\n> .\n>\n",
"Hey @sshleifer , thanks for the distilled BART version I was able to fine tune it with the same script on BillSum dataset as T5 but the numbers are way different between the two. I just wanted to understand if I might be doing something wrong with regards to fine tuning distilBART, does it require student training everytime?\r\nReference numbers on BillSum Dataset:\r\n\r\nT5-base:\r\navg_train_loss = tensor(1.5333, device='cuda:0')\r\navg_val_loss = tensor(1.4528, device='cuda:0')\r\nepoch = 1\r\nloss = tensor(1.6734, device='cuda:0')\r\nrouge1 = 0.49188267841912325\r\nrouge2 = 0.26436589848185027\r\nrougeL = 0.3591894400892483\r\ntrain_loss = tensor(1.6734, device='cuda:0')\r\nval_loss = tensor(1.4528, device='cuda:0')\r\n\r\ndBART-cnn-12-6:\r\navg_train_loss = tensor(1.3013, device='cuda:0')\r\navg_val_loss = tensor(1.4013, device='cuda:0')\r\nepoch = 1\r\nloss = tensor(1.4901, device='cuda:0')\r\nrouge1 = 0.3681518923769047\r\nrouge2 = 0.15683286277623087\r\nrougeL = 0.23453727441540043\r\ntrain_loss = tensor(1.4901, device='cuda:0')\r\nval_loss = tensor(1.4013, device='cuda:0')\r\n\r\nPS. I am using a modified version of the older finetune.py so it doesn't have Rouge for validation epochs.\r\n\r\nThanks",
"@amanpreet692 I moved your issue [here](https://github.com/huggingface/transformers/issues/5336) and will reply there.\r\nOthers, I am closing this since the model is released and I don't want to spam everyone. This shouldn't discourage people making new issues! \r\n"
] | 1,585 | 1,593 | 1,593 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3503/reactions",
"total_count": 10,
"+1": 9,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3503/timeline | completed | null | null |
|
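A minimal sketch of the workaround reported in the issue 3503 thread above: load a DistilBART checkpoint with the `facebook/bart-large-cnn` tokenizer, which the thread notes is identical to the `distilbart-*` tokenizers. Generation settings here are illustrative.

```python
# Sketch of the workaround from the thread; the model and tokenizer names
# are the ones mentioned there, generation settings are illustrative.
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("sshleifer/distilbart-cnn-12-6")

article = "Some long news article text ..."  # placeholder input
batch = tokenizer.batch_encode_plus([article], return_tensors="pt", max_length=1024)
summary_ids = model.generate(
    batch["input_ids"], num_beams=4, max_length=100, early_stopping=True
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```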
https://api.github.com/repos/huggingface/transformers/issues/3502 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3502/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3502/comments | https://api.github.com/repos/huggingface/transformers/issues/3502/events | https://github.com/huggingface/transformers/issues/3502 | 589,622,307 | MDU6SXNzdWU1ODk2MjIzMDc= | 3,502 | Bert Batch Encode Plus adding an extra [SEP] | {
"login": "creat89",
"id": 17121539,
"node_id": "MDQ6VXNlcjE3MTIxNTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/17121539?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/creat89",
"html_url": "https://github.com/creat89",
"followers_url": "https://api.github.com/users/creat89/followers",
"following_url": "https://api.github.com/users/creat89/following{/other_user}",
"gists_url": "https://api.github.com/users/creat89/gists{/gist_id}",
"starred_url": "https://api.github.com/users/creat89/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/creat89/subscriptions",
"organizations_url": "https://api.github.com/users/creat89/orgs",
"repos_url": "https://api.github.com/users/creat89/repos",
"events_url": "https://api.github.com/users/creat89/events{/privacy}",
"received_events_url": "https://api.github.com/users/creat89/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @creat89, \r\n\r\nThanks for posting this issue! \r\n\r\nYou are correct there is some inconsistent behavior here. \r\n\r\n1. We should probably in general not allow using `batch_encode_plus()` of a simple string. For this the `encode_plus()` function should be used. \r\n2. It seems like there is an inconsistency between `encode_plus([string])` and `encode_plus(string)`. This should probably be fixed.",
"Well, the issue not only happens with a simple string. In my actual code I was using a batch of size 2. However, I just used a simple example to demonstrate the issue.\r\n\r\nI didn't find any inconsistency between `encode_plus([string])` and `encode_plus(string)` but `batch_encode_plus([strings])` and `batch_encode_plus([[tokens]])` ",
"Sorry, I was skimming through your problem too quickly - I see what you mean now. \r\nI will take a closer look at this.",
"Created a PR this fixes this behavior. Thanks for pointing this out @creat89 :-) ",
"There has been a big change in tokenizers recently :-) which adds a `is_pretokenized` flag to the input which makes everything much easier. This should then be used as follows: \r\n``\r\nbert_tokenizer.batch_encode_plus([tokens], is_pretokenized=True))\r\n``",
"Cool, that's awesome and yes, I'm sure that makes everything easier. Cheers!"
] | 1,585 | 1,586 | 1,586 | NONE | null | # 🐛 Bug
## Information
I'm using the `bert-base-multilingual-cased` tokenizer and model to create another model. However, `batch_encode_plus` is adding an extra `[SEP]` token id in the middle.
The problem arises when using:
* Specific strings to encode, e.g. `16.`, `3.`, `10.`,
* The `bert-base-multilingual-cased` tokenizer is used beforehand to tokenize the previously described strings and
* The `batch_encode_plus` is used to convert the tokenized strings
In fact, `batch_encode_plus` will generate an `input_ids` list containing two `[SEP]` tokens, such as `[101, 10250, 102, 119, 102]`.
I have seen similar issues, but they don't indicate the version of transformers:
https://github.com/huggingface/transformers/issues/2658
https://github.com/huggingface/transformers/issues/3037
Thus, I'm not sure whether it is related to transformers version `2.6.0`.
## To reproduce
Steps to reproduce the behavior (simplified steps):
1. Have a string of type `16.` or `6.`
2. Use `tokens = bert_tokenizer.tokenize("16.")`
3. Use `bert_tokenizer.batch_encode_plus([tokens])`
You can reproduce the error with this code
```python
from transformers import BertTokenizer
import unittest
class TestListElements(unittest.TestCase):
def setUp(self):
bert_tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
problematic_string = "16."
tokens = bert_tokenizer.tokenize(problematic_string)
self.encoded_batch_1 = bert_tokenizer.batch_encode_plus([tokens]) #list[list[]]
self.encoded_batch_2 = bert_tokenizer.batch_encode_plus([problematic_string]) #list[]
self.encoded_tokens_1 = bert_tokenizer.encode_plus(problematic_string)
self.encoded_tokens_2 = bert_tokenizer.encode_plus(tokens)
def test_tokens_vs_tokens(self):
self.assertListEqual(self.encoded_tokens_1["input_ids"], self.encoded_tokens_2["input_ids"])
def test_tokens_vs_batch_string(self):
self.assertListEqual(self.encoded_tokens_1["input_ids"], self.encoded_batch_2["input_ids"][0])
def test_tokens_vs_batch_list_tokens(self):
self.assertListEqual(self.encoded_tokens_1["input_ids"], self.encoded_batch_1["input_ids"][0])
if __name__ == "__main__":
unittest.main(verbosity=2)
```
The code will break at test `test_tokens_vs_batch_list_tokens`, with the following summarized output:
```
- [101, 10250, 119, 102]
+ [101, 10250, 102, 119, 102]
```
## Expected behavior
`batch_encode_plus` should always produce the same `input_ids` whether we pass it a list of tokens or a list of strings.
For instance, for the string `16.` we should always get `[101, 10250, 119, 102]`. However, with `batch_encode_plus` we get `[101, 10250, 102, 119, 102]` if we pass it an already-tokenized input.
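For reference, the fix suggested at the end of the thread above relies on the `is_pretokenized` flag introduced by a later tokenizers update; a minimal sketch, assuming a transformers release that exposes this flag:
```python
from transformers import BertTokenizer

bert_tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
tokens = bert_tokenizer.tokenize("16.")
# is_pretokenized marks the list as one pre-split sequence, so it is no longer
# interpreted as a (text, text_pair) tuple joined by an extra [SEP].
encoded = bert_tokenizer.batch_encode_plus([tokens], is_pretokenized=True)
print(encoded["input_ids"])  # expected: [[101, 10250, 119, 102]]
```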
## Environment info
- `transformers` version: 2.6.0
- Platform: Linux (Manjaro)
- Python version: Python 3.8.1 (default, Jan 8 2020, 22:29:32)
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): ---
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3502/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3501 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3501/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3501/comments | https://api.github.com/repos/huggingface/transformers/issues/3501/events | https://github.com/huggingface/transformers/pull/3501 | 589,610,966 | MDExOlB1bGxSZXF1ZXN0Mzk1MTE5NDYx | 3,501 | [BART] Update encoder and decoder on set_input_embedding | {
"login": "dougian",
"id": 4057349,
"node_id": "MDQ6VXNlcjQwNTczNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4057349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dougian",
"html_url": "https://github.com/dougian",
"followers_url": "https://api.github.com/users/dougian/followers",
"following_url": "https://api.github.com/users/dougian/following{/other_user}",
"gists_url": "https://api.github.com/users/dougian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dougian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dougian/subscriptions",
"organizations_url": "https://api.github.com/users/dougian/orgs",
"repos_url": "https://api.github.com/users/dougian/repos",
"events_url": "https://api.github.com/users/dougian/events{/privacy}",
"received_events_url": "https://api.github.com/users/dougian/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3501?src=pr&el=h1) Report\n> Merging [#3501](https://codecov.io/gh/huggingface/transformers/pull/3501?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/601ac5b1dc1438f00d09696588f2deb0f045ae3b&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3501?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3501 +/- ##\n=======================================\n Coverage 77.79% 77.80% \n=======================================\n Files 100 100 \n Lines 17051 17053 +2 \n=======================================\n+ Hits 13265 13268 +3 \n+ Misses 3786 3785 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3501?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3501/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `97.59% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3501/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.94% <0.00%> (+0.13%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3501?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3501?src=pr&el=footer). Last update [601ac5b...5e11181](https://codecov.io/gh/huggingface/transformers/pull/3501?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | Since `_resize_token_embeddings` creates a new embedding layer, resizing the input embeddings for BART currently breaks: `model.shared` will refer to the newly created embedding, but `model.encoder.embed_tokens` and `model.decoder.embed_tokens` will still refer to the old one.
We need to re-assign the encoder/decoder embeddings or just their weights; I opted for the second option.
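A rough, self-contained sketch of the failure mode and of the second option (illustrative only — the attribute names mirror `modeling_bart.py`, but this is not the exact diff):
```python
import torch.nn as nn

class SharedEmbeddingSketch(nn.Module):
    """Mimics BART's shared encoder/decoder embedding layout."""

    def __init__(self, vocab_size=10, d_model=4):
        super().__init__()
        self.shared = nn.Embedding(vocab_size, d_model)
        self.encoder_embed_tokens = self.shared  # aliases, as in BartModel
        self.decoder_embed_tokens = self.shared

    def set_input_embeddings(self, value):
        self.shared = value
        # Sync the aliases' weights to the new embedding; without this they
        # keep using the embedding that existed before resize_token_embeddings.
        self.encoder_embed_tokens.weight = self.shared.weight
        self.decoder_embed_tokens.weight = self.shared.weight
```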
Unfortunately, I can't see how to write a test in `test_resize_tokens_embeddings` that captures this without putting a BART-specific if statement there, but this is also related to https://github.com/huggingface/transformers/issues/3378
Run tests:
733 passed, 319 skipped, 80 warnings in 269.62s (0:04:29) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3501/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3501/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3501",
"html_url": "https://github.com/huggingface/transformers/pull/3501",
"diff_url": "https://github.com/huggingface/transformers/pull/3501.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3501.patch",
"merged_at": 1585585237000
} |
https://api.github.com/repos/huggingface/transformers/issues/3500 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3500/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3500/comments | https://api.github.com/repos/huggingface/transformers/issues/3500/events | https://github.com/huggingface/transformers/pull/3500 | 589,603,851 | MDExOlB1bGxSZXF1ZXN0Mzk1MTE0OTAz | 3,500 | [Wait to merge] [Bart] Rename lm_labels argument to masked_lm_labels | {
"login": "dougian",
"id": 4057349,
"node_id": "MDQ6VXNlcjQwNTczNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4057349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dougian",
"html_url": "https://github.com/dougian",
"followers_url": "https://api.github.com/users/dougian/followers",
"following_url": "https://api.github.com/users/dougian/following{/other_user}",
"gists_url": "https://api.github.com/users/dougian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dougian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dougian/subscriptions",
"organizations_url": "https://api.github.com/users/dougian/orgs",
"repos_url": "https://api.github.com/users/dougian/repos",
"events_url": "https://api.github.com/users/dougian/events{/privacy}",
"received_events_url": "https://api.github.com/users/dougian/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Could you make sure the kwarg in `examples/summarization/bart/run_bart_sum.py` is correct? Thanks!",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3500?src=pr&el=h1) Report\n> Merging [#3500](https://codecov.io/gh/huggingface/transformers/pull/3500?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/601ac5b1dc1438f00d09696588f2deb0f045ae3b&el=desc) will **increase** coverage by `0.50%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3500?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3500 +/- ##\n==========================================\n+ Coverage 77.79% 78.30% +0.50% \n==========================================\n Files 100 100 \n Lines 17051 17051 \n==========================================\n+ Hits 13265 13351 +86 \n+ Misses 3786 3700 -86 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3500?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3500/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `97.58% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3500/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.94% <0.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/3500/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.49% <0.00%> (+27.59%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3500?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3500?src=pr&el=footer). Last update [601ac5b...70f9258](https://codecov.io/gh/huggingface/transformers/pull/3500?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"> Could you make sure the kwarg in `examples/summarization/bart/run_bart_sum.py` is correct? Thanks!\r\n\r\nSure, updated all occurrences on that file",
"@dougian : we are going to merge/adopt this after a few other PRs are merged in order to coordinate the signature with T5's signature. Thanks for your contribution!\r\n",
"Fixed on master by other PRs, closing. Thanks!"
] | 1,585 | 1,593 | 1,593 | CONTRIBUTOR | null | While the docstring correctly lists `masked_lm_labels`, `forward` was expecting `lm_labels`. Renaming to match BERT and the rest of the MLM-based models.
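Under this PR's rename, the only change a caller would see is the keyword itself — a hedged sketch (the checkpoint name is a placeholder; adjust to whatever BART checkpoint you fine-tune):
```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("bart-large")  # placeholder checkpoint
model = BartForConditionalGeneration.from_pretrained("bart-large")
inputs = tokenizer.encode("Hello world", return_tensors="pt")
outputs = model(inputs, masked_lm_labels=inputs)  # was: lm_labels=inputs
loss = outputs[0]
```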
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3500/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3500/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3500",
"html_url": "https://github.com/huggingface/transformers/pull/3500",
"diff_url": "https://github.com/huggingface/transformers/pull/3500.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3500.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3499 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3499/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3499/comments | https://api.github.com/repos/huggingface/transformers/issues/3499/events | https://github.com/huggingface/transformers/issues/3499 | 589,597,058 | MDU6SXNzdWU1ODk1OTcwNTg= | 3,499 | masked_lm_loss in BertForMaskedLM model | {
"login": "Drpulti",
"id": 23643762,
"node_id": "MDQ6VXNlcjIzNjQzNzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/23643762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Drpulti",
"html_url": "https://github.com/Drpulti",
"followers_url": "https://api.github.com/users/Drpulti/followers",
"following_url": "https://api.github.com/users/Drpulti/following{/other_user}",
"gists_url": "https://api.github.com/users/Drpulti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Drpulti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Drpulti/subscriptions",
"organizations_url": "https://api.github.com/users/Drpulti/orgs",
"repos_url": "https://api.github.com/users/Drpulti/repos",
"events_url": "https://api.github.com/users/Drpulti/events{/privacy}",
"received_events_url": "https://api.github.com/users/Drpulti/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Have a look at the `mask_tokens` method in `run_language_modeling.py`. This takes in the `input_ids`, performs masking on them and returns the masked `input_ids` and corresponding `masked_lm_labels`.",
"@Drpulti I am also getting the same error as you, and I believe it is because `-100` exists in the `masked_lm_labels` returned by `mask_tokens`. \r\n\r\nThese are fed to the `forward` hook of `BertForMaskedLM` (or whatever pre-trained model you are using), and ultimately to `CrossEntropyLoss`, which throws an error for `labels < 0`.\r\n\r\nhttps://github.com/huggingface/transformers/blob/601ac5b1dc1438f00d09696588f2deb0f045ae3b/src/transformers/modeling_bert.py#L1001-L1004\r\n\r\nThe docstring says:\r\n\r\nhttps://github.com/huggingface/transformers/blob/601ac5b1dc1438f00d09696588f2deb0f045ae3b/src/transformers/modeling_bert.py#L933-L937\r\n\r\nbut I don't see the logic where `masked_lm_labels == -100` are ignored. You can even see a comment that says `-100` is masked, \r\n\r\nhttps://github.com/huggingface/transformers/blob/601ac5b1dc1438f00d09696588f2deb0f045ae3b/src/transformers/modeling_bert.py#L1002\r\n\r\nbut again, where is the code that does this? I figure that both of us might be missing the step that properly handles these `-100` values.",
"I believe that the `-100` part is handled by `CrossEntropyLoss` (https://pytorch.org/docs/stable/_modules/torch/nn/functional.html#nll_loss)\r\n\r\nI think that in your case, you might be having some mismatch between pytorch and transformers versions. Try upgrading to the latest of both and check if the error is still there.",
"When my label contains -100, I get this error when running “IndexError: Target -100 is out of bounds.”",
"Could you be a bit more specific as to where the error is coming from? Maybe a stack trace would be nice. Also, please upgrade your pytorch and transformers packages. I'm running transformers 2.5.0 and pytorch 1.4.0 and don't get any such issue.",
"@Genius1237 in fact,i think i don't relly know what is the meaning of masked_lm_labels, I want to know what he expresses and how can we get him\r\n\r\n",
"@tom1125 I'm not understanding you. Are you saying that you want to know how `masked_lm_labels` are computed and how it's used in computing the loss?",
"@Genius1237 yes ,and i want to know how to get it,thanks",
"An input sentence is a sequence of sub-word tokens, represented by their IDs. This is what `input_ids` would represent (before masking). The `mask_tokens` methods takes in this, and chooses 15% of the tokens for a \"corruption\" process. In this \"corruption\" process, 80% of the chosen tokens become [MASK], 10% get replaced with a random word and 10% are untouched.\r\n\r\nThe goal of the bert model will be to take in the \"corrupted\" `input_ids` and predict the correct token for each token. The correct tokens, `masked_lm_labels` are also produced by the `mask_token` methods. The values of this tensor would ideally be a clone of the \"uncorrupted\" `input_ids`, but since the loss is computed over only the \"corrupted\" tokens, the value of `masked_lm_labels` for the 85% of tokens that aren't chosen for \"corruption\" is set to `-100` so that it gets ignored by `CrossEntropyLoss`.",
"@Genius1237 thank you very much,it really helps me.",
"> I believe that the `-100` part is handled by `CrossEntropyLoss` (https://pytorch.org/docs/stable/_modules/torch/nn/functional.html#nll_loss)\r\n> \r\n> I think that in your case, you might be having some mismatch between pytorch and transformers versions. Try upgrading to the latest of both and check if the error is still there.\r\n\r\nYou are right! Thanks. I will try updating both packages",
"> I believe that the `-100` part is handled by `CrossEntropyLoss` (https://pytorch.org/docs/stable/_modules/torch/nn/functional.html#nll_loss)\r\n> \r\n> I think that in your case, you might be having some mismatch between pytorch and transformers versions. Try upgrading to the latest of both and check if the error is still there.\r\n\r\nYou are right, upgrade helped to resolve the issue. I'm closing the thread."
] | 1,585 | 1,585 | 1,585 | NONE | null | Although I've read the documentation for the BertForMaskedLM class, I still cannot understand how to properly calculate the loss for my problem.
Let's suppose that my target sentence is:
"_I will be writing when you arrive._"
I want to calculate loss for all words except 'arrive'.
The documentation says:
> **masked_lm_labels (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None)** Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
As I understand it, I should pass the _masked_lm_labels_ argument a tensor that contains the following indices:
`tensor([[ 101, 1045, 2097, 2022, 3015, 2043, -100, 7180, 1012, 101]])`
It returns error:
`RuntimeError: Assertion 'cur_target >= 0 && cur_target < n_classes' failed.`
Can you help me and point out what is wrong in my thinking?
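For what it's worth, a minimal sketch of how `-100` is meant to interact with the loss — the masking happens inside `CrossEntropyLoss`, whose `ignore_index` defaults to `-100` in recent PyTorch (older transformers releases built the loss with `ignore_index=-1`, which may explain the assertion above):
```python
import torch
from torch.nn import CrossEntropyLoss

loss_fct = CrossEntropyLoss()  # ignore_index defaults to -100
logits = torch.randn(10, 30522)  # (sequence_length, vocab_size)
labels = torch.tensor([101, 1045, 2097, 2022, 3015, 2043, -100, 7180, 1012, 102])
loss = loss_fct(logits, labels)  # the position labeled -100 contributes no loss
```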
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3499/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3499/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3498 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3498/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3498/comments | https://api.github.com/repos/huggingface/transformers/issues/3498/events | https://github.com/huggingface/transformers/issues/3498 | 589,593,554 | MDU6SXNzdWU1ODk1OTM1NTQ= | 3,498 | XLM-ROBERTA | {
"login": "nooralahzadeh",
"id": 1093791,
"node_id": "MDQ6VXNlcjEwOTM3OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1093791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nooralahzadeh",
"html_url": "https://github.com/nooralahzadeh",
"followers_url": "https://api.github.com/users/nooralahzadeh/followers",
"following_url": "https://api.github.com/users/nooralahzadeh/following{/other_user}",
"gists_url": "https://api.github.com/users/nooralahzadeh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nooralahzadeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nooralahzadeh/subscriptions",
"organizations_url": "https://api.github.com/users/nooralahzadeh/orgs",
"repos_url": "https://api.github.com/users/nooralahzadeh/repos",
"events_url": "https://api.github.com/users/nooralahzadeh/events{/privacy}",
"received_events_url": "https://api.github.com/users/nooralahzadeh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"That's a good point @nooralahzadeh! We should probably add the `model_type` field to the XLM-ROBERTA config. \r\n\r\n@julien-c - If it's fine for you I can add the `model_type` `xlm-roberta` to all `xlm-roberta` configs of the following models: \r\nhttps://huggingface.co/models?search=xlm-rob \r\n\r\nAs @nooralahzadeh pointed out, the fine-tuned XLM-ROBERTA models would otherwise default to the XLM model (because of its name and the implemented Fallback pattern in https://github.com/huggingface/transformers/blob/f6a23d19116a62bd3c662d0aa381130b49abcff7/src/transformers/configuration_auto.py#L190) \r\n\r\nShould be easy with a S3 AWS script. ",
"Hmm, there is a `xlm-roberta` before `xlm` and `roberta` in [CONFIG_MAPPING](https://github.com/huggingface/transformers/blob/f6a23d19116a62bd3c662d0aa381130b49abcff7/src/transformers/configuration_auto.py#L65) so it should be correctly picked up, no?",
"True, @nooralahzadeh I guess in order to avoid falling to 'xlm' you have to name your files/folders 'xlm-roberta-something' (so you need to include the full xlm-roberta name) or you can just manually change the config of your xlm-roberta and add the `model_type`",
"Thanks, this is what I said in the first comment: need to have \"xlm-roberta\" in its name!",
"Hello thanks, this command not work! can you help me? please\r\npython run_squad.py \\\r\n--model_type xlm-roberta\r\n\r\n",
"Hey @Forutanrad,\r\n\r\ncould you please open a new issue for this?",
"Hello, yes thanks.\n\nOn Thu, 3 Feb 2022, 13:28 Patrick von Platen, ***@***.***>\nwrote:\n\n> Hey @Forutanrad <https://github.com/Forutanrad>,\n>\n> could you please open a new issue for this?\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/3498#issuecomment-1028806963>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ASUDOKXALOAGOJF5NSHE4MDUZJGSRANCNFSM4LVTH64A>\n> .\n> Triage notifications on the go with GitHub Mobile for iOS\n> <https://apps.apple.com/app/apple-store/id1477376905?ct=notification-email&mt=8&pt=524675>\n> or Android\n> <https://play.google.com/store/apps/details?id=com.github.android&referrer=utm_campaign%3Dnotification-email%26utm_medium%3Demail%26utm_source%3Dgithub>.\n>\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n"
] | 1,585 | 1,643 | 1,585 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): XLM-ROBERTA
In order to fine-tune XLM-ROBERTA from a pre-trained checkpoint: since the config file does not have the `model_type` field, all the required files have to be in a folder whose name contains "xlm-roberta"; otherwise the "XLM" config will be assigned!
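A minimal sketch of the workaround discussed in the thread above — adding the `model_type` field to the local config (the checkpoint path here is hypothetical):
```python
import json

config_path = "my-finetuned-xlmr/config.json"  # hypothetical local checkpoint
with open(config_path) as f:
    config = json.load(f)
# With model_type set, the config no longer depends on folder-name
# pattern-matching, where "xlm" would otherwise win over "xlm-roberta".
config["model_type"] = "xlm-roberta"
with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```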
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3498/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3498/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3497 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3497/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3497/comments | https://api.github.com/repos/huggingface/transformers/issues/3497/events | https://github.com/huggingface/transformers/issues/3497 | 589,552,346 | MDU6SXNzdWU1ODk1NTIzNDY= | 3,497 | REALM | {
"login": "aced125",
"id": 44452903,
"node_id": "MDQ6VXNlcjQ0NDUyOTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/44452903?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aced125",
"html_url": "https://github.com/aced125",
"followers_url": "https://api.github.com/users/aced125/followers",
"following_url": "https://api.github.com/users/aced125/following{/other_user}",
"gists_url": "https://api.github.com/users/aced125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aced125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aced125/subscriptions",
"organizations_url": "https://api.github.com/users/aced125/orgs",
"repos_url": "https://api.github.com/users/aced125/repos",
"events_url": "https://api.github.com/users/aced125/events{/privacy}",
"received_events_url": "https://api.github.com/users/aced125/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | [
"Code is released for the preceding paper: Latent Retrieval for Weakly Supervised Open Domain Question Answering\" https://www.aclweb.org/anthology/P19-1612/\r\nhttps://github.com/google-research/language/tree/master/language/orqa",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I am also interested on it. Any news? Thanks",
"I'm also interested on this, are you planning to include this model in the library??",
"Finally the REALM code has been released here https://github.com/google-research/language/tree/master/language/realm.\r\nI think that this issue should be re-opened. @aced125 are you able to re-open it?",
"I hope somebody can reopen this and make it happen",
"Any updates on this ? ",
"Any update? \r\n",
"I think this could be implemented easily with RAG by making sure that we\nboth finetune doc encoder and question encoder of the retriever model. This\nmight be very useful for us.\n\nOn Wed, Feb 10, 2021, 23:42 Lysandre Debut <[email protected]> wrote:\n\n> Reopened #3497 <https://github.com/huggingface/transformers/issues/3497>.\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/3497#event-4314060510>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGULJ5VEDTMJD6F4ZXLS6JPJPANCNFSM4LVQ3UVQ>\n> .\n>\n",
"Any Update on this?",
"@OctoberChang I did extend the RAG in a way that can be used to experiment REALM stuff. \r\n\r\nhttps://paperswithcode.com/paper/fine-tune-the-entire-rag-architecture",
"@shamanez , thanks for the pointer to your RAG project [(link)](https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag-end2end-retriever)\r\nDo you have any pre-trained (not fine-tuned on downstream tasks) models with ICT or Salient span masking [[link]](https://arxiv.org/pdf/2002.08909.pdf) that can be load from Huggingface Transformers?",
"I haven't extended into the ICT. Sorry.\r\n@OctoberChang I think to get REALM models loaded with the current HF retrieval framework, we might need some workarounds. This is mainly because RAG uses a generative model and REALM consists of extractive LM. ",
"@qqaatw's WIP PR (https://github.com/huggingface/transformers/pull/13292) adds REALM."
] | 1,585 | 1,642 | 1,642 | NONE | null | # 🌟 New model addition
REALM is from some of the authors of BERT (I like to think of it as the next BERT :) ), who have found a way to incorporate world knowledge (from Wikipedia) into the model.
They do this with a retriever module that retrieves relevant information from Wikipedia articles.
<!-- Important information -->
## Open source status
Code is not released at the moment, but will probably be released by Google soon, I'd imagine.
https://arxiv.org/abs/2002.08909
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3497/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3497/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3496 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3496/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3496/comments | https://api.github.com/repos/huggingface/transformers/issues/3496/events | https://github.com/huggingface/transformers/issues/3496 | 589,510,716 | MDU6SXNzdWU1ODk1MTA3MTY= | 3,496 | How to load BertForSequenceClassification model weights into a BertForTokenClassification model? | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"Have you tried loading your checkpoint in the model using `from_pretrained`:\r\n\r\n```py\r\nmodel = BertForTokenClassification.from_pretrained(\"checkpoint)\r\n```\r\n\r\n? It should work out of the box. Please be aware that the NER head will be randomly initialized as it is not in the sequence classification checkpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | NONE | null | Initially, I fine-tuned a BERT base cased model on a text classification dataset using the BertForSequenceClassification class. Now I want to use the fine-tuned weights for Named Entity Recognition, which requires the BertForTokenClassification class. I'm unable to figure out how to load the fine-tuned BERT weights into the new model created with BertForTokenClassification. @thomwolf
Thanks in advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3496/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3496/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3495 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3495/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3495/comments | https://api.github.com/repos/huggingface/transformers/issues/3495/events | https://github.com/huggingface/transformers/issues/3495 | 589,509,071 | MDU6SXNzdWU1ODk1MDkwNzE= | 3,495 | CUDA error: CUBLAS_STATUS_ALLOC_FAILED When running language modeling using bert-base-cased | {
"login": "Rshcaroline",
"id": 22836973,
"node_id": "MDQ6VXNlcjIyODM2OTcz",
"avatar_url": "https://avatars.githubusercontent.com/u/22836973?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rshcaroline",
"html_url": "https://github.com/Rshcaroline",
"followers_url": "https://api.github.com/users/Rshcaroline/followers",
"following_url": "https://api.github.com/users/Rshcaroline/following{/other_user}",
"gists_url": "https://api.github.com/users/Rshcaroline/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rshcaroline/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rshcaroline/subscriptions",
"organizations_url": "https://api.github.com/users/Rshcaroline/orgs",
"repos_url": "https://api.github.com/users/Rshcaroline/repos",
"events_url": "https://api.github.com/users/Rshcaroline/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rshcaroline/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834052847,
"node_id": "MDU6TGFiZWwxODM0MDUyODQ3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Finetuning)",
"name": "Ex: LM (Finetuning)",
"color": "26FFF8",
"default": false,
"description": "Related to language modeling fine-tuning"
}
] | closed | false | null | [] | [
"I have encountered the same error running the LM finetuning script on Google Colab, also with the Bert model type but using the pretrained weights from 'allenai/scibert_scivocab_uncased'\r\n\r\n```\r\n!python run_language_modeling.py \\\r\n --output_dir=models \\\r\n --model_type=bert \\\r\n --model_name_or_path='allenai/scibert_scivocab_uncased' \\\r\n --do_train \\\r\n --train_data_file=train_sample.txt \\\r\n --do_eval \\\r\n --eval_data_file=val_sample.txt \\\r\n --mlm \\\r\n --line_by_line\r\n```\r\n\r\nEnvironment\r\n- transformers : 2.7.0\r\n- Python: 3.6.9\r\n- PyTorch: 1.4.0\r\n\r\n\r\nAbbreviated stack trace : \r\n```\r\nIteration: 0% 0/2971 [00:00<?, ?it/s]/pytorch/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [116,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed. \r\n/pytorch/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [116,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [116,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n...\r\n/pytorch/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [116,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [116,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [116,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\nTraceback (most recent call last):\r\n File \"run_language_modeling.py\", line 799, in <module>\r\n main()\r\n File \"run_language_modeling.py\", line 749, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer)\r\n File \"run_language_modeling.py\", line 353, in train\r\n outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File 
\"/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py\", line 987, in forward\r\n encoder_attention_mask=encoder_attention_mask,\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py\", line 790, in forward\r\n encoder_attention_mask=encoder_extended_attention_mask,\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py\", line 407, in forward\r\n hidden_states, attention_mask, head_mask[i], encoder_hidden_states, encoder_attention_mask\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py\", line 368, in forward\r\n self_attention_outputs = self.attention(hidden_states, attention_mask, head_mask)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py\", line 314, in forward\r\n hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py\", line 216, in forward\r\n mixed_query_layer = self.query(hidden_states)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/linear.py\", line 87, in forward\r\n return F.linear(input, self.weight, self.bias)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py\", line 1372, in linear\r\n output = input.matmul(weight.t())\r\nRuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`\r\n`",
"I believe this is because the `allenai/scibert_scivocab_uncased` checkpoint does not have a maximum length for the tokenizer.\r\n\r\nWould you mind trying again but this time specifying `--block_size=512` as an additional argument, to limit the size of sequences to 512 tokens?g",
"@Rshcaroline do you mind trying the same argument `--block_size=512` and see if that fixes your issue?",
"@LysandreJik Thanks very much for the tip, that solved the problem.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I think this is OOM from CUDA side.",
"In case somebody is having the same in issue in the latest version, `--max_seq_length 512` fixes this issue for me."
] | 1,585 | 1,635 | 1,592 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name) Language modeling
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Create a shell file under example folder called run_lm.sh
2. bash run_lm.sh
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
export TRAIN_FILE=../../data/wikitext-2-raw/wiki.train.raw
export TEST_FILE=../../data/wikitext-2-raw/wiki.test.raw
python run_language_modeling.py \
--output_dir=output \
--model_type=bert \
--model_name_or_path=bert-base-cased \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE \
--mlm
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
It raises:
```
RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`
```
However, if you only change ```--model_name_or_path=bert-base-cased \``` to ```--model_name_or_path=bert-base-uncased \```, it works fine, so I'm not sure what went wrong.
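Per the resolution in the thread above, capping the sequence length avoids the failure — presumably because `bert-base-cased` only has 512 position embeddings. The same command, sketched with the extra flag:
```
export TRAIN_FILE=../../data/wikitext-2-raw/wiki.train.raw
export TEST_FILE=../../data/wikitext-2-raw/wiki.test.raw

python run_language_modeling.py \
    --output_dir=output \
    --model_type=bert \
    --model_name_or_path=bert-base-cased \
    --block_size=512 \
    --do_train \
    --train_data_file=$TRAIN_FILE \
    --do_eval \
    --eval_data_file=$TEST_FILE \
    --mlm
```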
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.1
- Platform: Linux-5.3.0-28-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3495/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3495/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3494 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3494/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3494/comments | https://api.github.com/repos/huggingface/transformers/issues/3494/events | https://github.com/huggingface/transformers/issues/3494 | 589,503,320 | MDU6SXNzdWU1ODk1MDMzMjA= | 3,494 | TypeError when using Feature Extraction Pipeline with XLM roberta | {
"login": "aeshapar",
"id": 29237123,
"node_id": "MDQ6VXNlcjI5MjM3MTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/29237123?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aeshapar",
"html_url": "https://github.com/aeshapar",
"followers_url": "https://api.github.com/users/aeshapar/followers",
"following_url": "https://api.github.com/users/aeshapar/following{/other_user}",
"gists_url": "https://api.github.com/users/aeshapar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aeshapar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aeshapar/subscriptions",
"organizations_url": "https://api.github.com/users/aeshapar/orgs",
"repos_url": "https://api.github.com/users/aeshapar/repos",
"events_url": "https://api.github.com/users/aeshapar/events{/privacy}",
"received_events_url": "https://api.github.com/users/aeshapar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): XLM-R
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. After importing the required libraries, run the following snippet of code:
```
xlmr_model = XLMRobertaForSequenceClassification.from_pretrained("xlm-roberta-base")
xlmr_tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
nlp = pipeline(task ="feature-extraction", model = xlmr_model, tokenizer=xlmr_tokenizer, framework="tf")
features = nlp('We are very happy to include pipeline into the transformers repository.')
print(features)
```
2. You should get the following error from `result = self.forward(*input, **kwargs)`:
**TypeError: forward() got an unexpected keyword argument 'training'**
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I would expect this code to run without this error.
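The issue was closed without comments, so the following is only a guess at the cause: `framework="tf"` makes the pipeline call the model with TensorFlow-only kwargs such as `training`, while the class loaded above is the PyTorch one. A sketch with the backend matched to the model:
```python
from transformers import XLMRobertaForSequenceClassification, XLMRobertaTokenizer, pipeline

xlmr_model = XLMRobertaForSequenceClassification.from_pretrained("xlm-roberta-base")
xlmr_tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
# framework must match the model class: "pt" for the PyTorch class above
# (or load the corresponding TF class, if available, to keep framework="tf").
nlp = pipeline(task="feature-extraction", model=xlmr_model, tokenizer=xlmr_tokenizer, framework="pt")
features = nlp("We are very happy to include pipeline into the transformers repository.")
print(features)
```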
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.6.1
- Platform: Ubuntu 16.04.6 LTS
- Python version: 3.6.1
- PyTorch version (GPU?): No GPU
- Tensorflow version (GPU?): No GPU
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3494/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3494/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3493 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3493/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3493/comments | https://api.github.com/repos/huggingface/transformers/issues/3493/events | https://github.com/huggingface/transformers/issues/3493 | 589,485,513 | MDU6SXNzdWU1ODk0ODU1MTM= | 3,493 | Finetuning GPT-2 | {
"login": "sadhikamalladi",
"id": 6046676,
"node_id": "MDQ6VXNlcjYwNDY2NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6046676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sadhikamalladi",
"html_url": "https://github.com/sadhikamalladi",
"followers_url": "https://api.github.com/users/sadhikamalladi/followers",
"following_url": "https://api.github.com/users/sadhikamalladi/following{/other_user}",
"gists_url": "https://api.github.com/users/sadhikamalladi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sadhikamalladi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sadhikamalladi/subscriptions",
"organizations_url": "https://api.github.com/users/sadhikamalladi/orgs",
"repos_url": "https://api.github.com/users/sadhikamalladi/repos",
"events_url": "https://api.github.com/users/sadhikamalladi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sadhikamalladi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You can use `run_language_modeling.py` for this purpose. Just set `--model_type` to `gpt2`, set `--model_name_or_path` to the gpt2 model checkpoint you want (`gpt2`) and set `--train_data_file` to your dataset and you should be ready to go.",
"Thanks! The issue is that I want to use the pre-trained version of GPT-2. I remember `run_lm_finetuning` had a few lines of code where it would download and load that pre-trained model.",
"You can specify `gpt2` in `--model_name_or_path`. That corresponds to one of the pre-trained checkpoints that it'll download and use. The other possible pre-trained models that you can specify there are `gpt2-medium`,`gpt2-large`,`gpt2-xl` and `distilgpt2`.",
"If specifying `gpt2` there downloads the checkpoints then how do you train from scratch? I've been specifying that parameter and it seems like it is training from scratch (starting perplexity ~1000).",
"I believe that if you want to train from scratch, you'll have to point that to a folder with a config file (with the parameters of the model) and no `pytorch_model.bin` checkpoint file in that folder.",
"@Genius1237 Hi there, \r\nI am finetuning the 124M model based on my dataset (almost 2mb) and I am using Colab by Max Wolf. I am wondering if there is a way that I can generate texts based on my trained model + internet context (not only metadata). I wanna generate some notes regarding current issues (such as COVID-19) based on my trained model. \r\nCould you help me with that, please? thanks.\r\n ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,592 | 1,592 | NONE | null | It looks like there used to be a script `run_lm_finetuning.py` that has been replaced by `run_language_modeling.py`. It's unclear to me how to use this script to run finetuning. I want to finetune GPT-2 on a variety of downstream tasks, and would love some help! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3493/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3493/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3492 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3492/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3492/comments | https://api.github.com/repos/huggingface/transformers/issues/3492/events | https://github.com/huggingface/transformers/pull/3492 | 589,411,064 | MDExOlB1bGxSZXF1ZXN0Mzk0OTY4MDEz | 3,492 | [model_cards]: use MIT license for all dbmdz models | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3492?src=pr&el=h1) Report\n> Merging [#3492](https://codecov.io/gh/huggingface/transformers/pull/3492?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/17dceae7a1de5577cd0c07a97dcd5821a08af07c&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3492?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3492 +/- ##\n==========================================\n- Coverage 77.80% 77.79% -0.01% \n==========================================\n Files 100 100 \n Lines 17051 17051 \n==========================================\n- Hits 13266 13265 -1 \n- Misses 3785 3786 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3492?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.81% <0.00%> (-0.14%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3492?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3492?src=pr&el=footer). Last update [17dceae...70cab16](https://codecov.io/gh/huggingface/transformers/pull/3492?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Awesome! One more data point for #3357 ",
"Hey @stefan-it, thanks for creating some great models. It looks like a few [dbmdz models](https://huggingface.co/dbmdz) are missing model cards, including the default model for NER (`dbmdz/bert-large-cased-finetuned-conll03-english`). Are these licensed as MIT as well?"
] | 1,585 | 1,601 | 1,585 | COLLABORATOR | null | Hi,
this PR adds the MIT license tag for all dbmdz models 🤗 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3492/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3492/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3492",
"html_url": "https://github.com/huggingface/transformers/pull/3492",
"diff_url": "https://github.com/huggingface/transformers/pull/3492.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3492.patch",
"merged_at": 1585346786000
} |
https://api.github.com/repos/huggingface/transformers/issues/3491 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3491/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3491/comments | https://api.github.com/repos/huggingface/transformers/issues/3491/events | https://github.com/huggingface/transformers/pull/3491 | 589,373,752 | MDExOlB1bGxSZXF1ZXN0Mzk0OTM3Mjg1 | 3,491 | cased -> uncased for example cmdline consistency | {
"login": "mattolson93",
"id": 32203230,
"node_id": "MDQ6VXNlcjMyMjAzMjMw",
"avatar_url": "https://avatars.githubusercontent.com/u/32203230?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mattolson93",
"html_url": "https://github.com/mattolson93",
"followers_url": "https://api.github.com/users/mattolson93/followers",
"following_url": "https://api.github.com/users/mattolson93/following{/other_user}",
"gists_url": "https://api.github.com/users/mattolson93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mattolson93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mattolson93/subscriptions",
"organizations_url": "https://api.github.com/users/mattolson93/orgs",
"repos_url": "https://api.github.com/users/mattolson93/repos",
"events_url": "https://api.github.com/users/mattolson93/events{/privacy}",
"received_events_url": "https://api.github.com/users/mattolson93/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"In the same file, there is also `--do_lower_case` wrongly apply to `roberta-base` and `xlnet-large-cased` model. It will be great if this PR also includes that fix.",
"> In the same file, there is also `--do_lower_case` wrongly apply to `roberta-base` and `xlnet-large-cased` model. It will be great if this PR also includes that fix.\r\n\r\nThe intention is to have consistency across all the experiments, where all input is lowercase for all examples. Just because roberta and xlnet don't have lowercased models, doesn't mean their input has to be the cased version.",
"Thanks for raising this. Should be fixed by #3738\r\n\r\nif one saves their tokenizers using `save_pretrained()` there shouldn't be a need for passing the `do_lower_case` manually anymore."
] | 1,585 | 1,586 | 1,586 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3491/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3491",
"html_url": "https://github.com/huggingface/transformers/pull/3491",
"diff_url": "https://github.com/huggingface/transformers/pull/3491.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3491.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3490 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3490/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3490/comments | https://api.github.com/repos/huggingface/transformers/issues/3490/events | https://github.com/huggingface/transformers/issues/3490 | 589,360,195 | MDU6SXNzdWU1ODkzNjAxOTU= | 3,490 | which iterator to use for different hugging face transformer models for solving multiple choice questions? | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Do I need to build my own iterator for this?\r\nHow should the tokens be arranged for multiple choice question solving?\r\n\r\ne.g. (question) (token1) (token2) (MCoption)(answer1)\r\n. ----> is this the right format for all of the GPT-2, BERT and XLNet? \r\n\r\nThank you",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | NONE | null | Hello,
I have used Hugging Face GPT-2 models to do Natural Language Processing.
When I used the GPT-2 model, I used the ```BPTTIterator``` from TorchText to pre-process my text data, since GPT-2 essentially performs regular language modelling (next token prediction).
I am wondering, when I use GPT-2, BERT and XLNet for **multiple-choice solving**:
1. What type of iterator should I use for multiple-choice question solving? If no existing iterator accommodates this task, how should I pre-process my multiple-choice questions before feeding them into the different transformer models?
2. If Hugging Face transformer models (BERT, XLNet, GPT-2) use special tokens to separate questions from multiple-choice options, what are those special tokens? I want to make use of the pre-trained models rather than training a new model with new special tokens on my own.
3. Is the multiple-choice question format that, for example, BERT can process different from the format that GPT-2 can process, or can each of the transformer models handle any type of multiple-choice question? (For example, could it be that BERT can only solve fill-in-the-blank multiple-choice questions, whereas questions for GPT-2 need not follow the fill-in-the-blank format?) A sketch of one common input format is given after this list.
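A minimal sketch of the pair-per-option format I have seen used with `BertForMultipleChoice` (the question and choices below are made up for illustration; whether GPT-2 and XLNet expect the same layout is exactly what I am unsure about):

```python
# Sketch: one (question, choice) sequence per option; BERT's [CLS]/[SEP]
# special tokens are inserted automatically by encode_plus.
import torch
from transformers import BertTokenizer, BertForMultipleChoice

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMultipleChoice.from_pretrained("bert-base-uncased")

question = "The capital of France is"
choices = ["Paris", "Berlin", "Madrid"]

encoded = [tokenizer.encode_plus(question, choice, max_length=32,
                                 pad_to_max_length=True) for choice in choices]
input_ids = torch.tensor([[e["input_ids"] for e in encoded]])         # (1, n_choices, seq_len)
token_type_ids = torch.tensor([[e["token_type_ids"] for e in encoded]])

logits = model(input_ids=input_ids, token_type_ids=token_type_ids)[0]
print(logits)  # one score per choice, shape (1, n_choices)
```

Each choice is scored jointly with the question, and the highest logit picks the answer.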
Thank you, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3490/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3489 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3489/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3489/comments | https://api.github.com/repos/huggingface/transformers/issues/3489/events | https://github.com/huggingface/transformers/issues/3489 | 589,335,968 | MDU6SXNzdWU1ODkzMzU5Njg= | 3,489 | missing import in BartForConditionalGeneration example | {
"login": "cgnorthcutt",
"id": 27030270,
"node_id": "MDQ6VXNlcjI3MDMwMjcw",
"avatar_url": "https://avatars.githubusercontent.com/u/27030270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cgnorthcutt",
"html_url": "https://github.com/cgnorthcutt",
"followers_url": "https://api.github.com/users/cgnorthcutt/followers",
"following_url": "https://api.github.com/users/cgnorthcutt/following{/other_user}",
"gists_url": "https://api.github.com/users/cgnorthcutt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cgnorthcutt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cgnorthcutt/subscriptions",
"organizations_url": "https://api.github.com/users/cgnorthcutt/orgs",
"repos_url": "https://api.github.com/users/cgnorthcutt/repos",
"events_url": "https://api.github.com/users/cgnorthcutt/events{/privacy}",
"received_events_url": "https://api.github.com/users/cgnorthcutt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Please complete the issue template, I don't understand what the issue is.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | NONE | null | Add
`from transformers import AutoTokenizer, AutoModelWithLMHead`
to the BartForConditionalGeneration example in the documentation.
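For context, a sketch of the example with the import included (the `bart-large-cnn` checkpoint name is my assumption about which model the example loads):

```python
# Sketch of the documentation example with the missing import added.
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("bart-large-cnn")
model = AutoModelWithLMHead.from_pretrained("bart-large-cnn")
```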
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3489/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3489/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3488 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3488/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3488/comments | https://api.github.com/repos/huggingface/transformers/issues/3488/events | https://github.com/huggingface/transformers/pull/3488 | 589,285,805 | MDExOlB1bGxSZXF1ZXN0Mzk0ODc4ODMx | 3,488 | [bart-tiny-random] Put a 5MB model on S3 to allow faster examples test | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3488?src=pr&el=h1) Report\n> Merging [#3488](https://codecov.io/gh/huggingface/transformers/pull/3488?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/17dceae7a1de5577cd0c07a97dcd5821a08af07c&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3488?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3488 +/- ##\n==========================================\n- Coverage 77.80% 77.79% -0.01% \n==========================================\n Files 100 100 \n Lines 17051 17051 \n==========================================\n- Hits 13266 13265 -1 \n- Misses 3785 3786 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3488?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3488/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.81% <0.00%> (-0.14%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3488?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3488?src=pr&el=footer). Last update [17dceae...3e0a394](https://codecov.io/gh/huggingface/transformers/pull/3488?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"LGTM",
"Merging to unblock other testing efforts!"
] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | - Vocab size is the same to avoid requiring a new tokenizer.
- this allows examples that parametrize `model_name`, like `evaluate_cnn.py`, to run much more quickly: `summarization/bart/test_bart_examples.py` runs in 6 seconds versus 22 seconds plus download time previously.
- Would be happy to do this for more models.
- Will update `run_sum.py` if this idea is OK with people.
- this also makes local debugging easier. You can easily pull down a tiny model without making a config. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3488/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3488/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3488",
"html_url": "https://github.com/huggingface/transformers/pull/3488",
"diff_url": "https://github.com/huggingface/transformers/pull/3488.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3488.patch",
"merged_at": 1585585707000
} |
https://api.github.com/repos/huggingface/transformers/issues/3487 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3487/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3487/comments | https://api.github.com/repos/huggingface/transformers/issues/3487/events | https://github.com/huggingface/transformers/pull/3487 | 589,255,391 | MDExOlB1bGxSZXF1ZXN0Mzk0ODU0Mzk1 | 3,487 | Create model card | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"@julien-c Do you know why does it fail?"
] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3487/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3487",
"html_url": "https://github.com/huggingface/transformers/pull/3487",
"diff_url": "https://github.com/huggingface/transformers/pull/3487.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3487.patch",
"merged_at": 1585655963000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3486 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3486/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3486/comments | https://api.github.com/repos/huggingface/transformers/issues/3486/events | https://github.com/huggingface/transformers/issues/3486 | 589,255,313 | MDU6SXNzdWU1ODkyNTUzMTM= | 3,486 | setup.py succeeds, then can't import transformers | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Do you use the `zsh` shell? cause in this case I think you have to do `pip install -e \".\"`\r\n\r\nI think this has something to do with the shells. Can you compare `zsh` to pure `bash` shell?",
"Fails in bash identically.\r\nOne clue, this is also happening for `pip install tokenizers` followed by import `tokenizers`, but not for `import numpy`.",
"And with `pip install -e \".\"` it works ?",
"I fixed it by **restarting** my zsh shell. Now I can't reproduce the bug in a new zsh shell. Closing.\r\n"
] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | ```bash
pip install -e .
```
works succesfully and makes `transformers.egg-info/`
Then,
```python
import transformers
```
fails with
```
ModuleNotFoundError: No module named 'transformers'
```
env
```
- `transformers` version: 2.6.0
- Platform: Darwin-19.0.0-x86_64-i386-64bit
- Python version: 3.7.5
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3486/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3485 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3485/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3485/comments | https://api.github.com/repos/huggingface/transformers/issues/3485/events | https://github.com/huggingface/transformers/pull/3485 | 589,211,407 | MDExOlB1bGxSZXF1ZXN0Mzk0ODE4Mjk2 | 3,485 | Fix circle ci flaky fail of wmt example | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is from this right? https://circleci.com/gh/huggingface/transformers/26455?utm_campaign=workflow-failed&utm_medium=email&utm_source=notification",
"Yeah exactly - it should be fixed now. \r\nI will push a slightly cleaner version in a second :-) ",
"In general, every example test that creates folder or files should deleted them afterwards to avoid same file naming collisions. Will open a PR about this. @julien-c @sshleifer @LysandreJik "
] | 1,585 | 1,585 | 1,585 | MEMBER | null | Weird bug which might be fixed by forcing bleu scorer.
Test pass locally, but seem to fail on circle ci. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3485/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3485/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3485",
"html_url": "https://github.com/huggingface/transformers/pull/3485",
"diff_url": "https://github.com/huggingface/transformers/pull/3485.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3485.patch",
"merged_at": 1585328489000
} |
https://api.github.com/repos/huggingface/transformers/issues/3484 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3484/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3484/comments | https://api.github.com/repos/huggingface/transformers/issues/3484/events | https://github.com/huggingface/transformers/issues/3484 | 589,206,684 | MDU6SXNzdWU1ODkyMDY2ODQ= | 3,484 | Sphinx build for documentation fails when tensorflow is installed | {
"login": "venkatasg",
"id": 22871413,
"node_id": "MDQ6VXNlcjIyODcxNDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/22871413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/venkatasg",
"html_url": "https://github.com/venkatasg",
"followers_url": "https://api.github.com/users/venkatasg/followers",
"following_url": "https://api.github.com/users/venkatasg/following{/other_user}",
"gists_url": "https://api.github.com/users/venkatasg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/venkatasg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/venkatasg/subscriptions",
"organizations_url": "https://api.github.com/users/venkatasg/orgs",
"repos_url": "https://api.github.com/users/venkatasg/repos",
"events_url": "https://api.github.com/users/venkatasg/events{/privacy}",
"received_events_url": "https://api.github.com/users/venkatasg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This seems to be similar to [Issue #3466](https://github.com/huggingface/transformers/issues/3466)",
"Hi, this issue was solved with https://github.com/huggingface/transformers/commit/e2c05f06ef58ea77103d2c64492dd8d9a0b21c3f\r\n\r\nCould you try to pull the repository once again and try then?",
"Yes, that seems to have fixed it! I still get a bunch of warnings which are also related to indentation I believe? I've posted the full log of my build below with all the other warnings, but I'll close this issue for now:\r\n\r\n```\r\nRunning Sphinx v2.4.4\r\nmaking output directory... done\r\nbuilding [mo]: targets for 0 po files that are out of date\r\nbuilding [html]: targets for 38 source files that are out of date\r\nupdating environment: [new config] 38 added, 0 changed, 0 removed\r\n/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document\r\n warn(\"Container node skipped: type={0}\".format(mdnode.t))\r\n/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document\r\n warn(\"Container node skipped: type={0}\".format(mdnode.t))\r\n/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document\r\n warn(\"Container node skipped: type={0}\".format(mdnode.t))\r\nreading sources... [100%] usage\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_utils.py:docstring of transformers.PreTrainedModel.from_pretrained:23: WARNING: Unexpected indentation.\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_tf_utils.py:docstring of transformers.TFPreTrainedModel.from_pretrained:20: WARNING: Definition list ends without a blank line; unexpected unindent.\r\nWARNING: error while formatting arguments for transformers.pipeline: 'function' object has no attribute '__mro__'\r\n/Users/venkat/Downloads/transformers/src/transformers/pipelines.py:docstring of transformers.Pipeline:6: WARNING: Unexpected indentation.\r\n/Users/venkat/Downloads/transformers/src/transformers/data/processors/utils.py:docstring of transformers.data.processors.utils.DataProcessor.get_dev_examples:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.\r\n/Users/venkat/Downloads/transformers/src/transformers/data/processors/utils.py:docstring of transformers.data.processors.utils.DataProcessor.get_example_from_tensor_dict:3: WARNING: Unexpected indentation.\r\n/Users/venkat/Downloads/transformers/src/transformers/data/processors/utils.py:docstring of transformers.data.processors.utils.DataProcessor.get_train_examples:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.\r\n/Users/venkat/Downloads/transformers/src/transformers/tokenization_utils.py:docstring of transformers.PreTrainedTokenizer.batch_encode_plus:32: WARNING: Bullet list ends without a blank line; unexpected unindent.\r\n/Users/venkat/Downloads/transformers/src/transformers/tokenization_utils.py:docstring of transformers.PreTrainedTokenizer.build_inputs_with_special_tokens:4: WARNING: Unexpected indentation.\r\n/Users/venkat/Downloads/transformers/src/transformers/tokenization_utils.py:docstring of transformers.PreTrainedTokenizer.encode:37: WARNING: Bullet list ends without a blank line; unexpected unindent.\r\n/Users/venkat/Downloads/transformers/src/transformers/tokenization_utils.py:docstring of transformers.PreTrainedTokenizer.encode_plus:36: WARNING: Bullet list ends without a blank line; unexpected unindent.\r\n/Users/venkat/Downloads/transformers/src/transformers/tokenization_utils.py:docstring of transformers.PreTrainedTokenizer.prepare_for_model:17: WARNING: Unexpected 
indentation.\r\n/Users/venkat/Downloads/transformers/src/transformers/tokenization_utils.py:docstring of transformers.PreTrainedTokenizer.prepare_for_model:18: WARNING: Block quote ends without a blank line; unexpected unindent.\r\n/Users/venkat/Downloads/transformers/src/transformers/tokenization_utils.py:docstring of transformers.PreTrainedTokenizer.tokenize:9: WARNING: Unexpected indentation.\r\n/Users/venkat/Downloads/transformers/src/transformers/tokenization_utils.py:docstring of transformers.PreTrainedTokenizer.tokenize:10: WARNING: Block quote ends without a blank line; unexpected unindent.\r\n/Users/venkat/Downloads/transformers/src/transformers/tokenization_utils.py:docstring of transformers.PreTrainedTokenizer.tokenize:10: WARNING: Inline strong start-string without end-string.\r\n/Users/venkat/Downloads/transformers/src/transformers/tokenization_utils.py:docstring of transformers.PreTrainedTokenizer.truncate_sequences:3: WARNING: Unexpected indentation.\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_tf_albert.py:docstring of transformers.TFAlbertModel.call:47: WARNING: Unexpected indentation.\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_tf_albert.py:docstring of transformers.TFAlbertModel.call:48: WARNING: Block quote ends without a blank line; unexpected unindent.\r\n/Users/venkat/Downloads/transformers/src/transformers/configuration_auto.py:docstring of transformers.AutoConfig.from_pretrained:7: WARNING: Unexpected indentation.\r\n/Users/venkat/Downloads/transformers/src/transformers/tokenization_auto.py:docstring of transformers.AutoTokenizer:12: WARNING: Unexpected indentation.\r\n/Users/venkat/Downloads/transformers/src/transformers/tokenization_auto.py:docstring of transformers.AutoTokenizer.from_pretrained:6: WARNING: Unexpected indentation.\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_auto.py:docstring of transformers.AutoModel.from_pretrained:10: WARNING: Unexpected indentation.\r\n/Users/venkat/Downloads/transformers/docs/source/model_doc/auto.rst:29: WARNING: Title underline too short.\r\n\r\n``AutoModelForPreTraining``\r\n~~~~~~~~~~~~~~~~~~~~~\r\n/Users/venkat/Downloads/transformers/docs/source/model_doc/auto.rst:29: WARNING: Title underline too short.\r\n\r\n``AutoModelForPreTraining``\r\n~~~~~~~~~~~~~~~~~~~~~\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_auto.py:docstring of transformers.AutoModelForPreTraining.from_pretrained:9: WARNING: Unexpected indentation.\r\n/Users/venkat/Downloads/transformers/docs/source/model_doc/auto.rst:35: WARNING: Title underline too short.\r\n\r\n``AutoModelWithLMHead``\r\n~~~~~~~~~~~~~~~~~~~~~\r\n/Users/venkat/Downloads/transformers/docs/source/model_doc/auto.rst:35: WARNING: Title underline too short.\r\n\r\n``AutoModelWithLMHead``\r\n~~~~~~~~~~~~~~~~~~~~~\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_auto.py:docstring of transformers.AutoModelWithLMHead.from_pretrained:10: WARNING: Unexpected indentation.\r\n/Users/venkat/Downloads/transformers/docs/source/model_doc/auto.rst:41: WARNING: Title underline too short.\r\n\r\n``AutoModelForSequenceClassification``\r\n~~~~~~~~~~~~~~~~~~~~~\r\n/Users/venkat/Downloads/transformers/docs/source/model_doc/auto.rst:41: WARNING: Title underline too short.\r\n\r\n``AutoModelForSequenceClassification``\r\n~~~~~~~~~~~~~~~~~~~~~\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_auto.py:docstring of transformers.AutoModelForSequenceClassification.from_pretrained:10: WARNING: 
Unexpected indentation.\r\n/Users/venkat/Downloads/transformers/docs/source/model_doc/auto.rst:47: WARNING: Title underline too short.\r\n\r\n``AutoModelForQuestionAnswering``\r\n~~~~~~~~~~~~~~~~~~~~~\r\n/Users/venkat/Downloads/transformers/docs/source/model_doc/auto.rst:47: WARNING: Title underline too short.\r\n\r\n``AutoModelForQuestionAnswering``\r\n~~~~~~~~~~~~~~~~~~~~~\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_auto.py:docstring of transformers.AutoModelForQuestionAnswering.from_pretrained:10: WARNING: Unexpected indentation.\r\n/Users/venkat/Downloads/transformers/docs/source/model_doc/auto.rst:53: WARNING: Title underline too short.\r\n\r\n``AutoModelForTokenClassification``\r\n~~~~~~~~~~~~~~~~~~~~~\r\n/Users/venkat/Downloads/transformers/docs/source/model_doc/auto.rst:53: WARNING: Title underline too short.\r\n\r\n``AutoModelForTokenClassification``\r\n~~~~~~~~~~~~~~~~~~~~~\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_auto.py:docstring of transformers.AutoModelForTokenClassification.from_pretrained:10: WARNING: Unexpected indentation.\r\n/Users/venkat/Downloads/transformers/docs/source/model_doc/t5.rst:7: WARNING: Title underline too short.\r\n\r\nOverview\r\n~~~~~\r\n/Users/venkat/Downloads/transformers/src/transformers/tokenization_t5.py:docstring of transformers.T5Tokenizer.build_inputs_with_special_tokens:4: WARNING: Unexpected indentation.\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_t5.py:docstring of transformers.T5Model:9: WARNING: Duplicate explicit target name: \"exploring the limits of transfer learning with a unified text-to-text transformer\".\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_t5.py:docstring of transformers.T5Model.forward:54: WARNING: Unexpected indentation.\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_t5.py:docstring of transformers.T5Model.forward:55: WARNING: Block quote ends without a blank line; unexpected unindent.\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_t5.py:docstring of transformers.T5ForConditionalGeneration:9: WARNING: Duplicate explicit target name: \"exploring the limits of transfer learning with a unified text-to-text transformer\".\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_t5.py:docstring of transformers.T5ForConditionalGeneration.forward:58: WARNING: Unexpected indentation.\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_t5.py:docstring of transformers.T5ForConditionalGeneration.forward:59: WARNING: Block quote ends without a blank line; unexpected unindent.\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_tf_t5.py:docstring of transformers.TFT5Model:9: WARNING: Duplicate explicit target name: \"exploring the limits of transfer learning with a unified text-to-text transformer\".\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_tf_t5.py:docstring of transformers.TFT5Model:25: WARNING: Inline interpreted text or phrase reference start-string without end-string.\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_tf_t5.py:docstring of transformers.TFT5Model.call:56: WARNING: Unexpected indentation.\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_tf_t5.py:docstring of transformers.TFT5Model.call:57: WARNING: Block quote ends without a blank line; unexpected unindent.\r\n/Users/venkat/Downloads/transformers/docs/source/model_doc/t5.rst:60: WARNING: Title underline too 
short.\r\n\r\nTFT5ForConditionalGeneration\r\n~~~~~~~~~~~~~~~~~~~~~~~~~~\r\n/Users/venkat/Downloads/transformers/docs/source/model_doc/t5.rst:60: WARNING: Title underline too short.\r\n\r\nTFT5ForConditionalGeneration\r\n~~~~~~~~~~~~~~~~~~~~~~~~~~\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_tf_t5.py:docstring of transformers.TFT5ForConditionalGeneration:9: WARNING: Duplicate explicit target name: \"exploring the limits of transfer learning with a unified text-to-text transformer\".\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_tf_t5.py:docstring of transformers.TFT5ForConditionalGeneration:25: WARNING: Inline interpreted text or phrase reference start-string without end-string.\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_tf_t5.py:docstring of transformers.TFT5ForConditionalGeneration.call:60: WARNING: Unexpected indentation.\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_tf_t5.py:docstring of transformers.TFT5ForConditionalGeneration.call:61: WARNING: Block quote ends without a blank line; unexpected unindent.\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_t5.py:docstring of transformers.T5Model:1: WARNING: Duplicate target name, cannot be used as a unique reference: \"exploring the limits of transfer learning with a unified text-to-text transformer\".\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_t5.py:docstring of transformers.T5ForConditionalGeneration:1: WARNING: Duplicate target name, cannot be used as a unique reference: \"exploring the limits of transfer learning with a unified text-to-text transformer\".\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_tf_t5.py:docstring of transformers.TFT5Model:1: WARNING: Duplicate target name, cannot be used as a unique reference: \"exploring the limits of transfer learning with a unified text-to-text transformer\".\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_tf_t5.py:docstring of transformers.TFT5ForConditionalGeneration:1: WARNING: Duplicate target name, cannot be used as a unique reference: \"exploring the limits of transfer learning with a unified text-to-text transformer\".\r\n/Users/venkat/Downloads/transformers/src/transformers/configuration_xlnet.py:docstring of transformers.XLNetConfig:52: WARNING: Unexpected indentation.\r\n/Users/venkat/Downloads/transformers/src/transformers/modeling_xlnet.py:docstring of transformers.XLNetForMultipleChoice.forward:65: WARNING: Inline literal start-string without end-string.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:25: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:29: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:36: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:40: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:44: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:48: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:52: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:56: WARNING: Line block ends without 
a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:60: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:64: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:68: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:73: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:78: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:82: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:86: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:90: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:94: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:152: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:156: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:160: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:164: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:168: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:172: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:176: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:180: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:184: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:188: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:192: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:196: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:200: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:207: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:211: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:215: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:219: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:223: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:227: WARNING: Line block ends without a blank 
line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:231: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:235: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:239: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:264: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:268: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:272: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:276: WARNING: Line block ends without a blank line.\r\n/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:280: WARNING: Line block ends without a blank line.\r\nlooking for now-outdated files... none found\r\npickling environment... done\r\nchecking consistency... done\r\npreparing documents... done\r\nwriting output... [100%] usage\r\n/Users/venkat/Downloads/transformers/docs/source/examples.md:440: WARNING: None:any reference target not found:\r\ngenerating indices... genindexdone\r\nhighlighting module code... [100%] transformers.tokenization_xlnet\r\nwriting additional pages... search/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx_rtd_theme/search.html:21: RemovedInSphinx30Warning: To modify script_files in the theme is deprecated. Please insert a <script> tag directly in your theme instead.\r\n {% endblock %}\r\ndone\r\ncopying images... [100%] imgs/warmup_linear_schedule.png\r\ncopying static files... ... done\r\ncopying extra files... done\r\ndumping search index in English (code: en)... done\r\ndumping object inventory... done\r\nbuild succeeded, 107 warnings.\r\n```",
"Yes there are a few warnings, but they're inoffensive. We're working towards removing most of them :)"
] | 1,585 | 1,585 | 1,585 | NONE | null | ## Information
I am trying to build the documentation in `docs/` using `sphinx-build` so that I can add it to [Dash](https://kapeli.com/dash) as one of the docsets. However, I get an assertion error with sphinx-build when I run `make html` in the `docs/` folder *only when tensorflow is installed*. The build works without tensorflow installed, but the tensorflow methods and classes are emtpy in the generated documentation - only the pytorch ones have documentation in the resulting HTML.
## To reproduce
Steps to reproduce the behavior:
1. Create a conda environment, and install pytorch, tensorflow and transformers using the official methods.
2. Run `pip install -e ".[docs]"` in the source directory, then run `make html` in the `docs` folder. You will get the following error (full stacktrace given later):
```
Exception occurred:
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/util/docfields.py", line 260, in transform
assert len(field) == 2
AssertionError
```
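For reference, the steps above condensed into a shell sketch (environment name and install commands shown here are approximate):

```bash
# Approximate reproduction of the steps above.
conda create -n tf-pt python=3.6
conda activate tf-pt
pip install torch tensorflow
pip install -e ".[docs]"
cd docs && make html   # fails with the AssertionError above when tensorflow is installed
```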
Full stacktrace:
```
# Sphinx version: 2.4.4
# Python version: 3.6.10 (CPython)
# Docutils version: 0.16 release
# Jinja2 version: 2.11.1
# Last messages:
# reading sources... [ 13%] glossary
#
# reading sources... [ 16%] index
#
# reading sources... [ 18%] installation
#
# reading sources... [ 21%] main_classes/configuration
#
# reading sources... [ 24%] main_classes/model
#
# Loaded extensions:
# sphinx.ext.mathjax (2.4.4) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/ext/mathjax.py
# sphinxcontrib.applehelp (1.0.2) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinxcontrib/applehelp/__init__.py
# sphinxcontrib.devhelp (1.0.2) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinxcontrib/devhelp/__init__.py
# sphinxcontrib.htmlhelp (1.0.3) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinxcontrib/htmlhelp/__init__.py
# sphinxcontrib.serializinghtml (1.1.4) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinxcontrib/serializinghtml/__init__.py
# sphinxcontrib.qthelp (1.0.3) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinxcontrib/qthelp/__init__.py
# alabaster (0.7.12) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/alabaster/__init__.py
# sphinx.ext.autodoc.type_comment (2.4.4) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/ext/autodoc/type_comment.py
# sphinx.ext.autodoc (2.4.4) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/ext/autodoc/__init__.py
# sphinx.ext.coverage (2.4.4) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/ext/coverage.py
# sphinx.ext.napoleon (2.4.4) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/ext/napoleon/__init__.py
# recommonmark (0.6.0) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/recommonmark/__init__.py
# sphinx.ext.viewcode (2.4.4) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/ext/viewcode.py
# sphinx_markdown_tables (<module 'sphinx_markdown_tables.__version__' from '/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx_markdown_tables/__version__.py'>) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx_markdown_tables/__init__.py
Traceback (most recent call last):
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/cmd/build.py", line 276, in build_main
app.build(args.force_all, filenames)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/application.py", line 349, in build
self.builder.build_update()
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 299, in build_update
len(to_build))
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 311, in build
updated_docnames = set(self.read())
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 418, in read
self._read_serial(docnames)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 439, in _read_serial
self.read_doc(docname)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 479, in read_doc
doctree = read_doc(self.app, self.env, self.env.doc2path(docname))
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/io.py", line 316, in read_doc
pub.publish()
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/core.py", line 218, in publish
self.settings)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/io.py", line 130, in read
self.parse()
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/readers/__init__.py", line 77, in parse
self.parser.parse(self.input, document)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/parsers.py", line 93, in parse
self.statemachine.run(inputlines, document, inliner=self.inliner)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 171, in run
input_source=document['source'])
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/statemachine.py", line 242, in run
context, state, transitions)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/statemachine.py", line 459, in check_line
return method(match, context, next_state)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2769, in underline
self.section(title, source, style, lineno - 1, messages)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 327, in section
self.new_subsection(title, lineno, messages)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 395, in new_subsection
node=section_node, match_titles=True)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 282, in nested_parse
node=node, match_titles=match_titles)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 196, in run
results = StateMachineWS.run(self, input_lines, input_offset)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/statemachine.py", line 242, in run
context, state, transitions)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/statemachine.py", line 459, in check_line
return method(match, context, next_state)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2769, in underline
self.section(title, source, style, lineno - 1, messages)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 327, in section
self.new_subsection(title, lineno, messages)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 395, in new_subsection
node=section_node, match_titles=True)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 282, in nested_parse
node=node, match_titles=match_titles)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 196, in run
results = StateMachineWS.run(self, input_lines, input_offset)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/statemachine.py", line 242, in run
context, state, transitions)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/statemachine.py", line 459, in check_line
return method(match, context, next_state)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2342, in explicit_markup
nodelist, blank_finish = self.explicit_construct(match)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2354, in explicit_construct
return method(self, expmatch)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2097, in directive
directive_class, match, type_name, option_presets)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2146, in run_directive
result = directive_instance.run()
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/ext/autodoc/directive.py", line 157, in run
result = parse_generated_content(self.state, params.result, documenter)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/ext/autodoc/directive.py", line 104, in parse_generated_content
state.nested_parse(content, 0, node)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 282, in nested_parse
node=node, match_titles=match_titles)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 196, in run
results = StateMachineWS.run(self, input_lines, input_offset)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/statemachine.py", line 242, in run
context, state, transitions)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/statemachine.py", line 459, in check_line
return method(match, context, next_state)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2342, in explicit_markup
nodelist, blank_finish = self.explicit_construct(match)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2354, in explicit_construct
return method(self, expmatch)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2097, in directive
directive_class, match, type_name, option_presets)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2146, in run_directive
result = directive_instance.run()
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/domains/__init__.py", line 265, in run
return super().run()
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/directives/__init__.py", line 195, in run
self.state.nested_parse(self.content, self.content_offset, contentnode)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 282, in nested_parse
node=node, match_titles=match_titles)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 196, in run
results = StateMachineWS.run(self, input_lines, input_offset)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/statemachine.py", line 242, in run
context, state, transitions)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/statemachine.py", line 459, in check_line
return method(match, context, next_state)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2344, in explicit_markup
self.explicit_list(blank_finish)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2374, in explicit_list
match_titles=self.state_machine.match_titles)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 319, in nested_list_parse
node=node, match_titles=match_titles)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 196, in run
results = StateMachineWS.run(self, input_lines, input_offset)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/statemachine.py", line 242, in run
context, state, transitions)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/statemachine.py", line 459, in check_line
return method(match, context, next_state)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2647, in explicit_markup
nodelist, blank_finish = self.explicit_construct(match)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2354, in explicit_construct
return method(self, expmatch)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2097, in directive
directive_class, match, type_name, option_presets)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2146, in run_directive
result = directive_instance.run()
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/domains/__init__.py", line 265, in run
return super().run()
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/directives/__init__.py", line 198, in run
DocFieldTransformer(self).transform_all(contentnode)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/util/docfields.py", line 248, in transform_all
self.transform(child)
File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/util/docfields.py", line 260, in transform
assert len(field) == 2
AssertionError
```
3. Uninstall tensorflow. Now when running `make html`, it does finish building, albeit with a bunch of warnings of the following form: `AttributeError: module 'transformers' has no attribute 'TFCamembertForMaskedLM'` -- for every `TF*` method.
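For reference, a quick way to confirm why those warnings appear (a minimal sketch; it assumes the same environment as above, i.e. TensorFlow uninstalled):

```python
import transformers

# transformers only exports its TF* classes when TensorFlow is importable,
# so after uninstalling it the attribute genuinely does not exist.
print(transformers.is_tf_available())                   # False
print(hasattr(transformers, "TFCamembertForMaskedLM"))  # False
```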
## Expected behavior
`make html` should build with no errors.
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.6.0
- Platform: macOS 10.15.4
- Python version: 3.6.10
- PyTorch version (GPU?): 1.4.0 (No)
- Tensorflow version (GPU?): 2.1.0 (No)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
- sphinx version: 2.4.4
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3484/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3483 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3483/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3483/comments | https://api.github.com/repos/huggingface/transformers/issues/3483/events | https://github.com/huggingface/transformers/issues/3483 | 589,196,576 | MDU6SXNzdWU1ODkxOTY1NzY= | 3,483 | Tests for more examples | {
"login": "srush",
"id": 35882,
"node_id": "MDQ6VXNlcjM1ODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srush",
"html_url": "https://github.com/srush",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.github.com/users/srush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srush/subscriptions",
"organizations_url": "https://api.github.com/users/srush/orgs",
"repos_url": "https://api.github.com/users/srush/repos",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"received_events_url": "https://api.github.com/users/srush/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "srush",
"id": 35882,
"node_id": "MDQ6VXNlcjM1ODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srush",
"html_url": "https://github.com/srush",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.github.com/users/srush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srush/subscriptions",
"organizations_url": "https://api.github.com/users/srush/orgs",
"repos_url": "https://api.github.com/users/srush/repos",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"received_events_url": "https://api.github.com/users/srush/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "srush",
"id": 35882,
"node_id": "MDQ6VXNlcjM1ODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srush",
"html_url": "https://github.com/srush",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.github.com/users/srush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srush/subscriptions",
"organizations_url": "https://api.github.com/users/srush/orgs",
"repos_url": "https://api.github.com/users/srush/repos",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"received_events_url": "https://api.github.com/users/srush/received_events",
"type": "User",
"site_admin": false
}
] | [
"I agree. Tried for an hour last night to add coverage for `run_bart_sum.py` and got stuck on two things.\r\n\r\n1) A `circleci` job that installs `pytorch_lightning` (and potentially other dependencies)\r\n2) the ability to import `examples/transformer_base.py`\r\n\r\n### Suggested Approach:\r\n- add the aforementioned circleci job\r\n- a flag like `@require_lightning` to decorate some tests\r\n- some code changes to get the imports working sanely. \r\n- Checklist/Instructions for how new examples/ contributors can add test coverage.\r\n\r\nHappy to help with this!\r\nCC: @LysandreJik , @julien-c, @thomwolf @patrickvonplaten ",
"Cool. Most of these seem easy. Sharing code between examples is a bit harder. I am not sure if we should have and examples package, or symlink that shared file somehow. Currently we add it to the path before running the code. ",
"With the Pytorch-Lightning examples, I've been running into an [issue](https://github.com/huggingface/transformers/pull/3437) loading trained models with `--do_predict` (when not also using `--do_train`), so it would be helpful to add some model-loading test as well :) ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | CONTRIBUTOR | null |
# Add tests for more of the examples/
We need to have more testing code for examples. This is particularly true with NER which recently had a tokenizer issue.
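As a rough illustration of what such coverage could look like, here is a hedged sketch of a smoke test — the class name and the `--help` trick are placeholders, not a committed design:

```python
import sys
import unittest
from unittest.mock import patch


class ExamplesSmokeTest(unittest.TestCase):
    def test_run_ner_parses_args(self):
        # Assumes examples/ is on sys.path; argparse's --help exits via
        # SystemExit, which at least proves the script imports cleanly
        # and wires up its CLI.
        import run_ner

        with patch.object(sys, "argv", ["run_ner.py", "--help"]):
            with self.assertRaises(SystemExit):
                run_ner.main()
```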
(Self-assigning) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3483/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3483/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3482 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3482/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3482/comments | https://api.github.com/repos/huggingface/transformers/issues/3482/events | https://github.com/huggingface/transformers/pull/3482 | 589,189,395 | MDExOlB1bGxSZXF1ZXN0Mzk0ODAwMDE4 | 3,482 | Correct output shape for Bert NSP models in docs | {
"login": "Genius1237",
"id": 15867363,
"node_id": "MDQ6VXNlcjE1ODY3MzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/15867363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Genius1237",
"html_url": "https://github.com/Genius1237",
"followers_url": "https://api.github.com/users/Genius1237/followers",
"following_url": "https://api.github.com/users/Genius1237/following{/other_user}",
"gists_url": "https://api.github.com/users/Genius1237/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Genius1237/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Genius1237/subscriptions",
"organizations_url": "https://api.github.com/users/Genius1237/orgs",
"repos_url": "https://api.github.com/users/Genius1237/repos",
"events_url": "https://api.github.com/users/Genius1237/events{/privacy}",
"received_events_url": "https://api.github.com/users/Genius1237/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | In the docs, for Bert models that have the NSP head, the output shape for one of the params returned by the forward method, `seq_relationship_scores`, is incorrect. Fixing it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3482/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3482/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3482",
"html_url": "https://github.com/huggingface/transformers/pull/3482",
"diff_url": "https://github.com/huggingface/transformers/pull/3482.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3482.patch",
"merged_at": 1585767879000
} |
https://api.github.com/repos/huggingface/transformers/issues/3481 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3481/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3481/comments | https://api.github.com/repos/huggingface/transformers/issues/3481/events | https://github.com/huggingface/transformers/issues/3481 | 589,188,022 | MDU6SXNzdWU1ODkxODgwMjI= | 3,481 | Do I need to pad non-fixed examples or does run_language_modeling.py already takes care of that? | {
"login": "timsoraro",
"id": 61194445,
"node_id": "MDQ6VXNlcjYxMTk0NDQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/61194445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timsoraro",
"html_url": "https://github.com/timsoraro",
"followers_url": "https://api.github.com/users/timsoraro/followers",
"following_url": "https://api.github.com/users/timsoraro/following{/other_user}",
"gists_url": "https://api.github.com/users/timsoraro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timsoraro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timsoraro/subscriptions",
"organizations_url": "https://api.github.com/users/timsoraro/orgs",
"repos_url": "https://api.github.com/users/timsoraro/repos",
"events_url": "https://api.github.com/users/timsoraro/events{/privacy}",
"received_events_url": "https://api.github.com/users/timsoraro/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I believe that the `collate` function should take care of it. However, you will need to create the `attention_mask` variable when you have inputs of variable length, so that the model does not attend to the padded indices.",
"@Genius1237 I'm a real noob (I come from a programming background and no ML), but I'm using the GPT2 model, not a masked-language modeling like BERT. Do I still need to do such a thing? I didn't see any `attention_mask` reference in TextDataset loaders in the example file.\r\n\r\nAlso, how does the `collate` function knows to pad the examples to `bucket_size` if no such variable is passed to it?",
"The `TextDataset` class takes text and converts it into blocks of size `block_size` (512), concatenating consecutive blocks if needed. My guess is that `LineByLineTextDataset` exists to cater to those who would like to have examples being limited to one sentence each, and thus the max size in one batch would be determined by the longest sentence in that batch.\r\n\r\n`attention_mask` is definitely needed when you have sequences of different length. Have a look at Have a look at https://github.com/huggingface/transformers/issues/1408 . Something like this should do on top of the existing version of `LineByLineTextDataset`.\r\n```\r\ndef collate(examples: List[torch.Tensor]):\r\n padding_value = 0 if tokenizer._pad_token is None else tokenizer.pad_token_id\r\n input_ids = pad_sequence(examples, batch_first=True, padding_value=tokenizer.pad_token_id)\r\n\r\n max_length = input_ids.shape[1]\r\n attention_mask = torch.stack([torch.cat([torch.ones(len(t), dtype=torch.long), torch.zeros(max_length - len(t), dtype=torch.long)]) for t in examples])\r\n\r\n return input_ids, attention_mask\r\n```",
"Gotcha, thank you so much @Genius1237 ! I hope it would work.\r\nDo I need to unpack `attention_mask` somehow or the collate function in the DataLoader will take care of that?",
"And in your code you probably meant to wrote this instead, right?\r\n\r\n```python\r\ninput_ids = pad_sequence(examples, batch_first=True, padding_value=padding_value)\r\n```\r\n\r\nAlso, the same code should be pasted in ```def evaluate():``` right?",
"Same code in evaluate. The collate function is called by the dataloader. The dataloader calls the __getitem__ on the dataset `batch_size` times and sends that output to the collate function. The output of the collate function is what you will get when you do `for batch in train_dataloader`. The batch in this case will be a 2 tuple, with `batch[0]` having the input_ids and `batch[1]` having the attention_mask.",
"I really appreciate your help, thanks a lot. I'll ping the maintainers to change the example code so others can benefit. @thomwolf @LysandreJik @patrickvonplaten Thanks!",
"Hey, @Genius1237, I'm getting this error using your code:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"run_language_modeling.py\", line 974, in <module>\r\n main()\r\n File \"run_language_modeling.py\", line 924, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer)\r\n File \"run_language_modeling.py\", line 508, in train\r\n inputs = inputs.to(args.device)\r\nAttributeError: 'tuple' object has no attribute 'to'\r\n```\r\nUsing the previous code I'm not getting any error.",
"When you do `for batch in train_dataloader`, batch is basically whatever is returned by the `collate` function, which is this cause becomes a 2-tuple. In your case `input_ids` is a 2-tuple containing 2 tensors. You'll have to split that into 2, i.e `input_ids, attention_mask = inputs`, and move forward with that, pushing both those tensors to the required device (`input_ids = input_ids.to(args.device); attention_mask = attention_mask.to(args.device)`).",
"Can you please help me? I'm really clueless...\r\n\r\nI tried:\r\n\r\n```python\r\nfor _ in train_iterator:\r\n epoch_iterator = tqdm(train_dataloader, desc=\"Iteration\", disable=args.local_rank not in [-1, 0])\r\n for step, batch in enumerate(epoch_iterator):\r\n\r\n # Skip past any already trained steps if resuming training\r\n if steps_trained_in_current_epoch > 0:\r\n steps_trained_in_current_epoch -= 1\r\n continue\r\n\r\n inputs, labels = mask_tokens(batch, tokenizer, args) if args.mlm else (batch, batch)\r\n input_ids, attention_mask = inputs\r\n inputs = input_ids.to(args.device)\r\n labels = labels.to(args.device)\r\n attention_mask = attention_mask.to(args.device)\r\n model.train()\r\n outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)\r\n loss = outputs[0] # model outputs are always tuple in transformers (see doc)\r\n```\r\n\r\nBut I get this error:\r\n```\r\nTraceback (most recent call last): \r\n File \"run_language_modeling.py\", line 976, in <module>\r\n main()\r\n File \"run_language_modeling.py\", line 926, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer)\r\n File \"run_language_modeling.py\", line 510, in train\r\n labels = labels.to(args.device)\r\nAttributeError: 'tuple' object has no attribute 'to'\r\n```\r\n\r\nAlso, the rest of the code doesn't seem to use `attention_mask` variable, wouldn't it be redundant?",
"```\r\nfor _ in train_iterator:\r\n epoch_iterator = tqdm(train_dataloader, desc=\"Iteration\", disable=args.local_rank not in [-1, 0])\r\n for step, batch in enumerate(epoch_iterator):\r\n\r\n # Skip past any already trained steps if resuming training\r\n if steps_trained_in_current_epoch > 0:\r\n steps_trained_in_current_epoch -= 1\r\n continue\r\n\r\n input_ids, attention_mask = batch\r\n inputs, labels = mask_tokens(input_ids, tokenizer, args) if args.mlm else (input_ids, input_ids)\r\n inputs = inputs.to(args.device)\r\n labels = labels.to(args.device)\r\n attention_mask = attention_mask.to(args.device)\r\n model.train()\r\n outputs = model(inputs, masked_lm_labels=labels, attention_mask=attention_mask) if args.mlm else model(inputs, labels=labels, attention_mask=attention_mask)\r\n loss = outputs[0] # model outputs are always tuple in transformers (see doc)\r\n```",
"Thank you, it works!\r\n\r\nBtw, I don't see a difference in output between training using the attention mask and the original code. Does it mean something?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
" In training a GPT-2 i have the same question too. I don't see any significant change between training using the attention mask or without. Did u have any answer to this @timsoraro ?\r\nAlso my perplexity score i can say is too low, any idea for this behaviour?\r\n\r\n",
"@niklaras I didn't see much difference either after many experiments with or without, I got the same quality of generation."
] | 1,585 | 1,593 | 1,591 | NONE | null | I use a custom TextDataset in [examples/run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py). I can't figure out whether I need to pad the examples to `bucket_size` myself, or whether that's already taken care of in [L221-L223](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py#L221-L223)?
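(For context, a minimal sketch of what per-batch padding via a DataLoader `collate_fn` looks like — this is an illustration, not the script's actual code, and the pad id `0` is a placeholder for `tokenizer.pad_token_id`:)

```python
import torch
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader

def collate(examples):
    # Pad each batch to the length of its longest example.
    return pad_sequence(examples, batch_first=True, padding_value=0)

dataset = [torch.tensor([5, 6, 7]), torch.tensor([8, 9])]
loader = DataLoader(dataset, batch_size=2, collate_fn=collate)
print(next(iter(loader)))  # tensor([[5, 6, 7], [8, 9, 0]])
```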
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3481/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3481/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3480 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3480/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3480/comments | https://api.github.com/repos/huggingface/transformers/issues/3480/events | https://github.com/huggingface/transformers/pull/3480 | 589,175,966 | MDExOlB1bGxSZXF1ZXN0Mzk0Nzg4ODAx | 3,480 | Add option to choose T5 model size. | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | MEMBER | null | I believe the error mentioned in #3469 is due to the example tests loading T5-large in memory. One of the workers loads that model, which fills up the machine's memory, and the other workers crash with an OOM error.
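(As a rough illustration of the memory gap — the checkpoint names are the standard T5 ones and the parameter counts are approximate:)

```python
from transformers import T5Model

# t5-small has ~60M parameters vs ~770M for t5-large, so several CI
# workers each loading the large checkpoint can exhaust host memory.
model = T5Model.from_pretrained("t5-small")
```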
This PR gives the option to choose the T5 model size, and changes the tests to only use the small model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3480/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3480/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3480",
"html_url": "https://github.com/huggingface/transformers/pull/3480",
"diff_url": "https://github.com/huggingface/transformers/pull/3480.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3480.patch",
"merged_at": 1585321020000
} |
https://api.github.com/repos/huggingface/transformers/issues/3479 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3479/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3479/comments | https://api.github.com/repos/huggingface/transformers/issues/3479/events | https://github.com/huggingface/transformers/pull/3479 | 589,152,027 | MDExOlB1bGxSZXF1ZXN0Mzk0NzY5MDMw | 3,479 | Add Colab with the evaluation procedure | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,594 | 1,594 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3479/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3479/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3479",
"html_url": "https://github.com/huggingface/transformers/pull/3479",
"diff_url": "https://github.com/huggingface/transformers/pull/3479.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3479.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3478 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3478/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3478/comments | https://api.github.com/repos/huggingface/transformers/issues/3478/events | https://github.com/huggingface/transformers/pull/3478 | 589,088,231 | MDExOlB1bGxSZXF1ZXN0Mzk0NzE2NDI5 | 3,478 | add summarization and translation to notebook | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | MEMBER | null | Add summarization and translation to pipeline notebook | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3478/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3478/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3478",
"html_url": "https://github.com/huggingface/transformers/pull/3478",
"diff_url": "https://github.com/huggingface/transformers/pull/3478.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3478.patch",
"merged_at": 1585321538000
} |
https://api.github.com/repos/huggingface/transformers/issues/3477 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3477/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3477/comments | https://api.github.com/repos/huggingface/transformers/issues/3477/events | https://github.com/huggingface/transformers/pull/3477 | 589,076,319 | MDExOlB1bGxSZXF1ZXN0Mzk0NzA2NjM1 | 3,477 | Added CovidBERT-NLI model card | {
"login": "gsarti",
"id": 16674069,
"node_id": "MDQ6VXNlcjE2Njc0MDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/16674069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gsarti",
"html_url": "https://github.com/gsarti",
"followers_url": "https://api.github.com/users/gsarti/followers",
"following_url": "https://api.github.com/users/gsarti/following{/other_user}",
"gists_url": "https://api.github.com/users/gsarti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gsarti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gsarti/subscriptions",
"organizations_url": "https://api.github.com/users/gsarti/orgs",
"repos_url": "https://api.github.com/users/gsarti/repos",
"events_url": "https://api.github.com/users/gsarti/events{/privacy}",
"received_events_url": "https://api.github.com/users/gsarti/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3477/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3477",
"html_url": "https://github.com/huggingface/transformers/pull/3477",
"diff_url": "https://github.com/huggingface/transformers/pull/3477.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3477.patch",
"merged_at": 1585655990000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3476 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3476/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3476/comments | https://api.github.com/repos/huggingface/transformers/issues/3476/events | https://github.com/huggingface/transformers/issues/3476 | 589,053,293 | MDU6SXNzdWU1ODkwNTMyOTM= | 3,476 | Finetuning FlauBERT with hugging face's Transformers : WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 2, distributed training: False, 16-bits training: True | {
"login": "keloemma",
"id": 40454218,
"node_id": "MDQ6VXNlcjQwNDU0MjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/40454218?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/keloemma",
"html_url": "https://github.com/keloemma",
"followers_url": "https://api.github.com/users/keloemma/followers",
"following_url": "https://api.github.com/users/keloemma/following{/other_user}",
"gists_url": "https://api.github.com/users/keloemma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/keloemma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keloemma/subscriptions",
"organizations_url": "https://api.github.com/users/keloemma/orgs",
"repos_url": "https://api.github.com/users/keloemma/repos",
"events_url": "https://api.github.com/users/keloemma/events{/privacy}",
"received_events_url": "https://api.github.com/users/keloemma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You're probably on an old transformers version."
] | 1,585 | 1,585 | 1,585 | NONE | null | I am trying to use the `run_glue` script from the transformers library to fine-tune FlauBERT on French data using Hugging Face's transformers, and I am getting this error:
```
03/27/2020 11:54:48 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 2, distributed training: False, 16-bits training: True
Traceback (most recent call last):
File "/home/getalp/kelodjoe/transformers/examples/run_glue.py", line 693, in <module>
main()
File "/home/getalp/kelodjoe/transformers/examples/run_glue.py", line 613, in main
config_class, model_class, tokenizer_class = MODEL_CLASSES[args.model_type]
KeyError: 'flaubert'
```
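Before re-running, here is a quick sanity check (a hedged sketch — it assumes the FlauBERT classes only ship with newer 2.x releases, which is why an old install lacks the `flaubert` key in `MODEL_CLASSES`):

```python
import transformers

print(transformers.__version__)
# On a version that predates FlauBERT support this import fails,
# matching the KeyError seen in run_glue.py.
from transformers import FlaubertConfig, FlaubertTokenizer  # noqa: F401
```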
Could you help me figure out how to solve it? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3476/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3476/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3475 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3475/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3475/comments | https://api.github.com/repos/huggingface/transformers/issues/3475/events | https://github.com/huggingface/transformers/pull/3475 | 589,045,758 | MDExOlB1bGxSZXF1ZXN0Mzk0NjgxMDk4 | 3,475 | [WIP] General docs polish | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,585 | 1,593 | 1,593 | MEMBER | null | Rebased from #3461, so merge after that one.
- [x] Add Bart, T5 and MMBT to main docs page
- [x] Add MMBT docs
- [ ] Add MMBT pretrained info
- [ ] Polish MMBT docstring | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3475/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3475/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3475",
"html_url": "https://github.com/huggingface/transformers/pull/3475",
"diff_url": "https://github.com/huggingface/transformers/pull/3475.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3475.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3474 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3474/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3474/comments | https://api.github.com/repos/huggingface/transformers/issues/3474/events | https://github.com/huggingface/transformers/pull/3474 | 588,996,825 | MDExOlB1bGxSZXF1ZXN0Mzk0NjQwMjIz | 3,474 | [examples] fine-tuning `bert-base-finnish-(un)cased-v1` model for Named Entity Recognition | {
"login": "bmichele",
"id": 21679029,
"node_id": "MDQ6VXNlcjIxNjc5MDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/21679029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmichele",
"html_url": "https://github.com/bmichele",
"followers_url": "https://api.github.com/users/bmichele/followers",
"following_url": "https://api.github.com/users/bmichele/following{/other_user}",
"gists_url": "https://api.github.com/users/bmichele/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmichele/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmichele/subscriptions",
"organizations_url": "https://api.github.com/users/bmichele/orgs",
"repos_url": "https://api.github.com/users/bmichele/repos",
"events_url": "https://api.github.com/users/bmichele/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmichele/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3474?src=pr&el=h1) Report\n> Merging [#3474](https://codecov.io/gh/huggingface/transformers/pull/3474?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e392ba6938f50655a195ea7ec8a260b1e9fc6058&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3474?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3474 +/- ##\n=======================================\n Coverage 77.56% 77.56% \n=======================================\n Files 100 100 \n Lines 16970 16970 \n=======================================\n+ Hits 13162 13163 +1 \n+ Misses 3808 3807 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3474?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3474/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.85% <0.00%> (+0.13%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3474?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3474?src=pr&el=footer). Last update [e392ba6...d9e6d4e](https://codecov.io/gh/huggingface/transformers/pull/3474?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,594 | 1,594 | NONE | null | No new features added and no modifications to the existing code has been done. Just addition of scripts for the fine-tuning. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3474/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3474/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3474",
"html_url": "https://github.com/huggingface/transformers/pull/3474",
"diff_url": "https://github.com/huggingface/transformers/pull/3474.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3474.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3473 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3473/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3473/comments | https://api.github.com/repos/huggingface/transformers/issues/3473/events | https://github.com/huggingface/transformers/issues/3473 | 588,911,203 | MDU6SXNzdWU1ODg5MTEyMDM= | 3,473 | Inversion of a mask in newer pytorch versions | {
"login": "Shashi456",
"id": 18056781,
"node_id": "MDQ6VXNlcjE4MDU2Nzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/18056781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shashi456",
"html_url": "https://github.com/Shashi456",
"followers_url": "https://api.github.com/users/Shashi456/followers",
"following_url": "https://api.github.com/users/Shashi456/following{/other_user}",
"gists_url": "https://api.github.com/users/Shashi456/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shashi456/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shashi456/subscriptions",
"organizations_url": "https://api.github.com/users/Shashi456/orgs",
"repos_url": "https://api.github.com/users/Shashi456/repos",
"events_url": "https://api.github.com/users/Shashi456/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shashi456/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Do you mind giving a reproducible example so that we may debug easily?",
"I was following this issue because I ran into the same problem. I won't use the code I ran into the problem with here but here is the gist:\r\n\r\n```python\r\nfrom transformers import XLNetForSequenceClassification\r\n\r\nmodel = XLNetForSequenceClassification.from_pretrained(\"xlnet-base-cased\")\r\ninputs = torch.randint(0, 100, (32, 100)) # 100 words in vocab, batch size 32, seq_len = 100\r\nmasks = torch.ones(inputs.size(), dtype=torch.bool) # none of the tokens are padding\r\nlabels = torch.randint(0, 2, (32,)) # binary classification\r\n\r\nresult = model(inputs, attention_mask=masks, labels=labels)\r\n```\r\n\r\nError:\r\n\r\n> Traceback (most recent call last):\r\n File \"<input>\", line 1, in <module>\r\n File \"/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/transformers/modeling_xlnet.py\", line 1150, in forward\r\n inputs_embeds=inputs_embeds,\r\n File \"/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/transformers/modeling_xlnet.py\", line 778, in forward\r\n input_mask = 1.0 - attention_mask\r\n File \"/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/tensor.py\", line 394, in __rsub__\r\n return _C._VariableFunctions.rsub(self, other)\r\nRuntimeError: Subtraction, the `-` operator, with a bool tensor is not supported. If you are trying to invert a mask, use the `~` or `logical_not()` operator instead.\r\n\r\nI use OSX Mojave, torch==1.4.0, if it helps.\r\n\r\nEdit: by converting the masks to dtype `torch.uint32`, I was able to get it to work, but I'm not sure if masking using an integer mask is the correct way of handling this.",
"The `torch.bool` was introduced in `torch==1.2.0` but we're looking to accommodate `torch>=1.0.0`, so we have to handle such cases with a `uint` syntax. All model inputs should be `uint`, and not `bool` for that specific reason.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): XLNet
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Finetuning on downstream task.
## To reproduce
Steps to reproduce the behavior:
1.Just run the XLNet Model with a newer pytorch version.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
```
RuntimeError: Subtraction, the `-` operator, with a bool tensor is not supported. If you are trying to invert a mask, use the `~` or `logical_not()` operator instead.
```
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.6.0
- Platform: Ubuntu
- Python version: 3.6
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: parallel
## Additional Comments
This change was made in PyTorch 1.2.0; see the release notes [here](https://github.com/pytorch/pytorch/releases/tag/v1.2.0).
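(A minimal workaround sketch, assuming the mask is built by hand: pass `attention_mask` as float rather than bool, so the library's `1.0 - attention_mask` arithmetic stays valid on newer PyTorch:)

```python
import torch
from transformers import XLNetForSequenceClassification

model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased")
input_ids = torch.randint(0, 100, (2, 16))
# 1.0 = attend, 0.0 = padding; a float mask avoids the bool-subtraction error.
attention_mask = torch.ones(input_ids.shape, dtype=torch.float)
outputs = model(input_ids, attention_mask=attention_mask)
```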
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3473/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3472 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3472/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3472/comments | https://api.github.com/repos/huggingface/transformers/issues/3472/events | https://github.com/huggingface/transformers/pull/3472 | 588,908,682 | MDExOlB1bGxSZXF1ZXN0Mzk0NTY4NDc2 | 3,472 | Optimize tokenization (85-92% time reduction) | {
"login": "soni-n",
"id": 13745813,
"node_id": "MDQ6VXNlcjEzNzQ1ODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/13745813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soni-n",
"html_url": "https://github.com/soni-n",
"followers_url": "https://api.github.com/users/soni-n/followers",
"following_url": "https://api.github.com/users/soni-n/following{/other_user}",
"gists_url": "https://api.github.com/users/soni-n/gists{/gist_id}",
"starred_url": "https://api.github.com/users/soni-n/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/soni-n/subscriptions",
"organizations_url": "https://api.github.com/users/soni-n/orgs",
"repos_url": "https://api.github.com/users/soni-n/repos",
"events_url": "https://api.github.com/users/soni-n/events{/privacy}",
"received_events_url": "https://api.github.com/users/soni-n/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,594 | 1,594 | NONE | null | This optimization in tokenization reduces the tokenization time by ~ 85-92 %
Stats:
Before fix:
8%|████████▉ | 119574/1486030 [01:03<12:09, 1874.22it/s]
After fix:
98%|████████████████████████████████████ | 1459590/1486030 [01:59<00:02, 12244.82it/s]
More stats:
When a huge number of new tokens (~58K) is added, tokenization time bumps up to ~27-28 hours.
With this fix the time drops to ~2.15 hours! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3472/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3472",
"html_url": "https://github.com/huggingface/transformers/pull/3472",
"diff_url": "https://github.com/huggingface/transformers/pull/3472.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3472.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3471 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3471/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3471/comments | https://api.github.com/repos/huggingface/transformers/issues/3471/events | https://github.com/huggingface/transformers/issues/3471 | 588,896,667 | MDU6SXNzdWU1ODg4OTY2Njc= | 3,471 | TFAlbertForMaskedLM Decoding Error | {
"login": "zzj0402",
"id": 15345547,
"node_id": "MDQ6VXNlcjE1MzQ1NTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/15345547?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zzj0402",
"html_url": "https://github.com/zzj0402",
"followers_url": "https://api.github.com/users/zzj0402/followers",
"following_url": "https://api.github.com/users/zzj0402/following{/other_user}",
"gists_url": "https://api.github.com/users/zzj0402/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zzj0402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zzj0402/subscriptions",
"organizations_url": "https://api.github.com/users/zzj0402/orgs",
"repos_url": "https://api.github.com/users/zzj0402/repos",
"events_url": "https://api.github.com/users/zzj0402/events{/privacy}",
"received_events_url": "https://api.github.com/users/zzj0402/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"When you encode with `tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')` a [CLS] token is added in the beginning and a [SEP] token is added at the end. You can verify this by: \r\n\r\n```python \r\nfrom transformers import AlbertTokenizer, TFAlbertForMaskedLM\r\n\r\ntokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')\r\ntokenizer.decode(tokenizer.encode(\"This is a test!\")) # gives '[CLS] this is a test![SEP]'\r\n```\r\n\r\nNow the encoded string has two added tokens, one in the begging, one in the end. This means that the two new tokens also produce two logits whose argmax token that in your case happened to be `time` and `your`. If you don`t want to add [CLS] and [SEP] when encoding, use:\r\n\r\n```python \r\nfrom transformers import AlbertTokenizer, TFAlbertForMaskedLM\r\n\r\ntokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')\r\ntokenizer.decode(tokenizer.encode(\"This is a test!\", add_special_tokens=False)) # gives 'this is a test!'\r\n```\r\n\r\n",
"So they can't output special tokens properly.",
"I mean they can, but the model does not have to output \"[CLS]\" when you feed \"[CLS]\" in the model. "
] | 1,585 | 1,585 | 1,585 | NONE | null | # 🐛 Bug
## Information
### Model
TFAlbertForMaskedLM <https://huggingface.co/transformers/model_doc/albert.html#tfalbertformaskedlm>
### Language
English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
```python
import tensorflow as tf
from transformers import AlbertTokenizer, TFAlbertForMaskedLM
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = TFAlbertForMaskedLM.from_pretrained('albert-base-v2')
input_ids = tf.constant(tokenizer.encode("This is a test!"))[
None, :] # Batch size 1
outputs = model(input_ids)
prediction_scores = outputs[0]
outputTokens = tf.math.argmax(prediction_scores, axis=2)
outputTokens = tf.keras.backend.eval(outputTokens[0])
outputTokens = tokenizer.decode(outputTokens)
print(outputTokens)
```
The tasks I am working on are:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Copy the above code
2. See output in terminal
```
time this is a test! your
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Output
`This is a test!`
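For reference, a minimal sketch of the workaround described in the comments below (encoding without special tokens so the model produces no extra logits for `[CLS]`/`[SEP]`; adapted from the script above):
```python
# Sketch of the workaround from the comments: encode without special tokens
# so the per-position argmax covers only the original words.
import tensorflow as tf
from transformers import AlbertTokenizer, TFAlbertForMaskedLM

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = TFAlbertForMaskedLM.from_pretrained('albert-base-v2')

input_ids = tf.constant(
    tokenizer.encode("This is a test!", add_special_tokens=False))[None, :]
prediction_scores = model(input_ids)[0]
output_tokens = tf.keras.backend.eval(tf.math.argmax(prediction_scores, axis=2)[0])
print(tokenizer.decode(output_tokens))
```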
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: How to install transformers-cli?
- Platform: MacOS 10.15.2
- Python version: 3.6.6
- PyTorch version (GPU?): 1.4.0 CPU
- Tensorflow version (GPU?): 2.1.0 CPU
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3471/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3470 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3470/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3470/comments | https://api.github.com/repos/huggingface/transformers/issues/3470/events | https://github.com/huggingface/transformers/pull/3470 | 588,851,829 | MDExOlB1bGxSZXF1ZXN0Mzk0NTIzMTE0 | 3,470 | Update README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | Fix typo | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3470/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3470",
"html_url": "https://github.com/huggingface/transformers/pull/3470",
"diff_url": "https://github.com/huggingface/transformers/pull/3470.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3470.patch",
"merged_at": 1585656070000
} |
https://api.github.com/repos/huggingface/transformers/issues/3469 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3469/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3469/comments | https://api.github.com/repos/huggingface/transformers/issues/3469/events | https://github.com/huggingface/transformers/issues/3469 | 588,830,784 | MDU6SXNzdWU1ODg4MzA3ODQ= | 3,469 | CircleCI ExamplesTests::test_run_squad failing | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"\r\n\r\nHappy to help on this @LysandreJik @patrickvonplaten",
"The test runs fine locally on my computer. A couple of wild thoughts:\r\n\r\n- It happened after merging #3411 and #3428 which adds quite some long `t5-large` and `t5-base` tests to the test examples. Could this test fail because of some kind of time-out error?\r\n- The test consumes 10GB RAM when running locally - this doesn't seem too much to fail though.",
"Did you try connecting to a failing circle-ci box, to investigate? ",
"unsubscribe\r\n\r\n\r\n\r\n\r\n------------------ 原始邮件 ------------------\r\n发件人: \"Julien Chaumond\"<[email protected]>;\r\n发送时间: 2020年3月27日(星期五) 晚上10:15\r\n收件人: \"huggingface/transformers\"<[email protected]>;\r\n抄送: \"Subscribed\"<[email protected]>;\r\n主题: Re: [huggingface/transformers] CircleCI ExamplesTests::test_run_squad failing (#3469)\r\n\r\n\r\n\r\n Did you try connecting to a failing circle-ci box, to investigate?\r\n—\r\nYou are receiving this because you are subscribed to this thread.\r\nReply to this email directly, view it on GitHub, or unsubscribe.",
"I think this is solved with https://github.com/huggingface/transformers/pull/3485 ."
] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | Started happening at https://github.com/huggingface/transformers/pull/3428
and has been happening consistently.
Scroll all the way down for the [traceback](https://circleci.com/gh/huggingface/transformers/26044?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3469/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3468 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3468/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3468/comments | https://api.github.com/repos/huggingface/transformers/issues/3468/events | https://github.com/huggingface/transformers/issues/3468 | 588,819,743 | MDU6SXNzdWU1ODg4MTk3NDM= | 3,468 | Issue in generating samples for text generation | {
"login": "Ahanmr",
"id": 26196002,
"node_id": "MDQ6VXNlcjI2MTk2MDAy",
"avatar_url": "https://avatars.githubusercontent.com/u/26196002?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ahanmr",
"html_url": "https://github.com/Ahanmr",
"followers_url": "https://api.github.com/users/Ahanmr/followers",
"following_url": "https://api.github.com/users/Ahanmr/following{/other_user}",
"gists_url": "https://api.github.com/users/Ahanmr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ahanmr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ahanmr/subscriptions",
"organizations_url": "https://api.github.com/users/Ahanmr/orgs",
"repos_url": "https://api.github.com/users/Ahanmr/repos",
"events_url": "https://api.github.com/users/Ahanmr/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ahanmr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"The file `run_language_modeling.py` does indeed not have a variable called `MODEL_CLASSES`. Can you explain what you are trying to do exactly?"
] | 1,585 | 1,585 | 1,585 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): GPT2
```python
def generate_samples(args, model, prompt_text):
    """Generate samples for the provided prompt using the provided model."""
    set_seed(args.seed)
    _, _, tokenizer_class = run_language_modeling.MODEL_CLASSES[args.model_type]
    tokenizer = tokenizer_class.from_pretrained(args.model_name_or_path, cache_dir=None)
    requires_preprocessing = args.model_type in run_generation.PREPROCESSING_FUNCTIONS.keys()
    encoded_prompt = tokenizer.encode(prompt_text, add_special_tokens=False, return_tensors="pt")
    encoded_prompt = encoded_prompt.to(args.device)
```
Error: run_language_modeling has no 'MODEL_CLASSES'
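As the comment below notes, `run_language_modeling.py` no longer defines `MODEL_CLASSES`. A minimal sketch of one way to obtain the tokenizer without that lookup table (assuming the checkpoint name is a standard one such as `gpt2`; the exact intent of the script above is unknown):
```python
# Sketch under the assumption that only the tokenizer is needed here:
# AutoTokenizer resolves the tokenizer class from the checkpoint name,
# replacing the removed MODEL_CLASSES mapping.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # or args.model_name_or_path
```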
Language I am using the model on: English
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3468/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3467 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3467/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3467/comments | https://api.github.com/repos/huggingface/transformers/issues/3467/events | https://github.com/huggingface/transformers/pull/3467 | 588,788,723 | MDExOlB1bGxSZXF1ZXN0Mzk0NDczMDUy | 3,467 | Model Cards: Fix grammar error | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3467?src=pr&el=h1) Report\n> Merging [#3467](https://codecov.io/gh/huggingface/transformers/pull/3467?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/63f4d8cad010f1972254007ad56b22fe5ed203fe&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3467?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3467 +/- ##\n=======================================\n Coverage 77.84% 77.84% \n=======================================\n Files 100 100 \n Lines 17060 17060 \n=======================================\n+ Hits 13280 13281 +1 \n+ Misses 3780 3779 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3467?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3467/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.32% <0.00%> (+0.17%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3467?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3467?src=pr&el=footer). Last update [63f4d8c...1681ac8](https://codecov.io/gh/huggingface/transformers/pull/3467?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3467/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3467",
"html_url": "https://github.com/huggingface/transformers/pull/3467",
"diff_url": "https://github.com/huggingface/transformers/pull/3467.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3467.patch",
"merged_at": 1585272814000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3466 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3466/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3466/comments | https://api.github.com/repos/huggingface/transformers/issues/3466/events | https://github.com/huggingface/transformers/issues/3466 | 588,780,511 | MDU6SXNzdWU1ODg3ODA1MTE= | 3,466 | Docstring cannot be built anymore | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
},
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
},
{
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
},
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Was fixed with https://github.com/huggingface/transformers/commit/e2c05f06ef58ea77103d2c64492dd8d9a0b21c3f"
] | 1,585 | 1,585 | 1,585 | MEMBER | null | # 🐛 Bug
## Information
Docstring
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. `pip install -e ".[docs]"` at `transformers` root folder
2. `cd docs`
3. `make html`
## Expected behavior
It should work, but an error message is displayed:
> /home/patrick/hugging_face/transformers/src/transformers/modeling_utils.py:docstring of transformers.PreTrainedModel.from_pretrained:23: WARNING: Unexpected indentation.
> /home/patrick/hugging_face/transformers/src/transformers/modeling_tf_utils.py:docstring of transformers.TFPreTrainedModel.from_pretrained:20: WARNING: Definition list ends without a blank line; unexpected unindent.
>
> Exception occurred:
> File "/home/patrick/hugging_face/transformers_venv/lib/python3.6/site-packages/sphinx/util/docfields.py", line 260, in transform
> assert len(field) == 2
> AssertionError
> The full traceback has been saved in /tmp/sphinx-err-mblqzztk.log, if you want to report the issue to the developers.
> Please also report this if it was a user error, so that a better error message can be provided next time.
> A bug report can be filed in the tracker at <https://github.com/sphinx-doc/sphinx/issues>. Thanks!
> Makefile:19: recipe for target 'html' failed
## Environment info
- `transformers` version: 2.6.0
- Platform: Linux-5.3.0-42-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0+cpu (False)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3466/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3466/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3465 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3465/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3465/comments | https://api.github.com/repos/huggingface/transformers/issues/3465/events | https://github.com/huggingface/transformers/pull/3465 | 588,778,021 | MDExOlB1bGxSZXF1ZXN0Mzk0NDY0NDE0 | 3,465 | Add link to 16 POS tags model | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3465/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3465/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3465",
"html_url": "https://github.com/huggingface/transformers/pull/3465",
"diff_url": "https://github.com/huggingface/transformers/pull/3465.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3465.patch",
"merged_at": 1585656000000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3464 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3464/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3464/comments | https://api.github.com/repos/huggingface/transformers/issues/3464/events | https://github.com/huggingface/transformers/pull/3464 | 588,774,456 | MDExOlB1bGxSZXF1ZXN0Mzk0NDYxNTI2 | 3,464 | Add text shown in example of usage | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3464/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3464",
"html_url": "https://github.com/huggingface/transformers/pull/3464",
"diff_url": "https://github.com/huggingface/transformers/pull/3464.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3464.patch",
"merged_at": 1585655977000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3463 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3463/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3463/comments | https://api.github.com/repos/huggingface/transformers/issues/3463/events | https://github.com/huggingface/transformers/issues/3463 | 588,741,115 | MDU6SXNzdWU1ODg3NDExMTU= | 3,463 | Question Answering pipeline not working | {
"login": "paras55",
"id": 52483274,
"node_id": "MDQ6VXNlcjUyNDgzMjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/52483274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paras55",
"html_url": "https://github.com/paras55",
"followers_url": "https://api.github.com/users/paras55/followers",
"following_url": "https://api.github.com/users/paras55/following{/other_user}",
"gists_url": "https://api.github.com/users/paras55/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paras55/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paras55/subscriptions",
"organizations_url": "https://api.github.com/users/paras55/orgs",
"repos_url": "https://api.github.com/users/paras55/repos",
"events_url": "https://api.github.com/users/paras55/events{/privacy}",
"received_events_url": "https://api.github.com/users/paras55/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Had the same Issue, I think it was already fixed in an older commit, but the pip package doesnt seem to be updated. Try installing transformers from this repo:\r\n`git clone https://github.com/huggingface/transformers`\r\n`cd transformers`\r\n`pip install .`",
"I also had the same issue as @paras55, and the solution provided by @mowoe fixed it! Thanks!",
"this is still an issue in 2.7.0.",
"Are you sure? I think it was fixed in this commit: [c76c3ce](https://github.com/huggingface/transformers/commit/c76c3cebed3c707178d9f721349c5abd5206a57f). ",
"I cloned the repo, checked out the 2.7.0 tag and built and installed the wheel and was running into the same issue.\r\n\r\nResult of `pip list`\r\n\r\n```\r\nPackage Version \r\n---------------------- ------------\r\nabsl-py 0.9.0 \r\nastor 0.8.1 \r\nastroid 2.3.3 \r\nasttokens 2.0.3 \r\nattrs 19.3.0 \r\nautopep8 1.5 \r\nboto3 1.12.27 \r\nbotocore 1.15.27 \r\ncachetools 4.0.0 \r\ncertifi 2019.11.28 \r\nchardet 3.0.4 \r\nclick 7.1.1 \r\ndataclasses 0.7 \r\ndecorator 4.4.2 \r\ndocutils 0.15.2 \r\nentrypoints 0.3 \r\nfilelock 3.0.12 \r\nflake8 3.7.9 \r\nflake8-aaa 0.7.1 \r\ngast 0.2.2 \r\ngoogle-auth 1.11.3 \r\ngoogle-auth-oauthlib 0.4.1 \r\ngoogle-pasta 0.2.0 \r\ngrpcio 1.27.2 \r\nh5py 2.10.0 \r\nidna 2.9 \r\nimportlab 0.5.1 \r\nimportlib-metadata 1.5.0 \r\nisort 4.3.21 \r\njmespath 0.9.5 \r\njoblib 0.14.1 \r\nKeras-Applications 1.0.8 \r\nKeras-Preprocessing 1.1.0 \r\nlazy-object-proxy 1.4.3 \r\nMarkdown 3.2.1 \r\nmccabe 0.6.1 \r\nmore-itertools 8.2.0 \r\nmypy 0.761 \r\nmypy-extensions 0.4.3 \r\nnetworkx 2.4 \r\nninja 1.9.0.post1 \r\nnumpy 1.18.2 \r\noauthlib 3.1.0 \r\nopt-einsum 3.2.0 \r\npackaging 20.1 \r\npandas 1.0.3 \r\npip 20.0.2 \r\npluggy 0.13.1 \r\nprotobuf 3.11.3 \r\npy 1.8.1 \r\npyasn1 0.4.8 \r\npyasn1-modules 0.2.8 \r\npycodestyle 2.5.0 \r\npyflakes 2.1.1 \r\npylint 2.4.4 \r\npyparsing 2.4.6 \r\npytest 5.3.5 \r\npython-dateutil 2.8.1 \r\npytype 2020.2.20 \r\npytz 2019.3 \r\nPyYAML 5.3.1 \r\nregex 2020.2.20 \r\nrequests 2.23.0 \r\nrequests-oauthlib 1.3.0 \r\nrsa 4.0 \r\ns3transfer 0.3.3 \r\nsacremoses 0.0.38 \r\nscikit-learn 0.22.2.post1\r\nscipy 1.4.1 \r\nsentencepiece 0.1.85 \r\nsetuptools 45.2.0 \r\nsix 1.14.0 \r\ntensorboard 2.0.2 \r\ntensorflow 2.0.0 \r\ntensorflow-determinism 0.3.0 \r\ntensorflow-estimator 2.0.1 \r\ntermcolor 1.1.0 \r\ntokenizers 0.5.2 \r\ntqdm 4.43.0 \r\ntransformers 2.7.0 \r\ntyped-ast 1.4.1 \r\ntyping-extensions 3.7.4.1 \r\nurllib3 1.25.8 \r\nwcwidth 0.1.8 \r\nWerkzeug 1.0.0 \r\nwheel 0.34.2 \r\nwrapt 1.11.2 \r\nzipp 3.0.0 \r\n```\r\n\r\n\r\nthe code I ran (unnecessary parts excluded):\r\n```\r\nfor a, b in zip(train_text_a, train_text_b):\r\n tokens_dict = tokenizer.encode_plus(a, b, max_length=10,\r\n pad_to_max_length=True)\r\n train_input_ids.append(np.asarray([tokens_dict[\"input_ids\"]]))\r\n train_input_masks.append(np.asarray([tokens_dict[\"attention_mask\"]]))\r\n train_input_segment_ids.append(np.asarray([tokens_dict[\"token_type_ids\"]]))\r\n```\r\n\r\nand the traceback I received:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"<input>\", line 104, in <module>\r\nKeyError: 'token_type_ids'\r\n```",
"Yeah I think we are talking about a completely different problem here. @amoux and @paras55 had this exception in the squad.py of this repo (Which shouldnt raise one, when used this way). Your Exception however is a KeyError in your code (line 104). Maybe try adding `return_token_type_ids=True` as argument to your tokenizer. This is still an issue, as this should be default. Maybe open a new one, as this isnt the same problem.",
"You're definitely correct. `return_token_type_ids` solved the problem. Much appreciated @mowoe!",
"\r\nAn example for question answering with DistilBERT\r\n\r\n```\r\nfrom transformers import DistilBertTokenizer, DistilBertForQuestionAnswering\r\nimport torch\r\n\r\ntokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased',return_token_type_ids = True)\r\nmodel = DistilBertForQuestionAnswering.from_pretrained('distilbert-base-uncased-distilled-squad')\r\n\r\ncontext = \"The US has passed the peak on new coronavirus cases, President Donald Trump said and predicted that some states would reopen this month.The US has over 637,000 confirmed Covid-19 cases and over 30,826 deaths, the highest for any country in the world.\"\r\nquestion = \"What was President Donald Trump's prediction?\"\r\n\r\n# question = \"How many deaths have been reported from the virus?\"\r\n\r\nencoding = tokenizer.encode_plus(question, context)\r\n\r\n\r\ninput_ids, attention_mask = encoding[\"input_ids\"], encoding[\"attention_mask\"]\r\n\r\nstart_scores, end_scores = model(torch.tensor([input_ids]), attention_mask=torch.tensor([attention_mask]))\r\n\r\nans_tokens = input_ids[torch.argmax(start_scores) : torch.argmax(end_scores)+1]\r\nanswer_tokens = tokenizer.convert_ids_to_tokens(ans_tokens , skip_special_tokens=True)\r\n\r\nall_tokens = tokenizer.convert_ids_to_tokens(input_ids)\r\n\r\nprint (\"\\nAnswer Tokens: \")\r\nprint (answer_tokens)\r\n\r\nanswer_tokens_to_string = tokenizer.convert_tokens_to_string(answer_tokens)\r\n\r\nprint (\"\\nFinal Answer : \")\r\nprint (answer_tokens_to_string)\r\n\r\n```\r\n\r\nOutput is - \r\nAnswer Tokens:\r\n['some', 'states', 'would', 're', '##open', 'this', 'month']\r\n\r\nFinal Answer :\r\nsome states would reopen this month\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,594 | 1,594 | NONE | null | This is the error raised while running the question answering pipeline:
```
convert squad examples to features: 0%| | 0/1 [00:00<?, ?it/s]
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
    return list(map(*args))
  File "/usr/local/lib/python3.6/dist-packages/transformers/data/processors/squad.py", line 198, in squad_convert_example_to_features
    p_mask = np.array(span["token_type_ids"])
KeyError: 'token_type_ids'
"""
The above exception was the direct cause of the following exception:
KeyError Traceback (most recent call last)
<ipython-input-3-3c4dd3618524> in <module>()
      2
      3 nlp_qa = pipeline('question-answering')
----> 4 nlp_qa(context='Hugging Face is a French company based in New-York.', question='Where is based Hugging Face ?')
8 frames
/usr/local/lib/python3.6/dist-packages/transformers/data/processors/squad.py in squad_convert_example_to_features()
    196 # p_mask: mask with 1 for token than cannot be in the answer (0 for token which can be in an answer)
    197 # Original TF implem also keep the classification token (set to 0) (not sure why...)
--> 198 p_mask = np.array(span["token_type_ids"])
    199
    200 p_mask = np.minimum(p_mask, 1)
KeyError: 'token_type_ids'
```
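A minimal sketch of the workaround suggested in the comments below (explicitly requesting `token_type_ids` from the tokenizer; the model name here is an assumption for illustration):
```python
# Workaround from the comments: ask encode_plus for token_type_ids explicitly
# so downstream code that indexes span["token_type_ids"] does not KeyError.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
encoding = tokenizer.encode_plus(
    "Where is based Hugging Face ?",
    "Hugging Face is a French company based in New-York.",
    return_token_type_ids=True,
)
print(encoding["token_type_ids"])
```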
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3463/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3463/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3462 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3462/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3462/comments | https://api.github.com/repos/huggingface/transformers/issues/3462/events | https://github.com/huggingface/transformers/issues/3462 | 588,616,430 | MDU6SXNzdWU1ODg2MTY0MzA= | 3,462 | SyntaxError when fine-tuning ALBERT on NER | {
"login": "manueltonneau",
"id": 29440170,
"node_id": "MDQ6VXNlcjI5NDQwMTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/29440170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueltonneau",
"html_url": "https://github.com/manueltonneau",
"followers_url": "https://api.github.com/users/manueltonneau/followers",
"following_url": "https://api.github.com/users/manueltonneau/following{/other_user}",
"gists_url": "https://api.github.com/users/manueltonneau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueltonneau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueltonneau/subscriptions",
"organizations_url": "https://api.github.com/users/manueltonneau/orgs",
"repos_url": "https://api.github.com/users/manueltonneau/repos",
"events_url": "https://api.github.com/users/manueltonneau/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueltonneau/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834060867,
"node_id": "MDU6TGFiZWwxODM0MDYwODY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Named%20Entity%20Recognition",
"name": "Ex: Named Entity Recognition",
"color": "06FFD8",
"default": false,
"description": ""
},
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
}
] | closed | false | null | [] | [
"`transformers` 2.6.0 has dropped support for Python 3.5: https://github.com/huggingface/transformers/releases/tag/v2.6.0",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): ALBERT
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
Running on a GCP VM:
1.
```
python3 ${REPO_DIR}/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py --tf_checkpoint_path ${BERT_CKPT_DIR} --albert_config_file ${BERT_CKPT_DIR}/config.json --pytorch_dump_path ${BERT_CKPT_DIR}/pytorch_model.bin
```
2.
```
python3 ${REPO_DIR}/examples/ner/run_ner.py --model_type albert --model_name_or_path ${BERT_CKPT_DIR} --do_train --do_eval --data_dir ${NER_DATA_DIR} --labels ${NER_DATA_DIR}/labels.txt --max_seq_length 128 --num_train_epochs 3 --per_gpu_train_batch_size 32 --output_dir ${NER_MODEL_CKPT_DIR} --seed 1 --do_predict --save_steps 750
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
Traceback (most recent call last):
File "transformers/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py", line 23, in <module>
from transformers import AlbertConfig, AlbertForMaskedLM, load_tf_weights_in_albert
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 664, in _load_unlocked
File "<frozen importlib._bootstrap>", line 634, in _load_backward_compatible
File "/home/manuto/.local/lib/python3.5/site-packages/transformers-2.6.0-py3.5.egg/transformers/__init__.py", line 23, in <module>
from .benchmark_utils import (
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 954, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 896, in _find_spec
File "<frozen importlib._bootstrap_external>", line 1147, in find_spec
File "<frozen importlib._bootstrap_external>", line 1123, in _get_spec
File "<frozen importlib._bootstrap_external>", line 1104, in _legacy_get_spec
File "<frozen importlib._bootstrap>", line 444, in spec_from_loader
File "<frozen importlib._bootstrap_external>", line 541, in spec_from_file_location
File "/home/manuto/.local/lib/python3.5/site-packages/transformers-2.6.0-py3.5.egg/transformers/benchmark_utils.py", line 44
filename: str
^
SyntaxError: invalid syntax
Exception ignored in: <function WeakValueDictionary.__init__.<locals>.remove at 0x7fc729d8a598>
Traceback (most recent call last):
File "/usr/lib/python3.5/weakref.py", line 117, in remove
TypeError: 'NoneType' object is not callable
Traceback (most recent call last):
File "transformers/examples/ner/run_ner.py", line 33, in <module>
from transformers import (
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 664, in _load_unlocked
File "<frozen importlib._bootstrap>", line 634, in _load_backward_compatible
File "/home/manuto/.local/lib/python3.5/site-packages/transformers-2.6.0-py3.5.egg/transformers/__init__.py", line 23, in <module>
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 954, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 896, in _find_spec
File "<frozen importlib._bootstrap_external>", line 1147, in find_spec
File "<frozen importlib._bootstrap_external>", line 1123, in _get_spec
File "<frozen importlib._bootstrap_external>", line 1104, in _legacy_get_spec
File "<frozen importlib._bootstrap>", line 444, in spec_from_loader
File "<frozen importlib._bootstrap_external>", line 541, in spec_from_file_location
File "/home/manuto/.local/lib/python3.5/site-packages/transformers-2.6.0-py3.5.egg/transformers/benchmark_utils.py", line 44
filename: str
^
SyntaxError: invalid syntax
Exception ignored in: <function WeakValueDictionary.__init__.<locals>.remove at 0x7f1bd863ed08>
Traceback (most recent call last):
File "/usr/lib/python3.5/weakref.py", line 117, in remove
TypeError: 'NoneType' object is not callable
```
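For context, the failing line (`filename: str`) is a PEP 526 variable annotation, which the Python 3.5 parser does not accept; as the comment below notes, `transformers` 2.6.0 dropped Python 3.5 support. A minimal reproduction of the syntax difference:
```python
# Parses on Python 3.6+; raises SyntaxError on Python 3.5 (PEP 526 annotation):
filename: str = "pytorch_model.bin"
```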
## Expected behavior
Convert the ALBERT TF checkpoint to a PyTorch `model.bin` (works in transformers 2.5.1) and fine-tune the model on NER (the 2.5.1 error mentioned [here](https://github.com/huggingface/transformers/issues/3412) is the reason I switched to 2.6.0).
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.6.0 (installed with setup.py)
- Platform: Linux Ubuntu 18.04
- Python version: 3.5.3
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?): 1.14.0
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3462/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3462/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3461 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3461/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3461/comments | https://api.github.com/repos/huggingface/transformers/issues/3461/events | https://github.com/huggingface/transformers/pull/3461 | 588,605,997 | MDExOlB1bGxSZXF1ZXN0Mzk0MzIxMzU0 | 3,461 | Add T5 to docs | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3461?src=pr&el=h1) Report\n> Merging [#3461](https://codecov.io/gh/huggingface/transformers/pull/3461?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3ee431dd4c720e67e35a449b453d3dc2b15ccfff&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3461?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3461 +/- ##\n=======================================\n Coverage 77.79% 77.80% \n=======================================\n Files 100 100 \n Lines 17049 17051 +2 \n=======================================\n+ Hits 13264 13267 +3 \n+ Misses 3785 3784 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3461?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3461/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `97.58% <ø> (ø)` | |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3461/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.83% <ø> (ø)` | |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3461/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `81.29% <100.00%> (+0.08%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3461/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `94.98% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3461/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.32% <0.00%> (+0.17%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3461?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3461?src=pr&el=footer). Last update [3ee431d...d57c03b](https://codecov.io/gh/huggingface/transformers/pull/3461?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,585 | 1,585 | MEMBER | null | - [x] Copy-paste from BART docs
- [x] Polish the main docs page
- [x] Improve docstrings in `modeling_t5.py` and `modeling_tf_t5.py` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3461/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3461/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3461",
"html_url": "https://github.com/huggingface/transformers/pull/3461",
"diff_url": "https://github.com/huggingface/transformers/pull/3461.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3461.patch",
"merged_at": 1585321036000
} |
https://api.github.com/repos/huggingface/transformers/issues/3460 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3460/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3460/comments | https://api.github.com/repos/huggingface/transformers/issues/3460/events | https://github.com/huggingface/transformers/issues/3460 | 588,601,234 | MDU6SXNzdWU1ODg2MDEyMzQ= | 3,460 | Error : forward() got an unexpected keyword argument 'inputs_embeds' | {
"login": "CNelias",
"id": 34754896,
"node_id": "MDQ6VXNlcjM0NzU0ODk2",
"avatar_url": "https://avatars.githubusercontent.com/u/34754896?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CNelias",
"html_url": "https://github.com/CNelias",
"followers_url": "https://api.github.com/users/CNelias/followers",
"following_url": "https://api.github.com/users/CNelias/following{/other_user}",
"gists_url": "https://api.github.com/users/CNelias/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CNelias/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CNelias/subscriptions",
"organizations_url": "https://api.github.com/users/CNelias/orgs",
"repos_url": "https://api.github.com/users/CNelias/repos",
"events_url": "https://api.github.com/users/CNelias/events{/privacy}",
"received_events_url": "https://api.github.com/users/CNelias/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"I can run the following code succesfully: \r\n\r\n```\r\nfrom transformers import GPT2Model, GPT2Tokenizer\r\nimport torch\r\n\r\nmodel = GPT2Model.from_pretrained('gpt2')\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2', pad_token='<PAD>')\r\n\r\ninput_ids = tokenizer.encode(\"Hello, how are you?\", return_tensors='pt')\r\ninputs_embeds = model.wte(input_ids)\r\n\r\nmodel(inputs_embeds=inputs_embeds) # runs without error\r\n```\r\n\r\nCan you update `transformers` to the most current version and verify that you can run the code snippet I posted? \r\n\r\n",
"Running ```conda update transformers``` returned that I already have the latest version.\r\nAs for you snippet, I still get the same error : \r\n```Python\r\nTraceback (most recent call last):\r\n\r\n File \"<ipython-input-136-d0df910b9d57>\", line 10, in <module>\r\n model(inputs_embeds=inputs_embeds) # runs without error\r\n\r\n File \"C:\\Users\\cnelias\\Anaconda3\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 547, in __call__\r\n result = self.forward(*input, **kwargs)\r\n\r\nTypeError: forward() got an unexpected keyword argument 'inputs_embeds'\r\n```\r\nIs this because I installed ```transformers``` with ```conda``` instead of ```pip``` ?\r\n\r\nEdit : this is indeed probably a conda issue. When I run the snippet in ```atom``` (with the python depedency and not anaconda) instead of ```spyder```, then it works.",
"I get the same error . upgrading transformers vai pip doesn't solve the problem. any solution?"
] | 1,585 | 1,597 | 1,585 | NONE | null | Hello,
I am trying to train a GPT2 from scratch using ```modeling_gpt2.py``` as a base.
I declared the model as follow :
```Python
config = GPT2Config(vocab_size = VELSIZE, n_positions = SEQLEN, n_embd = EMBEDSIZE, n_layer = NUMLAYER,n_ctx = SEQLEN, n_head = NUMHEAD)
model = GPT2Model(config)
```
I don't need to use the built-in embeddings for my application and would like to pass my input tensor as-is, but trying ```model(inputs_embeds = test)```, ```model.forward(inputs_embeds = test)```, ```model(input_ids=None,
past=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=test)``` or any other variant I can think of always results in the following error :
```Python
Traceback (most recent call last):
File "<ipython-input-52-616a2eb9b3f4>", line 7, in <module>
inputs_embeds=testx)
File "C:\Users\cnelias\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'inputs_embeds'
```
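For reference, the call pattern I am aiming for, on a transformers version where `GPT2Model.forward` does accept `inputs_embeds`, would be roughly this sketch (sizes are illustrative, not my actual config):
```python
import torch
from transformers import GPT2Config, GPT2Model

config = GPT2Config(n_embd=64, n_layer=2, n_head=2)
model = GPT2Model(config)

# a batch of precomputed embeddings, shape (batch, seq_len, n_embd)
test = torch.randn(1, 10, config.n_embd)

outputs = model(inputs_embeds=test)
last_hidden_state = outputs[0]  # shape (1, 10, 64)
```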
Is this a bug or am I doing it wrong ? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3460/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3459 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3459/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3459/comments | https://api.github.com/repos/huggingface/transformers/issues/3459/events | https://github.com/huggingface/transformers/pull/3459 | 588,599,663 | MDExOlB1bGxSZXF1ZXN0Mzk0MzE2MDcy | 3,459 | [Docs] Add better explanation to check `docs` locally. | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3459/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3459",
"html_url": "https://github.com/huggingface/transformers/pull/3459",
"diff_url": "https://github.com/huggingface/transformers/pull/3459.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3459.patch",
"merged_at": 1585661417000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3458 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3458/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3458/comments | https://api.github.com/repos/huggingface/transformers/issues/3458/events | https://github.com/huggingface/transformers/issues/3458 | 588,570,744 | MDU6SXNzdWU1ODg1NzA3NDQ= | 3,458 | WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 2, distributed training: False, 16-bits training: True | {
"login": "keloemma",
"id": 40454218,
"node_id": "MDQ6VXNlcjQwNDU0MjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/40454218?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/keloemma",
"html_url": "https://github.com/keloemma",
"followers_url": "https://api.github.com/users/keloemma/followers",
"following_url": "https://api.github.com/users/keloemma/following{/other_user}",
"gists_url": "https://api.github.com/users/keloemma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/keloemma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keloemma/subscriptions",
"organizations_url": "https://api.github.com/users/keloemma/orgs",
"repos_url": "https://api.github.com/users/keloemma/repos",
"events_url": "https://api.github.com/users/keloemma/events{/privacy}",
"received_events_url": "https://api.github.com/users/keloemma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"?how to solve it"
] | 1,585 | 1,687 | 1,585 | NONE | null | I am trying to use the script run_glue.py to fine-tune Flaubert for French data using huggingface's transformers, and I am getting this error:
```
03/27/2020 11:54:48 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 2, distributed training: False, 16-bits training: True
Traceback (most recent call last):
File "/home/getalp/kelodjoe/transformers/examples/run_glue.py", line 693, in <module>
main()
File "/home/getalp/kelodjoe/transformers/examples/run_glue.py", line 613, in main
config_class, model_class, tokenizer_class = MODEL_CLASSES[args.model_type]
KeyError: 'flaubert'
```
Could you help me find out how to solve it?
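For context, the failing lookup is the script's `MODEL_CLASSES` dict, which maps the `--model_type` flag to (config, model, tokenizer) classes. A sketch of the kind of entry that seems to be missing, assuming a transformers version that ships the Flaubert classes:
```python
# hypothetical addition next to the existing MODEL_CLASSES definition in examples/run_glue.py
from transformers import FlaubertConfig, FlaubertForSequenceClassification, FlaubertTokenizer

MODEL_CLASSES["flaubert"] = (
    FlaubertConfig,
    FlaubertForSequenceClassification,
    FlaubertTokenizer,
)
```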
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3458/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3458/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3457 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3457/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3457/comments | https://api.github.com/repos/huggingface/transformers/issues/3457/events | https://github.com/huggingface/transformers/issues/3457 | 588,531,060 | MDU6SXNzdWU1ODg1MzEwNjA= | 3,457 | ImportError: cannot import name 'TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING' | {
"login": "pascalhuszar",
"id": 45284935,
"node_id": "MDQ6VXNlcjQ1Mjg0OTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/45284935?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pascalhuszar",
"html_url": "https://github.com/pascalhuszar",
"followers_url": "https://api.github.com/users/pascalhuszar/followers",
"following_url": "https://api.github.com/users/pascalhuszar/following{/other_user}",
"gists_url": "https://api.github.com/users/pascalhuszar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pascalhuszar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pascalhuszar/subscriptions",
"organizations_url": "https://api.github.com/users/pascalhuszar/orgs",
"repos_url": "https://api.github.com/users/pascalhuszar/repos",
"events_url": "https://api.github.com/users/pascalhuszar/events{/privacy}",
"received_events_url": "https://api.github.com/users/pascalhuszar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Importing `TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING` from `transformers` works for me. Can you update your transformers library and see whether you still get the import error.",
"Hi,\r\n\r\nI have the same issue here. I have installed transformers from source with the latest update.\r\n But still this **ImportError: cannot import name 'TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING'**",
"Somehow, on a second try `pip install transformers` worked for me. But cant tell why it didnt worked at the beginning.\r\n",
"Thanks for the quick response. I will try again by installing it not from source.\r\n...\r\nNope, still not working.\r\n",
"Make sure you really update and install from source. If in doubt, recreate your virtual env.",
"`pip uninstall transformers ` and `pip install transformers ` make sure your transformers version is update.",
"Thanks, I got around the problem by converting it to pytorch model. But now I am facing other issues ! Anyway thank you again for your helps"
] | 1,585 | 1,586 | 1,585 | NONE | null | Wanted to use the ner demo, but after running:
`!python3 ./ner/run_tf_ner.py --data_dir /data --model_type bert --labels .data/labels.txt --model_name_or_path bert-base-multilingual-cased --output_dir germeval-model --max_seq_length 128 --num_train_epochs 3 --per_device_train_batch_size 32 --save_steps 750 --seed 1 --do_train --do_eval --do_predict`
I encounter the ImportError.
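A quick sanity check I can run, assuming a standard pip install, to see what the installed package actually exposes:
```python
import transformers

print(transformers.__version__)
# per the thread, this name only exists in sufficiently recent releases
from transformers import TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING
```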
Any suggestions? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3457/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3457/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3456 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3456/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3456/comments | https://api.github.com/repos/huggingface/transformers/issues/3456/events | https://github.com/huggingface/transformers/issues/3456 | 588,465,885 | MDU6SXNzdWU1ODg0NjU4ODU= | 3,456 | Fine-tuning with BertForSequenceClassification on custom dataset yields a model that outputs only the label with highest support in training set | {
"login": "royeis",
"id": 55491131,
"node_id": "MDQ6VXNlcjU1NDkxMTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/55491131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royeis",
"html_url": "https://github.com/royeis",
"followers_url": "https://api.github.com/users/royeis/followers",
"following_url": "https://api.github.com/users/royeis/following{/other_user}",
"gists_url": "https://api.github.com/users/royeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royeis/subscriptions",
"organizations_url": "https://api.github.com/users/royeis/orgs",
"repos_url": "https://api.github.com/users/royeis/repos",
"events_url": "https://api.github.com/users/royeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/royeis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@royeis did you find solution for this? I am facing the same issue. ",
"@SarikGhazarian \r\nEnded up solving this with a workaround. Constructed a Pytorch model that had huggingface's BertModel as a module and another linear layer that received Bert's outputs and acted as a classifier. Bert module parameters were frozen and training worked properly from there."
] | 1,585 | 1,595 | 1,591 | NONE | null | Hello!
I have a custom dataset that I wish to fine-tune BERT on for classification. The examples consist of 3 sequences each, and the label set is {0, 1, ..., 9}. The training data has 570 examples, and validation has 150 examples.
The encoding of the input examples is as follows:
`tmp_enc = sequence_a + ' [SEP] ' + sequence_b + ' [SEP] ' + sequence_c`
`enc = tokenizer.encode(tmp_enc, add_special_tokens=True)`
where
`tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')`
my training loop is:
```
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=0,
num_training_steps=5*len(X_train))
train_losses = []
valid_losses = []
for epoch in range(5):
#train
model.train();
epoch_train_loss = 0
for i in range(len(X_train)):
    model.zero_grad()
    loss, logits = model(input_ids=X_train[i], labels=y_train[i].unsqueeze(0))
    loss.backward()
    # clip before stepping; clipping after optimizer.step() has no effect
    nn.utils.clip_grad_norm_(model.parameters(), 1.0)
    optimizer.step()
    scheduler.step()
epoch_train_loss += loss.item()
if i % 100 == 0:
print(f'example {i + 1}, loss: {loss.item()}')
avg_loss = epoch_train_loss / len(X_train)
train_losses.append(avg_loss)
print(f'epoch {epoch + 1}, average train loss: {avg_loss}')
#validation
model.eval();
epoch_valid_loss = 0
for i in range(len(X_test)):
with torch.no_grad():
loss, logits = model(input_ids=X_test[i], labels=y_test[i].unsqueeze(0))
epoch_valid_loss += loss.item()
avg_loss = epoch_valid_loss / len(X_test)
valid_losses.append(avg_loss)
print(f'epoch {epoch + 1} done. average valid loss: {avg_loss}')
```
where
`model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=10)`
and all layers but the classifier are frozen:
```
for param in model.bert.parameters():
param.requires_grad = False
```
My issue is that loss is not decreasing and the model eventually always outputs a single label, the one with the highest support in the data (validated by removing the examples with the highest-occurring label and getting similar results). Also, the logits for all examples passed to the model after training are quite similar, e.g.:
```
logits: tensor([[ 1.5107, -0.0595, 0.3490, -0.8669, -0.8848, -0.8097, 0.2685, 0.7246,
-0.3133, 0.4215]]), true label: 4
logits: tensor([[ 1.4187, -0.3009, 0.3776, -0.5615, -0.7881, -0.5849, 0.3391, 0.5756,
-0.3861, 0.3639]]), true label: 6
logits: tensor([[ 1.3919, -0.4227, 0.3455, -0.4626, -0.7795, -0.5608, 0.2996, 0.5791,
-0.4275, 0.3700]]), true label: 8
```
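For reference, the workaround described in the follow-up comments, a frozen `BertModel` feeding a separate trainable linear head, would look roughly like this minimal sketch (the class and its name are illustrative; assumes the 2.x API where the model returns a tuple whose second element is the pooled representation):
```python
import torch.nn as nn
from transformers import BertModel

class FrozenBertClassifier(nn.Module):  # hypothetical name, for illustration only
    def __init__(self, num_labels=10):
        super().__init__()
        self.bert = BertModel.from_pretrained('bert-base-uncased')
        for param in self.bert.parameters():
            param.requires_grad = False  # freeze the encoder
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids):
        _, pooled_output = self.bert(input_ids)  # pooled [CLS] representation
        return self.classifier(pooled_output)
```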
I have verified that a linear classifier can achieve better results (65% accuracy as opposed to 30%) on the embeddings from pytorch_transformers' BertModel. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3456/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3456/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3455 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3455/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3455/comments | https://api.github.com/repos/huggingface/transformers/issues/3455/events | https://github.com/huggingface/transformers/pull/3455 | 588,450,358 | MDExOlB1bGxSZXF1ZXN0Mzk0MTk0MTI0 | 3,455 | Tokenizers: Start cleaning examples a little | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Merging this as discussed (preliminarily) with @LysandreJik last week",
"This is great."
] | 1,585 | 1,585 | 1,585 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3455/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3455/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3455",
"html_url": "https://github.com/huggingface/transformers/pull/3455",
"diff_url": "https://github.com/huggingface/transformers/pull/3455.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3455.patch",
"merged_at": 1585739621000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3454 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3454/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3454/comments | https://api.github.com/repos/huggingface/transformers/issues/3454/events | https://github.com/huggingface/transformers/issues/3454 | 588,440,095 | MDU6SXNzdWU1ODg0NDAwOTU= | 3,454 | NER pipeline usage examples | {
"login": "jensam",
"id": 13467943,
"node_id": "MDQ6VXNlcjEzNDY3OTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/13467943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jensam",
"html_url": "https://github.com/jensam",
"followers_url": "https://api.github.com/users/jensam/followers",
"following_url": "https://api.github.com/users/jensam/following{/other_user}",
"gists_url": "https://api.github.com/users/jensam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jensam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jensam/subscriptions",
"organizations_url": "https://api.github.com/users/jensam/orgs",
"repos_url": "https://api.github.com/users/jensam/repos",
"events_url": "https://api.github.com/users/jensam/events{/privacy}",
"received_events_url": "https://api.github.com/users/jensam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The German and English CoNLL datasets use the IOB1 tagging scheme. `B-` is only used to separate two adjacent entities of the same type."
] | 1,585 | 1,585 | 1,585 | NONE | null | In the NER usage examples:
https://huggingface.co/transformers/usage.html#named-entity-recognition
Can you explain why the examples give only I- entities? For example:
('New', 'I-LOC'), ('York', 'I-LOC'), ('City', 'I-LOC')
('Hu', 'I-ORG'), ('##gging', 'I-ORG'), ('Face', 'I-ORG'), ('Inc', 'I-ORG')
Why are there no B-s? As in:
('New', 'B-LOC'), ('York', 'I-LOC'), ('City', 'I-LOC') etc | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3454/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3454/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3453 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3453/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3453/comments | https://api.github.com/repos/huggingface/transformers/issues/3453/events | https://github.com/huggingface/transformers/pull/3453 | 588,431,279 | MDExOlB1bGxSZXF1ZXN0Mzk0MTc4Njgx | 3,453 | Create card for the model: GPT-2-finetuned-covid-bio-medrxiv | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3453/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3453",
"html_url": "https://github.com/huggingface/transformers/pull/3453",
"diff_url": "https://github.com/huggingface/transformers/pull/3453.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3453.patch",
"merged_at": 1585656064000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3452 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3452/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3452/comments | https://api.github.com/repos/huggingface/transformers/issues/3452/events | https://github.com/huggingface/transformers/issues/3452 | 588,425,339 | MDU6SXNzdWU1ODg0MjUzMzk= | 3,452 | Write With Transformer returning a 502 on gpt2/xl model | {
"login": "sergalbutt",
"id": 4271534,
"node_id": "MDQ6VXNlcjQyNzE1MzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4271534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sergalbutt",
"html_url": "https://github.com/sergalbutt",
"followers_url": "https://api.github.com/users/sergalbutt/followers",
"following_url": "https://api.github.com/users/sergalbutt/following{/other_user}",
"gists_url": "https://api.github.com/users/sergalbutt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sergalbutt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sergalbutt/subscriptions",
"organizations_url": "https://api.github.com/users/sergalbutt/orgs",
"repos_url": "https://api.github.com/users/sergalbutt/repos",
"events_url": "https://api.github.com/users/sergalbutt/events{/privacy}",
"received_events_url": "https://api.github.com/users/sergalbutt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"We had to turn that model off because it was really expensive/hard to operationalize.\r\n\r\nWe'll add a warning to that particular webpage. (cc @LysandreJik)"
] | 1,585 | 1,585 | 1,585 | NONE | null | When setting the model size to **gpt2/xl**, WwT gets stuck on loading the autocomplete.
Checking Chrome's console tells me
"Failed to load resource: the server responded with a status of 502 (Bad Gateway)"
Having a quick look through the older tickets, I saw that this has happened before. #2121 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3452/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3451 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3451/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3451/comments | https://api.github.com/repos/huggingface/transformers/issues/3451/events | https://github.com/huggingface/transformers/pull/3451 | 588,421,495 | MDExOlB1bGxSZXF1ZXN0Mzk0MTcwNjMy | 3,451 | [examples] SummarizationDataset cleanup | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@acarrera94 does this look OK?",
"This looks great! Something else we might want to figure out is the best configuration of max_seq_length and max_target_length. For example, the tesla V100 in google colab can for sure handle about max_seq_length=768 and max_target_length=56 with a batch size of 4. It can't handle max_seq_length=1028 with the same configuration, since it will run out of memory. ",
"Going to address by trimming batches so that they don't add extra padding. (EDIT: this is not enough, still need to truncate on both sides.)",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3451?src=pr&el=h1) Report\n> Merging [#3451](https://codecov.io/gh/huggingface/transformers/pull/3451?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b0ad06951708b782e45b02a4d092f6fcde68a9b9&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3451?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3451 +/- ##\n==========================================\n+ Coverage 78.02% 78.03% +0.01% \n==========================================\n Files 104 104 \n Lines 17709 17709 \n==========================================\n+ Hits 13817 13819 +2 \n+ Misses 3892 3890 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3451?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3451/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.10% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3451/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.96% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3451?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3451?src=pr&el=footer). Last update [b0ad069...94a0baa](https://codecov.io/gh/huggingface/transformers/pull/3451?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hard to unittest cause of the PL dependency but I verified that the script runs locally.",
"@sshleifer, in MT code it is common to group the batches together to minimize padding. Do you think this is worth implementing? One downside of this method is that is doesn't really work on TPU or architectures that expect fixed sizes. ",
"Yes, is something like [SortishSampler](https://github.com/fastai/fastai/blob/master/fastai/text/data.py#L99) the right idea?\r\n\r\nIt seems easy to implement if you only consider padding on the `source` side. Do you know of an intelligent `key_func` to sort examples that considers both sides?\r\n\r\n\r\n\r\n\r\n",
"lgtm"
] | 1,585 | 1,586 | 1,586 | CONTRIBUTOR | null | - factor out redundant tokenization logic
- For both articles and summaries, batches are "trimmed" so that no columns consist entirely of `pad_token_id` (see the sketch after this list).
- The max sizes are 1024 for source and 56 for target. This ensures that truncation never happens for summaries, and rarely happens for articles. These values are unchanged, but converted to command line arguments for ease of use.
- added small unittest (just for the dataset).
- Verified manually on GPU. Loss goes down, peak memory usage identical, speed improved by ~10%.
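A minimal sketch of the trimming idea mentioned above (hypothetical helper; assumes right-padded batches):
```python
import torch

def trim_batch(input_ids: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    """Drop columns that are entirely pad_token_id across the batch."""
    keep_column_mask = input_ids.ne(pad_token_id).any(dim=0)
    return input_ids[:, keep_column_mask]
```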
Summary Statistics on summary lengths (in # tokens) for cnn_dm test data:

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3451/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3451",
"html_url": "https://github.com/huggingface/transformers/pull/3451",
"diff_url": "https://github.com/huggingface/transformers/pull/3451.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3451.patch",
"merged_at": 1586300758000
} |
https://api.github.com/repos/huggingface/transformers/issues/3450 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3450/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3450/comments | https://api.github.com/repos/huggingface/transformers/issues/3450/events | https://github.com/huggingface/transformers/pull/3450 | 588,407,405 | MDExOlB1bGxSZXF1ZXN0Mzk0MTU4OTE4 | 3,450 | Create card for model GPT-2-finetuned-CORD19 | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3450/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3450",
"html_url": "https://github.com/huggingface/transformers/pull/3450",
"diff_url": "https://github.com/huggingface/transformers/pull/3450.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3450.patch",
"merged_at": 1585228210000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3449 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3449/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3449/comments | https://api.github.com/repos/huggingface/transformers/issues/3449/events | https://github.com/huggingface/transformers/pull/3449 | 588,363,258 | MDExOlB1bGxSZXF1ZXN0Mzk0MTIzMTQ3 | 3,449 | revert unpin isort commit | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This syntax is not support by PyPi unfortunately, so we had to revert this.",
"> This syntax is not support by PyPi unfortunately, so we had to revert this.\r\n\r\nI see...is there another way of getting the pinned version? \r\nIn general, do we need isort? Doesn't black also sort the import statements? ",
"Other ways are:\r\n- bug the isort maintainer to release a version containing this commit\r\n- push a new forked package to PyPI, like `isort-pvp` or `isort-black-compat` or whatever.\r\n\r\nYes we need isort.",
"I have also observed this issue with #3402"
] | 1,585 | 1,585 | 1,585 | MEMBER | null | This PR reverts the change of
https://github.com/huggingface/transformers/commit/fbc5bf10cfe4d4ca81f8daacc148b0abd51dda5a
Using the unpinned version of `isort` makes black and `isort` disagree in some cases.
In this PR the unpinned version of `isort` leads to a failed code quality test:
https://github.com/huggingface/transformers/pull/3411
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3449/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3449",
"html_url": "https://github.com/huggingface/transformers/pull/3449",
"diff_url": "https://github.com/huggingface/transformers/pull/3449.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3449.patch",
"merged_at": 1585243159000
} |
https://api.github.com/repos/huggingface/transformers/issues/3448 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3448/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3448/comments | https://api.github.com/repos/huggingface/transformers/issues/3448/events | https://github.com/huggingface/transformers/issues/3448 | 588,350,257 | MDU6SXNzdWU1ODgzNTAyNTc= | 3,448 | Failure to load checkpoints saved during distributed training | {
"login": "glnmario",
"id": 15987282,
"node_id": "MDQ6VXNlcjE1OTg3Mjgy",
"avatar_url": "https://avatars.githubusercontent.com/u/15987282?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/glnmario",
"html_url": "https://github.com/glnmario",
"followers_url": "https://api.github.com/users/glnmario/followers",
"following_url": "https://api.github.com/users/glnmario/following{/other_user}",
"gists_url": "https://api.github.com/users/glnmario/gists{/gist_id}",
"starred_url": "https://api.github.com/users/glnmario/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/glnmario/subscriptions",
"organizations_url": "https://api.github.com/users/glnmario/orgs",
"repos_url": "https://api.github.com/users/glnmario/repos",
"events_url": "https://api.github.com/users/glnmario/events{/privacy}",
"received_events_url": "https://api.github.com/users/glnmario/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834052847,
"node_id": "MDU6TGFiZWwxODM0MDUyODQ3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Finetuning)",
"name": "Ex: LM (Finetuning)",
"color": "26FFF8",
"default": false,
"description": "Related to language modeling fine-tuning"
},
{
"id": 1834053007,
"node_id": "MDU6TGFiZWwxODM0MDUzMDA3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Pretraining)",
"name": "Ex: LM (Pretraining)",
"color": "76FFAF",
"default": false,
"description": "Related to language modeling pre-training"
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | NONE | null | # 🐛 Bug
## Information
Model I am using: Bert
Language I am using the model on: English, German, Swedish
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) using distributed training: e.g.
`python3 -m torch.distributed.launch --nproc_per_node=4 --nnodes=1 --node_rank=0 run_language_modeling.py --output_dir output_dir [--args...]`
2. When the training is over, try to load the final model: BertModel.from_pretrained('output_dir'): this works.
3. Then, try to load a checkpoint: e.g., `BertModel.from_pretrained('output_dir/checkpoint-1000')`: this gives a runtime error:
```
Traceback (most recent call last):
File "/cluster/shared/nlpl/software/modules/in5550/202002/lib/python3.7/site-packages/transformers/modeling_utils.py", line 470, in from_pretrained
state_dict = torch.load(resolved_archive_file, map_location="cpu")
File "/cluster/shared/nlpl/software/modules/pytorch/1.4.0/lib/python3.7/site-packages/torch/serialization.py", line 529, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/cluster/shared/nlpl/software/modules/pytorch/1.4.0/lib/python3.7/site-packages/torch/serialization.py", line 709, in _legacy_load
deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
RuntimeError: storage has wrong size: expected 4434893008627221919 got 2359296
```
## Expected behavior
We should be able to load from checkpoints saved during distributed training as well.
## Suggested solution
Check `if args.local_rank == -1 or torch.distributed.get_rank() == 0` on line 370 (just like on line 736).
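Concretely, the guard would wrap the checkpoint-saving block, roughly like this sketch (the surrounding names `args`, `model`, `tokenizer` and `output_dir` come from the script; exact details may differ):
```python
# sketch: only let the rank-0 process write checkpoints
if args.local_rank == -1 or torch.distributed.get_rank() == 0:
    model_to_save = model.module if hasattr(model, "module") else model  # unwrap DistributedDataParallel
    model_to_save.save_pretrained(output_dir)
    tokenizer.save_pretrained(output_dir)
```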
## Environment info
- `transformers` version: 2.5.0
- Platform: UNIX
- Python version: Python 3.5.3
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?): No
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Distributed (1 node, 4 GPUs)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3448/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3448/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3447 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3447/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3447/comments | https://api.github.com/repos/huggingface/transformers/issues/3447/events | https://github.com/huggingface/transformers/issues/3447 | 588,329,149 | MDU6SXNzdWU1ODgzMjkxNDk= | 3,447 | Save models after each epoch | {
"login": "GCHQResearcher92457",
"id": 62057951,
"node_id": "MDQ6VXNlcjYyMDU3OTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/62057951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GCHQResearcher92457",
"html_url": "https://github.com/GCHQResearcher92457",
"followers_url": "https://api.github.com/users/GCHQResearcher92457/followers",
"following_url": "https://api.github.com/users/GCHQResearcher92457/following{/other_user}",
"gists_url": "https://api.github.com/users/GCHQResearcher92457/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GCHQResearcher92457/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GCHQResearcher92457/subscriptions",
"organizations_url": "https://api.github.com/users/GCHQResearcher92457/orgs",
"repos_url": "https://api.github.com/users/GCHQResearcher92457/repos",
"events_url": "https://api.github.com/users/GCHQResearcher92457/events{/privacy}",
"received_events_url": "https://api.github.com/users/GCHQResearcher92457/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I also need this. Did you figure out a better way of doing it other than counting the number of checkpoints required for an epoch manually?",
"> I also need this. Did you figure out a better way of doing it other than counting the number of checkpoints required for an epoch manually?\r\n\r\nWithout changing the code, no I don't think there's an alternative to counting manually. Looks like it would be a simple matter of repeating [this line](https://github.com/huggingface/transformers/blob/0866669e751bef636fa693b704a28c1fea9a17f3/src/transformers/trainer.py#L521) a few lines further down at the end of the epoch loop.",
"not sure what you mean by manually counting..but, if you just add this line before the start of each epoch(i.e [here](https://github.com/huggingface/transformers/blob/0866669e751bef636fa693b704a28c1fea9a17f3/src/transformers/trainer.py#L463)), you can make the model save after each epoch. The len(epoch_iterator) is the number of batches in an epoch. \r\n\r\n`self.args.save_steps = len(epoch_iterator)`",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,601 | 1,601 | NONE | null | # 🚀 Feature request
In the various training scripts in `examples`, would it be better to checkpoint the model at the end of each epoch, as well as every `save_steps` iterations as specified by the user?
## Motivation
I suppose for language modelling, saving the model after each epoch is not as important, but for anything supervised (and some other applications) it seems natural to want checkpoints after each epoch. There are plenty of examples in the literature where one wants to inspect models when they have seen each training example some specific number of times. I have been doing some experiments recently where this was the case (with `run_language_modeling.py` actually) and found myself having to manually enter the number of iterations per epoch as `save_steps` to get the desired checkpoints.
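Concretely, the change I have in mind looks roughly like this sketch (names loosely follow the example scripts; checkpoint rotation and logging omitted):
```python
import os

# hypothetical end-of-epoch addition inside the training loop of the example scripts
for epoch in train_iterator:
    for step, batch in enumerate(epoch_iterator):
        ...  # existing training step, unchanged
    if args.local_rank in (-1, 0):
        epoch_dir = os.path.join(args.output_dir, f"checkpoint-epoch-{epoch}")
        os.makedirs(epoch_dir, exist_ok=True)
        model_to_save = model.module if hasattr(model, "module") else model
        model_to_save.save_pretrained(epoch_dir)
        tokenizer.save_pretrained(epoch_dir)
```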
## Your contribution
Would be a simple change to the various training scripts. Happy to do a PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3447/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3447/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3446 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3446/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3446/comments | https://api.github.com/repos/huggingface/transformers/issues/3446/events | https://github.com/huggingface/transformers/issues/3446 | 588,328,329 | MDU6SXNzdWU1ODgzMjgzMjk= | 3,446 | Special tokens to pre-trained BART model | {
"login": "loretoparisi",
"id": 163333,
"node_id": "MDQ6VXNlcjE2MzMzMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/163333?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/loretoparisi",
"html_url": "https://github.com/loretoparisi",
"followers_url": "https://api.github.com/users/loretoparisi/followers",
"following_url": "https://api.github.com/users/loretoparisi/following{/other_user}",
"gists_url": "https://api.github.com/users/loretoparisi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/loretoparisi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loretoparisi/subscriptions",
"organizations_url": "https://api.github.com/users/loretoparisi/orgs",
"repos_url": "https://api.github.com/users/loretoparisi/repos",
"events_url": "https://api.github.com/users/loretoparisi/events{/privacy}",
"received_events_url": "https://api.github.com/users/loretoparisi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi!\r\nTwo points that might be helpful.\r\n- The `add_special_tokens` functionality should work the same as `RobertaTokenizer`. \r\n- `<s>` is already the `bos` token, so I don't expect it to be broken up.\r\n\r\nLet me know if that resolves your issue, thanks!",
"\r\nNot sure how to go about doing this ?\r\n@sshleifer any code example \r\n\r\ni see BartTokenizer is essentially RobertaTokenizer which is GPT2Tokenizer\r\n\r\n\r\nfor fine-tuning BART in lightning base we have\r\n\r\n ```\r\n self.tokenizer = AutoTokenizer.from_pretrained(\r\n self.hparams.tokenizer_name if self.hparams.tokenizer_name else self.hparams.model_name_or_path,\r\n cache_dir=cache_dir,\r\n )\r\n```\r\n\r\nCan we add the list of special tokens here ?\r\nIf then how ?",
"Are you trying to add tokens to the vocab and give them new ids?\r\nA specific example with what you expect the tokenizer to produce would be helpful.\r\n\r\nI tried the following and it doesn't work as OP intended, afaict.\r\n\r\n```python\r\nfrom transformers import BartTokenizer\r\ntokenizer = BartTokenizer.from_pretrained('facebook/bart-large',\r\n additional_special_tokens=[\"<startoflyrics>\", 'dringus'])\r\n\r\nencoded = tokenizer.encode_plus(' <startoflyrics> dringus')['input_ids'] # [0, 3, 3, 2]\r\ntokenizer.decode(encoded) # '<s><unk><unk></s>'\r\n```\r\n\r\n",
"Yes @sshleifer i want to add new tokens to the vocab and give them new ids\r\nHow to go about doing it ?",
"@patrickvonplaten @LysandreJik what is the canonical way to add new non-special tokens? \r\n(1) Is there an easier way than making a new vocab and merges file?\r\n(2) If not, is there an example of how to do that?",
"I am only familiar with the `add_special_tokens` functionality for new tokens that get the \"special tokens\" treatment.\r\n\r\nFor normal tokens, one can use `add_tokens` as far as I know. ",
"```\r\nself.tokenizer = AutoTokenizer.from_pretrained(\r\n self.hparams.tokenizer_name if self.hparams.tokenizer_name else self.hparams.model_name_or_path,\r\n cache_dir=cache_dir,\r\n )\r\nself.model = MODEL_MODES[mode].from_pretrained(\r\n self.hparams.model_name_or_path,\r\n from_tf=bool(\".ckpt\" in self.hparams.model_name_or_path),\r\n config=self.config,\r\n cache_dir=cache_dir,\r\n )\r\nself.tokenizer.add_tokens(['multi-sentence', ':snt1', ':snt2', ':snt3', ':snt4', ':snt5', ':snt5', ':snt6', ':snt7', ':snt8', ':snt9', ':root', ':ARG1', ':mod', ':op1', ':ARG0', ':ARG0-of', ':name', ':op2', ':ARG2', ':ARG1-of', ':purpose', ':prep-in', ':time', ':li', ':quant', ':unit', ':poss', ':ARG3', ':location', ':domain', ':part-of', ':manner', ':polarity', ':condition', ':ARG4', ':extent', ':time-of', ':location-of', ':op3', ':beneficiary', ':topic', ':degree', ':ARG2-of', ':example', ':extent-of', ':month', ':day', ':op4', ':ARG5', ':manner-of', ':concession', ':duration', ':path', ':mode', ':medium', ':ord', ':value', ':destination', ':source', ':direction', ':instrument-of', ':consist-of', ':dayperiod', ':frequency', ':year', ':quant-of', ':weekday', ':compared-to', ':prep-on', ':ARG3-of', ':degree-of', ':prep-as', ':instrument', ':op5', ':prep-from', ':prep-to', ':century', ':era', ':condition-of', ':op6', ':op7', ':concession-of', ':polite', ':age', ':prep-with', ':decade', ':poss-of', ':prep-without', ':prep-in-addition-to', ':accompanier', ':ord-of', ':direction-of', ':prep-against', ':prep-at', ':subevent-of', ':snt10', ':snt11', ':duration-of', ':prep-for', ':source-of', ':frequency-of', ':topic-of', ':season', ':path-of', ':op8', ':op9', ':prep-among', ':prep-on-behalf-of', ':subevent', ':part', ':ARG4-of', ':beneficiary-of', ':scale', ':example-of', ':prep-by', ':range', ':purpose-of', ':destination-of', ':op10', ':op1-of', ':name-of', ':medium-of', ':prep-along-with', ':conj-as-if', ':timezone', ':prep-under', ':accompanier-of', ':age-of', ':op11', ':op12', ':op13', ':op14', ':op15', ':prep-amid', ':prep-toward', ':prep-out-of', ':prep-into', ':domain-of', ':ARG7', ':quarter', ':ARG5-of', ':op16', ':op17', ':op18', ':op19', ':op20', ':ARG8', ':ARG9', ':calendar', ':year2', ':ARG6', ':subset-of', ':prep-with-of'])\r\nself.model.resize_token_embeddings(len(self.tokenizer))\r\n```\r\n\r\n\r\nThis worked for me\r\n",
"Yes, the `add_tokens` that @patrickvonplaten and @tuhinjubcse mentionned should get the job done",
"I need to add tokens that will serve as separator in the text generation. For instance:\r\n^Input:^Bitocoin price went down for 10 percent^Caption:^10% OFF^output:^10% reduce of the bitcoin price.\r\nSo in this example is ^input:^, ^Caption:^ and ^output:^. The idea is when I give to the train model the sentence:\r\n^Input:^Bitocoin price went down for 10 percent^Caption:^ it should generate text but the model should learn the static tokens in the example. Should I use add_tokens or add_special_tokens?\r\n"
] | 1,585 | 1,621 | 1,585 | CONTRIBUTOR | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO), where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
Is it possible to add special tokens to the pre-trained BART model? My text uses `<s>` as a sequence separator between sentences, and I would like the encoder to handle it as a whole token; otherwise the tokenizer will break it into pieces such as `<s` or `s>` and learn those instead. Can this be done in the same way we did it for other tokenizers like `GPT2Tokenizer`?
```python
tokenizer = GPT2Tokenizer.from_pretrained(args.out,
unk_token="<unk>",
bos_token="<s>",
eos_token="</s>",
pad_token = "<pad>",
additional_special_tokens=["<startoflyrics>", "<endoflyrics>", "<nl>"])
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3446/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3445 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3445/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3445/comments | https://api.github.com/repos/huggingface/transformers/issues/3445/events | https://github.com/huggingface/transformers/issues/3445 | 588,279,314 | MDU6SXNzdWU1ODgyNzkzMTQ= | 3,445 | run_lm_finetuning on multiple training files | {
"login": "ghtaro",
"id": 24857936,
"node_id": "MDQ6VXNlcjI0ODU3OTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/24857936?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghtaro",
"html_url": "https://github.com/ghtaro",
"followers_url": "https://api.github.com/users/ghtaro/followers",
"following_url": "https://api.github.com/users/ghtaro/following{/other_user}",
"gists_url": "https://api.github.com/users/ghtaro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghtaro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghtaro/subscriptions",
"organizations_url": "https://api.github.com/users/ghtaro/orgs",
"repos_url": "https://api.github.com/users/ghtaro/repos",
"events_url": "https://api.github.com/users/ghtaro/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghtaro/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, this is not currently supported, you would need to implement this yourself. Feel free to open a PR if you do"
] | 1,585 | 1,585 | 1,585 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO), where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
Hi,
I would like to fine-tune Hugging Face's pre-trained BERT model on a relatively large text dataset.
I split the data into multiple files (~10k files) in .raw format in the same folder.
First, I succeeded in running run_lm_finetuning.py on one of the raw files I generated.
Now, I would like to run it on all 10k raw files.
I realised that the train_data_file argument does not accept a folder name (only the path to a single raw file).
I believe we could loop over the 10k training files and incrementally fine-tune the model, but that does not look like the best solution to me...
Could you please tell me if there is a simple way to achieve the above?
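One workaround that can be sketched here (my own suggestion, not a feature of the script) is to concatenate the shards into a single file before training; the folder name `raw_files/` and the output name `train_all.raw` below are placeholders:

```python
# Merge many .raw shards into one file that --train_data_file can point at.
from pathlib import Path

with open("train_all.raw", "w", encoding="utf-8") as out:
    for shard in sorted(Path("raw_files").glob("*.raw")):  # hypothetical folder
        out.write(shard.read_text(encoding="utf-8"))
        out.write("\n")  # keep shards separated by a newline
```

A custom `Dataset` that reads the shards lazily would avoid duplicating the data on disk, at the cost of modifying the script.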
Thank you very much for your help. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3445/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3445/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3444 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3444/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3444/comments | https://api.github.com/repos/huggingface/transformers/issues/3444/events | https://github.com/huggingface/transformers/issues/3444 | 588,181,401 | MDU6SXNzdWU1ODgxODE0MDE= | 3,444 | Import error in example script `run_language_modeling.py` | {
"login": "xiaolul",
"id": 8575650,
"node_id": "MDQ6VXNlcjg1NzU2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8575650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiaolul",
"html_url": "https://github.com/xiaolul",
"followers_url": "https://api.github.com/users/xiaolul/followers",
"following_url": "https://api.github.com/users/xiaolul/following{/other_user}",
"gists_url": "https://api.github.com/users/xiaolul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiaolul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiaolul/subscriptions",
"organizations_url": "https://api.github.com/users/xiaolul/orgs",
"repos_url": "https://api.github.com/users/xiaolul/repos",
"events_url": "https://api.github.com/users/xiaolul/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiaolul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You need to upgrade your version of transformers (to 2.6), or better, to [install from source](https://github.com/huggingface/transformers#run-the-examples).",
"I just pulled the `huggingface/transformers-tensorflow-gpu:2.10.0` docker image, went to the `examples/language-modeling/` folder and ran the following, and I got the same error:\r\n```\r\npython3 run_language_modeling.py --output_dir=/app/data --model_type=distilbert --model_name_or_path=distilbert-base-uncased --do_train --train_data_file=/app/data/train_data.txt --do_eval --eval_data_file=/app/data/eval_data.txt --mlm\r\n```\r\nHaven't tried the workaround above yet.\r\n\r\n\r\nSteps:\r\n- `docker run -it -v `pwd`/data:/app/data huggingface/transformers-tensorflow-gpu:2.10.0`\r\n- `cd workspace/examples/language-modeling/`\r\n- try to run example command using `python3`\r\n\r\n`python3 -m pip show transformers` reports `2.10.0` is installed.\r\n",
"I get the issue (the `master` branch being checked out in the docker build) it just seems like it'd be cool for there to be a simpler way to run the examples in docker. If you wanted to use the `2.9.0` image, you'd have to pull the image and have your script first check out master as of the tag `2.9.0` then install from source, right?\r\n\r\nIt'd be a nice feature if the docker images could run the examples without modification",
"I get the same issue when I `pip install transformers`. When I downgrade to `2.6.0`, it can't import `CONFIG_MAPPING`. Anything from `2.7.0` to `2.10.0` up I get the `MODEL_WITH_LM_HEAD_MAPPING` error",
"Okay, I got it to work for `2.10.0`. I just had to reinstall PyTorch\r\n```\r\npip3 install torch\r\n```"
] | 1,585 | 1,590 | 1,585 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): RobertaForMaskedLM
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on are:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. pip install transformers
2. run the example script `run_language_modeling.py`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Error message:
> Traceback (most recent call last):
>   File "run_language_modeling.py", line 42, in <module>
>     from transformers import (
> ImportError: cannot import name 'MODEL_WITH_LM_HEAD_MAPPING'
## Expected behavior
The script should run.
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.1
- Platform: Ubuntu 18.04
- Python version: 3.7
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?): na
- Using GPU in script?: y
- Using distributed or parallel set-up in script?: n
## Note
The workaround is to use
> from transformers.modeling_auto import MODEL_WITH_LM_HEAD_MAPPING
> from transformers.file_utils import WEIGHTS_NAME
Can you please update the example script? It is confusing ...
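For anyone hitting this, a quick way to confirm the version mismatch and sync with the examples (the example scripts on master track the source tree, so a PyPI release can lag behind):

```bash
python -c "import transformers; print(transformers.__version__)"
# Install from source so the library matches the example scripts:
pip install git+https://github.com/huggingface/transformers
```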
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3444/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3444/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3443 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3443/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3443/comments | https://api.github.com/repos/huggingface/transformers/issues/3443/events | https://github.com/huggingface/transformers/issues/3443 | 588,119,550 | MDU6SXNzdWU1ODgxMTk1NTA= | 3,443 | `run_language_modeling` fails with community model (BioClinicalBERT) | {
"login": "xhluca",
"id": 21180505,
"node_id": "MDQ6VXNlcjIxMTgwNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/21180505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xhluca",
"html_url": "https://github.com/xhluca",
"followers_url": "https://api.github.com/users/xhluca/followers",
"following_url": "https://api.github.com/users/xhluca/following{/other_user}",
"gists_url": "https://api.github.com/users/xhluca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xhluca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xhluca/subscriptions",
"organizations_url": "https://api.github.com/users/xhluca/orgs",
"repos_url": "https://api.github.com/users/xhluca/repos",
"events_url": "https://api.github.com/users/xhluca/events{/privacy}",
"received_events_url": "https://api.github.com/users/xhluca/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834052847,
"node_id": "MDU6TGFiZWwxODM0MDUyODQ3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Finetuning)",
"name": "Ex: LM (Finetuning)",
"color": "26FFF8",
"default": false,
"description": "Related to language modeling fine-tuning"
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
},
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
}
] | closed | false | null | [] | [
"Seems like scibert is also broken. When I run\r\n\r\n```\r\npython run_language_modeling.py \\\r\n --output_dir=output \\\r\n --model_type=scibert \\\r\n --model_name_or_path=allenai/scibert_scivocab_cased \\\r\n --output_dir=/kaggle/working/model \\\r\n --do_train \\\r\n --line_by_line \\\r\n --train_data_file=wikitext-2-raw/wiki.train.raw \\\r\n --do_eval \\\r\n --eval_data_file=wikitext-2-raw/wiki.valid.raw \\\r\n --num_train_epochs=4 \\\r\n --mlm\r\n```\r\n\r\nI get this:\r\n```\r\nEpoch: 0%| | 0/4 [00:00<?, ?it/s]\r\nIteration: 0%| | 0/5942 [00:00<?, ?it/s]\r\nIteration: 0%| | 1/5942 [00:00<33:48, 2.93it/s]\r\nIteration: 0%| | 2/5942 [00:00<29:32, 3.35it/s]\r\nIteration: 0%| | 3/5942 [00:00<25:44, 3.84it/s]\r\nIteration: 0%| | 4/5942 [00:00<24:19, 4.07it/s]\r\nIteration: 0%| | 5/5942 [00:01<22:20, 4.43it/s]Traceback (most recent call last):\r\n File \"run_language_modeling.py\", line 782, in <module>\r\n main()\r\n File \"run_language_modeling.py\", line 732, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer)\r\n File \"run_language_modeling.py\", line 345, in train\r\n loss.backward()\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/tensor.py\", line 195, in backward\r\n torch.autograd.backward(self, gradient, retain_graph, create_graph)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/autograd/__init__.py\", line 99, in backward\r\n allow_unreachable=True) # allow_unreachable flag\r\nRuntimeError: CUDA error: device-side assert triggered\r\n```",
"This is because the tokenizers you mention do not have a `tokenizer_config.json` file on the S3. There should be one limiting the maximum length to 512 tokens.\r\n\r\nHere it is set to 1e12 because it doesn't detect a maximum length in the configuration.\r\n\r\ncc @julien-c ",
"Yes, in that case you would need to pass a `max_block` arg to the script.\r\n\r\nLet us know if it fixes your issue.",
"I see, thanks a lot! Here it would be `max_block=512` right (or whatever is the maximum length supported by that model)?",
"Yes, that's right!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I haven't had the chance to test `max_block=512`, but I hope this solves the problem. If it still persists, I'll re-open this issue."
] | 1,585 | 1,591 | 1,591 | CONTRIBUTOR | null | # 🐛 Bug
The `run_language_modeling.py` script fails when using community models.
## Information
Model I am using (Bert, XLNet ...): BioClinical_BERT
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on are:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. download wikitext-2 raw
2. run the command below
The command yields the following error:
```
python run_language_modeling.py \
--output_dir=output \
--model_type=BioClinicalBERT \
--model_name_or_path=emilyalsentzer/Bio_ClinicalBERT \
--output_dir=/kaggle/working/model \
--do_train \
--line_by_line \
--train_data_file=wikitext-2-raw/wiki.train.raw \
--do_eval \
--eval_data_file=wikitext-2-raw/wiki.valid.raw \
--num_train_epochs=4 \
--mlm
```
the following error will appear:
```
File "run_language_modeling.py", line 782, in <module>
main()
File "run_language_modeling.py", line 732, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_language_modeling.py", line 333, in train
outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/modeling_bert.py", line 987, in forward
encoder_attention_mask=encoder_attention_mask,
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/modeling_bert.py", line 790, in forward
encoder_attention_mask=encoder_extended_attention_mask,
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/modeling_bert.py", line 407, in forward
hidden_states, attention_mask, head_mask[i], encoder_hidden_states, encoder_attention_mask
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/modeling_bert.py", line 368, in forward
self_attention_outputs = self.attention(hidden_states, attention_mask, head_mask)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/modeling_bert.py", line 314, in forward
hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/modeling_bert.py", line 234, in forward
attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
RuntimeError: cublas runtime error : the GPU program failed to execute at /opt/conda/conda-bld/pytorch_1579022034529/work/aten/src/THC/THCBlas.cu:368
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
- `transformers` version: 2.6.0
- Platform: Kaggle
- Python version: 3.7
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
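Following the maintainers' explanation above (the missing `tokenizer_config.json` leaves the maximum length at 1e12), a plausible fix is to cap the block size on the command line. In this version of the script the flag should be `--block_size`; treat the exact name as an assumption if your version differs:

```bash
python run_language_modeling.py \
    --model_name_or_path=emilyalsentzer/Bio_ClinicalBERT \
    --block_size=512 \
    ...  # remaining flags as in the original command
```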
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3443/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3442 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3442/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3442/comments | https://api.github.com/repos/huggingface/transformers/issues/3442/events | https://github.com/huggingface/transformers/issues/3442 | 588,094,829 | MDU6SXNzdWU1ODgwOTQ4Mjk= | 3,442 | can't import TFBertModel from transformers | {
"login": "xiongma",
"id": 30991932,
"node_id": "MDQ6VXNlcjMwOTkxOTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/30991932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiongma",
"html_url": "https://github.com/xiongma",
"followers_url": "https://api.github.com/users/xiongma/followers",
"following_url": "https://api.github.com/users/xiongma/following{/other_user}",
"gists_url": "https://api.github.com/users/xiongma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiongma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiongma/subscriptions",
"organizations_url": "https://api.github.com/users/xiongma/orgs",
"repos_url": "https://api.github.com/users/xiongma/repos",
"events_url": "https://api.github.com/users/xiongma/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiongma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I try it in Mac, it had same error\r\nmy enviroment is transformers 2.4 mac python 3.7",
"That's because you don't have TensorFlow 2 installed.",
"Conda users:\r\n\r\nFor whom the problem is not solved with re/installing TensorFlow, update the modules in conda.\r\n`conda install tensorflow`",
"You might want to try this:\r\n\r\n`conda install -c huggingface transformers`\r\n\r\n(ref: https://pypi.org/project/transformers/)",
"Thank you @mzackaria for the answer. The problem is solved (a long time ago). as mentioned above :)",
"I'm on Tensorflow 1.x and can't upgrade to 2.x\r\nInstalling an older version of transformers worked for me.\r\n`pip install transformers==4.2.2`"
] | 1,585 | 1,629 | 1,585 | NONE | null | This is the log from when I import TFBertModel from transformers:
```
from transformers import TFBertModel
ImportError: cannot import name 'TFBertModel' from 'transformers' (/home/cally/.local/lib/python3.7/site-packages/transformers/__init__.py)
```
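As noted in the answers, the `TF*` classes are only exported when TensorFlow 2 is importable. A quick check, assuming a standard install:

```python
import tensorflow as tf
print(tf.__version__)  # must start with "2." for TFBertModel to be exported

from transformers import TFBertModel  # should now import cleanly
```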
My environment: transformers 2.4, Linux, Python 3.7. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3442/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3441 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3441/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3441/comments | https://api.github.com/repos/huggingface/transformers/issues/3441/events | https://github.com/huggingface/transformers/pull/3441 | 588,085,081 | MDExOlB1bGxSZXF1ZXN0MzkzOTAzMDY3 | 3,441 | Add support for the null answer in `QuestionAnsweringPipeline` | {
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3441?src=pr&el=h1) Report\n> Merging [#3441](https://codecov.io/gh/huggingface/transformers/pull/3441?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/010e0460b22ddd7f74e31163f69ab3da2e9741ba&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `66.66%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3441?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3441 +/- ##\n==========================================\n- Coverage 77.61% 77.60% -0.01% \n==========================================\n Files 100 100 \n Lines 16972 16978 +6 \n==========================================\n+ Hits 13172 13175 +3 \n- Misses 3800 3803 +3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3441?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/3441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `72.44% <66.66%> (-0.09%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `83.94% <0.00%> (-0.18%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3441?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3441?src=pr&el=footer). Last update [010e046...8671bc3](https://codecov.io/gh/huggingface/transformers/pull/3441?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi @bryant1410, thanks for pushing this into the QA pipeline.\r\n\r\nMy only concern is about the name of the parameter `version_2_with_negative` introduced in this PR. It seems very tight up to SQuAD2 and might be hard for newcomers to directly understand what it does.\r\n\r\nWould you mind changing the name of the parameter `version_2_with_negative` to `handle_impossible_answer` ? ",
"Sure. Btw, is it okay for it to default to `False`?",
"LGTM too, thanks @bryant1410 "
] | 1,585 | 1,587 | 1,587 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3441/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3441",
"html_url": "https://github.com/huggingface/transformers/pull/3441",
"diff_url": "https://github.com/huggingface/transformers/pull/3441.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3441.patch",
"merged_at": 1587136642000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3440 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3440/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3440/comments | https://api.github.com/repos/huggingface/transformers/issues/3440/events | https://github.com/huggingface/transformers/pull/3440 | 588,074,571 | MDExOlB1bGxSZXF1ZXN0MzkzODk1MDA3 | 3,440 | feat: config what's trainable in Bert layers | {
"login": "gthb",
"id": 153580,
"node_id": "MDQ6VXNlcjE1MzU4MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/153580?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gthb",
"html_url": "https://github.com/gthb",
"followers_url": "https://api.github.com/users/gthb/followers",
"following_url": "https://api.github.com/users/gthb/following{/other_user}",
"gists_url": "https://api.github.com/users/gthb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gthb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gthb/subscriptions",
"organizations_url": "https://api.github.com/users/gthb/orgs",
"repos_url": "https://api.github.com/users/gthb/repos",
"events_url": "https://api.github.com/users/gthb/events{/privacy}",
"received_events_url": "https://api.github.com/users/gthb/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3440?src=pr&el=h1) Report\n> Merging [#3440](https://codecov.io/gh/huggingface/transformers/pull/3440?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/010e0460b22ddd7f74e31163f69ab3da2e9741ba&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3440?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3440 +/- ##\n==========================================\n+ Coverage 77.61% 77.62% +0.01% \n==========================================\n Files 100 100 \n Lines 16972 16985 +13 \n==========================================\n+ Hits 13172 13185 +13 \n Misses 3800 3800 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3440?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3440/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JlcnQucHk=) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3440/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.16% <100.00%> (+0.08%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3440?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3440?src=pr&el=footer). Last update [010e046...2597d20](https://codecov.io/gh/huggingface/transformers/pull/3440?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,594 | 1,594 | CONTRIBUTOR | null | Make it possible to configure trainability for specific subparts of a `TFBertMainLayer`.
This is minimal and Bert-only. It really should be added to more model types, not just Bert, but the mechanics probably differ quite a bit between model types, so it likely can't be done in a very “DRY” way.
See https://github.com/tensorflow/tensorflow/issues/37541, which makes this necessary: if we set `l.trainable = False` on a layer _after_ initializing it, training and serialization proceed without apparent problems, but deserialization will then fail, because:
* the `trainable` attribute doesn't get persisted
* parameter values are deserialized and batch-assigned to model parameters _in order_ — with the implicit assumption that the ordering of parameters is the same as in the model before serialization
* that assumption doesn't hold if some layers were not trainable before serialization, because the ordering of parameters depends on the `trainable` attribute, which is `True` by default because it wasn't persisted | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3440/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3440",
"html_url": "https://github.com/huggingface/transformers/pull/3440",
"diff_url": "https://github.com/huggingface/transformers/pull/3440.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3440.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3439 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3439/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3439/comments | https://api.github.com/repos/huggingface/transformers/issues/3439/events | https://github.com/huggingface/transformers/pull/3439 | 588,010,074 | MDExOlB1bGxSZXF1ZXN0MzkzODQzNjIx | 3,439 | Force the return of token type IDs | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | MEMBER | null | close #3313
close #3227
These two issues happen because the `token_type_ids` are not generated by the `encode_plus` method when the SQuAD and multiple choice scripts expect them. The GLUE script was patched by #3240 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3439/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3439/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3439",
"html_url": "https://github.com/huggingface/transformers/pull/3439",
"diff_url": "https://github.com/huggingface/transformers/pull/3439.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3439.patch",
"merged_at": 1585212096000
} |
https://api.github.com/repos/huggingface/transformers/issues/3438 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3438/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3438/comments | https://api.github.com/repos/huggingface/transformers/issues/3438/events | https://github.com/huggingface/transformers/issues/3438 | 587,998,589 | MDU6SXNzdWU1ODc5OTg1ODk= | 3,438 | Same probability from fine-tuning custom pre-trained LM | {
"login": "yuanbit",
"id": 12972261,
"node_id": "MDQ6VXNlcjEyOTcyMjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/12972261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuanbit",
"html_url": "https://github.com/yuanbit",
"followers_url": "https://api.github.com/users/yuanbit/followers",
"following_url": "https://api.github.com/users/yuanbit/following{/other_user}",
"gists_url": "https://api.github.com/users/yuanbit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuanbit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuanbit/subscriptions",
"organizations_url": "https://api.github.com/users/yuanbit/orgs",
"repos_url": "https://api.github.com/users/yuanbit/repos",
"events_url": "https://api.github.com/users/yuanbit/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuanbit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,585 | 1,619 | 1,619 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO), where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
I have a dataset with QA sentence pairs. I pre-trained a BERT model from scratch using run_language_modeling.py, where I concatenated the QA pairs and trained on the dataset line by line.
I tried to fine-tune the custom pre-trained model, but am getting the same probability output for different inputs. I also tried to reduce the learning rate, but the constant probability problem remains.
What could I be doing wrong?
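A minimal diagnostic sketch (my suggestion; the checkpoint path and texts below are placeholders): if two clearly different inputs yield identical logits, the classifier head has likely collapsed during fine-tuning (e.g. constant labels, learning-rate issues, or the head never receiving gradients) rather than this being a tokenization problem:

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

path = "path/to/custom-pretrained"  # placeholder
tokenizer = BertTokenizer.from_pretrained(path)
model = BertForSequenceClassification.from_pretrained(path)
model.eval()

with torch.no_grad():
    for text in ["a plausible answer", "completely unrelated text"]:
        inputs = tokenizer.encode_plus(text, return_tensors="pt")
        logits = model(**inputs)[0]
        print(text, torch.softmax(logits, dim=-1))
```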
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3438/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3437 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3437/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3437/comments | https://api.github.com/repos/huggingface/transformers/issues/3437/events | https://github.com/huggingface/transformers/pull/3437 | 587,967,854 | MDExOlB1bGxSZXF1ZXN0MzkzODA4ODU2 | 3,437 | [Bug fix] Using loaded checkpoint with --do_predict (instead of random init) | {
"login": "ethanjperez",
"id": 6402205,
"node_id": "MDQ6VXNlcjY0MDIyMDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6402205?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ethanjperez",
"html_url": "https://github.com/ethanjperez",
"followers_url": "https://api.github.com/users/ethanjperez/followers",
"following_url": "https://api.github.com/users/ethanjperez/following{/other_user}",
"gists_url": "https://api.github.com/users/ethanjperez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ethanjperez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ethanjperez/subscriptions",
"organizations_url": "https://api.github.com/users/ethanjperez/orgs",
"repos_url": "https://api.github.com/users/ethanjperez/repos",
"events_url": "https://api.github.com/users/ethanjperez/events{/privacy}",
"received_events_url": "https://api.github.com/users/ethanjperez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Tagging @srush @nateraw from the original [Lightning GLUE PR](https://github.com/huggingface/transformers/pull/3290) to check I'm not missing something?",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3437?src=pr&el=h1) Report\n> Merging [#3437](https://codecov.io/gh/huggingface/transformers/pull/3437?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ccbe839ee0b78a17e74dab218bfae7efe904ac3b&el=desc) will **increase** coverage by `0.04%`.\n> The diff coverage is `88.88%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3437?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3437 +/- ##\n==========================================\n+ Coverage 77.56% 77.60% +0.04% \n==========================================\n Files 100 100 \n Lines 16970 16967 -3 \n==========================================\n+ Hits 13162 13167 +5 \n+ Misses 3808 3800 -8 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3437?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/3437/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `24.68% <88.88%> (+2.94%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3437/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.85% <0.00%> (+0.13%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3437?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3437?src=pr&el=footer). Last update [83272a3...f12d585](https://codecov.io/gh/huggingface/transformers/pull/3437?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I'll check this out later tonight! I'm on mobile so I've just looked at your commit quickly...looks like you're right. I know in the past I've instantiated the model then called `model.load_from_checkpoint(loaded_ckpt)` so what you've got probably gets the same job done. The benefit of doin it the way I just mentioned is that if you already have a model object available from training, you can just load the best ckpt into that. Either way works though! ",
"That was fast :smile: Looks good to me!",
"Thanks for checking :) I'm still not able to reproduce my in-training validation performance though with the --do_predict flag, any ideas? I'm getting identical validation accuracy on different runs now, but the accuracy is still near random",
"@ethanjperez I just [checked the docs](https://pytorch-lightning.readthedocs.io/en/latest/weights_loading.html), and it looks like the way we were doing it originally was correct.\r\n\r\n```python\r\nmodel = MyLightingModule.load_from_checkpoint(PATH)\r\nmodel.eval()\r\ny_hat = model(x)\r\n```\r\n\r\nThe way that I was explaining to do it would require you to use `torch.load` on the checkpoint path, which you would then pass to `model.load_state_dict`. The above method (what we had originally) is probably supposed to do that for you.\r\n\r\nI haven't had the chance to recreate the issue, so I'll have to take a look.",
"Cool thanks! Even with the original way, I was still not able to reproduce my in-training validation performance (just something to look out for when you try) - In particular, I'm loading/running an already trained model with the `--do_predict` flag without using the `--do_train` flag (I don't think you'd see the issue if you use both `--do_predict` and `--do_train`)",
"@nateraw @sshleifer Are you guys able to load a trained model successfully with the pytorch-lightning scripts? Even after this patch, I am having issues loading an already trained model, i.e., if I just use `--do_eval` without also using `--do_train`",
"Sorry for taking so long. I will try to reproduce this today if there is no update on your end!\r\n\r\nFiling an issue with what you ran/expected would help :) @ethanjperez ",
"@sshleifer Just seeing this - were you able to reproduce the issue? I can't remember what exact command I ran, but it was a standard evaluation command (the same as the training command I used, but with a few flags tweaked, e.g. drop the `--do-train` flag and add the `--do-eval` flag)",
"This is fixed now."
] | 1,585 | 1,592 | 1,585 | CONTRIBUTOR | null | Without this fix, I'm getting near-random validation performance for a trained model, and the validation performance differs per validation run. I think this happens since the `model` variable isn't set with the loaded checkpoint, so I'm using a randomly initialized model. Looking at the model activations, they differ each time I run evaluation (but they don't with this fix). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3437/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3437",
"html_url": "https://github.com/huggingface/transformers/pull/3437",
"diff_url": "https://github.com/huggingface/transformers/pull/3437.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3437.patch",
"merged_at": 1585602369000
} |
https://api.github.com/repos/huggingface/transformers/issues/3436 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3436/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3436/comments | https://api.github.com/repos/huggingface/transformers/issues/3436/events | https://github.com/huggingface/transformers/issues/3436 | 587,957,792 | MDU6SXNzdWU1ODc5NTc3OTI= | 3,436 | TFXLMRoberta impossible to load base and large model with pretrained weight ? | {
"login": "Shiro-LK",
"id": 26505641,
"node_id": "MDQ6VXNlcjI2NTA1NjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26505641?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shiro-LK",
"html_url": "https://github.com/Shiro-LK",
"followers_url": "https://api.github.com/users/Shiro-LK/followers",
"following_url": "https://api.github.com/users/Shiro-LK/following{/other_user}",
"gists_url": "https://api.github.com/users/Shiro-LK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shiro-LK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shiro-LK/subscriptions",
"organizations_url": "https://api.github.com/users/Shiro-LK/orgs",
"repos_url": "https://api.github.com/users/Shiro-LK/repos",
"events_url": "https://api.github.com/users/Shiro-LK/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shiro-LK/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"maybe u can use from_pt=True\r\nTFXLMRobertaForSequenceClassification.from_pretrained(pretrained_model_name_or_path=\"xlm-roberta-large\", **_from_pt=True_**)",
"Hi, thanks for your answer.\r\nI got exactly the same message with \"from_pt=True\"",
"I am suspecting that the file does not exist.\r\nI succeed to load with TFXLMRoberta doing that :\r\n\r\n```\r\nm = XLMRobertaForSequenceClassification.from_pretrained(\"xlm-roberta-base\", num_labels=1)\r\nm.save_pretrained(\"./\")\r\ndel m\r\nmodel = TFXLMRobertaForSequenceClassification.from_pretrained(\"./\", , num_labels=1, from_pt=True)\r\n```",
"Hello, indeed there is no official XLM-R checkpoint. You can use @jplu's [checkpoint from the modelhub](https://huggingface.co/models/?search=jplu%2Ftf-xlm):\r\n\r\n```py\r\nm = TFXLMRobertaForSequenceClassification.from_pretrained(\"jplu/tf-xlm-roberta-base\", num_labels=1)\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,592 | 1,592 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
XLMRoberta
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [ x] the official example scripts: (give details below)
The tasks I am working on are:
my own task or dataset: (give details below)
## To reproduce
Launch the command:
`model = TFXLMRobertaForSequenceClassification.from_pretrained(pretrained_model_name_or_path="xlm-roberta-large" )`
Steps to reproduce the behavior:
1. `from transformers import *`
2. launch the command above
3. error:
> TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
## Expected behavior
The model should load.
## Environment info
- `transformers` version: 2.6.0
- Platform: colab
- Tensorflow version (GPU?): 2.1, GPU
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
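Per the resolution in the comments, there is no official TF checkpoint for XLM-R, so a community TF checkpoint works instead:

```python
from transformers import TFXLMRobertaForSequenceClassification

# Community checkpoint referenced in the thread:
model = TFXLMRobertaForSequenceClassification.from_pretrained(
    "jplu/tf-xlm-roberta-base", num_labels=1
)
```

Alternatively, save the PyTorch model locally and reload it with `from_pt=True`, as shown earlier in the discussion.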
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3436/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3436/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3435 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3435/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3435/comments | https://api.github.com/repos/huggingface/transformers/issues/3435/events | https://github.com/huggingface/transformers/pull/3435 | 587,930,786 | MDExOlB1bGxSZXF1ZXN0MzkzNzc4NTA5 | 3,435 | Updated/added model cards | {
"login": "traviemcg",
"id": 37486396,
"node_id": "MDQ6VXNlcjM3NDg2Mzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/37486396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/traviemcg",
"html_url": "https://github.com/traviemcg",
"followers_url": "https://api.github.com/users/traviemcg/followers",
"following_url": "https://api.github.com/users/traviemcg/following{/other_user}",
"gists_url": "https://api.github.com/users/traviemcg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/traviemcg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/traviemcg/subscriptions",
"organizations_url": "https://api.github.com/users/traviemcg/orgs",
"repos_url": "https://api.github.com/users/traviemcg/repos",
"events_url": "https://api.github.com/users/traviemcg/events{/privacy}",
"received_events_url": "https://api.github.com/users/traviemcg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3435?src=pr&el=h1) Report\n> Merging [#3435](https://codecov.io/gh/huggingface/transformers/pull/3435?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ccbe839ee0b78a17e74dab218bfae7efe904ac3b&el=desc) will **increase** coverage by `0.04%`.\n> The diff coverage is `88.88%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3435?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3435 +/- ##\n==========================================\n+ Coverage 77.56% 77.60% +0.04% \n==========================================\n Files 100 100 \n Lines 16970 16967 -3 \n==========================================\n+ Hits 13162 13167 +5 \n+ Misses 3808 3800 -8 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3435?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/3435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `24.68% <88.88%> (+2.94%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.85% <0.00%> (+0.13%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3435?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3435?src=pr&el=footer). Last update [83272a3...d486459](https://codecov.io/gh/huggingface/transformers/pull/3435?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | Trained four models, adding/updating model cards to make them consistent! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3435/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3435",
"html_url": "https://github.com/huggingface/transformers/pull/3435",
"diff_url": "https://github.com/huggingface/transformers/pull/3435.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3435.patch",
"merged_at": 1585168804000
} |
https://api.github.com/repos/huggingface/transformers/issues/3434 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3434/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3434/comments | https://api.github.com/repos/huggingface/transformers/issues/3434/events | https://github.com/huggingface/transformers/issues/3434 | 587,757,698 | MDU6SXNzdWU1ODc3NTc2OTg= | 3,434 | How to detokenize a BertTokenizer output? | {
"login": "al-yakubovich",
"id": 12928778,
"node_id": "MDQ6VXNlcjEyOTI4Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/12928778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/al-yakubovich",
"html_url": "https://github.com/al-yakubovich",
"followers_url": "https://api.github.com/users/al-yakubovich/followers",
"following_url": "https://api.github.com/users/al-yakubovich/following{/other_user}",
"gists_url": "https://api.github.com/users/al-yakubovich/gists{/gist_id}",
"starred_url": "https://api.github.com/users/al-yakubovich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/al-yakubovich/subscriptions",
"organizations_url": "https://api.github.com/users/al-yakubovich/orgs",
"repos_url": "https://api.github.com/users/al-yakubovich/repos",
"events_url": "https://api.github.com/users/al-yakubovich/events{/privacy}",
"received_events_url": "https://api.github.com/users/al-yakubovich/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"I came across the same problem some days ago. I like your code, however it can be faster:\r\n\r\n```python\r\n\r\ndef is_subtoken(word):\r\n if word[:2] == \"##\":\r\n return True\r\n else:\r\n return False\r\n\r\ntokens = ['why', 'isn', \"##'\", '##t', 'Alex', \"##'\", 'text', 'token', '##izing']\r\nrestored_text = []\r\nfor i in range(len(tokens)):\r\n if not is_subtoken(tokens[i]) and (i+1)<len(tokens) and is_subtoken(tokens[i+1]):\r\n restored_text.append(tokens[i] + tokens[i+1][2:])\r\n if (i+2)<len(tokens) and is_subtoken(tokens[i+2]):\r\n restored_text[-1] = restored_text[-1] + tokens[i+2][2:]\r\n elif not is_subtoken(tokens[i]):\r\n restored_text.append(tokens[i])\r\n```",
"@GuillemGSubies Did you solve this problem in your task?",
"> @GuillemGSubies Did you solve this problem in your task?\r\n\r\nI used a modification of the code I posted above (in my use case I needed to interact with some NER labels also). However if you want to detokenize I think your code works perfectly.",
"@GuillemGSubies \r\nI just want to clarify something in my question.\r\n\r\nLet's consider two sentences:\r\n\r\n \"why isn't Alex's text tokenizing? The house on the left is the Smiths' house\"\r\n\r\nNow let's tokenize and decode:\r\n\r\n from transformers import BertTokenizer\r\n tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)\r\n tokenizer.decode(tokenizer.convert_tokens_to_ids(tokenizer.tokenize(\"why isn't Alex's text tokenizing? The house on the left is the Smiths' house\")))\r\n\r\nWe get:\r\n\r\n \"why isn't alex's text tokenizing? the house on the left is the smiths'house\"\r\n\r\n\r\n\r\n**My question is how dealing with missing space in some possessives like *smiths'house*?**\r\n\r\n\r\nFor me, it seems that the process of tokenization in Transformers is done not right. Let's consider output of\r\n\r\n tokenizer.tokenize(\"why isn't Alex's text tokenizing? The house on the left is the Smiths' house\")\r\n\r\nwe get:\r\n\r\n ['why', 'isn', \"'\", 't', 'alex', \"'\", 's', 'text', 'token', '##izing', '?', 'the', 'house', 'on', 'the', 'left', 'is', 'the', 'smith', '##s', \"'\", 'house']\r\n\r\nSo in this step, we already have lost important information about the last apostrophe. It would be much better if tokenization was done in the another way:\r\n\r\n ['why', 'isn', \"##'\", '##t', 'alex', \"##'\", '##s', 'text', 'token', '##izing', '?', 'the', 'house', 'on', 'the', 'left', 'is', 'the', 'smith', '##s', \"##'\", 'house']\r\n\r\nIn this way, tokenization keeps all information about apostrophes, and we will not have problems with possessives.\r\n",
"This goes beyond of my understanding of the library, I am not a dev, sorry",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | NONE | null | For example, let's tokenize a sentence "why isn't Alex' text tokenizing":
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
tokens = tokenizer.tokenize("why isn't Alex' text tokenizing")
We get the following output:
['why', 'isn', "'", 't', 'Alex', "'", 'text', 'token', '##izing']
I want to convert it to:
why isn't Alex' text tokenizing
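
For reference, the library's built-in `convert_tokens_to_string` merges the `##` pieces, but it cannot restore the apostrophes here because they were split off without `##` markers (a minimal sketch using the `tokenizer` and `tokens` from above):

```python
# the built-in helper joins word pieces by stripping " ##"
text = tokenizer.convert_tokens_to_string(tokens)
print(text)  # prints "why isn ' t Alex ' text tokenizing" (punctuation stays detached)
```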
It seems the tokenizer doesn't handle this case in the best way. If tokenization instead produced something like this:
['why', 'isn', "##'", '##t', 'Alex', "##'", 'text', 'token', '##izing']
it would be easy to convert back:
import re

tokens = ['why', 'isn', "##'", '##t', 'Alex', "##'", 'text', 'token', '##izing']
restored_text = [None] * len(tokens)
tokens.extend(['#', '#'])  # padding so the look-ahead below never runs out of range

for i in range(len(tokens) - 2):
    if re.findall("#{2}", tokens[i+1]):
        restored_text[i] = tokens[i] + tokens[i+1].replace('##', '')
        if re.findall("#{2}", tokens[i+2]):
            restored_text[i] = restored_text[i] + tokens[i+2].replace('##', '')
    else:
        restored_text[i] = tokens[i]

restored_text_without_masks = []
for i in range(len(restored_text)):
    if not restored_text[i].startswith('#'):
        restored_text_without_masks.append(restored_text[i]) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3434/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3434/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3433 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3433/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3433/comments | https://api.github.com/repos/huggingface/transformers/issues/3433/events | https://github.com/huggingface/transformers/pull/3433 | 587,747,120 | MDExOlB1bGxSZXF1ZXN0MzkzNjI4NzMx | 3,433 | Extend config with task specific configs. | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> I think this makes sense, giving where we've been going up to now.\r\n> \r\n> I would like to understand what is our philosophy with the growing size of the configuration files; for example the `bert-base-cased` configuration on S3 looks like this:\r\n> \r\n> ```\r\n> {\r\n> \"architectures\": [\r\n> \"BertForMaskedLM\"\r\n> ],\r\n> \"attention_probs_dropout_prob\": 0.1,\r\n> \"hidden_act\": \"gelu\",\r\n> \"hidden_dropout_prob\": 0.1,\r\n> \"hidden_size\": 768,\r\n> \"initializer_range\": 0.02,\r\n> \"intermediate_size\": 3072,\r\n> \"max_position_embeddings\": 512,\r\n> \"num_attention_heads\": 12,\r\n> \"num_hidden_layers\": 12,\r\n> \"type_vocab_size\": 2,\r\n> \"vocab_size\": 28996\r\n> }\r\n> ```\r\n> \r\n> (which is readable imo) and once it's saved it now looks like this:\r\n> \r\n> ```\r\n> {\r\n> \"_num_labels\": 2,\r\n> \"architectures\": [\r\n> \"BertForMaskedLM\"\r\n> ],\r\n> \"attention_probs_dropout_prob\": 0.1,\r\n> \"bos_token_id\": null,\r\n> \"do_sample\": false,\r\n> \"early_stopping\": false,\r\n> \"eos_token_id\": null,\r\n> \"finetuning_task\": null,\r\n> \"hidden_act\": \"gelu\",\r\n> \"hidden_dropout_prob\": 0.1,\r\n> \"hidden_size\": 768,\r\n> \"id2label\": {\r\n> \"0\": \"LABEL_0\",\r\n> \"1\": \"LABEL_1\"\r\n> },\r\n> \"initializer_range\": 0.02,\r\n> \"intermediate_size\": 3072,\r\n> \"is_decoder\": false,\r\n> \"is_encoder_decoder\": false,\r\n> \"label2id\": {\r\n> \"LABEL_0\": 0,\r\n> \"LABEL_1\": 1\r\n> },\r\n> \"layer_norm_eps\": 1e-12,\r\n> \"length_penalty\": 1.0,\r\n> \"max_length\": 20,\r\n> \"max_position_embeddings\": 512,\r\n> \"min_length\": 0,\r\n> \"model_type\": \"bert\",\r\n> \"no_repeat_ngram_size\": 0,\r\n> \"num_attention_heads\": 12,\r\n> \"num_beams\": 1,\r\n> \"num_hidden_layers\": 12,\r\n> \"num_return_sequences\": 1,\r\n> \"output_attentions\": false,\r\n> \"output_hidden_states\": false,\r\n> \"output_past\": true,\r\n> \"pad_token_id\": 0,\r\n> \"pruned_heads\": {},\r\n> \"repetition_penalty\": 1.0,\r\n> \"temperature\": 1.0,\r\n> \"top_k\": 50,\r\n> \"top_p\": 1.0,\r\n> \"torchscript\": false,\r\n> \"type_vocab_size\": 2,\r\n> \"use_bfloat16\": false,\r\n> \"vocab_size\": 28996\r\n> }\r\n> ```\r\n> \r\n> (which is less readable), are we planning to keep them growing as the tokenizer and model configurations are merged? I feel like adding all those attributes to the configuration saves an \"experiment\" more than a \"model\". Is this something we're aiming for?\r\n\r\nMight it be possible to only save parameters that are different from the default config of the corresponding model? This would keep it readable. ",
"LGTM and I agree with what @LysandreJik and you just said above. Serialized `config.json` should be more minimal. \r\n\r\nFor instance I've always disliked the `id2label` and `label2id` being serialized even for models that don't have a classification head.",
"After this is merged I can open a new PR that serializes only the non-default values.",
"I agree with what @LysandreJik and @julien-c says about serializing only non-default values by the way."
] | 1,585 | 1,585 | 1,585 | MEMBER | null | As discussed and proposed by @thomwolf in PR #3413, another step towards a combined tokenizer/model config is this PR. It extends the normal config with the following parameters:
```
{
    ....
    "prefix": "",                 # generic generation HP
    "max_length": 100,
    "length_penalty": 1.0,
    "task_specific_params": {
        "summarization": {        # task id (e.g. name of the pipeline?)
            "max_length": 140,
            "length_penalty": 2.0
        },
        "translation_en_to_de": {
            "prefix": "translate English to German: ",
            "max_length": 160,
            "length_penalty": 3.0
        }
    }
}
```
In terms of hierarchy, a task-specific generation resolves its parameters as follows (a sketch follows this list):
1) Is the parameter provided as an argument to the `generate` method? If yes, use it; if not, go to 2.
2) Is the parameter provided in the `task_specific_params` dict? If yes, use it; if not, go to 3.
3) Is the parameter provided in the default `config` dict? If yes, use it; if not, go to 4.
4) Is the parameter hard-coded in the model's config file? If yes, use it; otherwise fall back to the very default parameters of `PretrainedConfig`.
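
A minimal sketch of this resolution order (illustrative only; `resolve_param` is a hypothetical helper, not an actual library function):

```python
# hypothetical helper mirroring the four-step lookup described above
def resolve_param(name, task, config, library_default, **generate_kwargs):
    if name in generate_kwargs:                    # 1) explicit argument to generate()
        return generate_kwargs[name]
    task_params = getattr(config, "task_specific_params", None) or {}
    if task in task_params and name in task_params[task]:
        return task_params[task][name]             # 2) task-specific override
    if getattr(config, name, None) is not None:
        return getattr(config, name)               # 3)/4) value from the model config
    return library_default                         # fallback: PretrainedConfig default
```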
These were our arguments in favor of this:
- This removes a lot of hard coded parameters in pipelines and examples
- Another step towards a combined tokenizer / model config
- A lot of weird if-else statements can be avoided ("If task is en-de translation then do X" won't be necessary, as the en-de-specific parameters will override the default ones)
### TODO
If you guys are fine with this structure:
- [ ] I will add the `task_specific_params` for the Bart and T5 configs on S3
- [ ] clean up the examples and pipelines.
- [ ] rebase all the T5 PRs: #3428, #3419, #3413, #3411
"url": "https://api.github.com/repos/huggingface/transformers/issues/3433/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3433/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3433",
"html_url": "https://github.com/huggingface/transformers/pull/3433",
"diff_url": "https://github.com/huggingface/transformers/pull/3433.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3433.patch",
"merged_at": 1585168325000
} |
https://api.github.com/repos/huggingface/transformers/issues/3432 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3432/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3432/comments | https://api.github.com/repos/huggingface/transformers/issues/3432/events | https://github.com/huggingface/transformers/issues/3432 | 587,708,041 | MDU6SXNzdWU1ODc3MDgwNDE= | 3,432 | how to use transformers to get all pretraining model names in transformers hub | {
"login": "xiongma",
"id": 30991932,
"node_id": "MDQ6VXNlcjMwOTkxOTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/30991932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiongma",
"html_url": "https://github.com/xiongma",
"followers_url": "https://api.github.com/users/xiongma/followers",
"following_url": "https://api.github.com/users/xiongma/following{/other_user}",
"gists_url": "https://api.github.com/users/xiongma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiongma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiongma/subscriptions",
"organizations_url": "https://api.github.com/users/xiongma/orgs",
"repos_url": "https://api.github.com/users/xiongma/repos",
"events_url": "https://api.github.com/users/xiongma/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiongma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"```python\r\nfrom transformers.hf_api import HfApi\r\n\r\napi = HfApi()\r\nmodels = api.list_models()\r\n```\r\n\r\nThis is not very well documented yet so feel free to add to the doc."
] | 1,585 | 1,585 | 1,585 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3432/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/3431 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3431/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3431/comments | https://api.github.com/repos/huggingface/transformers/issues/3431/events | https://github.com/huggingface/transformers/issues/3431 | 587,651,470 | MDU6SXNzdWU1ODc2NTE0NzA= | 3,431 | Error ImportError: cannot import name 'MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING' from 'transformers' (C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\site-packages\transformers\__init__.py) | {
"login": "kennysmith12",
"id": 61227472,
"node_id": "MDQ6VXNlcjYxMjI3NDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/61227472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kennysmith12",
"html_url": "https://github.com/kennysmith12",
"followers_url": "https://api.github.com/users/kennysmith12/followers",
"following_url": "https://api.github.com/users/kennysmith12/following{/other_user}",
"gists_url": "https://api.github.com/users/kennysmith12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kennysmith12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennysmith12/subscriptions",
"organizations_url": "https://api.github.com/users/kennysmith12/orgs",
"repos_url": "https://api.github.com/users/kennysmith12/repos",
"events_url": "https://api.github.com/users/kennysmith12/events{/privacy}",
"received_events_url": "https://api.github.com/users/kennysmith12/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You're not running the latest version of transformers.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,590 | 1,590 | NONE | null | # 🐛 Bug
## Information
I am doing the tutorial ["Quick tour of the fine-tuning/usage scripts"](https://github.com/huggingface/transformers#quick-tour-of-the-fine-tuningusage-scripts)
I downloaded the GLUE dataset.
When I try to run this command with PyTorch:
```
python ./examples/run_glue.py \
--model_type bert \
--model_name_or_path bert-base-uncased \
--task_name MRPC \
--do_train \
--do_eval \
--do_lower_case \
--data_dir C:/Git/RemoteDGX/MRPC/glue_data/MRPC \
--max_seq_length 128 \
--per_gpu_eval_batch_size=8 \
--per_gpu_train_batch_size=8 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /tmp/MRPC/
```
I am getting this error:
ImportError: cannot import name 'MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING' from 'transformers' (C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\site-packages\transformers\__init__.py)
## To reproduce
Steps to reproduce the behavior:
1. Download the GLUE dataset
2. Execute the script `run_glue.py`

This is the stack trace:
```
2020-03-25 14:09:19.698135: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
Traceback (most recent call last):
File "C:/Git/RemoteDGX/transformers/examples/run_glue.py", line 32, in <module>
from transformers import (
ImportError: cannot import name 'MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING' from 'transformers' (C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\site-packages\transformers\__init__.py)
```
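
(Per the comments, this symptom means the installed release predates `MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING`, so a quick version check is worthwhile; a minimal sketch:)

```python
# MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING only exists in recent releases
import transformers
print(transformers.__version__)
```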
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: windows
- Python version: 3.7
- PyTorch version: 1.4.0 (without GPU)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3431/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3430 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3430/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3430/comments | https://api.github.com/repos/huggingface/transformers/issues/3430/events | https://github.com/huggingface/transformers/issues/3430 | 587,624,157 | MDU6SXNzdWU1ODc2MjQxNTc= | 3,430 | Problem saving and/or loading fine-tuned model | {
"login": "ant-louis",
"id": 32681432,
"node_id": "MDQ6VXNlcjMyNjgxNDMy",
"avatar_url": "https://avatars.githubusercontent.com/u/32681432?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ant-louis",
"html_url": "https://github.com/ant-louis",
"followers_url": "https://api.github.com/users/ant-louis/followers",
"following_url": "https://api.github.com/users/ant-louis/following{/other_user}",
"gists_url": "https://api.github.com/users/ant-louis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ant-louis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ant-louis/subscriptions",
"organizations_url": "https://api.github.com/users/ant-louis/orgs",
"repos_url": "https://api.github.com/users/ant-louis/repos",
"events_url": "https://api.github.com/users/ant-louis/events{/privacy}",
"received_events_url": "https://api.github.com/users/ant-louis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @antoilouis,\r\n\r\nThanks for posting this. \r\nA quick wild guess might be that after training you save the model in `args.output_dir` but then load it from a different `args.model_name_or_path` (also since you said that if you evaluate the model right after training you get good results. \r\n\r\nMy advice to solve this wolud be the following: \r\nTrain for 1 epoch and a tiny part of the dataset. Print out / Save some model weights, then save load it again and check whether the weights are equal. They should be equal. ",
"Oh and also please post your environment information here. Make sure that you have the newest version of `transformers`. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I meet the same question . do you have any advice now? Thanks!",
"I am currently facing the same issue with the `BertForTokenClassification` model, the model behaves differently each time it is loaded. Are there solutions?"
] | 1,585 | 1,666 | 1,591 | CONTRIBUTOR | null | # ❓ Questions & Help
I have fine-tuned bert-base-cased for multi-class classification with BertForSequenceClassification and got great results (about 88% accuracy on my dev data). However, when I save the fine-tuned model, load it and run the evaluation on the exact same dev data, I get awful results (about 0.17 accuracy).
At first glance, it seems that I am either saving the fine-tuned model or loading it after training incorrectly. Could it be that `save_pretrained` only saves the weights of the BERT model without those of the classifier on top? @patrickvonplaten
## Details
Here is how I save the fine-tuned model after training:
`model_to_save = model.module if hasattr(model, "module") else model # Take care of distributed/parallel training`
`model_to_save.save_pretrained(args.output_dir)`
`tokenizer.save_pretrained(args.output_dir)`
And here is how I load the fine-tuned model for running evaluation:
`model = BertForSequenceClassification.from_pretrained(
args.model_name_or_path,
num_labels = args.num_labels,
output_attentions = False,
output_hidden_states = False,
cache_dir = args.cache_dir)`
where `args.model_name_or_path` is the path of my `.bin` checkpoint, and `args.num_labels` stays unchanged throughout the whole process.
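
As suggested in the comments, a quick sanity check is to compare a weight before saving and after reloading (a minimal sketch; `model` and `args.output_dir` as above):

```python
# sanity check: weights must be identical after a save/load round trip
import torch
from transformers import BertForSequenceClassification

model.save_pretrained(args.output_dir)
reloaded = BertForSequenceClassification.from_pretrained(args.output_dir)
print(torch.equal(model.classifier.weight.detach().cpu(),
                  reloaded.classifier.weight.detach().cpu()))  # expect True
```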
## Full code
```python
# NOTE: imports below are assumed for this excerpt; helper functions such as
# split_data, create_dataloader, compute_metrics, analyze_predictions,
# plot_confusion_matrix, load_data, tokenize_sentences, create_masks, set_seed
# and format_time are project-local and not shown here.
import os
import time

import numpy as np
import torch
from torch.utils.tensorboard import SummaryWriter
from transformers import AdamW, BertForSequenceClassification, BertTokenizer, get_linear_schedule_with_warmup


def train(args, model, tokenizer, dataset, tb_writer, categories):
# Load training dataset.
if args.do_eval and args.eval_filepath is None:
print("No validation file given: splitting dataset to train/test datasets...\n")
train_dataset, validation_dataset = split_data(dataset, args.test_percent, args.seed)
else:
train_dataset = dataset
print("Creating training dataloader...\n")
train_data, train_sampler, train_dataloader = create_dataloader(train_dataset, args.batch_size, training_data=True)
# Setting up Optimizer & Learning Rate Scheduler.
optimizer = AdamW(model.parameters(),
lr = args.learning_rate,
eps = args.adam_epsilon
)
total_steps = len(train_dataloader) * args.num_epochs
scheduler = get_linear_schedule_with_warmup(optimizer,
num_warmup_steps = 0, # Default value in run_glue.py
num_training_steps = total_steps)
# Init some useful variables.
global_step = 0
tr_loss, logging_loss = 0.0, 0.0
# For each epoch...
t = time.time()
for epoch_i in range(0, args.num_epochs):
# Perform one full pass over the training set.
print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, args.num_epochs))
print('Training...')
# Measure how long the training epoch takes.
t0 = time.time()
# Put the model into training mode. Don't be mislead--the call to
# `train` just changes the *mode*, it doesn't *perform* the training.
# `dropout` and `batchnorm` layers behave differently during training
# vs. test (source: https://stackoverflow.com/questions/51433378/what-does-model-train-do-in-pytorch)
model.train()
# For each batch of training data...
for step, batch in enumerate(train_dataloader):
# Unpack this training batch from our dataloader.
# As we unpack the batch, we'll also copy each tensor to the GPU using the `to` method.
# `batch` contains three pytorch tensors:
# [0]: input ids
# [1]: attention masks
# [2]: labels
b_input_ids = batch[0].to(args.device)
b_input_mask = batch[1].to(args.device)
b_labels = batch[2].to(args.device)
# Always clear any previously calculated gradients before performing a backward pass.
# PyTorch doesn't do this automatically because accumulating the gradients is "convenient while training RNNs".
# (source: https://stackoverflow.com/questions/48001598/why-do-we-need-to-call-zero-grad-in-pytorch)
model.zero_grad()
# Perform a forward pass (evaluate the model on this training batch).
# This will return the loss (rather than the model output) because we have provided the `labels`.
# The documentation for this `model` function is here:
# https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification
outputs = model(b_input_ids,
token_type_ids=None,
attention_mask=b_input_mask,
labels=b_labels)
# The call to `model` always returns a tuple, so we need to pull the loss value out of the tuple.
loss = outputs[0]
if args.n_gpu > 1:
loss = loss.mean() # mean() to average on multi-gpu parallel training
# Accumulate the training loss over all of the batches so that we can calculate the average loss at the end.
# `loss` is a Tensor containing a single value; the `.item()` function just returns the Python value from the tensor.
tr_loss += loss.item()
# Perform a backward pass to calculate the gradients.
loss.backward()
# Clip the norm of the gradients to 1.0. This is to help prevent the "exploding gradients" problem.
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
# Update parameters and take a step using the computed gradient.
# The optimizer dictates the "update rule"--how the parameters are modified based on their gradients, the learning rate, etc.
optimizer.step()
# Update the learning rate.
scheduler.step()
# Update global step.
global_step += 1
# Progress update every 'logging_steps' batches.
if args.logging_steps > 0 and step != 0 and step % args.logging_steps == 0:
# Calculate elapsed time in minutes.
elapsed = format_time(time.time() - t0)
# Compute average training loss over the last 'logging_steps'. Write it to Tensorboard.
loss_scalar = (tr_loss - logging_loss) / args.logging_steps
tb_writer.add_scalar('Train/Loss', loss_scalar, global_step)
logging_loss = tr_loss
# Print the log.
print(' Batch {:>5,} of {:>5,}. Elapsed: {:}. Training loss: {:.2f}'.format(step, len(train_dataloader), elapsed, loss_scalar))
print(" Training epoch took: {:}\n".format(format_time(time.time() - t0)))
if args.do_eval and args.eval_filepath is None:
print("Running Validation...")
# After the completion of each training epoch, measure our performance on our validation set.
t0 = time.time()
result, df_wrong, df_right = evaluate(args, model, validation_dataset, categories)
# Write results to tensorboard.
tb_writer.add_scalar('Test/Accuracy', result[0], epoch_i + 1)
tb_writer.add_scalar('Test/Recall', result[1], epoch_i + 1)
tb_writer.add_scalar('Test/Precision', result[2], epoch_i + 1)
tb_writer.add_scalar('Test/F1 score', result[3], epoch_i + 1)
tb_writer.add_scalar('Test/MCC', result[4], epoch_i + 1)
# Plot confusion matrix.
plot_confusion_matrix(result[5], categories, args.output_dir)
# Save dataframes of wrong and right predictions for further analysis.
df_wrong.to_csv(os.path.join(args.output_dir, 'preds_wrong.csv'))
df_right.to_csv(os.path.join(args.output_dir, 'preds_right.csv'))
print(" Validation took: {:}\n".format(format_time(time.time() - t0)))
print("Training complete! Took: {}\n".format(format_time(time.time() - t)))
print("Saving model to {}...\n.".format(args.output_dir))
model_to_save = model.module if hasattr(model, 'module') else model # Take care of distributed/parallel training
model_to_save.save_pretrained(args.output_dir)
tokenizer.save_pretrained(args.output_dir)
torch.save(args, os.path.join(args.output_dir, 'training_args.bin')) # Good practice: save your training arguments together with the trained model
return
def evaluate(args, model, validation_dataset, categories):
# Creating validation dataloader.
validation_data, validation_sampler, validation_dataloader = create_dataloader(validation_dataset, args.batch_size, training_data=False)
# Get validation sentences.
validation_sentences = validation_dataset[3]
# Tracking variables
nb_eval_steps = 0
preds = None
out_label_ids = None
# Put the model in evaluation mode--the dropout layers behave differently during evaluation.
model.eval()
# Evaluate data for one epoch
for batch in validation_dataloader:
# Add batch to GPU.
b_input_ids, b_input_mask, b_labels = tuple(t.to(args.device) for t in batch)
# Telling the model not to compute or store gradients, saving memory and speeding up validation
with torch.no_grad():
# Forward pass, calculate logit predictions.
# This will return the logits rather than the loss because we have not provided labels.
# token_type_ids is the same as the "segment ids", which differentiates sentence 1 and 2 in 2-sentence tasks.
outputs = model(b_input_ids,
token_type_ids=None,
attention_mask=b_input_mask)
# Get the "logits" output by the model. The "logits" are the output values prior to applying an activation function like the softmax.
logits = outputs[0]
# Move logits and labels to CPU and store them.
if preds is None:
preds = logits.detach().cpu().numpy()
out_label_ids = b_labels.detach().cpu().numpy()
else:
preds = np.append(preds, logits.detach().cpu().numpy(), axis=0)
out_label_ids = np.append(out_label_ids, b_labels.detach().cpu().numpy(), axis=0)
# Track the number of batches
nb_eval_steps += 1
# Take the max predicitions.
preds = np.argmax(preds, axis=1)
# Report results.
result = compute_metrics(preds, out_label_ids, categories)
print(" * Accuracy: {0:.4f}".format(result[0]))
print(" * Recall: {0:.4f}".format(result[1]))
print(" * Precision: {0:.4f}".format(result[2]))
print(" * F1 score: {0:.4f}".format(result[3]))
print(" * MCC: {0:.4f}".format(result[4]))
# Get wrong and right predictions.
df_wrong, df_right = analyze_predictions(preds, out_label_ids, validation_sentences)
return result, df_wrong, df_right
def main(args):
# Create tensorboard summarywriter.
tb_writer = SummaryWriter()
# Create output dir if none mentioned.
if args.output_dir is None:
model_name = os.path.splitext(os.path.basename(args.model_name_or_path))[0]
args.output_dir = "./output/" + model_name + '/'
if not os.path.exists(args.output_dir):
os.makedirs(args.output_dir)
# Set the seed value all over the place to make this reproducible.
set_seed(args.seed)
print("\n========================================")
print(' Load model ')
print("========================================\n")
print("Loading BertForSequenceClassification model...\n")
model = BertForSequenceClassification.from_pretrained(
args.model_name_or_path, # Use the 12-layer BERT model, with an cased vocab.
num_labels = args.num_labels, # The number of output labels
output_attentions = False, # Whether the model returns attentions weights.
output_hidden_states = False, # Whether the model returns all hidden-states.
cache_dir = args.cache_dir,
)
#model = BertForSequenceClassification.from_pretrained(args.model_name_or_path)
print('Loading BertTokenizer...\n')
tokenizer = BertTokenizer.from_pretrained(args.model_name_or_path, do_lower_case=False)
print("Setting up CUDA & GPU...")
if torch.cuda.is_available():
if args.gpu_id:
torch.cuda.set_device(args.gpu_id)
args.n_gpu = 1
print("-> GPU training available! As '--gpu_id' was set, only GPU {} {} will be used (no parallel training).\n".format(torch.cuda.get_device_name(args.gpu_id), args.gpu_id))
else:
args.n_gpu = torch.cuda.device_count()
gpu_ids = list(range(0, args.n_gpu))
if args.n_gpu > 1:
model = torch.nn.DataParallel(model, device_ids=gpu_ids, output_device=gpu_ids[-1])
print("-> GPU training available! Training will use GPU(s) {}\n".format(gpu_ids))
args.device = torch.device("cuda")
else:
args.device = torch.device("cpu")
args.n_gpu = 0
print("-> No GPU available, using the CPU instead.\n")
model.to(args.device) # Tell pytorch to run the model on the device.
print("\n========================================")
print(' Processing data ')
print("========================================\n")
df, categories = load_data(args)
print("Tokenizing sentences...")
tokenized = tokenize_sentences(tokenizer, df)
attention_masks = create_masks(tokenized)
dataset = (tokenized, df.Class_id.values, attention_masks, df.Sentence.values)
if args.do_train:
print("\n========================================")
print(' Launching training ')
print("========================================\n")
train(args, model, tokenizer, dataset, tb_writer, categories)
elif args.do_eval and args.eval_filepath is not None:
print("\n========================================")
print(' Launching validation ')
print("========================================\n")
result, df_wrong, df_right = evaluate(args, model, dataset, categories)
# Save dataframes of wrong and right predictions for further analysis.
df_wrong.to_csv(os.path.join(args.output_dir, 'wrong_preds.csv'))
df_right.to_csv(os.path.join(args.output_dir, 'right_preds.csv'))
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3430/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/3430/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3429 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3429/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3429/comments | https://api.github.com/repos/huggingface/transformers/issues/3429/events | https://github.com/huggingface/transformers/issues/3429 | 587,612,101 | MDU6SXNzdWU1ODc2MTIxMDE= | 3,429 | Confusion in understanding the output of BERTforTokenClassification class from Transformers library | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"AFAIK, `12` does",
"For detailed explanation for this, refer\r\nhttps://stackoverflow.com/questions/60847291/confusion-in-understanding-the-output-of-bertfortokenclassification-class-from-t"
] | 1,585 | 1,585 | 1,585 | NONE | null | It is the example given in the documentation of transformers pytorch library
```
from transformers import BertTokenizer, BertForTokenClassification
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForTokenClassification.from_pretrained('bert-base-uncased')
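# note: getting hidden_states/attentions back requires output_hidden_states=True and output_attentions=True in the model config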
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
labels = torch.tensor([1] * input_ids.size(1)).unsqueeze(0) # Batch size 1
outputs = model(input_ids, labels=labels)
loss, scores, hidden_states,attentions = outputs
```
Here `hidden_states` is a tuple of length 13 containing the hidden states of the model at the output of each layer, plus the initial embedding output. **I would like to know whether hidden_states[0] or hidden_states[12] represents the final hidden state vectors.**
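
(As the comments point out, it is index 12; a minimal sketch, assuming the `hidden_states` tuple from the snippet above:)

```python
# hidden_states[0] is the embedding output; hidden_states[12] == hidden_states[-1]
# holds the final layer's vectors, shape (batch_size, seq_len, hidden_size)
final_hidden_states = hidden_states[-1]
```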
Thanks in advance @thomwolf @nreimers | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3429/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3428 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3428/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3428/comments | https://api.github.com/repos/huggingface/transformers/issues/3428/events | https://github.com/huggingface/transformers/pull/3428 | 587,605,933 | MDExOlB1bGxSZXF1ZXN0MzkzNTE0NzU4 | 3,428 | Add wmt translation example | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> long\r\n\r\nYeah we should definitely try to run it on a GPU - will take a look at that :-) ",
"Not sure whether we need fp16 and multi-gpu training. I think single GPU training is enough and t5 + wmt does not take much memory. But happy to take a look into it if you guys think it's worth it :-) @thomwolf @LysandreJik @julien-c ",
"Code quality test fails because of unpinned isort library (see https://github.com/huggingface/transformers/pull/3449)",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3428?src=pr&el=h1) Report\n> Merging [#3428](https://codecov.io/gh/huggingface/transformers/pull/3428?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b4fb94fe6d831b17c0df364b2848c80ef3add154?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3428?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3428 +/- ##\n=======================================\n Coverage 52.51% 52.51% \n=======================================\n Files 100 100 \n Lines 17051 17051 \n=======================================\n Hits 8954 8954 \n Misses 8097 8097\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3428?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3428?src=pr&el=footer). Last update [b4fb94f...713524e](https://codecov.io/gh/huggingface/transformers/pull/3428?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,585 | 1,585 | MEMBER | null | PR adds a translation example for T5.
It uses the `sacrebleu` BLEU scorer.
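
For context, scoring with `sacrebleu` looks roughly like this (a minimal sketch with made-up strings, not the example script itself):

```python
# corpus-level BLEU with sacrebleu; hypotheses and references are illustrative
import sacrebleu

hypotheses = ["Das Haus ist klein."]
references = [["Das Haus ist klein."]]  # one inner list per reference set
print(sacrebleu.corpus_bleu(hypotheses, references).score)
```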
I adapted the README.md a bit so that users are aware that the results in the official paper were attained with a fine-tuned T5 @craffel | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3428/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3428/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3428",
"html_url": "https://github.com/huggingface/transformers/pull/3428",
"diff_url": "https://github.com/huggingface/transformers/pull/3428.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3428.patch",
"merged_at": 1585246080000
} |
https://api.github.com/repos/huggingface/transformers/issues/3427 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3427/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3427/comments | https://api.github.com/repos/huggingface/transformers/issues/3427/events | https://github.com/huggingface/transformers/issues/3427 | 587,596,866 | MDU6SXNzdWU1ODc1OTY4NjY= | 3,427 | I want to create a tokenizer which the vocab file in my computer | {
"login": "xiongma",
"id": 30991932,
"node_id": "MDQ6VXNlcjMwOTkxOTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/30991932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiongma",
"html_url": "https://github.com/xiongma",
"followers_url": "https://api.github.com/users/xiongma/followers",
"following_url": "https://api.github.com/users/xiongma/following{/other_user}",
"gists_url": "https://api.github.com/users/xiongma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiongma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiongma/subscriptions",
"organizations_url": "https://api.github.com/users/xiongma/orgs",
"repos_url": "https://api.github.com/users/xiongma/repos",
"events_url": "https://api.github.com/users/xiongma/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiongma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Your config.json file should be named `config.json`, and then you should be able to do:\r\n```\r\nself.model_config = AutoConfig.from_pretrained(r'/home/cally/Awake/Code/bert/pre_model/')\r\nself.tokenizer = AutoTokenizer.from_pretrained(r'/home/cally/Awake/Code/bert/pre_model/')\r\n```\r\n\r\n(just the folder name).\r\n\r\nYou might have to add `model_type: \"bert\"` to your config.json.",
"> \r\n> \r\n> Your config.json file should be named `config.json`, and then you should be able to do:\r\n> \r\n> ```\r\n> self.model_config = AutoConfig.from_pretrained(r'/home/cally/Awake/Code/bert/pre_model/')\r\n> self.tokenizer = AutoTokenizer.from_pretrained(r'/home/cally/Awake/Code/bert/pre_model/')\r\n> ```\r\n> \r\n> (just the folder name).\r\n> \r\n> You might have to add `model_type: \"bert\"` to your config.json.\r\n\r\nI encountered the same problem and sorry to bother, but where should i add this line to?"
] | 1,585 | 1,620 | 1,585 | NONE | null | I want to create a tokenizer on my computer from a local vocab file; my code is below:
```
self.model_config = AutoConfig.from_pretrained(r'/home/cally/Awake/Code/bert/pre_model/bert_config.json')
self.tokenizer = AutoTokenizer.from_pretrained(r'/home/cally/Awake/Code/bert/pre_model/vocab.txt')
```
Running this raised the error below. What's wrong? My environment is Linux, Python 3.7, transformers 2.6.
```
OSError: Couldn't reach server at '/home/cally/Awake/Code/bert/pre_model/vocab.txt' to download configuration file or configuration file is not a valid JSON file. Please check network or file content here: /home/cally/Awake/Code/bert/pre_model/vocab.txt.
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3427/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3426 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3426/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3426/comments | https://api.github.com/repos/huggingface/transformers/issues/3426/events | https://github.com/huggingface/transformers/issues/3426 | 587,449,620 | MDU6SXNzdWU1ODc0NDk2MjA= | 3,426 | Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_selec | {
"login": "ankit-ai",
"id": 44282943,
"node_id": "MDQ6VXNlcjQ0MjgyOTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/44282943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ankit-ai",
"html_url": "https://github.com/ankit-ai",
"followers_url": "https://api.github.com/users/ankit-ai/followers",
"following_url": "https://api.github.com/users/ankit-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/ankit-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ankit-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ankit-ai/subscriptions",
"organizations_url": "https://api.github.com/users/ankit-ai/orgs",
"repos_url": "https://api.github.com/users/ankit-ai/repos",
"events_url": "https://api.github.com/users/ankit-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/ankit-ai/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Are the inputs also being cast to GPU?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | NONE | null | I wrapped the code base into a Flask container and tried to run it on a GPU, and I am running into a GPU device copy issue. Looking for pointers.
I have verified that the model actually gets moved to the GPU:
` self.model.to(torch.device("cuda" if torch.cuda.is_available() and not "store_true" else "cpu"))
self.model.eval() # TO HERE`
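
As the comments ask, the inputs also need to be moved to the GPU; a minimal sketch (note, as an aside, that `not "store_true"` in the snippet above is always `False`, so that expression always selects `"cpu"`):

```python
import torch

# move the model and every input tensor to the same device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
input_ids = input_ids.to(device)            # .to() on tensors returns a copy; reassign it
attention_mask = attention_mask.to(device)
```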
```
backend_1 | File "/app/run_generation.py", line 285, in hook
backend_1 | num_return_sequences=args.num_return_sequences,
backend_1 | File "/usr/local/lib/python3.6/dist-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad
backend_1 | return func(*args, **kwargs)
backend_1 | File "/app/transformers/modeling_utils.py", line 979, in generate
backend_1 | attention_mask=attention_mask,
backend_1 | File "/app/transformers/modeling_utils.py", line 1016, in _generate_no_beam_search
backend_1 | outputs = self(**model_inputs)
backend_1 | File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
backend_1 | result = self.forward(*input, **kwargs)
backend_1 | File "/app/transformers/modeling_gpt2.py", line 599, in forward
backend_1 | inputs_embeds=inputs_embeds,
backend_1 | File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
backend_1 | result = self.forward(*input, **kwargs)
backend_1 | File "/app/transformers/modeling_gpt2.py", line 465, in forward
backend_1 | inputs_embeds = self.wte(input_ids)
backend_1 | File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
backend_1 | result = self.forward(*input, **kwargs)
backend_1 | File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/sparse.py", line 114, in forward
backend_1 | self.norm_type, self.scale_grad_by_freq, self.sparse)
backend_1 | File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1484, in embedding
backend_1 | return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
backend_1 | RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_selec
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3426/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3426/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3425 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3425/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3425/comments | https://api.github.com/repos/huggingface/transformers/issues/3425/events | https://github.com/huggingface/transformers/pull/3425 | 587,431,688 | MDExOlB1bGxSZXF1ZXN0MzkzMzc1Njc1 | 3,425 | Update model card huseinzol05/bert-base-bahasa-cased | {
"login": "huseinzol05",
"id": 19810909,
"node_id": "MDQ6VXNlcjE5ODEwOTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/19810909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/huseinzol05",
"html_url": "https://github.com/huseinzol05",
"followers_url": "https://api.github.com/users/huseinzol05/followers",
"following_url": "https://api.github.com/users/huseinzol05/following{/other_user}",
"gists_url": "https://api.github.com/users/huseinzol05/gists{/gist_id}",
"starred_url": "https://api.github.com/users/huseinzol05/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/huseinzol05/subscriptions",
"organizations_url": "https://api.github.com/users/huseinzol05/orgs",
"repos_url": "https://api.github.com/users/huseinzol05/repos",
"events_url": "https://api.github.com/users/huseinzol05/events{/privacy}",
"received_events_url": "https://api.github.com/users/huseinzol05/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Hi @huseinzol05! can you rebase on master so that it's easy to merge?",
"@julien-c , done! thank u very much!",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3425?src=pr&el=h1) Report\n> Merging [#3425](https://codecov.io/gh/huggingface/transformers/pull/3425?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9c683ef01e19c4dc1216dcd1ae3c8e7c44d7b2b9&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3425?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3425 +/- ##\n=======================================\n Coverage 77.76% 77.76% \n=======================================\n Files 100 100 \n Lines 16995 16995 \n=======================================\n+ Hits 13216 13217 +1 \n+ Misses 3779 3778 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3425?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3425/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.32% <0.00%> (+0.17%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3425?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3425?src=pr&el=footer). Last update [9c683ef...33d0e10](https://codecov.io/gh/huggingface/transformers/pull/3425?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@julien-c , added xlnet-base README"
] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3425/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3425",
"html_url": "https://github.com/huggingface/transformers/pull/3425",
"diff_url": "https://github.com/huggingface/transformers/pull/3425.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3425.patch",
"merged_at": 1585223428000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3424 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3424/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3424/comments | https://api.github.com/repos/huggingface/transformers/issues/3424/events | https://github.com/huggingface/transformers/issues/3424 | 587,387,008 | MDU6SXNzdWU1ODczODcwMDg= | 3,424 | Where is the code for Bart fine-tuning? Thanks | {
"login": "qiunlp",
"id": 24563279,
"node_id": "MDQ6VXNlcjI0NTYzMjc5",
"avatar_url": "https://avatars.githubusercontent.com/u/24563279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qiunlp",
"html_url": "https://github.com/qiunlp",
"followers_url": "https://api.github.com/users/qiunlp/followers",
"following_url": "https://api.github.com/users/qiunlp/following{/other_user}",
"gists_url": "https://api.github.com/users/qiunlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qiunlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qiunlp/subscriptions",
"organizations_url": "https://api.github.com/users/qiunlp/orgs",
"repos_url": "https://api.github.com/users/qiunlp/repos",
"events_url": "https://api.github.com/users/qiunlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/qiunlp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi,\r\nthe code is in `transformers/examples/summarization/bart/`. Read the `README.md` file.\r\n\r\n> To use your own data, copy that files format. Each article to be summarized is on its own line.\r\n\r\nOr look at issue #3672"
] | 1,585 | 1,587 | 1,587 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3424/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3424/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/3423 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3423/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3423/comments | https://api.github.com/repos/huggingface/transformers/issues/3423/events | https://github.com/huggingface/transformers/pull/3423 | 587,356,095 | MDExOlB1bGxSZXF1ZXN0MzkzMzE3ODYz | 3,423 | Experiment w/ dataclasses (including Py36) | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3423/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3423",
"html_url": "https://github.com/huggingface/transformers/pull/3423",
"diff_url": "https://github.com/huggingface/transformers/pull/3423.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3423.patch",
"merged_at": 1585149021000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3422 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3422/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3422/comments | https://api.github.com/repos/huggingface/transformers/issues/3422/events | https://github.com/huggingface/transformers/pull/3422 | 587,319,727 | MDExOlB1bGxSZXF1ZXN0MzkzMjg4Njc1 | 3,422 | [BART] add bart-large-xsum weights | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3422?src=pr&el=h1) Report\n> Merging [#3422](https://codecov.io/gh/huggingface/transformers/pull/3422?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/17dceae7a1de5577cd0c07a97dcd5821a08af07c&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3422?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3422 +/- ##\n==========================================\n- Coverage 77.80% 77.79% -0.01% \n==========================================\n Files 100 100 \n Lines 17051 17051 \n==========================================\n- Hits 13266 13265 -1 \n- Misses 3785 3786 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3422?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `97.58% <ø> (ø)` | |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.81% <0.00%> (-0.14%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3422?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3422?src=pr&el=footer). Last update [17dceae...71fcbc9](https://codecov.io/gh/huggingface/transformers/pull/3422?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | - the conversion script can now take a path, which is required since this model is not on `torch.hub`. Fine-tuning with fairseq and then converting to Hugging Face should work. I also cleaned the script up a bit
- Config in S3 is already updated with author-recommended generation parameters:
`num_beams=6, length_penalty=1.0, min_length=11, max_length=62`
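For reference, a minimal usage sketch with these parameters — a sketch only, not part of this diff (the `bart-large-xsum` model id and the article text are placeholders):
```python
# Hedged sketch: load the new checkpoint and generate with the
# author-recommended parameters noted above.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("bart-large-xsum")
model = BartForConditionalGeneration.from_pretrained("bart-large-xsum")

article = "Some long news article text ..."  # placeholder input
input_ids = tokenizer.encode(article, return_tensors="pt")
summary_ids = model.generate(
    input_ids, num_beams=6, length_penalty=1.0, min_length=11, max_length=62
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```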
Context:
These weights are from BART fine-tuned on the XSum abstractive summarization challenge, which encourages shorter (more abstractive) summaries. It achieves state-of-the-art results.
Discussion:
- I propose changing the SummarizationPipeline default to this model in a separate PR, since the summaries are shorter (and high quality)! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3422/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3422",
"html_url": "https://github.com/huggingface/transformers/pull/3422",
"diff_url": "https://github.com/huggingface/transformers/pull/3422.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3422.patch",
"merged_at": 1585493473000
} |
https://api.github.com/repos/huggingface/transformers/issues/3421 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3421/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3421/comments | https://api.github.com/repos/huggingface/transformers/issues/3421/events | https://github.com/huggingface/transformers/pull/3421 | 587,315,408 | MDExOlB1bGxSZXF1ZXN0MzkzMjg1MTU3 | 3,421 | Added BioBERT-NLI model card | {
"login": "gsarti",
"id": 16674069,
"node_id": "MDQ6VXNlcjE2Njc0MDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/16674069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gsarti",
"html_url": "https://github.com/gsarti",
"followers_url": "https://api.github.com/users/gsarti/followers",
"following_url": "https://api.github.com/users/gsarti/following{/other_user}",
"gists_url": "https://api.github.com/users/gsarti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gsarti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gsarti/subscriptions",
"organizations_url": "https://api.github.com/users/gsarti/orgs",
"repos_url": "https://api.github.com/users/gsarti/repos",
"events_url": "https://api.github.com/users/gsarti/events{/privacy}",
"received_events_url": "https://api.github.com/users/gsarti/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3421?src=pr&el=h1) Report\n> Merging [#3421](https://codecov.io/gh/huggingface/transformers/pull/3421?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d0c36a7b7270f114322c191866d29abea383e5da&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3421?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3421 +/- ##\n==========================================\n+ Coverage 77.55% 77.56% +0.01% \n==========================================\n Files 100 100 \n Lines 16970 16970 \n==========================================\n+ Hits 13161 13163 +2 \n+ Misses 3809 3807 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3421?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3421/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.85% <0.00%> (+0.27%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3421?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3421?src=pr&el=footer). Last update [d0c36a7...e17e0d2](https://codecov.io/gh/huggingface/transformers/pull/3421?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3421/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3421/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3421",
"html_url": "https://github.com/huggingface/transformers/pull/3421",
"diff_url": "https://github.com/huggingface/transformers/pull/3421.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3421.patch",
"merged_at": 1585098956000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3420 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3420/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3420/comments | https://api.github.com/repos/huggingface/transformers/issues/3420/events | https://github.com/huggingface/transformers/issues/3420 | 587,277,985 | MDU6SXNzdWU1ODcyNzc5ODU= | 3,420 | Reading files takes forever in language modeling | {
"login": "abdallah197",
"id": 28394606,
"node_id": "MDQ6VXNlcjI4Mzk0NjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/28394606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abdallah197",
"html_url": "https://github.com/abdallah197",
"followers_url": "https://api.github.com/users/abdallah197/followers",
"following_url": "https://api.github.com/users/abdallah197/following{/other_user}",
"gists_url": "https://api.github.com/users/abdallah197/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abdallah197/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abdallah197/subscriptions",
"organizations_url": "https://api.github.com/users/abdallah197/orgs",
"repos_url": "https://api.github.com/users/abdallah197/repos",
"events_url": "https://api.github.com/users/abdallah197/events{/privacy}",
"received_events_url": "https://api.github.com/users/abdallah197/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"the same issue happens with me when running on 30 GB English Text, with the same parameters given to the script",
"@julien-c any recommendations on this issue?",
"Maybe launch on a single node and in a debugger, to see what's happening?",
"I have run the debugging both with pre-trained tokenizer using --model_name_or_path flag and with a trained tokenizer and config\r\nIn both cases, this line is where the conde hangs\r\n`self.examples = tokenizer.batch_encode_plus(lines, add_special_tokens=True, max_length=block_size)[\"input_ids\"]`\r\n",
"One suggestion is that you could create the cached dataset file once locally and then copy it over to wherever (cluster etc..) you're gonna be using it to train.",
"I have the same issue. I was training XML-Roberta and the training gets stuck in the step of creating features. Does anyone have a solution? Thanks!",
"@Genius1237 the cluster that I am using have better specs than my local machine. but also, I have tried this before and ended up in the same problem",
"@abdallah197 Can you do something like this (https://github.com/Microsoft/ptvsd/issues/1354#issuecomment-487289774). It basically allows you to debug installed packages. This way, you can debug into the `batch_encode_plus` function and see if it's failing for one particular example or just slow in general.",
"Visual Studio Code also has a pretty neat and easy-to-use debugger that you can even run on a remote machine.\r\n\r\nLet us know if you find root causes for your issue.",
"I am facing the same issue, may I ask if anyone got a solution? Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,595 | 1,595 | NONE | null | # 🐛 Bug
## Information
Model I am using (GPT2):
Language I am using the model on (music MIDI-file tokens):
The problem arises when using:
* [x] the official example scripts
The task I am working on is:
* language modeling, running the script run_language_modeling.py
## To reproduce
Steps to reproduce the behavior:
```
python -m torch.distributed.launch \
--nproc_per_node 4 run_language_modeling.py \
--train_data_file /nethome/abashir/data/train.txt \
--output_dir /data/users/abashir/model \
--model_type gpt2 --tokenizer_name /nethome/abashir/data/PianoAI \
--do_train --line_by_line --learning_rate 1e-4 --num_train_epochs 5 \
--save_total_limit 2 --save_steps 1000 --per_gpu_train_batch_size 8 \
--seed 42 --overwrite_cache --block_size 128
```
The output freezes at this stage for more than a day; the train file is less than 1 GB:
`03/23/2020 19:43:11 - INFO - __main__ - Creating features from dataset file at /nethome/abashir/data/train.txt`
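A workaround sketch, following the caching suggestion in the comments above — pre-tokenize the file once and save the result (the paths and the `gpt2` tokenizer name here are placeholders):
```python
# Hedged sketch: run the slow batch_encode_plus step once, up front,
# instead of letting every training launch hang on it.
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder tokenizer

with open("train.txt", encoding="utf-8") as f:
    lines = [line for line in f.read().splitlines() if line.strip()]

# This is the same call the script hangs on (see the comments above).
examples = tokenizer.batch_encode_plus(
    lines, add_special_tokens=True, max_length=128
)["input_ids"]
torch.save(examples, "cached_train.pt")
```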
## Expected behavior
Training should start right away after adding `--line_by_line`.
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.1
- Platform: Linux-4.4.0-45-generic-x86_64-with-debian-jessie-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3420/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3419 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3419/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3419/comments | https://api.github.com/repos/huggingface/transformers/issues/3419/events | https://github.com/huggingface/transformers/pull/3419 | 587,267,666 | MDExOlB1bGxSZXF1ZXN0MzkzMjQ2MzY3 | 3,419 | Adds translation pipeline | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3419?src=pr&el=h1) Report\n> Merging [#3419](https://codecov.io/gh/huggingface/transformers/pull/3419?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9c683ef01e19c4dc1216dcd1ae3c8e7c44d7b2b9&el=desc) will **increase** coverage by `0.03%`.\n> The diff coverage is `94.44%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3419?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3419 +/- ##\n==========================================\n+ Coverage 77.76% 77.80% +0.03% \n==========================================\n Files 100 100 \n Lines 16995 17025 +30 \n==========================================\n+ Hits 13216 13246 +30 \n Misses 3779 3779 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3419?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/3419/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.92% <ø> (ø)` | |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/3419/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `74.78% <94.44%> (+1.51%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3419/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.86% <0.00%> (+0.13%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3419?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3419?src=pr&el=footer). Last update [9c683ef...a5160a6](https://codecov.io/gh/huggingface/transformers/pull/3419?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,585 | 1,585 | MEMBER | null | The API for translation is as follows:
```
en_fr_translation = pipeline("translation_en_to_fr")
en_fr_translation("How old are you?")
```
for English to French translation.
The PR adds tests and gives an example in the docstring.
It builds on #3413 and should be merged after that one.
Example:

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3419/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3419",
"html_url": "https://github.com/huggingface/transformers/pull/3419",
"diff_url": "https://github.com/huggingface/transformers/pull/3419.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3419.patch",
"merged_at": 1585227058000
} |
https://api.github.com/repos/huggingface/transformers/issues/3418 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3418/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3418/comments | https://api.github.com/repos/huggingface/transformers/issues/3418/events | https://github.com/huggingface/transformers/issues/3418 | 587,242,349 | MDU6SXNzdWU1ODcyNDIzNDk= | 3,418 | Unused function in squad metrics | {
"login": "Santosh-Gupta",
"id": 5524261,
"node_id": "MDQ6VXNlcjU1MjQyNjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5524261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Santosh-Gupta",
"html_url": "https://github.com/Santosh-Gupta",
"followers_url": "https://api.github.com/users/Santosh-Gupta/followers",
"following_url": "https://api.github.com/users/Santosh-Gupta/following{/other_user}",
"gists_url": "https://api.github.com/users/Santosh-Gupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Santosh-Gupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Santosh-Gupta/subscriptions",
"organizations_url": "https://api.github.com/users/Santosh-Gupta/orgs",
"repos_url": "https://api.github.com/users/Santosh-Gupta/repos",
"events_url": "https://api.github.com/users/Santosh-Gupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/Santosh-Gupta/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,590 | 1,590 | CONTRIBUTOR | null | In the squad metrics file, I noticed an unused function `find_all_best_thresh_v2`
https://github.com/huggingface/transformers/blob/master/src/transformers/data/metrics/squad_metrics.py#L167
It appears to pertain to the SQuAD v2.0 dataset, which has 'impossible' questions for some contexts.
It looks like it was meant to be used like this:
```
if no_answer_probs:
if version_2_with_negative:
find_all_best_thresh_v2(evaluation, preds, exact, f1, no_answer_probs, qas_id_to_has_answer)
else:
find_all_best_thresh(evaluation, preds, exact, f1, no_answer_probs, qas_id_to_has_answer)
```
here
https://github.com/huggingface/transformers/blob/master/src/transformers/data/metrics/squad_metrics.py#L236 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3418/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3417 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3417/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3417/comments | https://api.github.com/repos/huggingface/transformers/issues/3417/events | https://github.com/huggingface/transformers/pull/3417 | 587,234,634 | MDExOlB1bGxSZXF1ZXN0MzkzMjE5NTAz | 3,417 | Fix XLNet batch generation bug | {
"login": "neonbjb",
"id": 833082,
"node_id": "MDQ6VXNlcjgzMzA4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/833082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neonbjb",
"html_url": "https://github.com/neonbjb",
"followers_url": "https://api.github.com/users/neonbjb/followers",
"following_url": "https://api.github.com/users/neonbjb/following{/other_user}",
"gists_url": "https://api.github.com/users/neonbjb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neonbjb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neonbjb/subscriptions",
"organizations_url": "https://api.github.com/users/neonbjb/orgs",
"repos_url": "https://api.github.com/users/neonbjb/repos",
"events_url": "https://api.github.com/users/neonbjb/events{/privacy}",
"received_events_url": "https://api.github.com/users/neonbjb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @neonbjb,\r\n\r\nsorry to answer waaaay to late.\r\nBatch generation currently does not work. See #3021\r\n\r\nWe are not sure yet when and how to add this feature.",
"Uhhhh.. but it could with this PR? I have it working on my cloned repo and used it in this writeup. Your call though.\r\nhttps://nonint.com/2020/03/27/fine-tuning-xlnet-for-generation-tasks/"
] | 1,585 | 1,591 | 1,591 | CONTRIBUTOR | null | When doing batch generation with XLNet, only the first element in the batch contains any predictions. This seems to be caused by the target_mapping being improperly initialized in prepare_inputs_for_generation. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3417/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3417/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3417",
"html_url": "https://github.com/huggingface/transformers/pull/3417",
"diff_url": "https://github.com/huggingface/transformers/pull/3417.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3417.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3416 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3416/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3416/comments | https://api.github.com/repos/huggingface/transformers/issues/3416/events | https://github.com/huggingface/transformers/issues/3416 | 587,227,546 | MDU6SXNzdWU1ODcyMjc1NDY= | 3,416 | XLNet model on S3 not set up correctly? | {
"login": "martinralfreindl",
"id": 61122332,
"node_id": "MDQ6VXNlcjYxMTIyMzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/61122332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/martinralfreindl",
"html_url": "https://github.com/martinralfreindl",
"followers_url": "https://api.github.com/users/martinralfreindl/followers",
"following_url": "https://api.github.com/users/martinralfreindl/following{/other_user}",
"gists_url": "https://api.github.com/users/martinralfreindl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/martinralfreindl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/martinralfreindl/subscriptions",
"organizations_url": "https://api.github.com/users/martinralfreindl/orgs",
"repos_url": "https://api.github.com/users/martinralfreindl/repos",
"events_url": "https://api.github.com/users/martinralfreindl/events{/privacy}",
"received_events_url": "https://api.github.com/users/martinralfreindl/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"After looking into this some more. It looks like my issue was arising from the fact that the XLNet authors published both a 'base' and 'large' version of their model: \r\n\r\n\r\n\r\nIt seems like the config defaults to the 'large' version, but I am loading the 'base' version. Adjustments to the parameters as outlined make sense in this light. ",
"There are simpler ways to do what you describe; first of all, you don't have to specify the configuration file, the model will load it automatically:\r\n\r\n```py\r\nfrom transformers import XLNetForSequenceClassification, XLNetConfig\r\n\r\nmodel = XLNetForSequenceClassification.from_pretrained(\"xlnet-base-cased\")\r\n```\r\n\r\nSecondly, you can also instantiate a configuration from a pre-trained checkpoint:\r\n\r\n```py\r\nfrom transformers import XLNetForSequenceClassification, XLNetConfig\r\nconfiguration = XLNetConfig.from_pretrained(\"xlnet-base-cased\")\r\nmodel = XLNetForSequenceClassification.from_pretrained(\"xlnet-base-cased\", config=configuration)\r\n```"
] | 1,585 | 1,585 | 1,585 | NONE | null | Hi all,
**I'm trying to initialize an XLNetForSequenceClassification model as follows:**
```
from transformers import XLNetForSequenceClassification, XLNetConfig
configuration = XLNetConfig()
model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", config=configuration)
```
**However I'm getting an error:**
> RuntimeError: Error(s) in loading state_dict for XLNetForSequenceClassification:
> size mismatch for transformer.mask_emb: copying a param with shape torch.Size([1, 1, 768]) from checkpoint, the shape in current model is torch.Size([1, 1, 1024]).
> size mismatch for transformer.word_embedding.weight: copying a param with shape torch.Size([32000, 768]) from checkpoint, the shape in current model is torch.Size([32000, 1024]).
> size mismatch for transformer.layer.0.rel_attn.q: copying a param with shape torch.Size([768, 12, 64]) from checkpoint, the shape in current model is torch.Size([1024, 16, 64]).
> ...
**I can get my code to run by specifying in my config:**
```
configuration.d_model = 768 # hidden size --> should be 1024
configuration.n_head = 12 # number of attention heads --> should be 16
configuration.d_inner = 3072 # FFN inner hidden size --> should be 4096
```
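As a sanity check — a minimal sketch that loads the checkpoint's own configuration instead of the defaults (the printed values follow the sizes above):
```python
from transformers import XLNetConfig

# Config shipped with the base checkpoint.
base_config = XLNetConfig.from_pretrained("xlnet-base-cased")
print(base_config.d_model, base_config.n_head, base_config.d_inner)  # 768 12 3072

# Default constructor values, which match the large checkpoint.
default_config = XLNetConfig()
print(default_config.d_model, default_config.n_head, default_config.d_inner)  # 1024 16 4096
```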
But I am somewhat confused: These adjustments to the model are not in line with XLNet as introduced in the [XLNet paper](https://arxiv.org/pdf/1906.08237.pdf) (page 13). Am I understanding something wrong here, or is the XLNet model on S3 not set up correctly? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3416/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3415 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3415/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3415/comments | https://api.github.com/repos/huggingface/transformers/issues/3415/events | https://github.com/huggingface/transformers/issues/3415 | 587,127,797 | MDU6SXNzdWU1ODcxMjc3OTc= | 3,415 | Problem with running Transformer Notebook: How to train a language model | {
"login": "giulianobertoti",
"id": 2041679,
"node_id": "MDQ6VXNlcjIwNDE2Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2041679?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/giulianobertoti",
"html_url": "https://github.com/giulianobertoti",
"followers_url": "https://api.github.com/users/giulianobertoti/followers",
"following_url": "https://api.github.com/users/giulianobertoti/following{/other_user}",
"gists_url": "https://api.github.com/users/giulianobertoti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/giulianobertoti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/giulianobertoti/subscriptions",
"organizations_url": "https://api.github.com/users/giulianobertoti/orgs",
"repos_url": "https://api.github.com/users/giulianobertoti/repos",
"events_url": "https://api.github.com/users/giulianobertoti/events{/privacy}",
"received_events_url": "https://api.github.com/users/giulianobertoti/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
}
] | [
"Importing CONFIG_MAPPING from https://github.com/huggingface/transformers/blob/master/src/transformers/configuration_auto.py to https://github.com/huggingface/transformers/blob/master/src/transformers/__init__.py fixes the problem (for the last version of the package)",
"This should be fixed on master by f8823bad9a23f6623e91e71719e65342de877cb9. Can you please try again, and re-open if necessary?\r\n\r\n(in a Colab notebook, you'll need to re-download the `run_language_modeling.py` script using `!wget https://raw.githubusercontent.com/huggingface/transformers/master/examples/run_language_modeling.py`)",
"It is now showing the following error,\r\n\r\nTraceback (most recent call last):\r\n File \"run_language_modeling.py\", line 782, in <module>\r\n main()\r\n File \"run_language_modeling.py\", line 677, in main\r\n config = AutoConfig.from_pretrained(args.config_name, cache_dir=args.cache_dir)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/configuration_auto.py\", line 198, in from_pretrained\r\n \"in its name: {}\".format(pretrained_model_name_or_path, \", \".join(CONFIG_MAPPING.keys()))\r\nValueError: Unrecognized model in /content/models/RoBERTa_GPT/. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: t5, distilbert, albert, camembert, xlm-roberta, bart, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl\r\nCPU times: user 41.1 ms, sys: 8.81 ms, total: 49.9 ms\r\nWall time: 6.8 s",
"@Yamantaka01 Your config.json in `/content/models/RoBERTa_GPT/config.json` should contain a model_type key"
] | 1,585 | 1,585 | 1,585 | NONE | null | Hello,
on the page:
https://github.com/huggingface/transformers/blob/master/notebooks/README.md
I clicked "Open in Colab" on the notebook "How to train a language model".
When the last cell was run:
```
%%time
!{cmd}
```
the following error was presented:
```
Traceback (most recent call last):
  File "run_language_modeling.py", line 40, in <module>
    from transformers import (
ImportError: cannot import name 'CONFIG_MAPPING'
CPU times: user 42.2 ms, sys: 18 ms, total: 60.2 ms
Wall time: 5.82 s
```
Any suggestions?
Thank you!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3415/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3414 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3414/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3414/comments | https://api.github.com/repos/huggingface/transformers/issues/3414/events | https://github.com/huggingface/transformers/issues/3414 | 587,056,829 | MDU6SXNzdWU1ODcwNTY4Mjk= | 3,414 | Add custom rules for sampling from GPT-2 Generator | {
"login": "simonefrancia",
"id": 7140210,
"node_id": "MDQ6VXNlcjcxNDAyMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7140210?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonefrancia",
"html_url": "https://github.com/simonefrancia",
"followers_url": "https://api.github.com/users/simonefrancia/followers",
"following_url": "https://api.github.com/users/simonefrancia/following{/other_user}",
"gists_url": "https://api.github.com/users/simonefrancia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simonefrancia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonefrancia/subscriptions",
"organizations_url": "https://api.github.com/users/simonefrancia/orgs",
"repos_url": "https://api.github.com/users/simonefrancia/repos",
"events_url": "https://api.github.com/users/simonefrancia/events{/privacy}",
"received_events_url": "https://api.github.com/users/simonefrancia/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @simonefrancia happy that you liked the blog post :-) \r\n\r\nThis sounds like quite a special sampling function, so the best you can do would be to fork/clone the repo and add this functionality yourself. If the sampling function is too special we probably will not include it into the master branch. \r\n\r\nBut feel free to open a PR if you think it adds value and is quite general."
] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | Hi @patrickvonplaten,
I've read your [blogpost](https://huggingface.co/blog/how-to-generate) and it's really interesting. Thanks! I have a question about it.
We have recently trained a GPT-2 generator with HF on general text, and it works well.
But to get better results on my custom task, I would like to add a custom sampling function that maximizes a certain property of the generated text.
Concretely, I would like to apply custom sampling to the top-k words chosen by GPT-2 in order to maximize, for example, the number of vowels in the text I am generating.
Could you help me think through this solution?
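Here is a rough sketch of the kind of thing I have in mind — purely illustrative (the `vowel_bonus` helper and the 0.5 weight are made up, not library APIs):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def vowel_bonus(token_id):
    # Hypothetical scoring rule: count vowels in the decoded token.
    return sum(c in "aeiouAEIOU" for c in tokenizer.decode([token_id]))

input_ids = tokenizer.encode("The weather today", return_tensors="pt")
with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids)[0][:, -1, :]  # next-token logits
        top_k = torch.topk(logits[0], k=50)
        bonus = torch.tensor(
            [vowel_bonus(i) for i in top_k.indices.tolist()], dtype=torch.float
        )
        probs = torch.softmax(top_k.values + 0.5 * bonus, dim=-1)
        next_id = top_k.indices[torch.multinomial(probs, 1)]
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=-1)
print(tokenizer.decode(input_ids[0]))
```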
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3414/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3414/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3413 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3413/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3413/comments | https://api.github.com/repos/huggingface/transformers/issues/3413/events | https://github.com/huggingface/transformers/pull/3413 | 586,999,880 | MDExOlB1bGxSZXF1ZXN0MzkzMDIzNjQ0 | 3,413 | Add t5 to pipeline(task='summarization') | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3413?src=pr&el=h1) Report\n> Merging [#3413](https://codecov.io/gh/huggingface/transformers/pull/3413?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e392ba6938f50655a195ea7ec8a260b1e9fc6058&el=desc) will **increase** coverage by `0.02%`.\n> The diff coverage is `93.75%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3413?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3413 +/- ##\n==========================================\n+ Coverage 77.56% 77.58% +0.02% \n==========================================\n Files 100 100 \n Lines 16970 16993 +23 \n==========================================\n+ Hits 13162 13184 +22 \n- Misses 3808 3809 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3413?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.71% <ø> (-0.02%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/3413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `73.05% <93.10%> (+0.52%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.44% <100.00%> (+0.52%)` | :arrow_up: |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.89% <100.00%> (+0.05%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3413?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3413?src=pr&el=footer). Last update [e392ba6...23778d1](https://codecov.io/gh/huggingface/transformers/pull/3413?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,585 | 1,585 | MEMBER | null | This PR:
- adds T5 to summarization pipelines (see the usage sketch below)
- adds warnings and better defaults to Bart/T5 summarization
- removes an unnecessary assert in the generate() function
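A minimal usage sketch of the pipeline with a T5 checkpoint (the `t5-small` model and the example text are placeholders; release defaults may differ):
```python
from transformers import pipeline

# Summarization pipeline backed by T5, as enabled by this PR.
summarizer = pipeline("summarization", model="t5-small", tokenizer="t5-small")
article = "The tower is 324 metres tall, about the same height as an 81-storey building ..."
print(summarizer(article, min_length=10, max_length=40)[0]["summary_text"])
```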
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3413/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3413",
"html_url": "https://github.com/huggingface/transformers/pull/3413",
"diff_url": "https://github.com/huggingface/transformers/pull/3413.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3413.patch",
"merged_at": 1585216994000
} |