Dataset schema (column, dtype, stats):

| column | dtype | stats |
|---|---|---|
| url | stringlengths | 62-66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76-80 |
| comments_url | stringlengths | 71-75 |
| events_url | stringlengths | 69-73 |
| html_url | stringlengths | 50-56 |
| id | int64 | 377M-2.15B |
| node_id | stringlengths | 18-32 |
| number | int64 | 1-29.2k |
| title | stringlengths | 1-487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k-1.71k |
| updated_at | int64 | 1.54k-1.71k |
| closed_at | int64 | 1.54k-1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0-234k |
| reactions | dict | |
| timeline_url | stringlengths | 71-75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
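Each data row below lists its 28 field values in the schema's column order, one value per line. As a sketch of how such a flattened row maps back to a record, here is a minimal Python example; the field names come from the schema above, and the values are copied (and where noted, abbreviated) from the first row below, issue #7321. This snippet is illustrative only and is not part of the dump itself.

```python
# Reassemble one flattened row into a record dict.
# Field names: schema column order from the table above.
SCHEMA_FIELDS = [
    "url", "repository_url", "labels_url", "comments_url", "events_url",
    "html_url", "id", "node_id", "number", "title", "user", "labels",
    "state", "locked", "assignee", "assignees", "comments", "created_at",
    "updated_at", "closed_at", "author_association", "active_lock_reason",
    "body", "reactions", "timeline_url", "state_reason", "draft",
    "pull_request",
]

# Values from the first data row (issue #7321); dicts and long strings
# are abbreviated, as marked.
row = [
    "https://api.github.com/repos/huggingface/transformers/issues/7321",
    "https://api.github.com/repos/huggingface/transformers",
    "https://api.github.com/repos/huggingface/transformers/issues/7321/labels{/name}",
    "https://api.github.com/repos/huggingface/transformers/issues/7321/comments",
    "https://api.github.com/repos/huggingface/transformers/issues/7321/events",
    "https://github.com/huggingface/transformers/issues/7321",
    706477063,
    "MDU6SXNzdWU3MDY0NzcwNjM=",
    7321,
    "Example Format of Data for token classification",
    {"login": "Michael95-m", "id": 64765786},  # user dict, abbreviated
    [],                                        # labels
    "closed",                                  # state
    False,                                     # locked
    None,                                      # assignee
    [],                                        # assignees
    ["Thanks for your kind answer !!!"],       # comments, first (long) one omitted
    1600, 1601, 1601,                          # created/updated/closed_at, as shown in dump
    "NONE",                                    # author_association
    None,                                      # active_lock_reason
    "Hi!! I'd like to train the token classification model ...",  # body, abbreviated
    {"total_count": 0},                        # reactions dict, abbreviated
    "https://api.github.com/repos/huggingface/transformers/issues/7321/timeline",
    "completed",                               # state_reason
    None,                                      # draft (null: this row is an issue)
    None,                                      # pull_request (null: this row is an issue)
]

record = dict(zip(SCHEMA_FIELDS, row))
print(record["number"], record["state"], record["title"])
```

Rows whose `pull_request` field is a dict (and whose `draft` field is a bool) are pull requests; for plain issues both are null, as in this example.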
https://api.github.com/repos/huggingface/transformers/issues/7321
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7321/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7321/comments
https://api.github.com/repos/huggingface/transformers/issues/7321/events
https://github.com/huggingface/transformers/issues/7321
706,477,063
MDU6SXNzdWU3MDY0NzcwNjM=
7,321
Example Format of Data for token classification
{ "login": "Michael95-m", "id": 64765786, "node_id": "MDQ6VXNlcjY0NzY1Nzg2", "avatar_url": "https://avatars.githubusercontent.com/u/64765786?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Michael95-m", "html_url": "https://github.com/Michael95-m", "followers_url": "https://api.github.com/users/Michael95-m/followers", "following_url": "https://api.github.com/users/Michael95-m/following{/other_user}", "gists_url": "https://api.github.com/users/Michael95-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/Michael95-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Michael95-m/subscriptions", "organizations_url": "https://api.github.com/users/Michael95-m/orgs", "repos_url": "https://api.github.com/users/Michael95-m/repos", "events_url": "https://api.github.com/users/Michael95-m/events{/privacy}", "received_events_url": "https://api.github.com/users/Michael95-m/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Michael95-m ,\r\n\r\nthe dataset format for e.g. the normal NER task is relatively simple: one token - label pair per line and an empty line specified a new sentence.\r\n\r\nSo here's a good example from Spanish CoNLL dataset for NER:\r\n\r\n```bash\r\nMelbourne B-LOC\r\n( O\r\nAustralia B-LOC\r\n) O\r\n, O\r\n25 O\r\nmay O\r\n( O\r\nEFE B-ORG\r\n) O\r\n. O\r\n\r\n- O\r\n\r\nEl O\r\nAbogado B-PER\r\nGeneral I-PER\r\ndel I-PER\r\nEstado I-PER\r\n, O\r\nDaryl B-PER\r\nWilliams I-PER\r\n, O\r\nsubrayó O\r\nhoy O\r\nla O\r\nnecesidad O\r\nde O\r\ntomar O\r\nmedidas O\r\npara O\r\nproteger O\r\nal O\r\nsistema O\r\njudicial O\r\naustraliano O\r\nfrente O\r\na O\r\nuna O\r\npágina O\r\nde O\r\ninternet O\r\nque O\r\nimposibilita O\r\nel O\r\ncumplimiento O\r\nde O\r\nlos O\r\nprincipios O\r\nbásicos O\r\nde O\r\nla O\r\nLey B-MISC\r\n. O\r\n\r\n```\r\n\r\nEach line consists of token/word and its corresponding label, delimited by a space. An empty line denotes a new sentence.\r\n\r\nTechnically, the parsing of your input file is done here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/f5518e56318a79056ba3c80cbece29d9fe98558c/examples/token-classification/tasks.py#L18-L44\r\n\r\nI hope this helps :)\r\n\r\n", "Thanks for your kind answer !!!" ]
1,600
1,601
1,601
NONE
null
Hi!! I'd like to train the token classification model but I don't know what is the right format of data for token classification training. Thank you.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7321/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7320
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7320/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7320/comments
https://api.github.com/repos/huggingface/transformers/issues/7320/events
https://github.com/huggingface/transformers/pull/7320
706,464,175
MDExOlB1bGxSZXF1ZXN0NDkxMDA2MTA3
7,320
Test CI with higher timeout
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,600
1,600
1,600
CONTRIBUTOR
null
Test.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7320/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7320", "html_url": "https://github.com/huggingface/transformers/pull/7320", "diff_url": "https://github.com/huggingface/transformers/pull/7320.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7320.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7319
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7319/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7319/comments
https://api.github.com/repos/huggingface/transformers/issues/7319/events
https://github.com/huggingface/transformers/pull/7319
706,439,962
MDExOlB1bGxSZXF1ZXN0NDkwOTg1ODI1
7,319
[Bug Fix] Fix run_squad.py evaluation code doesn't use probabilities
{ "login": "elronbandel", "id": 23455264, "node_id": "MDQ6VXNlcjIzNDU1MjY0", "avatar_url": "https://avatars.githubusercontent.com/u/23455264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elronbandel", "html_url": "https://github.com/elronbandel", "followers_url": "https://api.github.com/users/elronbandel/followers", "following_url": "https://api.github.com/users/elronbandel/following{/other_user}", "gists_url": "https://api.github.com/users/elronbandel/gists{/gist_id}", "starred_url": "https://api.github.com/users/elronbandel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elronbandel/subscriptions", "organizations_url": "https://api.github.com/users/elronbandel/orgs", "repos_url": "https://api.github.com/users/elronbandel/repos", "events_url": "https://api.github.com/users/elronbandel/events{/privacy}", "received_events_url": "https://api.github.com/users/elronbandel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7319?src=pr&el=h1) Report\n> Merging [#7319](https://codecov.io/gh/huggingface/transformers/pull/7319?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e4b94d8e581e547eaf9e47b76fd1a6497e911905?el=desc) will **decrease** coverage by `2.73%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7319/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7319?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7319 +/- ##\n==========================================\n- Coverage 81.59% 78.85% -2.74% \n==========================================\n Files 174 174 \n Lines 33671 33671 \n==========================================\n- Hits 27474 26552 -922 \n- Misses 6197 7119 +922 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7319?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `19.71% <0.00%> (-51.90%)` | :arrow_down: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `75.00% <0.00%> (-25.00%)` | :arrow_down: |\n| 
[src/transformers/activations\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% <0.00%> (-20.84%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.17% <0.00%> (-15.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.08% <0.00%> (+0.24%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.88% <0.00%> (+0.38%)` | :arrow_up: |\n| ... and [14 more](https://codecov.io/gh/huggingface/transformers/pull/7319/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7319?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7319?src=pr&el=footer). Last update [e4b94d8...be00f57](https://codecov.io/gh/huggingface/transformers/pull/7319?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Opened Issue: [[BUG] Wrong Scores for many SQUAD models ](https://github.com/huggingface/transformers/issues/8710) #8710", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,600
1,619
1,619
NONE
null
Modification of run_squad.py fine tuning example so it will use the answer correctness probabilities the models produce while evaluating the model and calculating the best thresholds. Evaluation was done without the evaluated model probabilities but rather with default zero values. It corrupted the evaluation results and the best thresholds (which evaluated always as 0.0). **Notice: many squad models were evaluated without the probabilities, therefore, the results published in their model cards are possibly wrong.** Example: [ahotrod/electra_large_discriminator_squad2_512](https://huggingface.co/ahotrod/electra_large_discriminator_squad2_512) The results with current evaluation script: ``` "exact": 87.09677419354838, "f1": 89.98343832723452, "total": 11873, "HasAns_exact": 84.66599190283401, "HasAns_f1": 90.44759839056285, "HasAns_total": 5928, "NoAns_exact": 89.52060555088309, "NoAns_f1": 89.52060555088309, "NoAns_total": 5945, "best_exact": 87.09677419354838, "best_exact_thresh": 0.0, "best_f1": 89.98343832723432, "best_f1_thresh": 0.0 ``` The results after the fix: ``` 'exact': 87.00412701086499, 'f1': 89.77725380276271, 'total': 11873, 'HasAns_exact': 83.80566801619433, 'HasAns_f1': 89.35987422405582, 'HasAns_total': 5928, 'NoAns_exact': 90.19343986543313, 'NoAns_f1': 90.19343986543313, 'NoAns_total': 5945, 'best_exact': 87.34102585698643, 'best_exact_thresh': 0.09882385462915344, 'best_f1': 90.07804792988485, 'best_f1_thresh': 0.09882385462915344 ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7319/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7319/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7319", "html_url": "https://github.com/huggingface/transformers/pull/7319", "diff_url": "https://github.com/huggingface/transformers/pull/7319.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7319.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7318
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7318/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7318/comments
https://api.github.com/repos/huggingface/transformers/issues/7318/events
https://github.com/huggingface/transformers/pull/7318
706,437,638
MDExOlB1bGxSZXF1ZXN0NDkwOTgzODY3
7,318
Fixes for LayoutLM
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,600
1,600
1,600
COLLABORATOR
null
Adds the commands from the new script to check for model copies and clean up a bit the docstrings.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7318/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7318", "html_url": "https://github.com/huggingface/transformers/pull/7318", "diff_url": "https://github.com/huggingface/transformers/pull/7318.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7318.patch", "merged_at": 1600785431000 }
https://api.github.com/repos/huggingface/transformers/issues/7317
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7317/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7317/comments
https://api.github.com/repos/huggingface/transformers/issues/7317/events
https://github.com/huggingface/transformers/pull/7317
706,390,104
MDExOlB1bGxSZXF1ZXN0NDkwOTQ0NTA1
7,317
Create README.md
{ "login": "blinovpd", "id": 64527177, "node_id": "MDQ6VXNlcjY0NTI3MTc3", "avatar_url": "https://avatars.githubusercontent.com/u/64527177?v=4", "gravatar_id": "", "url": "https://api.github.com/users/blinovpd", "html_url": "https://github.com/blinovpd", "followers_url": "https://api.github.com/users/blinovpd/followers", "following_url": "https://api.github.com/users/blinovpd/following{/other_user}", "gists_url": "https://api.github.com/users/blinovpd/gists{/gist_id}", "starred_url": "https://api.github.com/users/blinovpd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/blinovpd/subscriptions", "organizations_url": "https://api.github.com/users/blinovpd/orgs", "repos_url": "https://api.github.com/users/blinovpd/repos", "events_url": "https://api.github.com/users/blinovpd/events{/privacy}", "received_events_url": "https://api.github.com/users/blinovpd/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,600
1,600
1,600
CONTRIBUTOR
null
<!-- add model card to blinoff/roberta-base-russian-v0 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7317/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7317/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7317", "html_url": "https://github.com/huggingface/transformers/pull/7317", "diff_url": "https://github.com/huggingface/transformers/pull/7317.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7317.patch", "merged_at": 1600813573000 }
https://api.github.com/repos/huggingface/transformers/issues/7316
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7316/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7316/comments
https://api.github.com/repos/huggingface/transformers/issues/7316/events
https://github.com/huggingface/transformers/pull/7316
706,371,594
MDExOlB1bGxSZXF1ZXN0NDkwOTMwMTI0
7,316
Support for Windows in check_copies
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7316?src=pr&el=h1) Report\n> Merging [#7316](https://codecov.io/gh/huggingface/transformers/pull/7316?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e46108817e13f5612cfe798570d38a44a9e65ba0?el=desc) will **decrease** coverage by `1.78%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7316/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7316?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7316 +/- ##\n==========================================\n- Coverage 81.46% 79.67% -1.79% \n==========================================\n Files 174 174 \n Lines 33670 33670 \n==========================================\n- Hits 27428 26828 -600 \n- Misses 6242 6842 +600 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7316?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.40% <0.00%> (-42.32%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.96% <0.00%> (-30.18%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/7316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.97% <0.00%> (-24.78%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `60.81% <0.00%> (-22.62%)` | :arrow_down: |\n| 
[src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `20.53% <0.00%> (-21.21%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `40.00% <0.00%> (-18.89%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.17% <0.00%> (-15.39%)` | :arrow_down: |\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/7316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `83.58% <0.00%> (-2.99%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.08% <0.00%> (+0.24%)` | :arrow_up: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.20% <0.00%> (+0.27%)` | :arrow_up: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/7316/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7316?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7316?src=pr&el=footer). Last update [e461088...5c2a962](https://codecov.io/gh/huggingface/transformers/pull/7316?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "It works! No issue at all." ]
1,600
1,600
1,600
COLLABORATOR
null
This is (hopefully) all what is necessary to make the script `check_copies.py` work on Windows. @jplu if you can checkout this PR locally and confirm, that would be great!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7316/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7316/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7316", "html_url": "https://github.com/huggingface/transformers/pull/7316", "diff_url": "https://github.com/huggingface/transformers/pull/7316.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7316.patch", "merged_at": 1600784269000 }
https://api.github.com/repos/huggingface/transformers/issues/7315
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7315/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7315/comments
https://api.github.com/repos/huggingface/transformers/issues/7315/events
https://github.com/huggingface/transformers/issues/7315
706,280,596
MDU6SXNzdWU3MDYyODA1OTY=
7,315
Memory leak
{ "login": "cahya-wirawan", "id": 7669893, "node_id": "MDQ6VXNlcjc2Njk4OTM=", "avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cahya-wirawan", "html_url": "https://github.com/cahya-wirawan", "followers_url": "https://api.github.com/users/cahya-wirawan/followers", "following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}", "gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}", "starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions", "organizations_url": "https://api.github.com/users/cahya-wirawan/orgs", "repos_url": "https://api.github.com/users/cahya-wirawan/repos", "events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}", "received_events_url": "https://api.github.com/users/cahya-wirawan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This looks to be a duplicate of #7169", "But I think my problem is run out of the cpu memory, not the GPU memory", "Ah my bad, I misread one letter ;-)\r\nTo fully understand your error, what's the dataset (particularly its size) you are training on?", "The size of dataset (indonesian Wikipedia) is around 522MB.", "just additional info, running the script in single process doesn't have this issue. In my case, the memory usage is stable, and stay at 16GB after few epochs. \r\nBut I want to run it in multiple GPU, it is just too slow with only one :-)", "#6999 ", "I tried the fix from #6999 manually (which is just a one liner `return loss` to `return loss.detach()`, and it seems to solve my memory leak issue. The fix is actually available since version 3.2.0, but when I used the version 3.2.0 with multi gpu, the process just stuck after the 500 steps, maybe there is deadlock among processes? Maybe I will write another ticket regarding this issue. " ]
1,600
1,601
1,601
CONTRIBUTOR
null
## Environment info - `transformers` version: 3.1.0 - Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-debian-buster-sid - Python version: 3.7.0 - PyTorch version (GPU?): 1.5.1 (True) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: distributed ### Who can help @LysandreJik, @sgugger, @patrickvonplaten ## Information Model I am using (Bert, GPT2): The problem arises when using: * [ X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X ] my own task or dataset: (give details below) ## To reproduce When I pretrain or fine tune a model (in my case BERT and GPT2) using torch.distributed.launch, the CPU memory usage will grow up to the memory limit (>500GB) until the first process is killed due to this issue. If I train bert-base, it takes around 30 epochs until the first process is killed, but when I train gpt-large, it just need 3 epochs until it is killed. Following is the command line I run to train/fine tune the bert-base (similar with gpt2). The script run_language_modeling.py is a copy of transformers/examples/language-modeling/run_language_modeling.py (vers. 3.1.0) python -m torch.distributed.launch --nproc_per_node=8 \ ../run_language_modeling.py \ --output_dir $model_target \ --model_name_or_path $model_source \ --config_name $model_source \ --tokenizer_name $model_source \ --train_data_file $target_train \ --eval_data_file $target_test \ --save_total_limit 5 \ --block_size 128 \ --overwrite_output_dir \ --fp16 \ --num_train_epochs 50 \ --do_train --do_eval \ --per_device_train_batch_size 32 \ --per_device_eval_batch_size 4 \ --mlm ## Expected behavior I would expect that the distributed training run until it is done without any memory issue. Thanks for checking it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7315/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7315/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7314
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7314/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7314/comments
https://api.github.com/repos/huggingface/transformers/issues/7314/events
https://github.com/huggingface/transformers/issues/7314
706,269,746
MDU6SXNzdWU3MDYyNjk3NDY=
7,314
Text generation with xlnet
{ "login": "matin-amiri", "id": 25244125, "node_id": "MDQ6VXNlcjI1MjQ0MTI1", "avatar_url": "https://avatars.githubusercontent.com/u/25244125?v=4", "gravatar_id": "", "url": "https://api.github.com/users/matin-amiri", "html_url": "https://github.com/matin-amiri", "followers_url": "https://api.github.com/users/matin-amiri/followers", "following_url": "https://api.github.com/users/matin-amiri/following{/other_user}", "gists_url": "https://api.github.com/users/matin-amiri/gists{/gist_id}", "starred_url": "https://api.github.com/users/matin-amiri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/matin-amiri/subscriptions", "organizations_url": "https://api.github.com/users/matin-amiri/orgs", "repos_url": "https://api.github.com/users/matin-amiri/repos", "events_url": "https://api.github.com/users/matin-amiri/events{/privacy}", "received_events_url": "https://api.github.com/users/matin-amiri/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "```python \r\nfrom transformers import pipeline\r\nxlnet_generator = pipeline(\"text-generation\", model=\"xlnet-base-cased\", tokenizer=\"xlnet-base-cased\")\r\nprint(xlnet_generator(\"Today is a nice day and\"))\r\n```\r\n\r\nAlso this should help: https://huggingface.co/transformers/task_summary.html#text-generation" ]
1,600
1,600
1,600
NONE
null
how to use xlnet for text generation?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7314/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7313
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7313/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7313/comments
https://api.github.com/repos/huggingface/transformers/issues/7313/events
https://github.com/huggingface/transformers/pull/7313
706,263,860
MDExOlB1bGxSZXF1ZXN0NDkwODQ0Nzgy
7,313
Fixed results of SQuAD-FR evaluation
{ "login": "psorianom", "id": 1085210, "node_id": "MDQ6VXNlcjEwODUyMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/1085210?v=4", "gravatar_id": "", "url": "https://api.github.com/users/psorianom", "html_url": "https://github.com/psorianom", "followers_url": "https://api.github.com/users/psorianom/followers", "following_url": "https://api.github.com/users/psorianom/following{/other_user}", "gists_url": "https://api.github.com/users/psorianom/gists{/gist_id}", "starred_url": "https://api.github.com/users/psorianom/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/psorianom/subscriptions", "organizations_url": "https://api.github.com/users/psorianom/orgs", "repos_url": "https://api.github.com/users/psorianom/repos", "events_url": "https://api.github.com/users/psorianom/events{/privacy}", "received_events_url": "https://api.github.com/users/psorianom/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7313?src=pr&el=h1) Report\n> Merging [#7313](https://codecov.io/gh/huggingface/transformers/pull/7313?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e2964b8a190a8852e54ef07e03cc491cd570d0d1?el=desc) will **decrease** coverage by `1.16%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7313/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7313?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7313 +/- ##\n==========================================\n- Coverage 79.70% 78.54% -1.17% \n==========================================\n Files 174 174 \n Lines 33670 33670 \n==========================================\n- Hits 26837 26445 -392 \n- Misses 6833 7225 +392 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7313?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |\n| 
[src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `88.77% <0.00%> (-2.55%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.27% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.20% <0.00%> (+0.27%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.58% <0.00%> (+2.41%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.32% <0.00%> (+3.03%)` | :arrow_up: |\n| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/7313/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7313?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7313?src=pr&el=footer). Last update [e2964b8...5b6e2e6](https://codecov.io/gh/huggingface/transformers/pull/7313?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
The score for the F1 metric was reported as the Exact Match and vice versa. Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7313/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7313/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7313", "html_url": "https://github.com/huggingface/transformers/pull/7313", "diff_url": "https://github.com/huggingface/transformers/pull/7313.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7313.patch", "merged_at": 1600792748000 }
https://api.github.com/repos/huggingface/transformers/issues/7312
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7312/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7312/comments
https://api.github.com/repos/huggingface/transformers/issues/7312/events
https://github.com/huggingface/transformers/pull/7312
706,261,528
MDExOlB1bGxSZXF1ZXN0NDkwODQyODk3
7,312
Adds FSMT to LM head AutoModel
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,600
1,600
1,600
MEMBER
null
Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7312/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7312/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7312", "html_url": "https://github.com/huggingface/transformers/pull/7312", "diff_url": "https://github.com/huggingface/transformers/pull/7312.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7312.patch", "merged_at": 1600770952000 }
https://api.github.com/repos/huggingface/transformers/issues/7311
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7311/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7311/comments
https://api.github.com/repos/huggingface/transformers/issues/7311/events
https://github.com/huggingface/transformers/pull/7311
706,215,997
MDExOlB1bGxSZXF1ZXN0NDkwODA2MDIz
7,311
Create an XLA parameter and fix the mixed precision
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7311?src=pr&el=h1) Report\n> Merging [#7311](https://codecov.io/gh/huggingface/transformers/pull/7311?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/656c27c3a3345d0d2cf31c16f780b573c3dea09a?el=desc) will **increase** coverage by `0.31%`.\n> The diff coverage is `11.11%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7311/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7311?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7311 +/- ##\n==========================================\n+ Coverage 81.43% 81.75% +0.31% \n==========================================\n Files 174 174 \n Lines 33452 33458 +6 \n==========================================\n+ Hits 27243 27353 +110 \n+ Misses 6209 6105 -104 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7311?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7311/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.12% <ø> (+0.10%)` | :arrow_up: |\n| [src/transformers/training\\_args\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7311/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `42.64% <11.11%> (-4.82%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7311/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7311/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `76.00% <0.00%> (-21.10%)` | :arrow_down: |\n| 
[src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7311/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7311/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.87% <0.00%> (-7.18%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7311/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7311/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `88.77% <0.00%> (-2.55%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7311/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/7311/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |\n| ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/7311/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7311?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7311?src=pr&el=footer). 
Last update [656c27c...f9c67b1](https://codecov.io/gh/huggingface/transformers/pull/7311?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
This PR adds a new `XLA` parameter to activate/deactivate XLA compilation and fixes a bug in mixed precision. These have to be set before the creation of the strategy, and float16 is not compliant with TPU, where only bfloat16 is available.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7311/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7311/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7311", "html_url": "https://github.com/huggingface/transformers/pull/7311", "diff_url": "https://github.com/huggingface/transformers/pull/7311.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7311.patch", "merged_at": 1600784374000 }
https://api.github.com/repos/huggingface/transformers/issues/7310
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7310/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7310/comments
https://api.github.com/repos/huggingface/transformers/issues/7310/events
https://github.com/huggingface/transformers/pull/7310
706,082,241
MDExOlB1bGxSZXF1ZXN0NDkwNjk0Mzg1
7,310
[code quality] new make target that combines style and quality targets
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm in favor. @sgugger @LysandreJik should this be a third target or should we just remove `make quality`?", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7310?src=pr&el=h1) Report\n> Merging [#7310](https://codecov.io/gh/huggingface/transformers/pull/7310?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0804d077c634b2149b833ecc7897959cab8bf650?el=desc) will **decrease** coverage by `1.53%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7310/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7310?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7310 +/- ##\n==========================================\n- Coverage 78.14% 76.61% -1.54% \n==========================================\n Files 181 181 \n Lines 35759 35759 \n==========================================\n- Hits 27945 27396 -549 \n- Misses 7814 8363 +549 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7310?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.71% <0.00%> (-77.89%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `24.25% <0.00%> (-73.56%)` | :arrow_down: |\n| [src/transformers/modeling\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.06% <0.00%> (-69.40%)` | :arrow_down: |\n| [src/transformers/activations\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% <0.00%> (-20.84%)` | :arrow_down: |\n| [src/transformers/configuration\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xheW91dGxtLnB5) | `80.00% <0.00%> (-20.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `83.58% <0.00%> (-8.96%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.66% <0.00%> (-0.67%)` | :arrow_down: |\n| ... and [16 more](https://codecov.io/gh/huggingface/transformers/pull/7310/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7310?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7310?src=pr&el=footer). Last update [0804d07...3c59813](https://codecov.io/gh/huggingface/transformers/pull/7310?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I like having a command that does the same check as the CI without changing anything so I'd leave `make quality` (basically some docstring formatting is sometimes necessary to have beautiful docs but against what black wants so `make style` can be destructive, it's not jsut styling).\r\n\r\nI have no objection with making `make style` do the checks too so that in most cases, we can just do `make style`.", "OK, @sgugger - I changed it to `style` and re-used the same intermediary make target so it's one source to change.\r\n\r\nBTW, the newly introduced `utils/check_copies.py` takes forever to run :( So perhaps I will still need some custom alias, as it is too slow for a quick check and push cycle.\r\n\r\nIt appears that the optimal target for quick check-n-push at the moment is:\r\n\r\n```\r\n black examples templates tests src utils\r\n isort examples templates tests src utils\r\n flake8 examples templates tests src utils\r\n```\r\n\r\nand then rely on CI to do the slow check.\r\n", "Oh? On my setup it's faster than flake8 so didn't try to optimize. But I can try to make some speedups to that script (basically one regex on the whole content to check whether the for loop is necessary and quickly dimiss files with no copies). 
It would still open all files if that's where the slowdown comes from though.", "It's 2-3 times slower on my machine:\r\n```\r\n$ time python utils/check_copies.py\r\n\r\nreal 0m26.997s\r\nuser 0m24.928s\r\nsys 0m2.052s\r\n\r\n$ time flake8 examples templates tests src utils\r\n\r\nreal 0m11.735s\r\nuser 1m47.922s\r\nsys 0m1.051s\r\n```\r\nflake is slow, and the new script is **very** slow", "So, I'm not sure how welcome the change to `make style` will be if it's going to be 10 times slower.\r\n\r\nHere an alt solution with a new 3rd target `fixup`:\r\n```\r\nquality_checks:\r\n\tflake8 examples templates tests src utils\r\n\tpython utils/check_copies.py\r\n\tpython utils/check_repo.py\r\n\r\nquality:\r\n\tblack --check examples templates tests src utils\r\n\tisort --check-only examples templates tests src utils\r\n\t${MAKE} quality_checks\r\n\r\n# Format source code automatically and check is there are any problems left that need manual fixing\r\n\r\nstyle:\r\n\tblack examples templates tests src utils\r\n\tisort examples templates tests src utils\r\n\r\nfixup: style quality_checks\r\n```\r\n\r\nI'm not attached to the name - just looking for something short and intuitive", "I don't have a very strong opinion on either adding flake8 to style or having fixup, as long as we keep make quality as a script that does not make any change itself.", "And I'm observing the same behaviour with the `utils/check_copies.py`. It takes a while now.", "Will speed up the `utils/check_copies.py` today. The lag might be due to the fact we have more copies to check now.", "OK, so I went with adding a new target `fixup` that performs automatic fixes and manual checks where automation is not possible. That will not take away the much quicker `make style` from those who don't make coding errors and want just the quick autoformatter.\r\n\r\nAdded documentation." ]
1,600
1,601
1,601
CONTRIBUTOR
null
**edit**: this post has been edited to reflect the outcome of the discussion. Any reason why we don't run `flake8` in `make style`? I find myself needing to run `make style` followed by `make quality` all the time, but I need the latter just for the last 2 checks. Since we have no control over the source code, why bother with separating checking and fixing - let's just have one target that fixes and then performs the remaining checks, as we know the first two have been done already. This PR suggests creating a new target `fixup` that combines the 2 separate fix and check functions into one efficient target; I will edit the docs if this change resonates with the team. p.s. if it feels wrong to merge fixing and checking, can we add a 3rd target that is a merged one? `make best` p.p.s. I know I can make my own alias, I love `make`!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7310/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7310/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7310", "html_url": "https://github.com/huggingface/transformers/pull/7310", "diff_url": "https://github.com/huggingface/transformers/pull/7310.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7310.patch", "merged_at": 1601048260000 }
https://api.github.com/repos/huggingface/transformers/issues/7309
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7309/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7309/comments
https://api.github.com/repos/huggingface/transformers/issues/7309/events
https://github.com/huggingface/transformers/pull/7309
706,079,680
MDExOlB1bGxSZXF1ZXN0NDkwNjkyMjI2
7,309
[code quality] fix confused flake8
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7309?src=pr&el=h1) Report\n> Merging [#7309](https://codecov.io/gh/huggingface/transformers/pull/7309?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/656c27c3a3345d0d2cf31c16f780b573c3dea09a?el=desc) will **increase** coverage by `0.49%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7309/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7309?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7309 +/- ##\n==========================================\n+ Coverage 81.43% 81.93% +0.49% \n==========================================\n Files 174 174 \n Lines 33452 33452 \n==========================================\n+ Hits 27243 27410 +167 \n+ Misses 6209 6042 -167 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7309?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `70.19% <0.00%> (-23.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `72.31% <0.00%> (-14.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| 
[src/transformers/configuration\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `91.89% <0.00%> (-5.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `88.37% <0.00%> (-4.87%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `84.28% <0.00%> (-2.50%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.44% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.08% <0.00%> (+0.24%)` | :arrow_up: |\n| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7309/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7309?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7309?src=pr&el=footer). Last update [656c27c...e06bd68](https://codecov.io/gh/huggingface/transformers/pull/7309?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,603
1,600
CONTRIBUTOR
null
We run `black --target-version py35 ...` but flake8 doesn't know that we want this specific `target-version`, so currently with py38 flake8 fails, suggesting that black should have reformatted 63 files. Indeed, if I run: ``` black --line-length 119 --target-version py38 examples templates tests src utils ``` it reformats 63 files. The only solution I found is to create a black config file as explained at https://github.com/psf/black#configuration-format, which is what this PR adds. Now flake8 knows that py35 is the standard and no longer gets confused regardless of the user's python version. We can now edit out `--line-length 119 --target-version py35` from the Makefile and the CI jobs, so that we have one config to rule them all. I pushed that change as well. @sgugger, @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7309/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7309", "html_url": "https://github.com/huggingface/transformers/pull/7309", "diff_url": "https://github.com/huggingface/transformers/pull/7309.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7309.patch", "merged_at": 1600827157000 }
https://api.github.com/repos/huggingface/transformers/issues/7308
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7308/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7308/comments
https://api.github.com/repos/huggingface/transformers/issues/7308/events
https://github.com/huggingface/transformers/issues/7308
706,069,708
MDU6SXNzdWU3MDYwNjk3MDg=
7,308
[s2s] metrics.json is wrong on multigpu
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,606
1,606
CONTRIBUTOR
null
overwritten by last rank to save it. @nateraw is there a way to check if my module `is_rank_zero` or some such?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7308/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7307
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7307/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7307/comments
https://api.github.com/repos/huggingface/transformers/issues/7307/events
https://github.com/huggingface/transformers/issues/7307
706,030,635
MDU6SXNzdWU3MDYwMzA2MzU=
7,307
Cuda OOM training gpt2-xl with Trainer in multi-GPUs
{ "login": "fumpe", "id": 37223285, "node_id": "MDQ6VXNlcjM3MjIzMjg1", "avatar_url": "https://avatars.githubusercontent.com/u/37223285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fumpe", "html_url": "https://github.com/fumpe", "followers_url": "https://api.github.com/users/fumpe/followers", "following_url": "https://api.github.com/users/fumpe/following{/other_user}", "gists_url": "https://api.github.com/users/fumpe/gists{/gist_id}", "starred_url": "https://api.github.com/users/fumpe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fumpe/subscriptions", "organizations_url": "https://api.github.com/users/fumpe/orgs", "repos_url": "https://api.github.com/users/fumpe/repos", "events_url": "https://api.github.com/users/fumpe/events{/privacy}", "received_events_url": "https://api.github.com/users/fumpe/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 2107554019, "node_id": "MDU6TGFiZWwyMTA3NTU0MDE5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Distributed%20Training%20/%20Models", "name": "Distributed Training / Models", "color": "fef2c0", "default": false, "description": "" }, { "id": 2209491906, "node_id": "MDU6TGFiZWwyMjA5NDkxOTA2", "url": "https://api.github.com/repos/huggingface/transformers/labels/gpt2", "name": "gpt2", "color": "45cca5", "default": false, "description": "" } ]
closed
false
null
[]
[ "I want to make an update. I thought it might be possible that gpt2-xl was impossible to fine-tune, so I tested it with gpt2-large, but I got the same result: \"CUDA out of memory\".", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "@fumpe Did you find a way around to this?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,600
1,619
1,619
NONE
null
# ❓ Questions & Help I am currently trying to finetune the gpt2-xl. I have 2 tesla T4 GPUs. However, I get the CUDA OOM error... when I look at the use of the gpus I see that the first one is full, but the second one still has enough space. Here is my code: ``` from transformers import TextDataset,DataCollatorForLanguageModeling, AutoTokenizer import torch device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') from transformers import GPT2LMHeadModel, Trainer, TrainingArguments model = GPT2LMHeadModel.from_pretrained("gpt2-xl").to(device) from transformers import TextDataset,DataCollatorForLanguageModeling, AutoTokenizer, TrainingArguments, Trainer tokenizer = AutoTokenizer.from_pretrained("gpt2-xl") train_dataset = TextDataset( tokenizer=tokenizer, file_path='dataset_training.txt', block_size=128) data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=False, ) training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=2, # total # of training epochs per_device_train_batch_size=1, # batch size per device during training warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='./logs', ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, prediction_loss_only=True, ) trainer.train() ``` I get "CUDA out of memory. Tried to allocate 40.00 MiB (GPU 0; 14.73 GiB total capacity; 13.61 GiB already allocated; 31.88 MiB free; 13.98 GiB reserved in total by PyTorch)" When I run nvidia-smi I see: ``` +-----------------------------------------------------------------------------+ | NVIDIA-SMI 418.87.01 Driver Version: 418.87.01 CUDA Version: 10.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. 
| |===============================+================ | 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 | | N/A 75C P0 34W / 70W | 15047MiB / 15079MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 1 Tesla T4 Off | 00000000:00:05.0 Off | 0 | | N/A 56C P0 29W / 70W | 9479MiB / 15079MiB | 0% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================== | 0 1756 C /opt/conda/bin/python 15037MiB | | 1 1756 C /opt/conda/bin/python 9469MiB | +-----------------------------------------------------------------------------+ ``` **My question is:** Am I making a mistake? or how can a large model be trained with multiple GPUs?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7307/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7307/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7306
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7306/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7306/comments
https://api.github.com/repos/huggingface/transformers/issues/7306/events
https://github.com/huggingface/transformers/issues/7306
706,020,845
MDU6SXNzdWU3MDYwMjA4NDU=
7,306
BertModel for 2 category classification - How to evaluate the performance
{ "login": "Backpackerice", "id": 7083541, "node_id": "MDQ6VXNlcjcwODM1NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/7083541?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Backpackerice", "html_url": "https://github.com/Backpackerice", "followers_url": "https://api.github.com/users/Backpackerice/followers", "following_url": "https://api.github.com/users/Backpackerice/following{/other_user}", "gists_url": "https://api.github.com/users/Backpackerice/gists{/gist_id}", "starred_url": "https://api.github.com/users/Backpackerice/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Backpackerice/subscriptions", "organizations_url": "https://api.github.com/users/Backpackerice/orgs", "repos_url": "https://api.github.com/users/Backpackerice/repos", "events_url": "https://api.github.com/users/Backpackerice/events{/privacy}", "received_events_url": "https://api.github.com/users/Backpackerice/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,606
1,606
NONE
null
Hello there, I am building a fine-tuned BERT model for classification (with a linear layer in the end). The prediction should just be 1/0 (Yes, No). When I am writing the evaluation part, I saw some people online did a F.log_softmax for the logits then use np.argmax to get the predicted label. However, I also saw people directly apply np.argmax on logits without the softmax activation. I am wondering which one should I follow and how to decide that? Here's my model definition: ```python class ReviewClassification(BertPreTrainedModel): def __init__(self, config): super().__init__(config) self.num_labels = 2 self.bert = BertModel(config) self.dropout = nn.Dropout(config.hidden_dropout_prob) embedding_size = config.hidden_size self.classifier = nn.Linear(embedding_size, len(LABEL_NAME)) self.init_weights() def forward( self, review_input_ids=None, review_attention_mask=None, review_token_type_ids=None, agent_input_ids=None, agent_attention_mask=None, agent_token_type_ids=None, labels=None, ): review_outputs = self.bert( review_input_ids, attention_mask=review_attention_mask, token_type_ids=review_token_type_ids, position_ids=None, head_mask=None, inputs_embeds=None, ) feature = review_outputs[1] # (batch_size, seq_len) -? Should it be (batch_size, hidden_size) # nn.CrossEntropyLoss applies F.log_softmax and nn.NLLLoss internally on your input, # so you should pass the raw logits to it. logits = self.classifier(feature) outputs = (logits,) # + outputs[2:] # add hidden states and attention if they are here if labels is not None: loss_fct = nn.CrossEntropyLoss() loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) outputs = (loss,) + outputs return outputs # (loss, logits, hidden_states, attentions) ``` Then this is my validation code ``` def model_validate(model, data_loader): # Put the model in evaluation mode--the dropout layers behave differently # during evaluation. 
model.eval() device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model.to(device) if torch.cuda.device_count() > 1: model = nn.DataParallel(model) label_prop = data_loader.dataset.dataset.label_prop() total_valid_loss = 0 batch_size = data_loader.batch_size num_batch = len(data_loader) y_pred, y_true = [], [] # Evaluate data for step, batch in tqdm(enumerate(data_loader), desc="Validation...", total=num_batch): b_review_input_ids = batch["review_input_ids"].to(device) b_review_attention_mask = batch["review_attention_mask"].to(device) b_review_token_type_ids = batch["review_token_type_ids"].to(device) b_binarized_label = batch["binarized_label"].to(device) # Tell pytorch not to bother with constructing the compute graph during # the forward pass, since this is only needed for backprop (training). with torch.no_grad(): (loss, logits,) = model(review_input_ids=b_review_input_ids, review_attention_mask=b_review_attention_mask, review_token_type_ids=b_review_token_type_ids, labels=b_binarized_label) total_valid_loss += loss.item() numpy_probas = logits.detach().cpu().numpy() y_pred.extend(np.argmax(numpy_probas, axis=1).flatten()) y_true.extend(b_binarized_label.cpu().numpy()) # End of an epoch of validation # put model to train mode again. 
model.train() ave_loss = total_valid_loss / (num_batch * batch_size) # compute the various f1 score for each label report = classification_report(y_true, y_pred, output_dict=True) metrics_df = pd.DataFrame(report).transpose() metrics_df = metrics_df.sort_index() weighted_f1_score = metrics_df.loc['weighted avg', 'f1-score'] averaged_f1_score = metrics_df.loc['macro avg', 'f1-score'] return ave_loss, metrics_df, { "weighted": weighted_f1_score, "averaged": averaged_f1_score, } ``` The other version I was trying is: ``` transfored_logits = F.log_softmax(logits,dim=1) numpy_probas = transfored_logits.detach().cpu().numpy() y_pred.extend(np.argmax(numpy_probas, axis=1).flatten()) y_true.extend(b_binarized_label.cpu().numpy()) ``` The third version I was trying is: ``` transfored_logits = torch.sigmoid(logits) numpy_probas = transfored_logits.detach().cpu().numpy() y_pred.extend(np.argmax(numpy_probas, axis=1).flatten()) y_true.extend(b_binarized_label.cpu().numpy()) ``` I also don't know how to understand the result. When I see online, people say if I set dim = 1 for log_softmax, the sum of probability for all feature (categories) should = 1. 
However, giving some examples below: This is logits output: (for one batch - batch size = 16, num_labels = 2) tensor([[ 1.1261, -1.8547], [ 0.6066, -1.1498], [ 1.3667, -2.0078], [ 2.0652, -2.6669], [ 1.0388, -1.7555], [ 0.6801, -1.1652], [ 0.8315, -1.3860], [ 1.5685, -2.2362], [ 0.1150, -0.3344], [ 2.0751, -2.6166], [ 1.5033, -2.1702], [ 0.1115, -0.3096], [ 0.8610, -1.4834], [ 1.5544, -2.2773], [ 2.1014, -2.6533], [ 0.7789, -1.3748]], device='cuda:0') If I apply softmax first, F.log_softmax(logits,dim=1), I get: tensor([[-0.0495, -3.0302], [-0.1593, -1.9157], [-0.0337, -3.4082], [-0.0088, -4.7409], [-0.0594, -2.8537], [-0.1467, -1.9920], [-0.1033, -2.3209], [-0.0220, -3.8267], [-0.4935, -0.9429], [-0.0091, -4.7008], [-0.0251, -3.6985], [-0.5046, -0.9257], [-0.0916, -2.4360], [-0.0214, -3.8531], [-0.0086, -4.7632], [-0.1098, -2.2635]], device='cuda:0') The sum per row doesn't sum up to 1 and doesn't look like probability to me. If I apply sigmoid, torch.sigmoid(logits) tensor([[0.7551, 0.1353], [0.6472, 0.2405], [0.7969, 0.1184], [0.8875, 0.0650], [0.7386, 0.1474], [0.6638, 0.2377], [0.6967, 0.2000], [0.8276, 0.0965], [0.5287, 0.4172], [0.8885, 0.0681], [0.8181, 0.1025], [0.5278, 0.4232], [0.7029, 0.1849], [0.8255, 0.0930], [0.8910, 0.0658], [0.6854, 0.2018]], device='cuda:0') It does look like probability more, although it still doesn't sum up to 1. No matter which version I use, the predicted result is always the same in this case (since my 1 (Yes) Label has a really low incidence rate) array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7306/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7306/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7305
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7305/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7305/comments
https://api.github.com/repos/huggingface/transformers/issues/7305/events
https://github.com/huggingface/transformers/pull/7305
705,999,912
MDExOlB1bGxSZXF1ZXN0NDkwNjI3NTQ0
7,305
Fix #7304
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7305?src=pr&el=h1) Report\n> Merging [#7305](https://codecov.io/gh/huggingface/transformers/pull/7305?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/656c27c3a3345d0d2cf31c16f780b573c3dea09a?el=desc) will **decrease** coverage by `3.11%`.\n> The diff coverage is `0.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7305/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7305?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7305 +/- ##\n==========================================\n- Coverage 81.43% 78.32% -3.12% \n==========================================\n Files 174 174 \n Lines 33452 33452 \n==========================================\n- Hits 27243 26200 -1043 \n- Misses 6209 7252 +1043 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7305?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `55.72% <0.00%> (ø)` | |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |\n| 
[src/transformers/activations\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% <0.00%> (-20.84%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.27% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.44% <0.00%> (+0.16%)` | :arrow_up: |\n| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7305/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7305?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7305?src=pr&el=footer). Last update [656c27c...0aedf4d](https://codecov.io/gh/huggingface/transformers/pull/7305?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
COLLABORATOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #7304 Correct order is tensors, name @LysandreJik you need to teach me how to check the CI on TPU so we catch those rookie mistakes before merging :-)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7305/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7305/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7305", "html_url": "https://github.com/huggingface/transformers/pull/7305", "diff_url": "https://github.com/huggingface/transformers/pull/7305.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7305.patch", "merged_at": 1600780804000 }
https://api.github.com/repos/huggingface/transformers/issues/7304
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7304/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7304/comments
https://api.github.com/repos/huggingface/transformers/issues/7304/events
https://github.com/huggingface/transformers/issues/7304
705,960,013
MDU6SXNzdWU3MDU5NjAwMTM=
7,304
Wrong arg order for `nested_xla_mesh_reduce` in trainer.py
{ "login": "allenwang28", "id": 9057208, "node_id": "MDQ6VXNlcjkwNTcyMDg=", "avatar_url": "https://avatars.githubusercontent.com/u/9057208?v=4", "gravatar_id": "", "url": "https://api.github.com/users/allenwang28", "html_url": "https://github.com/allenwang28", "followers_url": "https://api.github.com/users/allenwang28/followers", "following_url": "https://api.github.com/users/allenwang28/following{/other_user}", "gists_url": "https://api.github.com/users/allenwang28/gists{/gist_id}", "starred_url": "https://api.github.com/users/allenwang28/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/allenwang28/subscriptions", "organizations_url": "https://api.github.com/users/allenwang28/orgs", "repos_url": "https://api.github.com/users/allenwang28/repos", "events_url": "https://api.github.com/users/allenwang28/events{/privacy}", "received_events_url": "https://api.github.com/users/allenwang28/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Indeed, I switched the args, sorry about that. Will make a PR to fix this tomorrow morning." ]
1,600
1,600
1,600
CONTRIBUTOR
null
## Environment info Python 3.7 on Google Cloud TPUs ### Who can help @sgugger ## Information When training examples from [run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py) using Cloud TPUs, we run into this error: ``` TypeError: _xla_rendezvous(): incompatible function arguments. The following argument types are supported: 1. (arg0: int, arg1: str, arg2: str, arg3: List[int]) -> List[bytes] ``` This issue is just due to the wrong arg order in [this line](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L1337) where the [args should be switched](https://github.com/huggingface/transformers/blob/656c27c3a3345d0d2cf31c16f780b573c3dea09a/src/transformers/trainer_utils.py#L162). This was introduced in: https://github.com/huggingface/transformers/commit/492bb6aa486856f8243dfeb533ed1b23e996e403 ## To reproduce Steps to reproduce the behavior: 1. Running the provided example on Cloud TPU. ## Expected behavior Should not fail with `TypeError`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7304/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7303
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7303/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7303/comments
https://api.github.com/repos/huggingface/transformers/issues/7303/events
https://github.com/huggingface/transformers/issues/7303
705,957,742
MDU6SXNzdWU3MDU5NTc3NDI=
7,303
BART metrics.json and validation checkpoint metrics seem to disagree
{ "login": "vikigenius", "id": 12724810, "node_id": "MDQ6VXNlcjEyNzI0ODEw", "avatar_url": "https://avatars.githubusercontent.com/u/12724810?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vikigenius", "html_url": "https://github.com/vikigenius", "followers_url": "https://api.github.com/users/vikigenius/followers", "following_url": "https://api.github.com/users/vikigenius/following{/other_user}", "gists_url": "https://api.github.com/users/vikigenius/gists{/gist_id}", "starred_url": "https://api.github.com/users/vikigenius/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vikigenius/subscriptions", "organizations_url": "https://api.github.com/users/vikigenius/orgs", "repos_url": "https://api.github.com/users/vikigenius/repos", "events_url": "https://api.github.com/users/vikigenius/events{/privacy}", "received_events_url": "https://api.github.com/users/vikigenius/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I can replicate in pl 0.8.5 and pl 0.9.0, great catch.\r\n\r\nSmaller command to replicate:\r\n\r\n```\r\nexport MAX_LEN=128\r\nexport m=sshleifer/student_marian_en_ro_6_3\r\npython finetune.py \\\r\n --learning_rate=3e-4 \\\r\n --do_train \\\r\n --do_predict \\\r\n --fp16 \\\r\n --val_check_interval 0.25 \\\r\n --data_dir $ENRO_DIR \\\r\n --max_source_length $MAX_LEN --max_target_length $MAX_LEN --val_max_target_length $MAX_LEN --test_max_target_length $MAX_LEN \\\r\n --freeze_encoder --freeze_embeds \\\r\n --train_batch_size=64 --eval_batch_size=64 \\\r\n --tokenizer_name $m --model_name_or_path $m \\\r\n --warmup_steps 500 --sortish_sampler --logger_name wandb \\\r\n --fp16_opt_level=O1 --task translation --num_sanity_val_steps=0 \\\r\n --model_name_or_path $m --gpus 8 --num_train_epochs=1 \\\r\n --data_dir wmt_en_ro --output_dir dmar_pl_only_v2 --save_top_k=10\r\n```\r\n\r\nYou will only have 4-5 entries in metrics, but 10 checkpoints.\r\n\r\n\r\n", "Every single rank is saving checkpoints.", "@sshleifer Wow, so is that like a race condition where the last RANK \"wins\"? What about the weights of the model? Would they be the same across all the ranks, or would they be a problem as well?", "\r\nPosted here https://github.com/PyTorchLightning/pytorch-lightning/issues/3597\r\n\r\nWeights will be the same across all ranks. You just have suboptimal checkpoint saving logic. You can kind of work around it by passing --save_top_k=5 and then manually picking which one you like by looking at metrics.json.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,606
1,606
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.1.0 - Platform: Linux 4.14 - Python version:3.7.9 - PyTorch version (GPU?):1.6.0 (True) - Using GPU in script?: True - Using distributed or parallel set-up in script?: True ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> examples/seq2seq: @sshleifer ## Information Model I am using (BART): The problem arises when using: * [*] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [*] my own task or dataset: (give details below) I am using a small subset of https://github.com/pubmedqa/pubmedqa to generate questions from the given passage. ## To reproduce Steps to reproduce the behavior: 1. Do distributed training 2. Run the finetune script with appropriate arguments. 
The saved checkpoint says: val_avg_rouge2=13.0975-step_count=4.ckpt However the metrics.json file says that val_avg_rouge2=6.46015 and there are better step_counts in comparison. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> The best metric in metrics.json should be chosen for saving checkpoints.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7303/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7303/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7302
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7302/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7302/comments
https://api.github.com/repos/huggingface/transformers/issues/7302/events
https://github.com/huggingface/transformers/pull/7302
705,923,089
MDExOlB1bGxSZXF1ZXN0NDkwNTY1Mjk4
7,302
Add possibility to evaluate every epoch
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7302?src=pr&el=h1) Report\n> Merging [#7302](https://codecov.io/gh/huggingface/transformers/pull/7302?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/34a1b75f01667cc176304d7594245a7c308855df?el=desc) will **decrease** coverage by `1.08%`.\n> The diff coverage is `85.18%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7302/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7302?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7302 +/- ##\n==========================================\n- Coverage 81.35% 80.26% -1.09% \n==========================================\n Files 174 174 \n Lines 33452 33475 +23 \n==========================================\n- Hits 27215 26870 -345 \n- Misses 6237 6605 +368 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7302?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7302/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `55.62% <50.00%> (-0.11%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/7302/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `91.20% <89.47%> (-0.55%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7302/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `60.63% <100.00%> (+1.74%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7302/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7302/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `19.71% <0.00%> (-51.90%)` | :arrow_down: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7302/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `75.00% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7302/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.26% <0.00%> (ø)` | |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7302/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <0.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7302/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.44% <0.00%> (+0.16%)` | :arrow_up: |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/7302/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `94.11% <0.00%> (+1.17%)` | :arrow_up: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/7302/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7302?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7302?src=pr&el=footer). 
Last update [34a1b75...6a28276](https://codecov.io/gh/huggingface/transformers/pull/7302?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This change breaks trainer api when people use _evaluate_during_training_ before version 3.2.0. It'll be nice to mention it in the release notes.", "There is no breaking change, the argument is just deprecated. It won't be removed before the next major release." ]
1,600
1,600
1,600
COLLABORATOR
null
This PR deprecates the argument `evaluate_during_training` and replaces it by `evaluation_strategy` (which supports more than just two values). The goal is to add support to easily evaluate the model every epoch (currently evaluation is done every n steps). Evaluating every epoch is the most commonly used evaluation strategy, taught in pretty much every course and done in every basic training loop. While it's possible to do that right now, it requires a bit of gymnastics (building the dataloader and grabbing its length, or dividing the number of samples in the dataset by the effective batch size). There have been two issues opened about that (but I'm too lazy to track their numbers ^^). In passing, I made `eval_steps` default to the same as `logging_steps` because it seemed more natural.
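The "gymnastics" mentioned in the PR description — deriving the number of optimizer steps in one epoch from the dataset size and the effective batch size — can be sketched as follows (the helper is illustrative, not the Trainer's actual internals):

```python
import math

def steps_per_epoch(num_samples: int, per_device_batch_size: int,
                    num_devices: int = 1,
                    gradient_accumulation_steps: int = 1) -> int:
    # Effective batch size = per-device batch * devices * accumulation steps.
    effective_batch_size = (per_device_batch_size
                            * num_devices
                            * gradient_accumulation_steps)
    # One optimizer step per effective batch; the last partial batch
    # still counts as a step.
    return math.ceil(num_samples / effective_batch_size)

# Example: 10,000 samples, batch size 8 on 2 GPUs with 4 accumulation steps.
print(steps_per_epoch(10_000, 8, num_devices=2, gradient_accumulation_steps=4))
# → 157
```

With `evaluation_strategy="epoch"` this computation becomes unnecessary, since the Trainer itself triggers evaluation at each epoch boundary.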
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7302/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7302/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7302", "html_url": "https://github.com/huggingface/transformers/pull/7302", "diff_url": "https://github.com/huggingface/transformers/pull/7302.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7302.patch", "merged_at": 1600782750000 }
https://api.github.com/repos/huggingface/transformers/issues/7301
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7301/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7301/comments
https://api.github.com/repos/huggingface/transformers/issues/7301/events
https://github.com/huggingface/transformers/pull/7301
705,912,157
MDExOlB1bGxSZXF1ZXN0NDkwNTU2MjUw
7,301
[s2s] save hostname with repo info
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7301?src=pr&el=h1) Report\n> Merging [#7301](https://codecov.io/gh/huggingface/transformers/pull/7301?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8d562a2d1a79487aa8d9f2f63e92cf4e47be8c46?el=desc) will **increase** coverage by `2.70%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7301/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7301?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7301 +/- ##\n==========================================\n+ Coverage 78.39% 81.10% +2.70% \n==========================================\n Files 174 174 \n Lines 33452 33452 \n==========================================\n+ Hits 26224 27130 +906 \n+ Misses 7228 6322 -906 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7301?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7301/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7301/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `20.38% <0.00%> (-67.72%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7301/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7301/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | 
:arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7301/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.91% <0.00%> (-0.14%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7301/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.44% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7301/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7301/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.26% <0.00%> (+0.35%)` | :arrow_up: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7301/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `92.04% <0.00%> (+20.43%)` | :arrow_up: |\n| [src/transformers/activations\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7301/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `75.00% <0.00%> (+20.83%)` | :arrow_up: |\n| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/7301/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7301?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7301?src=pr&el=footer). Last update [8d562a2...648083d](https://codecov.io/gh/huggingface/transformers/pull/7301?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7301/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7301", "html_url": "https://github.com/huggingface/transformers/pull/7301", "diff_url": "https://github.com/huggingface/transformers/pull/7301.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7301.patch", "merged_at": 1600723584000 }
https://api.github.com/repos/huggingface/transformers/issues/7300
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7300/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7300/comments
https://api.github.com/repos/huggingface/transformers/issues/7300/events
https://github.com/huggingface/transformers/pull/7300
705,892,154
MDExOlB1bGxSZXF1ZXN0NDkwNTM5NjMx
7,300
[s2s] add src_lang kwarg for distributed eval
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7300?src=pr&el=h1) Report\n> Merging [#7300](https://codecov.io/gh/huggingface/transformers/pull/7300?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d6bc72c469c38a611fb99c3d61807f59b43fe2c9?el=desc) will **increase** coverage by `1.40%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7300/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7300?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7300 +/- ##\n==========================================\n+ Coverage 77.40% 78.81% +1.40% \n==========================================\n Files 181 181 \n Lines 34827 34827 \n==========================================\n+ Hits 26958 27448 +490 \n+ Misses 7869 7379 -490 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7300?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/7300/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `96.82% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7300/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7300/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.62% <0.00%> (-69.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7300/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7300/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `88.37% <0.00%> (-4.87%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7300/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `81.81% <0.00%> (-4.55%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7300/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.01%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7300/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.93% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7300/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.64% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7300/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.37% <0.00%> (+0.24%)` | :arrow_up: |\n| ... and [15 more](https://codecov.io/gh/huggingface/transformers/pull/7300/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7300?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7300?src=pr&el=footer). Last update [d6bc72c...b3d1136](https://codecov.io/gh/huggingface/transformers/pull/7300?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7300/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7300/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7300", "html_url": "https://github.com/huggingface/transformers/pull/7300", "diff_url": "https://github.com/huggingface/transformers/pull/7300.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7300.patch", "merged_at": 1600813597000 }
https://api.github.com/repos/huggingface/transformers/issues/7299
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7299/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7299/comments
https://api.github.com/repos/huggingface/transformers/issues/7299/events
https://github.com/huggingface/transformers/pull/7299
705,827,150
MDExOlB1bGxSZXF1ZXN0NDkwNDg2MDA5
7,299
Create README.md
{ "login": "thedarkzeno", "id": 45200346, "node_id": "MDQ6VXNlcjQ1MjAwMzQ2", "avatar_url": "https://avatars.githubusercontent.com/u/45200346?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thedarkzeno", "html_url": "https://github.com/thedarkzeno", "followers_url": "https://api.github.com/users/thedarkzeno/followers", "following_url": "https://api.github.com/users/thedarkzeno/following{/other_user}", "gists_url": "https://api.github.com/users/thedarkzeno/gists{/gist_id}", "starred_url": "https://api.github.com/users/thedarkzeno/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thedarkzeno/subscriptions", "organizations_url": "https://api.github.com/users/thedarkzeno/orgs", "repos_url": "https://api.github.com/users/thedarkzeno/repos", "events_url": "https://api.github.com/users/thedarkzeno/events{/privacy}", "received_events_url": "https://api.github.com/users/thedarkzeno/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "@thedarkzeno If you'd like, it'd be awesome if you could add default input texts in portuguese for https://github.com/huggingface/widgets-server/blob/master/DefaultWidget.ts (you can open a PR)\r\n\r\nso the inference widget on your model pages is correctly populated\r\n\r\n" ]
1,600
1,601
1,601
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7299/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7299/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7299", "html_url": "https://github.com/huggingface/transformers/pull/7299", "diff_url": "https://github.com/huggingface/transformers/pull/7299.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7299.patch", "merged_at": 1601556749000 }
https://api.github.com/repos/huggingface/transformers/issues/7298
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7298/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7298/comments
https://api.github.com/repos/huggingface/transformers/issues/7298/events
https://github.com/huggingface/transformers/pull/7298
705,804,119
MDExOlB1bGxSZXF1ZXN0NDkwNDY2ODQ4
7,298
[s2s] s/alpha_loss_encoder/alpha_encoder_loss/
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7298?src=pr&el=h1) Report\n> Merging [#7298](https://codecov.io/gh/huggingface/transformers/pull/7298?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7a88ed6c2a740c45cafb2009a124ba056506d6a1?el=desc) will **increase** coverage by `3.52%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7298/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7298?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7298 +/- ##\n==========================================\n+ Coverage 78.00% 81.52% +3.52% \n==========================================\n Files 174 174 \n Lines 33452 33452 \n==========================================\n+ Hits 26095 27273 +1178 \n+ Misses 7357 6179 -1178 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7298?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7298/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/7298/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.52% <0.00%> (-34.77%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7298/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7298/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: |\n| 
[src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7298/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.87% <0.00%> (-7.18%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7298/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/7298/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7298/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7298/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `86.78% <0.00%> (-0.54%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7298/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.96% <0.00%> (-0.45%)` | :arrow_down: |\n| ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/7298/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7298?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7298?src=pr&el=footer). Last update [7a88ed6...a612697](https://codecov.io/gh/huggingface/transformers/pull/7298?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
fix to match `distillation.py: self.alpha_encoder_loss` @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7298/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7298/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7298", "html_url": "https://github.com/huggingface/transformers/pull/7298", "diff_url": "https://github.com/huggingface/transformers/pull/7298.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7298.patch", "merged_at": 1600712066000 }
https://api.github.com/repos/huggingface/transformers/issues/7297
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7297/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7297/comments
https://api.github.com/repos/huggingface/transformers/issues/7297/events
https://github.com/huggingface/transformers/pull/7297
705,795,864
MDExOlB1bGxSZXF1ZXN0NDkwNDYwMTE0
7,297
[s2s tests] fix test_run_eval_search
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7297?src=pr&el=h1) Report\n> Merging [#7297](https://codecov.io/gh/huggingface/transformers/pull/7297?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7a88ed6c2a740c45cafb2009a124ba056506d6a1?el=desc) will **increase** coverage by `3.79%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7297/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7297?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7297 +/- ##\n==========================================\n+ Coverage 78.00% 81.80% +3.79% \n==========================================\n Files 174 174 \n Lines 33452 33452 \n==========================================\n+ Hits 26095 27366 +1271 \n+ Misses 7357 6086 -1271 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7297?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/7297/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.52% <0.00%> (-34.77%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7297/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `70.19% <0.00%> (-23.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7297/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7297/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `86.78% <0.00%> (-0.54%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7297/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.08% <0.00%> (+0.24%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7297/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7297/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.20% <0.00%> (+0.27%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7297/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.26% <0.00%> (+0.35%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7297/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+10.00%)` | :arrow_up: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7297/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <0.00%> (+25.00%)` | :arrow_up: |\n| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/7297/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7297?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7297?src=pr&el=footer). Last update [7a88ed6...65e0194](https://codecov.io/gh/huggingface/transformers/pull/7297?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks!" ]
1,600
1,600
1,600
CONTRIBUTOR
null
Fix problems with the new test <!-- This line specifies which issue to close after the pull request is merged. --> Fixes #7295 @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7297/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7297", "html_url": "https://github.com/huggingface/transformers/pull/7297", "diff_url": "https://github.com/huggingface/transformers/pull/7297.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7297.patch", "merged_at": 1600711241000 }
https://api.github.com/repos/huggingface/transformers/issues/7296
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7296/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7296/comments
https://api.github.com/repos/huggingface/transformers/issues/7296/events
https://github.com/huggingface/transformers/issues/7296
705,791,574
MDU6SXNzdWU3MDU3OTE1NzQ=
7,296
Marian/MBart should not save static position embeddings
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "Any interest @stas00 ?", "Yes, please" ]
1,600
1,603
1,603
CONTRIBUTOR
null
Add ``` keys_to_never_save = [ "model.encoder.embed_positions.weight", "model.decoder.embed_positions.weight", ] ``` Probably also applies to XLM, T5 (I think) and a few others.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7296/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7295
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7295/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7295/comments
https://api.github.com/repos/huggingface/transformers/issues/7295/events
https://github.com/huggingface/transformers/issues/7295
705,787,302
MDU6SXNzdWU3MDU3ODczMDI=
7,295
test_run_eval_search SLOW failure
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "@stas00 \r\nDefinitely need this change\r\n![image](https://user-images.githubusercontent.com/6045025/93801070-753f1980-fc0f-11ea-83c6-b544fa4042cf.png)\r\n\r\n\r\nBut then the parsing of the search arg seems to break.\r\nCould you take a look?", "oh, yes, thank you - will fix shortly", "Fixed: https://github.com/huggingface/transformers/pull/7297\r\n\r\nApologies for pushing in a broken test :( I think I was testing the wrong branch", "No worries!" ]
1,600
1,600
1,600
CONTRIBUTOR
null
```bash RUN_SLOW=1 pytest examples/seq2seq/test_seq2seq_examples.py -k search ``` ``` FAILED examples/seq2seq/test_seq2seq_examples.py::test_run_eval_search[patrickvonplaten/t5-tiny-random] - ValueError: could not convert string to float: '--data_dir' FAILED examples/seq2seq/test_seq2seq_examples.py::test_run_eval_search[sshleifer/tiny-mbart] - ValueError: could not convert string to float: '--data_dir' ``` Investigating
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7295/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7294
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7294/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7294/comments
https://api.github.com/repos/huggingface/transformers/issues/7294/events
https://github.com/huggingface/transformers/issues/7294
705,771,510
MDU6SXNzdWU3MDU3NzE1MTA=
7,294
Bert Fine-Tuning on SQuAD with native TF2
{ "login": "srcarroll", "id": 50210727, "node_id": "MDQ6VXNlcjUwMjEwNzI3", "avatar_url": "https://avatars.githubusercontent.com/u/50210727?v=4", "gravatar_id": "", "url": "https://api.github.com/users/srcarroll", "html_url": "https://github.com/srcarroll", "followers_url": "https://api.github.com/users/srcarroll/followers", "following_url": "https://api.github.com/users/srcarroll/following{/other_user}", "gists_url": "https://api.github.com/users/srcarroll/gists{/gist_id}", "starred_url": "https://api.github.com/users/srcarroll/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/srcarroll/subscriptions", "organizations_url": "https://api.github.com/users/srcarroll/orgs", "repos_url": "https://api.github.com/users/srcarroll/repos", "events_url": "https://api.github.com/users/srcarroll/events{/privacy}", "received_events_url": "https://api.github.com/users/srcarroll/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello !\r\n\r\nYou have an example here https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb", "Perfect! Thank you, I did not see that. I think I'm all good now.", "I want to have a fine tuning on my own dataset, but I don't know how to deal with the data format. Any suggestion will be appreciated. ", "@mug2mag this post is closed, please open another one with more detail on what you would like to do. Thanks." ]
1,600
1,603
1,600
NONE
null
I'm trying to run Bert fine tuning on SQuAD with Tensorflow's model.fit function. I was able to run the tutorial on sequence classification from here https://huggingface.co/transformers/training.html#tensorflow, but a similar set up does not seem to work for question answering. I am able to get it working with synthetic data, but not real data. I can never seem to get the data in the right format. Here is the script I am trying to use tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = TFBertForQuestionAnswering.from_pretrained('bert-base-uncased') data = tfds.load('squad') examples = SquadV1Processor().get_examples_from_dataset(data) train_dataset = squad_convert_examples_to_features(examples, tokenizer, max_seq_length=328, doc_stride=128, max_query_length=32, is_training=True, return_dataset='tf') optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) model.compile(optimizer=optimizer, loss=loss) model.fit(train_dataset, epochs=2, steps_per_epoch=1) I've tried many other things, but this is the closest I got to something working. The problem is that the dataset has keys that aren't recognized. Specifically, I get this error ValueError: Found unexpected keys that do not correspond to any Model output: dict_keys(['start_positions', 'end_positions', 'cls_index', 'p_mask', 'is_impossible']). Expected: ['output_1', 'output_2'] I might be able to find someway to manually remove these keys, but I feel like there should be some canonical way of getting this to work correctly. I would appreciate any suggestions. Thanks in advance.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7294/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7293
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7293/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7293/comments
https://api.github.com/repos/huggingface/transformers/issues/7293/events
https://github.com/huggingface/transformers/issues/7293
705,759,298
MDU6SXNzdWU3MDU3NTkyOTg=
7,293
Support serialized tokenizer in AutoTokenizer
{ "login": "djstrong", "id": 1849959, "node_id": "MDQ6VXNlcjE4NDk5NTk=", "avatar_url": "https://avatars.githubusercontent.com/u/1849959?v=4", "gravatar_id": "", "url": "https://api.github.com/users/djstrong", "html_url": "https://github.com/djstrong", "followers_url": "https://api.github.com/users/djstrong/followers", "following_url": "https://api.github.com/users/djstrong/following{/other_user}", "gists_url": "https://api.github.com/users/djstrong/gists{/gist_id}", "starred_url": "https://api.github.com/users/djstrong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/djstrong/subscriptions", "organizations_url": "https://api.github.com/users/djstrong/orgs", "repos_url": "https://api.github.com/users/djstrong/repos", "events_url": "https://api.github.com/users/djstrong/events{/privacy}", "received_events_url": "https://api.github.com/users/djstrong/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,606
1,606
NONE
null
# 🚀 Feature request Support loading serialized tokenizer (using `Tokenizer.from_file`) in `AutoTokenizer`. ## Motivation Some standard models use different tokenizers, e.g. `SentencePieceBPETokenizer`. So, there would be no need for implementing new models in `transformers`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7293/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7293/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7292
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7292/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7292/comments
https://api.github.com/repos/huggingface/transformers/issues/7292/events
https://github.com/huggingface/transformers/pull/7292
705,722,074
MDExOlB1bGxSZXF1ZXN0NDkwMzk5MDg0
7,292
[fsmt] SinusoidalPositionalEmbedding no need to pass device
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7292?src=pr&el=h1) Report\n> Merging [#7292](https://codecov.io/gh/huggingface/transformers/pull/7292?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/63276b76d4fb54d096b491e89632859aed6b4364?el=desc) will **decrease** coverage by `0.37%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7292/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7292?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7292 +/- ##\n==========================================\n- Coverage 79.72% 79.34% -0.38% \n==========================================\n Files 174 174 \n Lines 33452 33451 -1 \n==========================================\n- Hits 26668 26542 -126 \n- Misses 6784 6909 +125 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7292?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/7292/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mc210LnB5) | `93.97% <100.00%> (-0.02%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7292/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7292/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.96% <0.00%> (-30.18%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7292/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `60.81% <0.00%> (-22.62%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7292/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `20.53% <0.00%> (-21.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7292/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7292/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.04% <0.00%> (-12.69%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7292/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.64% <0.00%> (-0.56%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7292/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <0.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7292/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.61% <0.00%> (+0.70%)` | :arrow_up: |\n| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/7292/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7292?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7292?src=pr&el=footer). Last update [63276b7...378a1f4](https://codecov.io/gh/huggingface/transformers/pull/7292?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
Just realized there is no need to pass `device` in forward as we already have it in `self.weight`, so simplifying the code a bit.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7292/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7292/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7292", "html_url": "https://github.com/huggingface/transformers/pull/7292", "diff_url": "https://github.com/huggingface/transformers/pull/7292.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7292.patch", "merged_at": 1600767547000 }
https://api.github.com/repos/huggingface/transformers/issues/7291
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7291/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7291/comments
https://api.github.com/repos/huggingface/transformers/issues/7291/events
https://github.com/huggingface/transformers/pull/7291
705,699,566
MDExOlB1bGxSZXF1ZXN0NDkwMzgxMDE0
7,291
Fix saving TF custom models
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7291?src=pr&el=h1) Report\n> Merging [#7291](https://codecov.io/gh/huggingface/transformers/pull/7291?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7cbf0f722d23440f3342aafc27697b50ead5996b?el=desc) will **increase** coverage by `0.88%`.\n> The diff coverage is `88.88%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7291/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7291?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7291 +/- ##\n==========================================\n+ Coverage 80.32% 81.21% +0.88% \n==========================================\n Files 174 174 \n Lines 33446 33446 \n==========================================\n+ Hits 26867 27163 +296 \n+ Misses 6579 6283 -296 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7291?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.03% <88.88%> (-1.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/7291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.52% <0.00%> (-34.77%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `79.03% <0.00%> (-7.80%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `84.17% <0.00%> (-3.06%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.46% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.84% <0.00%> (-0.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.44% <0.00%> (+0.16%)` | :arrow_up: |\n| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/7291/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7291?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7291?src=pr&el=footer). Last update [7cbf0f7...199c743](https://codecov.io/gh/huggingface/transformers/pull/7291?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
This PR fixes #7277. It was currently not possible to create a custom model that uses our TF main layers and to save it. A Keras Layer must have a `config` parameter whereas `keras_serializable` was creating a `transformers_config` parameter in the layer.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7291/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7291", "html_url": "https://github.com/huggingface/transformers/pull/7291", "diff_url": "https://github.com/huggingface/transformers/pull/7291.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7291.patch", "merged_at": 1600781474000 }
https://api.github.com/repos/huggingface/transformers/issues/7290
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7290/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7290/comments
https://api.github.com/repos/huggingface/transformers/issues/7290/events
https://github.com/huggingface/transformers/pull/7290
705,683,127
MDExOlB1bGxSZXF1ZXN0NDkwMzY3NzYy
7,290
[s2s] add create student script
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "What do you think @patil-suraj ? \r\n+ Added t5 support, renamed create_bart_student -> make_student\r\n+ made the code return which layers it copied so that distillation.py can use it.\r\n+ Deleted a bunch of broken stuff!\r\n+ added `save_randomly_initialized_model.py` script.\r\n", "LGTM, much cleaner now. \r\nJust left couple of nits, hope you don't mind me doing this :)", "Love the nits!" ]
1,600
1,601
1,601
MEMBER
null
This PR adds `create_student.py` script to create the student models for distillbart/pegasus/marian and t5. ```bash python make_student.py \ facebook/bart-large-cnn \ --e 12 \ --d 6 \ --save_path student-bart-cnn-12-6 \ ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7290/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7290/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7290", "html_url": "https://github.com/huggingface/transformers/pull/7290", "diff_url": "https://github.com/huggingface/transformers/pull/7290.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7290.patch", "merged_at": 1601233846000 }
https://api.github.com/repos/huggingface/transformers/issues/7289
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7289/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7289/comments
https://api.github.com/repos/huggingface/transformers/issues/7289/events
https://github.com/huggingface/transformers/pull/7289
705,644,032
MDExOlB1bGxSZXF1ZXN0NDkwMzM1NDcx
7,289
Fix #7284
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7289?src=pr&el=h1) Report\n> Merging [#7289](https://codecov.io/gh/huggingface/transformers/pull/7289?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8d464374ba0a8322e87d7a326e7325fcaa2ff695?el=desc) will **increase** coverage by `2.57%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7289/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7289?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7289 +/- ##\n==========================================\n+ Coverage 79.29% 81.87% +2.57% \n==========================================\n Files 174 174 \n Lines 33449 33452 +3 \n==========================================\n+ Hits 26524 27388 +864 \n+ Misses 6925 6064 -861 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7289?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7289/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.26% <100.00%> (+0.43%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/7289/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.52% <0.00%> (-34.77%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7289/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7289/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `70.19% <0.00%> (-23.08%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7289/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `72.31% <0.00%> (-14.52%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7289/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.70% <0.00%> (-5.77%)` | :arrow_down: |\n| [src/transformers/configuration\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7289/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `91.89% <0.00%> (-5.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7289/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `88.37% <0.00%> (-4.87%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7289/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `84.28% <0.00%> (-3.04%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7289/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <0.00%> (+0.13%)` | :arrow_up: |\n| ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/7289/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7289?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7289?src=pr&el=footer). Last update [8d46437...cbeab70](https://codecov.io/gh/huggingface/transformers/pull/7289?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
COLLABORATOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #7284 Don't add a keyword argument `"masked_lm_labels"` when `self.mlm` is False, to avoid an input error of the model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7289/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7289/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7289", "html_url": "https://github.com/huggingface/transformers/pull/7289", "diff_url": "https://github.com/huggingface/transformers/pull/7289.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7289.patch", "merged_at": 1600698686000 }
https://api.github.com/repos/huggingface/transformers/issues/7288
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7288/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7288/comments
https://api.github.com/repos/huggingface/transformers/issues/7288/events
https://github.com/huggingface/transformers/issues/7288
705,635,306
MDU6SXNzdWU3MDU2MzUzMDY=
7,288
Error importing MBart from transformers
{ "login": "MemduhG", "id": 5941210, "node_id": "MDQ6VXNlcjU5NDEyMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/5941210?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MemduhG", "html_url": "https://github.com/MemduhG", "followers_url": "https://api.github.com/users/MemduhG/followers", "following_url": "https://api.github.com/users/MemduhG/following{/other_user}", "gists_url": "https://api.github.com/users/MemduhG/gists{/gist_id}", "starred_url": "https://api.github.com/users/MemduhG/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MemduhG/subscriptions", "organizations_url": "https://api.github.com/users/MemduhG/orgs", "repos_url": "https://api.github.com/users/MemduhG/repos", "events_url": "https://api.github.com/users/MemduhG/events{/privacy}", "received_events_url": "https://api.github.com/users/MemduhG/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Note: I created a new venv and installed torch before transformers. That way I'm not getting this problem. When I had this problem I had installed transformers first, and I assume it tries to use tensorflow by default when you do that.", "It assumes that you have already installed one of them, your fix is 👍 " ]
1,600
1,600
1,600
NONE
null
## Environment info - `transformers` version: 3.1.0 - Platform: Linux-5.4.0-47-generic-x86_64-with-glibc2.29 - Python version: 3.8.2 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help Bart: @sshleifer ## Information Model I am using: MBart, Bart has the same problem The problem arises when using: * [X] the official example scripts: Just copied it from the documentation here: https://huggingface.co/transformers/master/model_doc/mbart.html The tasks I am working on is: * [X] my own task or dataset ## To reproduce Steps to reproduce the behavior: 1. `from transformers import MBartForConditionalGeneration, MBartTokenizer` ## Expected behavior I would expect MBart to be imported so I could use it, but I get this error instead: ``` ImportError Traceback (most recent call last) <ipython-input-23-20acbef89a41> in <module> ----> 1 from transformers import MBartForConditionalGeneration, MBartTokenizer ImportError: cannot import name 'MBartForConditionalGeneration' from 'transformers' (/home/memduh/hf/venv/lib/python3.8/site-packages/transformers/__init__.py) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7288/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7287
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7287/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7287/comments
https://api.github.com/repos/huggingface/transformers/issues/7287/events
https://github.com/huggingface/transformers/issues/7287
705,589,909
MDU6SXNzdWU3MDU1ODk5MDk=
7,287
"index out of range in self" when calling BertForTokenClassification
{ "login": "joawar", "id": 46854160, "node_id": "MDQ6VXNlcjQ2ODU0MTYw", "avatar_url": "https://avatars.githubusercontent.com/u/46854160?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joawar", "html_url": "https://github.com/joawar", "followers_url": "https://api.github.com/users/joawar/followers", "following_url": "https://api.github.com/users/joawar/following{/other_user}", "gists_url": "https://api.github.com/users/joawar/gists{/gist_id}", "starred_url": "https://api.github.com/users/joawar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joawar/subscriptions", "organizations_url": "https://api.github.com/users/joawar/orgs", "repos_url": "https://api.github.com/users/joawar/repos", "events_url": "https://api.github.com/users/joawar/events{/privacy}", "received_events_url": "https://api.github.com/users/joawar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The issue is that you're passing a `labels` argument as the third argument, whereas it's the 10th argument, as you can see in the [documentation](https://huggingface.co/transformers/model_doc/bert.html#bertfortokenclassification).\r\n\r\nYou should do the following:\r\n\r\n```py\r\noutput = model(inputs, attention_mask, labels=labels)\r\n```" ]
1,600
1,600
1,600
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.1.0 - Platform: Linux-5.4.0-42-generic-x86_64-with-glibc2.29 - Python version: 3.8.2 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. --> albert, bert, GPT2, XLM: @LysandreJik ## Information Model I am using (Bert, XLNet ...): BertForTokenClassification The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Get input_ids, attention_masks, and labels, all of size [32,128] (more generally [batch_size, sequence_length]) from the current batch 2. Send all three inputs to the same device as the model 3. 
Run the model with the three inputs: ```loss = model(input_ids, attention_masks, labels)``` This is not the code I was running when I encountered the error but running this small chunk reproduces the error for me: ```python from transformers import BertTokenizer, BertForTokenClassification import torch as th batch_size = 32 seq_len = 128 inputs = th.randint(2000, 3000, size=(batch_size, seq_len)) attention_mask = th.randint(0,1,size=(batch_size, seq_len)) labels = th.randint(0,17, size=(batch_size, seq_len)) model = BertForTokenClassification.from_pretrained('bert-base-uncased') output = model(inputs, attention_mask, labels) ``` and then the error: ``` Traceback (most recent call last): File "test.py", line 15, in <module> output = model(inputs, attention_mask, labels) File "/home/joakim/.virtualenvs/martino-replication/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/joakim/.virtualenvs/martino-replication/lib/python3.8/site-packages/transformers/modeling_bert.py", line 1488, in forward outputs = self.bert( File "/home/joakim/.virtualenvs/martino-replication/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/joakim/.virtualenvs/martino-replication/lib/python3.8/site-packages/transformers/modeling_bert.py", line 824, in forward embedding_output = self.embeddings( File "/home/joakim/.virtualenvs/martino-replication/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/joakim/.virtualenvs/martino-replication/lib/python3.8/site-packages/transformers/modeling_bert.py", line 209, in forward token_type_embeddings = self.token_type_embeddings(token_type_ids) File "/home/joakim/.virtualenvs/martino-replication/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) 
File "/home/joakim/.virtualenvs/martino-replication/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 124, in forward return F.embedding( File "/home/joakim/.virtualenvs/martino-replication/lib/python3.8/site-packages/torch/nn/functional.py", line 1814, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) IndexError: index out of range in self ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior The model does the forward pass with the current batch and returns the output <!-- A clear and concise description of what you would expect to happen. --> ## What I've looked at From previous similar issues like #4153 there was one person who added special tokens to the tokenizer. I don't know what that means but I don't think I've done that. There was also a bug which was patched. In issue #2371 the problem was that a sequence of size greater than 512 was sent into the model. I've printed the size before sending it into the model, and the size is as expected [batch_size, sequence_length], where sequence_length <= 512, so that's not what's happening here either. Finally I tried making sequences of size 512 and sending into the model, in case for some reason the model expected sequence length 512 and nothing else, but as expected that didn't work.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7287/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7287/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7286
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7286/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7286/comments
https://api.github.com/repos/huggingface/transformers/issues/7286/events
https://github.com/huggingface/transformers/pull/7286
705,548,199
MDExOlB1bGxSZXF1ZXN0NDkwMjU2MTI2
7,286
Added RobBERT-v2 model card
{ "login": "twinters", "id": 3677639, "node_id": "MDQ6VXNlcjM2Nzc2Mzk=", "avatar_url": "https://avatars.githubusercontent.com/u/3677639?v=4", "gravatar_id": "", "url": "https://api.github.com/users/twinters", "html_url": "https://github.com/twinters", "followers_url": "https://api.github.com/users/twinters/followers", "following_url": "https://api.github.com/users/twinters/following{/other_user}", "gists_url": "https://api.github.com/users/twinters/gists{/gist_id}", "starred_url": "https://api.github.com/users/twinters/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/twinters/subscriptions", "organizations_url": "https://api.github.com/users/twinters/orgs", "repos_url": "https://api.github.com/users/twinters/repos", "events_url": "https://api.github.com/users/twinters/events{/privacy}", "received_events_url": "https://api.github.com/users/twinters/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7286?src=pr&el=h1) Report\n> Merging [#7286](https://codecov.io/gh/huggingface/transformers/pull/7286?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/67c4b0c5178c8a532cf461ed2a1152fe821dc750?el=desc) will **increase** coverage by `0.30%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7286/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7286?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7286 +/- ##\n==========================================\n+ Coverage 80.63% 80.94% +0.30% \n==========================================\n Files 174 174 \n Lines 33446 33446 \n==========================================\n+ Hits 26969 27072 +103 \n+ Misses 6477 6374 -103 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7286?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7286/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `60.39% <0.00%> (-34.66%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7286/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.96% <0.00%> (-30.18%)` | :arrow_down: |\n| [src/transformers/tokenization\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7286/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `81.66% <0.00%> (-13.34%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7286/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.70% <0.00%> (-5.77%)` | :arrow_down: |\n| 
[src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7286/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.91% <0.00%> (-0.14%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7286/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.18% <0.00%> (+0.35%)` | :arrow_up: |\n| [src/transformers/tokenization\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/7286/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `95.23% <0.00%> (+49.20%)` | :arrow_up: |\n| [src/transformers/tokenization\\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/7286/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnNtdC5weQ==) | `95.23% <0.00%> (+74.89%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7286?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7286?src=pr&el=footer). Last update [67c4b0c...0ca18fd](https://codecov.io/gh/huggingface/transformers/pull/7286?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
Hi there! We [just released](https://twitter.com/pieterdelobelle/status/1308016771744530433) version 2 of our [RobBERT](https://github.com/iPieter/RobBERT) model (= Dutch version of RoBERTa). In this pull request, we included our model card for this version. Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7286/reactions", "total_count": 4, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7286/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7286", "html_url": "https://github.com/huggingface/transformers/pull/7286", "diff_url": "https://github.com/huggingface/transformers/pull/7286.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7286.patch", "merged_at": 1600719448000 }
https://api.github.com/repos/huggingface/transformers/issues/7285
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7285/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7285/comments
https://api.github.com/repos/huggingface/transformers/issues/7285/events
https://github.com/huggingface/transformers/issues/7285
705,527,907
MDU6SXNzdWU3MDU1Mjc5MDc=
7,285
scibert-nli out of date
{ "login": "dbsousa01", "id": 19359518, "node_id": "MDQ6VXNlcjE5MzU5NTE4", "avatar_url": "https://avatars.githubusercontent.com/u/19359518?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dbsousa01", "html_url": "https://github.com/dbsousa01", "followers_url": "https://api.github.com/users/dbsousa01/followers", "following_url": "https://api.github.com/users/dbsousa01/following{/other_user}", "gists_url": "https://api.github.com/users/dbsousa01/gists{/gist_id}", "starred_url": "https://api.github.com/users/dbsousa01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dbsousa01/subscriptions", "organizations_url": "https://api.github.com/users/dbsousa01/orgs", "repos_url": "https://api.github.com/users/dbsousa01/repos", "events_url": "https://api.github.com/users/dbsousa01/events{/privacy}", "received_events_url": "https://api.github.com/users/dbsousa01/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @dbsousa01 ,\r\n\r\nThanks for the information. The models I published on the HuggingFace model repository were produced using transformers 2.7.0 and tested up to version 2.11.0. Can you try using transformers 2.11.0 for your specific use case, to see if the same warnings are still present? \r\n\r\nSadly, currently I am not in a situation that allows me to work on an update for the model, but using 2.11.0 may be enough for you! Otherwise you can try to replicate the model yourself by following the procedure described [here](https://github.com/gsarti/covid-papers-browser), using parameters specified in the model card [here](https://huggingface.co/gsarti/scibert-nli).\r\n\r\nHope this helps,\r\n\r\nGabriele", "Hey @gsarti,\r\n\r\nThanks for the help. If I go back to `version 2.11.0` some other dependencies can break the model itself since some functions are already deprecated. \r\n\r\nNevertheless, and I don't know if this makes sense, I downloaded your model from the older version (`2.11.0`) and imported as a pre-trained local model in the newer version (`3.1.0`). The warning no longer appears and the model's output seems to be stable, always giving the same results, which didn't happen when downloading the model with the most recent transformers version (`3.1.0`).", "That's an interesting behavior though, I wouldn't expect a downloaded model to behave differently from a hosted one during the `from_pretrained` call. @LysandreJik @julien-c any idea of why this might be the case?", "@dbsousa01 Not sure I understand what you did – did you `save_pretrained()` the model from version `2.11.0` of the library?", "@julien-c yes, saved the model locally using `save_pretrained` from the version `2.11.0` and then updated the package to the latest version, `3.1.0`, then used the method `from_pretrained` to load it up again, from the local path. 
By doing this, the warning no longer shows (which could be a bug) **but**, and this is why it's interesting, the model starts outputting stable results, which didn't happen before when I downloaded the model from the latest version" ]
1,600
1,601
1,601
NONE
null
## Environment info - `transformers` version: 3.1.0 - Platform: macOS-10.15.5-x86_64-i386-64bit - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help Owner: @gsarti Bert owner: @LysandreJik ## Information By importing the model [here](https://huggingface.co/gsarti/scibert-nli?), two warnings prompt up ## To reproduce Steps to reproduce the behavior: 1. Just import the model from the pretrained like in the example link `FutureWarning: The class AutoModelWithLMHead is deprecated and will be removed in a future version. Please use AutoModelForCausalLM for causal language models, AutoModelForMaskedLM for masked language models and AutoModelForSeq2SeqLM for encoder-decoder models.` `Some weights of BertForMaskedLM were not initialized from the model checkpoint at gsarti/scibert-nli and are newly initialized: ['cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.predictions.decoder.bias', 'cls.predictions.transform.dense.weight']` ## Expected behavior Import without any kind of errors/warnings. I suppose the first warning is due to deprecation and should be solved by importing `AutoModelForMaskedLM` instead (just looking for confirmation and giving the heads-up). The second, it seems that some layers are out of date and not trained, it would be good if the owner could update the model if possible.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7285/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7284
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7284/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7284/comments
https://api.github.com/repos/huggingface/transformers/issues/7284/events
https://github.com/huggingface/transformers/issues/7284
705,493,905
MDU6SXNzdWU3MDU0OTM5MDU=
7,284
Fine tune BERT based models using Trainer fails
{ "login": "adamwawrzynski", "id": 19324675, "node_id": "MDQ6VXNlcjE5MzI0Njc1", "avatar_url": "https://avatars.githubusercontent.com/u/19324675?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adamwawrzynski", "html_url": "https://github.com/adamwawrzynski", "followers_url": "https://api.github.com/users/adamwawrzynski/followers", "following_url": "https://api.github.com/users/adamwawrzynski/following{/other_user}", "gists_url": "https://api.github.com/users/adamwawrzynski/gists{/gist_id}", "starred_url": "https://api.github.com/users/adamwawrzynski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adamwawrzynski/subscriptions", "organizations_url": "https://api.github.com/users/adamwawrzynski/orgs", "repos_url": "https://api.github.com/users/adamwawrzynski/repos", "events_url": "https://api.github.com/users/adamwawrzynski/events{/privacy}", "received_events_url": "https://api.github.com/users/adamwawrzynski/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for flagging, I have pushed a fix inside the data collator in the PR mentioned above." ]
1,600
1,600
1,600
NONE
null
## Environment info

- `transformers` version: 3.1.0
- Platform: Linux-5.4.0-45-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No

### Who can help

Trainer: @sgugger

## Information

I am using the pretrained `bert-base-multilingual-uncased` BERT model and would like to fine-tune it on the Next Sentence Prediction task. I followed the example given here: [https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py).

## To reproduce

Steps to reproduce the behavior:

1. `python3.8 program.py`

program.py:

```python
import torch
from transformers import (
    BertForNextSentencePrediction,
    BertTokenizer,
    RobertaModel,
    RobertaTokenizer,
    Trainer,
    TrainingArguments,
)
from transformers.data.datasets.language_modeling import TextDatasetForNextSentencePrediction
from transformers.data.data_collator import DataCollatorForNextSentencePrediction

if __name__ == "__main__":
    model_dir = "./model/"
    result_model_dir = "./result/"
    logs_directory = "./logs"
    dataset_path = "train.txt"  # file in TextDatasetForNextSentencePrediction format

    tokenizer = RobertaTokenizer.from_pretrained("bert-base-multilingual-uncased")
    finetune_model = BertForNextSentencePrediction.from_pretrained("bert-base-multilingual-uncased")

    training_args = TrainingArguments(
        output_dir=result_model_dir,
        num_train_epochs=3,
        per_device_train_batch_size=1,
        per_device_eval_batch_size=1,
        warmup_steps=500,
        weight_decay=0.01,
        logging_dir=logs_directory,
    )

    data_collator = DataCollatorForNextSentencePrediction(
        tokenizer=tokenizer,
        mlm=False,
        block_size=512,
        nsp_probability=0.5,
    )

    train_dataset = TextDatasetForNextSentencePrediction(
        tokenizer=tokenizer,
        file_path=dataset_path,
        block_size=512,
    )

    trainer = Trainer(
        model=finetune_model,
        args=training_args,
        train_dataset=train_dataset,
        data_collator=data_collator,
    )

    trainer.train()
    trainer.save_model(result_model_dir)
```

Output in terminal (the weight-name lists are abridged below; the same pattern repeats for encoder layers 0 through 23):

```bash
Special tokens have been added in the vocabulary, make sure the associated word emebedding are fine-tuned or trained.
Some weights of the model checkpoint at ./model/ were not used when initializing BertForNextSentencePrediction: ['roberta.embeddings.word_embeddings.weight', 'roberta.embeddings.position_embeddings.weight', 'roberta.embeddings.token_type_embeddings.weight', 'roberta.embeddings.LayerNorm.weight', 'roberta.embeddings.LayerNorm.bias', 'roberta.encoder.layer.0.attention.self.query.weight', ..., 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias', 'lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'lm_head.decoder.bias']
- This IS expected if you are initializing BertForNextSentencePrediction from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing BertForNextSentencePrediction from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForNextSentencePrediction were not initialized from the model checkpoint at ./model/ and are newly initialized: ['embeddings.word_embeddings.weight', 'embeddings.position_embeddings.weight', 'embeddings.token_type_embeddings.weight', 'embeddings.LayerNorm.weight', 'embeddings.LayerNorm.bias', 'encoder.layer.0.attention.self.query.weight', ..., 'encoder.layer.20.output.LayerNorm.weight', 'encoder.layer.20.output.LayerNorm.bias',
```

(output truncated here)
'encoder.layer.21.attention.self.query.weight', 'encoder.layer.21.attention.self.query.bias', 'encoder.layer.21.attention.self.key.weight', 'encoder.layer.21.attention.self.key.bias', 'encoder.layer.21.attention.self.value.weight', 'encoder.layer.21.attention.self.value.bias', 'encoder.layer.21.attention.output.dense.weight', 'encoder.layer.21.attention.output.dense.bias', 'encoder.layer.21.attention.output.LayerNorm.weight', 'encoder.layer.21.attention.output.LayerNorm.bias', 'encoder.layer.21.intermediate.dense.weight', 'encoder.layer.21.intermediate.dense.bias', 'encoder.layer.21.output.dense.weight', 'encoder.layer.21.output.dense.bias', 'encoder.layer.21.output.LayerNorm.weight', 'encoder.layer.21.output.LayerNorm.bias', 'encoder.layer.22.attention.self.query.weight', 'encoder.layer.22.attention.self.query.bias', 'encoder.layer.22.attention.self.key.weight', 'encoder.layer.22.attention.self.key.bias', 'encoder.layer.22.attention.self.value.weight', 'encoder.layer.22.attention.self.value.bias', 'encoder.layer.22.attention.output.dense.weight', 'encoder.layer.22.attention.output.dense.bias', 'encoder.layer.22.attention.output.LayerNorm.weight', 'encoder.layer.22.attention.output.LayerNorm.bias', 'encoder.layer.22.intermediate.dense.weight', 'encoder.layer.22.intermediate.dense.bias', 'encoder.layer.22.output.dense.weight', 'encoder.layer.22.output.dense.bias', 'encoder.layer.22.output.LayerNorm.weight', 'encoder.layer.22.output.LayerNorm.bias', 'encoder.layer.23.attention.self.query.weight', 'encoder.layer.23.attention.self.query.bias', 'encoder.layer.23.attention.self.key.weight', 'encoder.layer.23.attention.self.key.bias', 'encoder.layer.23.attention.self.value.weight', 'encoder.layer.23.attention.self.value.bias', 'encoder.layer.23.attention.output.dense.weight', 'encoder.layer.23.attention.output.dense.bias', 'encoder.layer.23.attention.output.LayerNorm.weight', 'encoder.layer.23.attention.output.LayerNorm.bias', 'encoder.layer.23.intermediate.dense.weight', 
'encoder.layer.23.intermediate.dense.bias', 'encoder.layer.23.output.dense.weight', 'encoder.layer.23.output.dense.bias', 'encoder.layer.23.output.LayerNorm.weight', 'encoder.layer.23.output.LayerNorm.bias', 'pooler.dense.weight', 'pooler.dense.bias', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Epoch: 0%| | 0/3 [00:00<?, ?it/sTraceback (most recent call last): | 0/1 [00:00<?, ?it/s] File "polish_roberta.py", line 77, in <module> File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/transformers/trainer.py", line 707, in train tr_loss += self.training_step(model, inputs) File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/transformers/trainer.py", line 995, in training_step outputs = model(**inputs) File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 155, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 165, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply output.reraise() File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/torch/_utils.py", line 395, in reraise raise self.exc_type(msg) TypeError: Caught TypeError in replica 0 on device 0. 
Original Traceback (most recent call last): File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker output = module(*input, **kwargs) File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) TypeError: forward() got an unexpected keyword argument 'masked_lm_labels' Epoch: 0%| | 0/3 [00:04<?, ?it/s] Iteration: 0%| | 0/1 [00:04<?, ?it/s] ``` ## Expected behavior The model is trained, and after training finishes the model is saved into the specified directory. ## Solution I followed the errors in the terminal and found that an extra parameter was being passed into the Trainer's training_step function. I added a line that removes this key from the dictionary passed to the model, and it works. ```python def training_step(self, model: nn.Module, inputs: Dict[str, Union[torch.Tensor, Any]]) -> torch.Tensor: """ Perform a training step on a batch of inputs. Subclass and override to inject custom behavior. Args: model (:obj:`nn.Module`): The model to train. inputs (:obj:`Dict[str, Union[torch.Tensor, Any]]`): The inputs and targets of the model. The dictionary will be unpacked before being fed to the model. Most models expect the targets under the argument :obj:`labels`. Check your model's documentation for all accepted arguments. Return: :obj:`torch.Tensor`: The tensor with training loss on this batch. """ if hasattr(self, "_training_step"): warnings.warn( "The `_training_step` method is deprecated and won't be called in a future version, define `training_step` in your subclass.", FutureWarning, ) return self._training_step(model, inputs, self.optimizer) model.train() inputs = self._prepare_inputs(inputs) inputs.pop("masked_lm_labels") # I added this line and it works. 
if self.args.fp16 and _use_native_amp: with autocast(): outputs = model(**inputs) loss = outputs[0] else: outputs = model(**inputs) # We don't use .loss here since the model may return tuples instead of ModelOutput. loss = outputs[0] if self.args.past_index >= 0: self._past = outputs[self.args.past_index] if self.args.n_gpu > 1: loss = loss.mean() # mean() to average on multi-gpu parallel training if self.args.gradient_accumulation_steps > 1: loss = loss / self.args.gradient_accumulation_steps if self.args.fp16 and _use_native_amp: self.scaler.scale(loss).backward() elif self.args.fp16 and _use_apex: with amp.scale_loss(loss, self.optimizer) as scaled_loss: scaled_loss.backward() else: loss.backward() return loss ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7284/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7284/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7283
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7283/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7283/comments
https://api.github.com/repos/huggingface/transformers/issues/7283/events
https://github.com/huggingface/transformers/pull/7283
705,485,056
MDExOlB1bGxSZXF1ZXN0NDkwMjA0MTA2
7,283
IXAmBERT model card
{ "login": "jjacampos", "id": 11363790, "node_id": "MDQ6VXNlcjExMzYzNzkw", "avatar_url": "https://avatars.githubusercontent.com/u/11363790?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jjacampos", "html_url": "https://github.com/jjacampos", "followers_url": "https://api.github.com/users/jjacampos/followers", "following_url": "https://api.github.com/users/jjacampos/following{/other_user}", "gists_url": "https://api.github.com/users/jjacampos/gists{/gist_id}", "starred_url": "https://api.github.com/users/jjacampos/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jjacampos/subscriptions", "organizations_url": "https://api.github.com/users/jjacampos/orgs", "repos_url": "https://api.github.com/users/jjacampos/repos", "events_url": "https://api.github.com/users/jjacampos/events{/privacy}", "received_events_url": "https://api.github.com/users/jjacampos/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7283?src=pr&el=h1) Report\n> Merging [#7283](https://codecov.io/gh/huggingface/transformers/pull/7283?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/67c4b0c5178c8a532cf461ed2a1152fe821dc750?el=desc) will **decrease** coverage by `2.05%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7283/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7283?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7283 +/- ##\n==========================================\n- Coverage 80.63% 78.58% -2.06% \n==========================================\n Files 174 174 \n Lines 33446 33446 \n==========================================\n- Hits 26969 26283 -686 \n- Misses 6477 7163 +686 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7283?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7283/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7283/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7283/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `19.02% <0.00%> (-74.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7283/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| 
[src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7283/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.70% <0.00%> (-5.77%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7283/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `88.17% <0.00%> (-4.66%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7283/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.84% <0.00%> (-0.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7283/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.44% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7283/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (+0.27%)` | :arrow_up: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7283/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `94.00% <0.00%> (+4.00%)` | :arrow_up: |\n| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/7283/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7283?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7283?src=pr&el=footer). Last update [67c4b0c...db6e73c](https://codecov.io/gh/huggingface/transformers/pull/7283?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
This PR includes the model card for the IXAmBERT model which has been recently uploaded to the huggingface repository. <!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7283/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7283", "html_url": "https://github.com/huggingface/transformers/pull/7283", "diff_url": "https://github.com/huggingface/transformers/pull/7283.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7283.patch", "merged_at": 1600719332000 }
https://api.github.com/repos/huggingface/transformers/issues/7282
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7282/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7282/comments
https://api.github.com/repos/huggingface/transformers/issues/7282/events
https://github.com/huggingface/transformers/pull/7282
705,438,889
MDExOlB1bGxSZXF1ZXN0NDkwMTY1NzY1
7,282
Disable missing weight warning for RobertaForMaskedLM/CamembertForMaskedLM
{ "login": "raphael0202", "id": 9609923, "node_id": "MDQ6VXNlcjk2MDk5MjM=", "avatar_url": "https://avatars.githubusercontent.com/u/9609923?v=4", "gravatar_id": "", "url": "https://api.github.com/users/raphael0202", "html_url": "https://github.com/raphael0202", "followers_url": "https://api.github.com/users/raphael0202/followers", "following_url": "https://api.github.com/users/raphael0202/following{/other_user}", "gists_url": "https://api.github.com/users/raphael0202/gists{/gist_id}", "starred_url": "https://api.github.com/users/raphael0202/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/raphael0202/subscriptions", "organizations_url": "https://api.github.com/users/raphael0202/orgs", "repos_url": "https://api.github.com/users/raphael0202/repos", "events_url": "https://api.github.com/users/raphael0202/events{/privacy}", "received_events_url": "https://api.github.com/users/raphael0202/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7282?src=pr&el=h1) Report\n> Merging [#7282](https://codecov.io/gh/huggingface/transformers/pull/7282?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/39062d05f05334961861c44e2a13bcda029ae61e?el=desc) will **increase** coverage by `1.08%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7282/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7282?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7282 +/- ##\n==========================================\n+ Coverage 79.98% 81.07% +1.08% \n==========================================\n Files 174 174 \n Lines 33446 33447 +1 \n==========================================\n+ Hits 26753 27116 +363 \n+ Misses 6693 6331 -362 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7282?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7282/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `97.09% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7282/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.03% <0.00%> (-73.03%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7282/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `83.74% <0.00%> (-14.14%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7282/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7282/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `79.03% <0.00%> (-7.80%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7282/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.87% <0.00%> (-7.18%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7282/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `82.95% <0.00%> (-3.76%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7282/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `95.16% <0.00%> (-2.42%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7282/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/7282/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |\n| ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/7282/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7282?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7282?src=pr&el=footer). Last update [39062d0...875899a](https://codecov.io/gh/huggingface/transformers/pull/7282?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
Fixes #7167, #6193 When loading RobertaForMaskedLM/CamembertForMaskedLM checkpoints, a warning is displayed because "lm_head.decoder.bias" is not loaded. This warning is an artifact of the way the bias are stored in RobertaLMHead. This PR adds `lm_head.decoder.bias` to the authorized missing keys list for RobertaForMaskedLM.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7282/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7282/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7282", "html_url": "https://github.com/huggingface/transformers/pull/7282", "diff_url": "https://github.com/huggingface/transformers/pull/7282.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7282.patch", "merged_at": 1600694089000 }
https://api.github.com/repos/huggingface/transformers/issues/7281
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7281/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7281/comments
https://api.github.com/repos/huggingface/transformers/issues/7281/events
https://github.com/huggingface/transformers/pull/7281
705,375,460
MDExOlB1bGxSZXF1ZXN0NDkwMTE0NzUx
7,281
[seq2seq testing] multigpu test run via subprocess
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1834088753, "node_id": "MDU6TGFiZWwxODM0MDg4NzUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Tests", "name": "Tests", "color": "a6fcca", "default": false, "description": "Related to tests" }, { "id": 2107554019, "node_id": "MDU6TGFiZWwyMTA3NTU0MDE5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Distributed%20Training%20/%20Models", "name": "Distributed Training / Models", "color": "fef2c0", "default": false, "description": "" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "To finish up the test, I don't yet know this functionality, it'd be something like (adapting the end from `_test_distiller_cli`):\r\n\r\n``` \r\n [...]\r\n p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env)\r\n print(\"\\nWarning: there will be no output while subprocess will take some time to complete\")\r\n out, err = p.communicate(timeout=360)\r\n out = out.decode(\"utf-8\").strip()\r\n err = err.decode(\"utf-8\").strip()\r\n print(f\"err: {err}\")\r\n print(f\"out: {out}\")\r\n assert out, \"produced no output\"\r\n if p.returncode > 0:\r\n pytest.fail(err)\r\n\r\n # model = distill_main(argparse.Namespace(**args_d))\r\n # if not check_contents:\r\n # return model\r\n contents = os.listdir(output_dir)\r\n contents = {os.path.basename(p) for p in contents}\r\n ckpt_files = [p for p in contents if p.endswith(\"ckpt\")]\r\n assert len(ckpt_files) > 0\r\n\r\n self.assertIn(\"test_generations.txt\", contents)\r\n self.assertIn(\"test_results.txt\", contents)\r\n\r\n # XXX: get the following from the module, (we don't have access to `model` here)\r\n metrics_save_path = os.path.join(output_dir, \"metrics.json\")\r\n val_metric = \"rouge2\"\r\n \r\n metrics = load_json(metrics_save_path)\r\n # {'test': [{'test_avg_loss': 10.63731575012207, 'test_avg_rouge1': 0.0, 'test_avg_rouge2': 0.0, 'test_avg_rougeL': 0.0, 'test_avg_gen_time': 0.1822289228439331, 'test_avg_gen_len': 142.0, 'step_count': 1}]}\r\n print(metrics)\r\n last_step_stats = metrics[\"val\"][-1]\r\n self.assertGreaterEqual(last_step_stats[\"val_avg_gen_time\"], 0.01)\r\n self.assertGreaterEqual(1.0, last_step_stats[\"val_avg_gen_time\"])\r\n self.assertIsInstance(last_step_stats[f\"val_avg_{val_metric}\"], float)\r\n desired_n_evals = int(args_d[\"max_epochs\"] * (1 / args_d[\"val_check_interval\"]) + 1)\r\n self.assertEqual(len(metrics[\"val\"]), desired_n_evals)\r\n self.assertEqual(len(metrics[\"test\"]), 1)\r\n```\r\nbut I get test results in the metrics and not 
validation...\r\n\r\nI'm sure you can quickly sort it out since you're familiar with what it's supposed to do. I hope it actually does the right thing. As it works with tiny models, it's impossible to tell whether it works or not quality-wise.\r\n", "The only dealbreaker here is hanging.\r\nWill timeout_decorator work in this context.\r\n\r\nAlso I'd love to move the test to a separate file.", "> The only dealbreaker here is hanging.\r\n> Will timeout_decorator work in this context.\r\n\r\nWe have the timeout already. But it still hangs - when the sub-process fails - and it does dump the error. I will poke at it some more. I want it to `tee` the outputs of the subprocess, instead of the silent-until-done treatment.\r\n\r\n> Also I'd love to move the test to a separate file.\r\n\r\nJust the multigpu test? or split all those unrelated example test into their own `test_*specific_feature*`", "Just multigpu.", "> Just multigpu.\r\n\r\nWill do. I think I understand why you want it apart - a troublemaker that affects other tests.", "Made some progress on this, I think pl 1.0.0 will obviate the need to comment out the checking output_diir logic. Will push my changes soon, but I can take this from here.\r\nYou made huge progress on this thank you @stas00 !\r\n", "yes, please." ]
1,600
1,603
1,603
CONTRIBUTOR
null
This PR is trying to fix the hanging/misbehaving/self-replicating pytest for tests using PL with `gpu>1` (`ddp` backend). OK, I couldn't figure out how to make `dp` or `ddp_spawn` to work, all kinds of obscure errors inside PL (It doesn't look like these are closely maintained as it's recommended not to use either), so `ddp` it is. I tried to get `dp` to work first, since it doesn't require forking a new process and special handling inside pytest. Here is a working solution for `ddp`. Bottom line - you have to fork a new process and run the distributed script from it - to get it working with `ddp` - otherwise pytest either hangs or runs itself multiple times, breaking other scripts, a big mess. I borrow the idea from PL itself https://github.com/PyTorchLightning/pytorch-lightning/blob/master/tests/models/test_gpu.py#L111 - what a better place to find the correct way to test something but from the horse's mouth. So what I had to do: * [x] split into `test_seq2seq_examples_multi_gpu.py` as requested * [x] added `subprocess.Popen` - but then replaced it with the modern `asyncio`- apparently using stdout and stderr pipes with `wait` can still cause a deadlock - but let's see if that works for our needs. * [x] had to mess with args to correctly convert them into cl args - so many of them! perhaps there is already a helper util that does that - I probably re-invented the wheel * [x] had to provide a new flag `--overwrite_output_dir` to support multi-gpu processes, as one of the children otherwise creates the output dir and the other fails to do so. instead we create the dir in the parent process. * [x] there are some issues with accessing module attributes in the distributed env (see details here: https://github.com/huggingface/transformers/issues/5887#issuecomment-695919119) - had to tweak the `lightning_base.py` to not use attributes in 2 accessors. 
(I didn't check - it's possible I need to adjust other scripts if they use `self.total_steps`) - I'm not 100% sure what is different under ddp - but somehow things behave differently and we have no access to module's attributes unless they are part of the model - see `nn.Module.__getattr__` - might have to do with modules getting pickled. If you have insights I'm all ears. * [x] the test validation had to be adjusted to handle 2 gpus @sshleifer <!-- This line specifies which issue to close after the pull request is merged. --> Fixes #5887
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7281/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7281/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7281", "html_url": "https://github.com/huggingface/transformers/pull/7281", "diff_url": "https://github.com/huggingface/transformers/pull/7281.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7281.patch", "merged_at": 1603315254000 }
https://api.github.com/repos/huggingface/transformers/issues/7280
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7280/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7280/comments
https://api.github.com/repos/huggingface/transformers/issues/7280/events
https://github.com/huggingface/transformers/issues/7280
705,363,493
MDU6SXNzdWU3MDUzNjM0OTM=
7,280
I want to use the Bert2GPT2 architecture, but my pretrained Bert and GPT2 have different vocabs, so what should I do for the vocabs?
{ "login": "wulaoshi", "id": 27938964, "node_id": "MDQ6VXNlcjI3OTM4OTY0", "avatar_url": "https://avatars.githubusercontent.com/u/27938964?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wulaoshi", "html_url": "https://github.com/wulaoshi", "followers_url": "https://api.github.com/users/wulaoshi/followers", "following_url": "https://api.github.com/users/wulaoshi/following{/other_user}", "gists_url": "https://api.github.com/users/wulaoshi/gists{/gist_id}", "starred_url": "https://api.github.com/users/wulaoshi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wulaoshi/subscriptions", "organizations_url": "https://api.github.com/users/wulaoshi/orgs", "repos_url": "https://api.github.com/users/wulaoshi/repos", "events_url": "https://api.github.com/users/wulaoshi/events{/privacy}", "received_events_url": "https://api.github.com/users/wulaoshi/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This model card gives an in-depth explanation of how to train and use a bert2gpt2 model: https://huggingface.co/patrickvonplaten/bert2gpt2-cnn_dailymail-fp16#bert2gpt2-summarization-with-%F0%9F%A4%97-encoderdecoder-framework\r\n\r\nLet me know if this does not solve your errors.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,606
1,606
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiasts can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to the original question on the forum/Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7280/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7280/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7279
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7279/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7279/comments
https://api.github.com/repos/huggingface/transformers/issues/7279/events
https://github.com/huggingface/transformers/pull/7279
705,353,294
MDExOlB1bGxSZXF1ZXN0NDkwMDk2NDE5
7,279
[wip/dont-merge] pegasus beam search implementation
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,600
1,601
1,601
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7279/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7279", "html_url": "https://github.com/huggingface/transformers/pull/7279", "diff_url": "https://github.com/huggingface/transformers/pull/7279.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7279.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7278
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7278/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7278/comments
https://api.github.com/repos/huggingface/transformers/issues/7278/events
https://github.com/huggingface/transformers/pull/7278
705,310,354
MDExOlB1bGxSZXF1ZXN0NDkwMDU5MjY4
7,278
[model card] distilbart-mnli model cards
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7278?src=pr&el=h1) Report\n> Merging [#7278](https://codecov.io/gh/huggingface/transformers/pull/7278?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1d90d0f386af2af52017d51c421e71a51ec94de0?el=desc) will **decrease** coverage by `3.11%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7278/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7278?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7278 +/- ##\n==========================================\n- Coverage 81.81% 78.69% -3.12% \n==========================================\n Files 174 174 \n Lines 33446 33446 \n==========================================\n- Hits 27364 26321 -1043 \n- Misses 6082 7125 +1043 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7278?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `19.02% <0.00%> (-74.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| 
[...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/7278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `92.94% <0.00%> (-1.18%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.44% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `91.31% <0.00%> (+2.54%)` | :arrow_up: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `94.00% <0.00%> (+4.00%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+12.87%)` | :arrow_up: |\n| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/7278/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7278?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7278?src=pr&el=footer). Last update [1d90d0f...3d09c99](https://codecov.io/gh/huggingface/transformers/pull/7278?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
MEMBER
null
No teacher bart distillation for MNLI cc @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7278/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7278/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7278", "html_url": "https://github.com/huggingface/transformers/pull/7278", "diff_url": "https://github.com/huggingface/transformers/pull/7278.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7278.patch", "merged_at": 1600705579000 }
https://api.github.com/repos/huggingface/transformers/issues/7277
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7277/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7277/comments
https://api.github.com/repos/huggingface/transformers/issues/7277/events
https://github.com/huggingface/transformers/issues/7277
705,263,677
MDU6SXNzdWU3MDUyNjM2Nzc=
7,277
Unable to serialize/save TF2.0 Bert model
{ "login": "Douboo", "id": 32014271, "node_id": "MDQ6VXNlcjMyMDE0Mjcx", "avatar_url": "https://avatars.githubusercontent.com/u/32014271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Douboo", "html_url": "https://github.com/Douboo", "followers_url": "https://api.github.com/users/Douboo/followers", "following_url": "https://api.github.com/users/Douboo/following{/other_user}", "gists_url": "https://api.github.com/users/Douboo/gists{/gist_id}", "starred_url": "https://api.github.com/users/Douboo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Douboo/subscriptions", "organizations_url": "https://api.github.com/users/Douboo/orgs", "repos_url": "https://api.github.com/users/Douboo/repos", "events_url": "https://api.github.com/users/Douboo/events{/privacy}", "received_events_url": "https://api.github.com/users/Douboo/received_events", "type": "User", "site_admin": false }
[ { "id": 1834054694, "node_id": "MDU6TGFiZWwxODM0MDU0Njk0", "url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow", "name": "TensorFlow", "color": "FF6F00", "default": false, "description": "Anything TensorFlow" }, { "id": 1862634478, "node_id": "MDU6TGFiZWwxODYyNjM0NDc4", "url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix", "name": "Should Fix", "color": "FF0000", "default": false, "description": "This has been identified as a bug and should be fixed." } ]
closed
false
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[ { "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false } ]
[ "Hello!\r\n\r\nI cannot reproduce your issue, can you try on the master version please?", "> Hello!\r\n> \r\n> I cannot reproduce your issue, can you try on the master version please?\r\n\r\nI install transformers by `pip install git+https://github.com/huggingface/transformers`. I think it is master version by default. This is the complete [code](https://colab.research.google.com/gist/Douboo/fccd6bcda2e098b10b1c7490f2d8bbf3/untitled3.ipynb#scrollTo=wjN-uwz3LzJf)", "> Hello!\r\n> \r\n> I cannot reproduce your issue, can you try on the master version please?\r\n\r\nThe master version still reports the same bug. Install transformers by `pip install git+https://github.com/huggingface/transformers@master`.", "Thanks a lot for the colab link. I will check this ASAP and will let you know here.", "I can confirm the bug. As a workaround you can use `model.save_weights` instead. I cannot give you a specific date for the fix, but I will update the thread.", "> I can confirm the bug. As a workaround you can use `model.save_weights` instead. I cannot give you a specific date for the fix, but I will update the thread.\r\n\r\nok, thanks!" ]
1,600
1,600
1,600
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> https://colab.research.google.com/gist/Douboo/fccd6bcda2e098b10b1c7490f2d8bbf3/untitled3.ipynb#scrollTo=yW4t3uQxjhy1 - `transformers` version: 3.1.0 - Platform: colab - Python version: 3.7.0 - Tensorflow version (GPU?): 2.3.0 - Using GPU in script?: yes ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> @jplu ## Information Model I am using (Bert): The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce ```python # define model via functional api def build_model(item_dim, num_layers, num_heads, max_len): config = BertConfig(hidden_size=item_dim, num_hidden_layers=num_layers, num_attention_heads=num_heads, 
intermediate_size=item_dim*4, max_position_embeddings=max_len) bert = TFBertMainLayer(config=config) inputs_embeds = Input(shape=(max_len, item_dim), dtype=tf.float32, name='inputs') inputs = {"inputs_embeds": inputs_embeds} # pre-training vectors to bert seq_emb = bert(inputs)[0] last_token_emb = seq_emb[:, -1, :] outputs = Dense(1, activation='sigmoid')(last_token_emb) model = Model(inputs=inputs, outputs=outputs) return model model = build_model(item_dim, num_layers, num_heads, max_len) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc']) model.fit(X, y, epochs=2, batch_size=128, verbose=2 ) model.save('./my_model') ``` Errors with `TypeError: ('Not JSON Serializable:', BertConfig { "attention_probs_dropout_prob": 0.1, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 32, "initializer_range": 0.02, "intermediate_size": 128, "layer_norm_eps": 1e-12, "max_position_embeddings": 9, "model_type": "bert", "num_attention_heads": 1, "num_hidden_layers": 1, "pad_token_id": 0, "type_vocab_size": 2, "vocab_size": 30522 } )` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior * Successfully saved keras model. <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7277/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7277/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7276
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7276/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7276/comments
https://api.github.com/repos/huggingface/transformers/issues/7276/events
https://github.com/huggingface/transformers/pull/7276
705,259,666
MDExOlB1bGxSZXF1ZXN0NDkwMDE1OTQ3
7,276
fix unnecessary CPU memory usage when training
{ "login": "xiye17", "id": 43059752, "node_id": "MDQ6VXNlcjQzMDU5NzUy", "avatar_url": "https://avatars.githubusercontent.com/u/43059752?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xiye17", "html_url": "https://github.com/xiye17", "followers_url": "https://api.github.com/users/xiye17/followers", "following_url": "https://api.github.com/users/xiye17/following{/other_user}", "gists_url": "https://api.github.com/users/xiye17/gists{/gist_id}", "starred_url": "https://api.github.com/users/xiye17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xiye17/subscriptions", "organizations_url": "https://api.github.com/users/xiye17/orgs", "repos_url": "https://api.github.com/users/xiye17/repos", "events_url": "https://api.github.com/users/xiye17/events{/privacy}", "received_events_url": "https://api.github.com/users/xiye17/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7276?src=pr&el=h1) Report\n> Merging [#7276](https://codecov.io/gh/huggingface/transformers/pull/7276?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7cbf0f722d23440f3342aafc27697b50ead5996b?el=desc) will **decrease** coverage by `1.15%`.\n> The diff coverage is `60.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7276/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7276?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7276 +/- ##\n==========================================\n- Coverage 80.32% 79.17% -1.16% \n==========================================\n Files 174 174 \n Lines 33446 33445 -1 \n==========================================\n- Hits 26867 26481 -386 \n- Misses 6579 6964 +385 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7276?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `55.80% <60.00%> (+0.08%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.03% <0.00%> (-73.03%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| 
[src/transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-58.39%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.96% <0.00%> (-30.18%)` | :arrow_down: |\n| [src/transformers/activations\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% <0.00%> (-20.84%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `83.74% <0.00%> (-14.14%)` | :arrow_down: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `79.03% <0.00%> (-7.80%)` | :arrow_down: |\n| ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/7276/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7276?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7276?src=pr&el=footer). Last update [7cbf0f7...e9bc37d](https://codecov.io/gh/huggingface/transformers/pull/7276?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
Use a scalar when recording the aggregated training loss throughout the training procedure in `Trainer`. The previous version uses `torch.Tensor` to store `tr_loss`. Here, tr_loss accumulates history across your training loop and takes up lots of CPU memory during training. There is an official pytorch post on **NOT accumulating history across your training loop**. ([https://pytorch.org/docs/stable/notes/faq.html#my-model-reports-cuda-runtime-error-2-out-of-memory](https://pytorch.org/docs/stable/notes/faq.html#my-model-reports-cuda-runtime-error-2-out-of-memory)) When I tried to train a sentence classifier with batch size 10 on the HotpotQA dataset (for document filtering), the program consumed around 120G of CPU memory when I trained for 240,000 steps (roughly 3 epochs over all 900,000 HotpotQA documents), which is unnecessary. So I think it's better to just use a scalar (as suggested by the official post). Also, the tensor-format return value has never been used in the whole repo. It's a safe fix.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7276/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7276/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7276", "html_url": "https://github.com/huggingface/transformers/pull/7276", "diff_url": "https://github.com/huggingface/transformers/pull/7276.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7276.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7275
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7275/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7275/comments
https://api.github.com/repos/huggingface/transformers/issues/7275/events
https://github.com/huggingface/transformers/issues/7275
705,217,210
MDU6SXNzdWU3MDUyMTcyMTA=
7,275
Question about model configuration
{ "login": "dmortem", "id": 20540613, "node_id": "MDQ6VXNlcjIwNTQwNjEz", "avatar_url": "https://avatars.githubusercontent.com/u/20540613?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dmortem", "html_url": "https://github.com/dmortem", "followers_url": "https://api.github.com/users/dmortem/followers", "following_url": "https://api.github.com/users/dmortem/following{/other_user}", "gists_url": "https://api.github.com/users/dmortem/gists{/gist_id}", "starred_url": "https://api.github.com/users/dmortem/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dmortem/subscriptions", "organizations_url": "https://api.github.com/users/dmortem/orgs", "repos_url": "https://api.github.com/users/dmortem/repos", "events_url": "https://api.github.com/users/dmortem/events{/privacy}", "received_events_url": "https://api.github.com/users/dmortem/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "```python\r\nfrom transformers import GPT2Config\r\nconfig = GPT2Config()\r\nprint(\"Pad token id\", config.pad_token_id) # should give `None`\r\nprint(\"EOS token id\", config.eos_token_id) # should give `50256`\r\n```\r\n\r\n=> So it seems like your config is wrong. Hope this helps!", "hi @patrickvonplaten \r\nI found I used transformers version 2.4.1; after I installed 2.8.0, the output is the same as yours. Also, I found the generate function used \"eos_token_ids\" instead of \"eos_token_id\" as the stop token in 2.4.1. \"Eos_token_ids\" is not initialized in the configuration and it is 0 by default. Now in 2.8.0, the generation function doesn't use \"Eos_token_ids\" any longer.\r\n\r\nThank you!" ]
1,600
1,600
1,600
NONE
null
# ❓ Questions & Help Hi, Now I am trying the GPT-2 model to generate sentences. I have a question about the model configuration. I load the model using [this line in the provided code](https://github.com/huggingface/transformers/blob/master/examples/text-generation/run_generation.py#L221). I found "eos_token_ids" and "pad_token_id" both equal to 0. But when I checked the tokenizer, I found the token with id 0 is '!'. So when I use the provided code to generate sentences, sentences always end with '!'. Is there any mistake in the vocabulary or the configuration? Many thanks.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7275/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7275/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7274
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7274/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7274/comments
https://api.github.com/repos/huggingface/transformers/issues/7274/events
https://github.com/huggingface/transformers/pull/7274
705,199,354
MDExOlB1bGxSZXF1ZXN0NDg5OTY4MTU5
7,274
[seq2seq] make it easier to run the scripts
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "Does `python finetune.py` still work? If not we have a lot of untested bash scripts to update.", "Everything works as before. You just don't have to use `python foo.py` anymore. You can still use that way or the shortcut of `./foo.py`.\r\n\r\nAlso there is no more need to tweak `PYTHONPATH`, but again, the existing tweaks still work - it just does it twice.\r\n\r\n", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7274?src=pr&el=h1) Report\n> Merging [#7274](https://codecov.io/gh/huggingface/transformers/pull/7274?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7cbf0f722d23440f3342aafc27697b50ead5996b?el=desc) will **increase** coverage by `0.09%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7274/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7274?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7274 +/- ##\n==========================================\n+ Coverage 80.32% 80.42% +0.09% \n==========================================\n Files 174 174 \n Lines 33446 33446 \n==========================================\n+ Hits 26867 26900 +33 \n+ Misses 6579 6546 -33 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7274?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/7274/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnNtdC5weQ==) | `20.34% <0.00%> (-74.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7274/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |\n| 
[src/transformers/tokenization\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/7274/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `46.03% <0.00%> (-49.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/7274/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.52% <0.00%> (-34.77%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7274/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `70.19% <0.00%> (-23.08%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7274/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `82.81% <0.00%> (-9.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7274/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `79.03% <0.00%> (-7.80%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7274/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.00% <0.00%> (-0.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7274/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `86.87% <0.00%> (-0.36%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7274/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.18% <0.00%> (-0.36%)` | :arrow_down: |\n| ... 
and [9 more](https://codecov.io/gh/huggingface/transformers/pull/7274/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7274?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7274?src=pr&el=footer). Last update [7cbf0f7...c855ebe](https://codecov.io/gh/huggingface/transformers/pull/7274?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
This PR: * converts scripts into independent executables - easier to run. Docs updates. * tweaks `./finetune.py` and `./distillation.py` to be self-contained - no need to tweak `PYTHONPATH` via a shell script anymore * fixes a few more left-overs from `try: import .foo; except: import foo` Let me know if there any other tweaks to make. @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7274/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7274/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7274", "html_url": "https://github.com/huggingface/transformers/pull/7274", "diff_url": "https://github.com/huggingface/transformers/pull/7274.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7274.patch", "merged_at": 1600975428000 }
https://api.github.com/repos/huggingface/transformers/issues/7273
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7273/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7273/comments
https://api.github.com/repos/huggingface/transformers/issues/7273/events
https://github.com/huggingface/transformers/issues/7273
705,183,141
MDU6SXNzdWU3MDUxODMxNDE=
7,273
[Bug/Question] Write With Transformers Implementation vs. Custom Implementation
{ "login": "krrishdholakia", "id": 17561003, "node_id": "MDQ6VXNlcjE3NTYxMDAz", "avatar_url": "https://avatars.githubusercontent.com/u/17561003?v=4", "gravatar_id": "", "url": "https://api.github.com/users/krrishdholakia", "html_url": "https://github.com/krrishdholakia", "followers_url": "https://api.github.com/users/krrishdholakia/followers", "following_url": "https://api.github.com/users/krrishdholakia/following{/other_user}", "gists_url": "https://api.github.com/users/krrishdholakia/gists{/gist_id}", "starred_url": "https://api.github.com/users/krrishdholakia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/krrishdholakia/subscriptions", "organizations_url": "https://api.github.com/users/krrishdholakia/orgs", "repos_url": "https://api.github.com/users/krrishdholakia/repos", "events_url": "https://api.github.com/users/krrishdholakia/events{/privacy}", "received_events_url": "https://api.github.com/users/krrishdholakia/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,606
1,606
NONE
null
Hi, Not sure if this is a bug or perhaps a misimplementation, but i'm comparing the results of using gpt2-large on the 'Write With Transformers' Text Generation example - https://transformer.huggingface.co/doc/gpt2-large, vs. my own implementation of the text generation tool. My use-case is to generate distractor options in an MCQ environment, given a few (15-20) prior examples for the style of distractor questions to generate. The general format I am implementing is: `Question: _______ . Answer 1: ___<the correct answer>___ . Answer 2: _____<distractor 1>_____ . Answer 3: ____<distractor 2>_____` ## Write With Transformers full doc available here - https://transformer.huggingface.co/share/CZqVXdngic ### Model Config Model size - gpt2/large Top-p - 0.9 Temperature - 1 Max time - 2.3 ### Output On the 'Write With Transformers' page, I write in the examples: eg. `Question: How is calorie related to the S.I unit of that quantity?Answer 1: 1 cal = 4.2 J.Answer 2: 1 cal = 3.2 J.Answer 3: 1 cal = 10 J.` and when I try to generate predictions for the following Question-Answer pair: `Question: How is the unit horse power related to the S.I. unit of power? Answer 1: 1 H.P. = 750 W.` am able to generate a few solid distractors: `Question: How is the unit horse power related to the S.I. unit of power? Answer 1: 1 H.P. = 750 W. Answer 2: 1 H. P. = 1. 3 W. Answer 3 : 1 H . P. = 1. 5 W.` ## Custom Case ### Output Here's how I create a dataset from the set of questions I've initially written myself. 
``` def make_dataset(dataset, epochs): total_text = '<|endoftext|>' qa = [t for t in dataset] for _ in range(epochs): random.shuffle(qa) total_text += '<|endoftext|>'.join(qa) + '<|endoftext|>' return total_text ``` This is the training model params: ``` !python run_language_modeling.py \ --output_dir=output/$handle \ --overwrite_output_dir \ --overwrite_cache \ --model_type=gpt2 \ --model_name_or_path=gpt2 \ --do_train --train_data_file=$handle\_train.txt \ --logging_steps 20 \ --per_gpu_train_batch_size 1 \ --num_train_epochs 15 ``` num_return_sequences = 5 ``` for start in SENTENCES: val = !python run_generation.py \ --model_type gpt2 \ --model_name_or_path output/$handle \ --length 40 \ --num_return_sequences $num_return_sequences \ --temperature 0.23 \ --p 0.95 \ --seed $seed \ --prompt {'"<|endoftext|>' + start + '"'} generated = [val[-1-2*k] for k in range(num_return_sequences)[::-1]] print(f'\nStart of sentence: {start}') for i, g in enumerate(generated): g = g.replace('<|endoftext|>', '') print(f'* Generated #{i+1}: {g}') ``` These are my generated mcq pairs: **Generated #1: Question: How is the unit horse power related to the S.I. unit of power? Answer 1: 1 H.P. = 750 W. Answer 2: กราท.Answer 3: 50 J.** **Generated #2: Question: How is the unit horse power related to the S.I. unit of power? Answer 1: 1 H.P. = 750 W. Answer 2: กราท.Answer 3: 50 J.** **Generated #3: Question: How is the unit horse power related to the S.I. unit of power? Answer 1: 1 H.P. = 750 W. Answer 2: ______________ = 905 J.Answer 3: 50 J.** **Generated #4: Question: How is the unit horse power related to the S.I. unit of power? Answer 1: 1 H.P. = 750 W. Answer 2: กราท.Answer 3: 50 J.** **Generated #5: Question: How is the unit horse power related to the S.I. unit of power? Answer 1: 1 H.P. = 750 W. Answer 2: กราท.Answer 3: 50 J.**
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7273/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7273/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7272
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7272/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7272/comments
https://api.github.com/repos/huggingface/transformers/issues/7272/events
https://github.com/huggingface/transformers/pull/7272
705,160,757
MDExOlB1bGxSZXF1ZXN0NDg5OTQwMTgx
7,272
[Longformer, Bert, Roberta, ...] Fix multi gpu training
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7272?src=pr&el=h1) Report\n> Merging [#7272](https://codecov.io/gh/huggingface/transformers/pull/7272?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2c8ecdf8a87019c438262d8c692e1bdffe05149f?el=desc) will **increase** coverage by `1.10%`.\n> The diff coverage is `92.92%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7272/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7272?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7272 +/- ##\n==========================================\n+ Coverage 77.58% 78.69% +1.10% \n==========================================\n Files 181 181 \n Lines 35725 35781 +56 \n==========================================\n+ Hits 27719 28157 +438 \n+ Misses 8006 7624 -382 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7272?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7272/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `81.81% <0.00%> (-18.19%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7272/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `18.69% <45.45%> (-55.46%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7272/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `97.35% <100.00%> (+0.68%)` | :arrow_up: |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7272/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `84.33% <100.00%> (+0.17%)` | :arrow_up: |\n| 
[src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7272/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.48% <100.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7272/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.43% <100.00%> (+0.04%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7272/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.51% <100.00%> (-15.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7272/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `97.35% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7272/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.91% <100.00%> (+<0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7272/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `17.46% <100.00%> (-81.13%)` | :arrow_down: |\n| ... and [30 more](https://codecov.io/gh/huggingface/transformers/pull/7272/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7272?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7272?src=pr&el=footer). Last update [2c8ecdf...0d1f787](https://codecov.io/gh/huggingface/transformers/pull/7272?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "@patrickvonplaten:\r\n\r\nI am just curious about this statement:\r\n\r\n This pooling layer is however not needed by all of the \"higher\" model extensions. The ...ForSequenceClassification model extensions e.g. simply disregards the pooled output of the BaseModel.\r\n\r\nIf I remembered correctly, at least for `Bert`, the extension `TFBertForSequenceClassification` does use the pooler output. Since Roberta is similar to Bert, I think it also uses pooler output for sequence classification. Not sure about LongFormer though.", "> @patrickvonplaten:\r\n> \r\n> I am just curious about this statement:\r\n> \r\n> ```\r\n> This pooling layer is however not needed by all of the \"higher\" model extensions. The ...ForSequenceClassification model extensions e.g. simply disregards the pooled output of the BaseModel.\r\n> ```\r\n> \r\n> If I remembered correctly, at least for `Bert`, the extension `TFBertForSequenceClassification` does use the pooler output. Since Roberta is similar to Bert, I think it also uses pooler output for sequence classification. Not sure about LongFormer though.\r\n\r\nTrue, thanks for the remark -> I updated the description above :-) ", "I'm not 100% happy with adding a new config parameter the user will have to learn about instead of a solution where it just works out of the box. I would be happier with a solution where there is a parameter at init for the base model to choose whether or not to add that layer, and set it to False for the models that don't need it.\r\n\r\nI understand this might break the `from_pretrained` method with existing models, but we can probably add a test that removes any warning that might appear because of that.\r\n\r\nWhat do you think?", "> I'm not 100% happy with adding a new config parameter the user will have to learn about instead of a solution where it just works out of the box. 
I would be happier with a solution where there is a parameter at init for the base model to choose whether or not to add that layer, and set it to False for the models that don't need it.\r\n> \r\n> I understand this might break the `from_pretrained` method with existing models, but we can probably add a test that removes any warning that might appear because of that.\r\n> \r\n> What do you think?\r\n\r\nYes I thought about this as well, but was a bit afraid since some people seem to rely on `load_state_dict` in strict mode (issue here: https://github.com/huggingface/transformers/issues/6882). In strict mode there would actually be an error instead of just a warning. \r\n\r\nBut I agree that it's cleaner to not expose another config param! \r\n@LysandreJik and @sgugger - what do you think? I think I would also prefer @sgugger proposition and a slight breaking change here (one that we already had before anyways for all models)", "This is just a slight breaking change as we would have a workaround with `strict=False`, and it would only be for user who stubbornly don't want to use our methods, since `from_pretrained` would have no breaking change ;-) Also for those users, it would only be a one-time thing since I imagine they download the model by themselves and store it somewhere. Just one load with `from_pretrained` then one save to the place they want to save the model would solve the potential bug, since the model saved would not have the weights anymore.\r\n\r\n\r\n\r\n", "Agree! Is this fine for you as well @LysandreJik ?", "I think that what @sgugger proposes is cleaner for the end-user. 
Maybe we should specify somewhere that even if we aim for 0 breaking changes between minor (and minimize them for major) version changes, we can't expect to have no breaking changes for users that do not leverage our methods but rely on methods like `torch.load` and `torch.save` between versions.", "I'm talking here only for the TF part.\r\n\r\nFor TF I'm not really in favor of ignoring a layer mostly because you can easily mess up a lot of things. Let me explain.\r\n\r\nThe error raised by @patrickvonplaten `ValueError: Layer #0 (named \"bert\") expects 197 weight(s), but the saved weights have 199 element(s).` beyond saying that you try to load saved weights into the wrong layer, it simply means that the order where the weights are saved and the order where the layers are instantiated in the `__init__` must be exactly the same, otherwise it fails. Including the total number of weights.\r\n\r\nWhen training, if a layer is not used its gradients will be None and will be properly handled by the `.fit()` method anyway. This is also exactly for this reason of missing layer that when I started to develop the TF trainer I handled it with:\r\n```\r\ngradients = [\r\n g if g is not None else tf.zeros_like(v) for g, v in zip(gradients, self.model.trainable_variables)\r\n ]\r\n```\r\n\r\nSo at the end, for TF having this layer is really not an issue. 
The main issue will be when someone would like to load a PT model in TF with `TFBertModel.from_pretrained(\"bert-base-cased\", from_pt=True)` and the opposite, we have to be sure it works properly.\r\n\r\nThe quickest solution I can propose for TF is to change the way we load the models, from:\r\n```\r\nmodel.load_weights(resolved_archive_file, by_name=True)\r\n````\r\nto:\r\n```\r\nmodel.load_weights(resolved_archive_file, by_name=True, skip_mismatch=True)\r\n````\r\n\r\nThe `skip_mismatch` parameters is a boolean that skip loading the layers where there is a mismatch in the number of weights, or a mismatch in the shape of the weight. But it has an inconvenient, if it cannot load the weights of a layer it might not be able to load all the weights after the one that has failed and might results to an \"empty\" model.\r\n\r\nBTW @patrickvonplaten I cannot reproduce your error, can you tell me more on how you did it please?", "> I'm talking here only for the TF part.\r\n> \r\n> For TF I'm not really in favor of ignoring a layer mostly because you can easily mess up a lot of things. Let me explain.\r\n> \r\n> The error raised by @patrickvonplaten `ValueError: Layer #0 (named \"bert\") expects 197 weight(s), but the saved weights have 199 element(s).` beyond saying that you try to load saved weights into the wrong layer, it simply means that the order where the weights are saved and the order where the layers are instantiated in the `__init__` must be exactly the same, otherwise it fails. Including the total number of weights.\r\n> \r\n> When training, if a layer is not used its gradients will be None and will be properly handled by the `.fit()` method anyway. 
This is also exactly for this reason of missing layer that when I started to develop the TF trainer I handled it with:\r\n> \r\n> ```\r\n> gradients = [\r\n> g if g is not None else tf.zeros_like(v) for g, v in zip(gradients, self.model.trainable_variables)\r\n> ]\r\n> ```\r\n> \r\n> So at the end, for TF having this layer is really not an issue. The main issue will be when someone would like to load a PT model in TF with `TFBertModel.from_pretrained(\"bert-base-cased\", from_pt=True)` and the opposite, we have to be sure it works properly.\r\n> \r\n> The quickest solution I can propose for TF is to change the way we load the models, from:\r\n> \r\n> ```\r\n> model.load_weights(resolved_archive_file, by_name=True)\r\n> ```\r\n> \r\n> to:\r\n> \r\n> ```\r\n> model.load_weights(resolved_archive_file, by_name=True, skip_mismatch=True)\r\n> ```\r\n> \r\n> The `skip_mismatch` parameters is a boolean that skip loading the layers where there is a mismatch in the number of weights, or a mismatch in the shape of the weight. But it has an inconvenient, if it cannot load the weights of a layer it might not be able to load all the weights after the one that has failed and might results to an \"empty\" model.\r\n> \r\n> BTW @patrickvonplaten I cannot reproduce your error, can you tell me more on how you did it please?\r\n\r\nJust to be clear to problem is the following: `TFBertModel` and `BertModel` have a pooling layer that is not needed by classes such as `BertForMaskedLM` and `TFBertForMaskedLM`. What happens in practice is that during the forward pass of `BertForMaskedLM` the `BertModel` pools the hidden states, but `BertForMaskedLM` does not need the pooled output and simply ignores it. In PyTorch this leads to problems with multi-gpu training as shown in the issue linked above. Now, the best and obvious solution here is to remove this layer for all Bert models that don't need it. 
Removing a layer in a model architecture is dangerous however as all previously serialized weights for this architecture include the removed layer. This can lead to problems with backwards compatibility when loading serialized weight files of previous transformer versions.\r\n\r\nNow, in PyTorch this is no problem as such single layers \"[bert.pooler]\" are simply ignored, but in TF there does not seem to be such an option. Using `skip_mismatch=True` would simply lead to not loading any weights of the whole `TFBertMainLayer` which is obviously not possible. So I do not see a possibility to use `model.load_weights` in TF **and** removing the unnecessary pooling layer for TFBert, ... \r\n\r\nI was wondering if there is a better method or way to load model weights than `model.load_weights`, but could not find one and it seems like to big of a change for such a small issue.\r\n\r\nTherefore, I think the best option is to only remove it for PT and leave it for TF which is done in this PR currently because:\r\n- 100% backwards compatibility\r\n- Solves the issue, as TF does not seem to have a problem with multi-gpu inference in its current version.\r\n- Verified than loading into PT from TF and vice-versa works", ">Now, in PyTorch this is no problem as such single layers \"[bert.pooler]\" are simply ignored, but in TF there does not seem to be such an option. Using skip_mismatch=True would simply lead to not loading any weights of the whole TFBertMainLayer which is obviously not possible. 
So I do not see a possibility to use model.load_weights in TF and removing the unnecessary pooling layer for TFBert, ...\r\n> I was wondering if there is a better method or way to load model weights than model.load_weights, but could not find one and it seems like to big of a change for such a small issue.\r\n\r\nIndeed, there is not such way unfortunately, or at least to do it properly in a as nice as in the PT manner.\r\n\r\n>Therefore, I think the best option is to only remove it for PT and leave it for TF which is done in this PR currently because:\r\n> 100% backwards compatibility\r\n> Solves the issue, as TF does not seem to have a problem with multi-gpu inference in its current version.\r\n> Verified than loading into PT from TF and vice-versa works\r\n\r\n100% agree" ]
1,600
1,601
1,601
MEMBER
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #6256. Issue #6256 shows that distributed training is not possible when the model has layers that are not used at all. Bert, Roberta and Longformer all have a "pooling_layer" added to their BaseModel (`BertModel`, `RobertaModel`, `LongformerModel`). This pooling layer is however not needed by all the "higher" model extensions. The `LongformerForSequenceClassification`, `...ForTokenClassification` model extensions e.g. simply disregards the pooled output of the BaseModel. Looking back, I think the pooling layer should not have been part of the BaseModel classes, but is now not really possible to revert the design choice anymore given the number of `BertModel`s already trained with this layer. Ideally, we should both in TF and PT automatically never instantiate a layer that is never needed. This is now implemented in PyTorch. In TF loading pretrained weights depends on the `load_weights` function here: https://github.com/huggingface/transformers/blob/38f17037957d325b5540a8031f065e6f23c9e265/src/transformers/modeling_tf_utils.py#L614 The load_weights function **cannot** load a layer which has a different number of weights in the `resolved_archive_file` as `model`. This means that if we would remove the `pooler` layer for `TFBertForMaskedLM` because it is not needed, the following command: ``` python -c 'from transformers import TFBertForMaskedLM; TFBertForMaskedLM.from_pretrained("bert-base-cased")' ``` would throw the following error: ``` ValueError: Layer #0 (named "bert") expects 197 weight(s), but the saved weights have 199 element(s). ``` At the moment I don't see how we could keep backwards compatibility in TF without completely replacing the `from_pretrained` functionality and even then I don't think it will be easy to load hdf5 files with more layers than expected in TF. 
So I propose the following solution: Only implement the feature for PT and leave the layer in TF for now because a) It solves the issue b) is 100% backwards compatible. It would be nice to remove the pooling layers for TF in a future PR, but I think we would have to rewrite `from_pretrained` for TF models in this case (cc @jplu ) @sgugger @LysandreJik - what do you think?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7272/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7272/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7272", "html_url": "https://github.com/huggingface/transformers/pull/7272", "diff_url": "https://github.com/huggingface/transformers/pull/7272.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7272.patch", "merged_at": 1601058801000 }
https://api.github.com/repos/huggingface/transformers/issues/7271
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7271/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7271/comments
https://api.github.com/repos/huggingface/transformers/issues/7271/events
https://github.com/huggingface/transformers/issues/7271
705,146,371
MDU6SXNzdWU3MDUxNDYzNzE=
7,271
Distilbert classification
{ "login": "akald", "id": 59920376, "node_id": "MDQ6VXNlcjU5OTIwMzc2", "avatar_url": "https://avatars.githubusercontent.com/u/59920376?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akald", "html_url": "https://github.com/akald", "followers_url": "https://api.github.com/users/akald/followers", "following_url": "https://api.github.com/users/akald/following{/other_user}", "gists_url": "https://api.github.com/users/akald/gists{/gist_id}", "starred_url": "https://api.github.com/users/akald/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akald/subscriptions", "organizations_url": "https://api.github.com/users/akald/orgs", "repos_url": "https://api.github.com/users/akald/repos", "events_url": "https://api.github.com/users/akald/events{/privacy}", "received_events_url": "https://api.github.com/users/akald/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,606
1,606
NONE
null
Is Maximum Entropy the same as Cross-entropy loss in the context of Distilbert (happy or sad sentiment label) classification?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7271/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7271/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7270
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7270/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7270/comments
https://api.github.com/repos/huggingface/transformers/issues/7270/events
https://github.com/huggingface/transformers/issues/7270
705,134,185
MDU6SXNzdWU3MDUxMzQxODU=
7,270
Multitask pre-training setting
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "organizations_url": "https://api.github.com/users/ghost/orgs", "repos_url": "https://api.github.com/users/ghost/repos", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "received_events_url": "https://api.github.com/users/ghost/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @antoniomastro1996,\r\n\r\nSure you can train T5 on a mixture of tasks. If you frame each task as a text-to-text task and mix it with *unsupervised denoising* training examples, you would have the pre-training setting used in the original T5 paper.\r\n\r\nThis might also help: https://github.com/huggingface/datasets/issues/217", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,606
1,606
NONE
null
# 🚀 Feature request Hi everybody! Is it possible to pre-train a T5 model on a mixture of tasks? Just to be more clear, let's assume I have created a dataset by following the example provided here: https://huggingface.co/transformers/model_doc/t5.html Specifically I'm referring to the _**Unsupervised denoising training**_ and _**Supervised training**_. What I've done is to construct a huge dataset where each task is concatenated to the previous one. Is this enough to experiment with the multitask pretraining setting?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7270/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7269
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7269/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7269/comments
https://api.github.com/repos/huggingface/transformers/issues/7269/events
https://github.com/huggingface/transformers/pull/7269
705,110,411
MDExOlB1bGxSZXF1ZXN0NDg5OTA0MTE1
7,269
Add model cards for new pre-trained BERTweet-COVID19 models
{ "login": "datquocnguyen", "id": 2412555, "node_id": "MDQ6VXNlcjI0MTI1NTU=", "avatar_url": "https://avatars.githubusercontent.com/u/2412555?v=4", "gravatar_id": "", "url": "https://api.github.com/users/datquocnguyen", "html_url": "https://github.com/datquocnguyen", "followers_url": "https://api.github.com/users/datquocnguyen/followers", "following_url": "https://api.github.com/users/datquocnguyen/following{/other_user}", "gists_url": "https://api.github.com/users/datquocnguyen/gists{/gist_id}", "starred_url": "https://api.github.com/users/datquocnguyen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/datquocnguyen/subscriptions", "organizations_url": "https://api.github.com/users/datquocnguyen/orgs", "repos_url": "https://api.github.com/users/datquocnguyen/repos", "events_url": "https://api.github.com/users/datquocnguyen/events{/privacy}", "received_events_url": "https://api.github.com/users/datquocnguyen/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7269?src=pr&el=h1) Report\n> Merging [#7269](https://codecov.io/gh/huggingface/transformers/pull/7269?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4f6e52574248636352a746cfe6cc0b13cf3eb7f9?el=desc) will **increase** coverage by `2.63%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7269/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7269?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7269 +/- ##\n==========================================\n+ Coverage 78.63% 81.27% +2.63% \n==========================================\n Files 174 174 \n Lines 33446 33446 \n==========================================\n+ Hits 26300 27182 +882 \n+ Misses 7146 6264 -882 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7269?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/7269/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnNtdC5weQ==) | `20.34% <0.00%> (-74.90%)` | :arrow_down: |\n| [src/transformers/tokenization\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/7269/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `46.03% <0.00%> (-49.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7269/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `75.91% <0.00%> (-21.17%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7269/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7269/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `88.77% <0.00%> (-2.55%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7269/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.70% <0.00%> (-1.01%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7269/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.27% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7269/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <0.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7269/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.20% <0.00%> (+0.27%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7269/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+10.00%)` | :arrow_up: |\n| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/7269/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7269?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7269?src=pr&el=footer). Last update [4f6e525...2100262](https://codecov.io/gh/huggingface/transformers/pull/7269?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thank you!", "Thanks a lot @julien-c " ]
1,600
1,600
1,600
CONTRIBUTOR
null
Two new pre-trained models "vinai/bertweet-covid19-base-cased" and "vinai/bertweet-covid19-base-uncased" result from further pre-training the pre-trained model "vinai/bertweet-base" on a corpus of 23M COVID-19 English Tweets for 40 epochs.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7269/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7269/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7269", "html_url": "https://github.com/huggingface/transformers/pull/7269", "diff_url": "https://github.com/huggingface/transformers/pull/7269.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7269.patch", "merged_at": 1600683171000 }
https://api.github.com/repos/huggingface/transformers/issues/7268
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7268/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7268/comments
https://api.github.com/repos/huggingface/transformers/issues/7268/events
https://github.com/huggingface/transformers/pull/7268
705,089,121
MDExOlB1bGxSZXF1ZXN0NDg5ODg4NzYx
7,268
Fix typo in model name
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7268?src=pr&el=h1) Report\n> Merging [#7268](https://codecov.io/gh/huggingface/transformers/pull/7268?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4f6e52574248636352a746cfe6cc0b13cf3eb7f9?el=desc) will **increase** coverage by `3.11%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7268/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7268?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7268 +/- ##\n==========================================\n+ Coverage 78.63% 81.75% +3.11% \n==========================================\n Files 174 174 \n Lines 33446 33446 \n==========================================\n+ Hits 26300 27343 +1043 \n+ Misses 7146 6103 -1043 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7268?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7268/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `75.91% <0.00%> (-21.17%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7268/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7268/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.87% <0.00%> (-7.04%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7268/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7268/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `88.77% <0.00%> (-2.55%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7268/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/7268/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7268/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.96% <0.00%> (-0.45%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7268/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.49% <0.00%> (-0.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7268/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.27% <0.00%> (-0.17%)` | :arrow_down: |\n| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/7268/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7268?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7268?src=pr&el=footer). Last update [4f6e525...0d057d8](https://codecov.io/gh/huggingface/transformers/pull/7268?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Hey @mrm8488,\r\n\r\nThanks a lot for the fix!" ]
1,600
1,600
1,600
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7268/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7268/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7268", "html_url": "https://github.com/huggingface/transformers/pull/7268", "diff_url": "https://github.com/huggingface/transformers/pull/7268.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7268.patch", "merged_at": 1600621950000 }
https://api.github.com/repos/huggingface/transformers/issues/7267
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7267/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7267/comments
https://api.github.com/repos/huggingface/transformers/issues/7267/events
https://github.com/huggingface/transformers/pull/7267
705,080,408
MDExOlB1bGxSZXF1ZXN0NDg5ODgyMzE2
7,267
[Bug fix] Fixed target_mapping preparation for XLNet (Pytorch)
{ "login": "guillaume-be", "id": 27071604, "node_id": "MDQ6VXNlcjI3MDcxNjA0", "avatar_url": "https://avatars.githubusercontent.com/u/27071604?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guillaume-be", "html_url": "https://github.com/guillaume-be", "followers_url": "https://api.github.com/users/guillaume-be/followers", "following_url": "https://api.github.com/users/guillaume-be/following{/other_user}", "gists_url": "https://api.github.com/users/guillaume-be/gists{/gist_id}", "starred_url": "https://api.github.com/users/guillaume-be/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guillaume-be/subscriptions", "organizations_url": "https://api.github.com/users/guillaume-be/orgs", "repos_url": "https://api.github.com/users/guillaume-be/repos", "events_url": "https://api.github.com/users/guillaume-be/events{/privacy}", "received_events_url": "https://api.github.com/users/guillaume-be/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7267?src=pr&el=h1) Report\n> Merging [#7267](https://codecov.io/gh/huggingface/transformers/pull/7267?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4f6e52574248636352a746cfe6cc0b13cf3eb7f9?el=desc) will **decrease** coverage by `0.26%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7267/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7267?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7267 +/- ##\n==========================================\n- Coverage 78.63% 78.37% -0.27% \n==========================================\n Files 174 174 \n Lines 33446 33446 \n==========================================\n- Hits 26300 26213 -87 \n- Misses 7146 7233 +87 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7267?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7267/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `83.42% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7267/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |\n| [src/transformers/activations\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7267/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% <0.00%> (-20.84%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7267/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7267/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.27% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7267/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.20% <0.00%> (+0.27%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7267/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.18% <0.00%> (+0.35%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7267/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.71% <0.00%> (+4.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7267/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+10.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7267/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.97% <0.00%> (+23.37%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/7267/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7267?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7267?src=pr&el=footer). Last update [4f6e525...1defddf](https://codecov.io/gh/huggingface/transformers/pull/7267?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
The PyTorch version of XLNet does not create the `target_mapping` correctly when the batch size > 1. For generation, the `target_mapping` should be made of zeros, except at the last position (the token to be predicted). The current PyTorch implementation is only non-zero at the last position of the first batch element: https://github.com/huggingface/transformers/blob/4f6e52574248636352a746cfe6cc0b13cf3eb7f9/src/transformers/modeling_xlnet.py#L1316 This causes the model to be incorrect when multiple inputs are passed to the model, or when beam search is turned on. This PR fixes the issue by setting the last (sequence) position of `target_mapping` to 1 for all batch positions. For reference, the Tensorflow implementation seems to be correct already: https://github.com/huggingface/transformers/blob/4f6e52574248636352a746cfe6cc0b13cf3eb7f9/src/transformers/modeling_tf_xlnet.py#L1158
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7267/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7267", "html_url": "https://github.com/huggingface/transformers/pull/7267", "diff_url": "https://github.com/huggingface/transformers/pull/7267.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7267.patch", "merged_at": 1600678432000 }
https://api.github.com/repos/huggingface/transformers/issues/7266
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7266/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7266/comments
https://api.github.com/repos/huggingface/transformers/issues/7266/events
https://github.com/huggingface/transformers/issues/7266
705,074,917
MDU6SXNzdWU3MDUwNzQ5MTc=
7,266
LXMERT pre-training tasks
{ "login": "LetiP", "id": 16118202, "node_id": "MDQ6VXNlcjE2MTE4MjAy", "avatar_url": "https://avatars.githubusercontent.com/u/16118202?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LetiP", "html_url": "https://github.com/LetiP", "followers_url": "https://api.github.com/users/LetiP/followers", "following_url": "https://api.github.com/users/LetiP/following{/other_user}", "gists_url": "https://api.github.com/users/LetiP/gists{/gist_id}", "starred_url": "https://api.github.com/users/LetiP/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LetiP/subscriptions", "organizations_url": "https://api.github.com/users/LetiP/orgs", "repos_url": "https://api.github.com/users/LetiP/repos", "events_url": "https://api.github.com/users/LetiP/events{/privacy}", "received_events_url": "https://api.github.com/users/LetiP/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Tagging the LXMERT implementation author @eltoto1219 ", "Hi, \"unc-nlp/lxmert-base-uncased\" was trained with all tasks specified in the paper (as aforementioned). We have benchmarked the pre-trained model to make sure it reaches the same performance on all QA tasks. If you do run into any troubles though, please let me know!", "Hello @eltoto1219, thank you for the answer! I suppose it was a weird question from my part, since I was asking this to make sure that I am loading a pre-trained LXMERT model and not some random weights. Especially because I look at the `output_lxmert['cross_relationship_score']` of COCO images and captions (so not on some out of distribution images and captions), after I loaded LXMERT with the aforementioned code `lxmert_base = LxmertForPreTraining.from_pretrained(\"unc-nlp/lxmert-base-uncased\")`. It seems that **on cross-modality matching LXMERT performs with 50% accuracy** (random guessing). So I wanted to make sure that I load pre-trained weights on (4) cross-modality matching in the first place.\r\n\r\n**New question**: Do you know how it can be that LXMERT randomly guesses on cross-modality matching even it was pre-trained to deliver a score (after the Softmax, of course) of smaller 0.5 if the caption is not describing the image and a score bigger 0.5 it the caption and the image match?", "+1 I am also interested in the question/answer!", "I also meet the problem. Does any have ideas about why this happends? @eltoto1219 ", "Hello @LetiP, \r\n\r\nWere you loading images from URLs or locally from image files? I have noticed a discrepancy in how they are processed in FRCNN and I was getting different visual features. I opened an issue about it here: https://github.com/huggingface/transformers/issues/8333\r\n\r\nBest,\r\nEce", "@ecekt Well observed, thank you very much!\r\nI did not notice that difference between URLs and local files, because I did not look closely at the features regarding this aspect. 
I have conducted my experiments over many samples with local files. However, I tested also with 10-20 images from URLs too and there I observed a similar random guessing behavior regarding the `cross_relationship_score`.\r\n\r\nNow I took a closer look at this and with your proposed solution in [#8333](https://github.com/huggingface/transformers/issues/8333) I see how the features change, but not the performance. Still random guessing.", "Hi @LetiP! Super interesting question. I was curious so I run a test on 1000 COCO-Val images with 5013 captions in total.\r\n\r\nUsing the original implementation (without considering #8333, i.e. with wrong color ordering for local files) I received 56 % correct classifications for images with correct captions - so the same results you got :+1:. Interestingly this model gets 99.7 % correct for wrong image-caption combinations (image with randomly drawn COCO caption). Hence we have 56 % Recall, but 99.7 % Specificity.\r\n\r\nFixing the bug noted in #8333 (see code below), Recall goes up to 71 %, Specificity is at 99.2 %, Precision at 98 %, and Accuracy is at 85 %. From this result I follow that `\"unc-nlp/lxmert-base-uncased\"` was trained on *cross-modality matching*. :)\r\n\r\n```python\r\n# transformers/examples/lxmert/utils.py\r\ndef img_tensorize(im, input_format=\"RGB\"):\r\n [...]\r\n assert img is not None, f\"could not connect to: {im}\"\r\n img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # <=== indent this line, so it works for local and url images.\r\n if input_format == \"RGB\":\r\n [...]\r\n```\r\n", "Hello @phiyodr ,\r\n\r\nlooking at your results, I can not but keep wondering: How did you decide whether the classification is correct or not?\r\n\r\nI am asking because the cross_relationship_score is a tensor with two logit entries for an image-text pair. 
\r\n\r\nHow do you decide which logit represents a match and which one a mismatch?\r\n", "Indeed the documentation is not super clear whether the first or second value in `cross_relationship_score` means `is_match`.\r\n\r\nI used 5013 correct image-caption pairs and 5013 wrong image-caption combinations, then I made a confusion matrix and decided whether the first or second value of the tensor is more plausible for `is_match`. \r\n\r\nWithout considering #8333:\r\n* Using the first entry as `is_match` receives an accuracy of 22 %.\r\n* Using the second entry as `is_match` receives an accuracy of 78 %. Recall=56 %, Specificity=99.7 %, TP=2830, FN=2183, FP=14, TN=5002). You looked at Recall which is indeed close to random guessing.\r\n\r\nConsidering #8333:\r\n* Using the first entry as `is_match` receives an accuracy of 15 %.\r\n* Using the second entry as `is_match` receives an accuracy of 85 %.\r\n\r\nHence the first value is likely to be `no_match` and the second value is likely to be `is_match`.", "Hello @phiyodr , thank you for your quick response!\r\n\r\n### I think that this issue evolved into the question:\r\n> How do you decide which logit represents a match and which one a mismatch?\r\n\r\nI understand that you pick the logit delivering better results. But I followed the [documentation](https://huggingface.co/transformers/model_doc/lxmert.html) which (I understand) assigns the first logit to `is_match` (True).\r\n> cross_relationship_score – (torch.FloatTensor of shape (batch_size, 2)): Prediction scores of the textual matching objective (classification) head (scores of True/False continuation before SoftMax).\r\n\r\n@eltoto1219 Do you perhaps know which logit in the `output_lxmert['cross_relationship_score']` represents a match and which one a mismatch? How to interpret the documentation?", "Yeah, the docu is actually vice versa. \r\nActually looking at specificity makes more sense than accuray: Specificity of 99.7% for the second value vs. 
0.3 % for the first. ", "Still waiting for confirmation about what is happening here, about the way that the model was trained and which logit was intended to predict the match. I do not see any reason why one can simply invert the logits with wishful thinking.", "Is there any entry-level example of Lxmert? Following example from [Lxmert](https://huggingface.co/transformers/model_doc/lxmert.html).\r\n```python\r\nfrom transformers import LxmertTokenizer, LxmertModel\r\nimport torch\r\n\r\ntokenizer = LxmertTokenizer.from_pretrained('unc-nlp/lxmert-base-uncased')\r\nmodel = LxmertModel.from_pretrained('unc-nlp/lxmert-base-uncased')\r\n\r\ninputs = tokenizer(\"Hello, my dog is cute\", return_tensors=\"pt\")\r\noutputs = model(**inputs)\r\n\r\nlast_hidden_states = outputs.last_hidden_state\r\n```\r\ncomes up \r\n```\r\nFile \"/Users/yezli/miniconda3/lib/python3.8/site-packages/transformers/models/lxmert/modeling_lxmert.py\", line 933, in forward\r\n assert visual_feats is not None, \"`visual_feats` cannot be `None`\"\r\nAssertionError: `visual_feats` cannot be `None` \r\n```\r\n", "Hi @LetiP ! I am super sorry for all of the confusion regarding the correct/incorrect logit for the cross_relationship score. Per the documentation you pointed out, it is indeed a bit ambiguous in where the correct position of the \"is_matched\" index is. However, while pre-training, one must provide the sentence/image matching labels if including cross modality matching in the loss regime. \r\nIt is listed [here](https://huggingface.co/transformers/model_doc/lxmert.html) that:\r\n\r\n```\r\nmatched_label (tf.Tensor of shape (batch_size,), optional) –\r\n\r\nLabels for computing the whether or not the text input matches the image (classification) loss. 
Input should be a sequence pair (see input_ids docstring) Indices should be in [0, 1]:\r\n\r\n0 indicates that the sentence does not match the image,\r\n\r\n1 indicates that the sentence does match the image.\r\n```\r\nThe pytorch loss that was used can be found [here](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html?highlight=crossentropyloss#torch.nn.CrossEntropyLoss). This should also indicate the assignments of the indicies. \r\n\r\n\r\nThus, if the sentence does match the image, the model will have maximized the likelihood of the first index to be 1. In the case of a mismatch, the zero'th index will have been maximized to be 1 (aka True). If for some reason this is not occurring properly, please let me know!\r\n\r\nHey @yezhengli-Mr9 ! That is also a very misleading example for lxmert as one must provide the visual-position (normalized bounding boxes) and the FRCNN ROI-pooled visual features in order for the model to run. For all optional/non-optional inputs, please see the [docs](https://huggingface.co/transformers/model_doc/lxmert.html)! I should be able to fix that sometime soon. For now, if needed, you can refer to the LXMERT [pytests](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_lxmert.py).\r\n\r\n---\r\n\r\nI will be making a pull-request to remove the image tensorization from urls as it seems to be outside the scope of the demo and will remove one source of error. I will also formalize the feature-extraction code as using a batch size larger than one entails image padding which, consequently, lowers the quality of the image features. Perhaps, I can include that in an example too.", "Hello @eltoto1219 , thank you, this is the answer I have been looking for! Indeed, the behavior is exactly as you say:\r\n> In the case of a mismatch, the zero'th index will have been maximized to be 1 (aka True)\r\n\r\nIt is good to hear which part of the documentation is correct (or as you say, not ambiguous 😉). 
For helping everyone and to avoid any confusion, I would suggest to adapt the documentation of `cross_relationship_score` accordingly.\r\nJust replacing:\r\n> ... (scores of True/False continuation before SoftMax)\r\n\r\nwith \r\n\r\n> (scores of False (index 0)/True (index 1) continuation before SoftMax)\r\n\r\nwould to the job.", "+1 for the suggested documentation change. I was trying to figure this out as well. ", "Feel free to open a PR to update the documentation, we'll glady merge it!", "Hello @LetiP @eltoto1219 @ecekt \r\n\r\nI tried what I believe is the same experiment - predict match/no-match over the MSCOCO 2017 val set. Specifically, I used all image-caption pairs in the val set (25014 pairs over 5000 images) and sampled captions from random different images to create an equal number of negative examples (leading to a total of 50028 examples). I am getting the following results using this \r\nNumber of examples = 50028\r\nNumber of true positives (TP) = 17485\r\nNumber of false positives (FP) = 17485\r\nNumber of true negatives (TN) = 7529\r\nNumber of false negatives (FN) = 7529\r\nAccuracy = 0.5\r\nPrecision = 0.5\r\nRecall = 0.6990085552090829\r\nF1 = 0.5829887970125367\r\nThis is a higher recall than @LetiP and @ecekt but a much lower sensitivity and precision. \r\n\r\nI am trying to understand what caused the differences. I used [this script](https://drive.google.com/file/d/1er2axVyGj8eW84QBGrV0dqTmKbxyS8F7/view) provided by @eltoto1219 in [#8769](https://github.com/huggingface/transformers/issues/8769#issuecomment-740206982) to extract image features and I am getting the prediction by performing\r\n`pred = torch.argmax(softmax(output[\"cross_relationship_score\"])).item()\r\n `\r\nand am treating 0 as `no-match` and 1 as `match`.\r\nI did confirm that there is no color format issue in the feature extraction. Also, as in the demo I am \r\n\r\n- Loading the model and tokenizer from `unc-nlp/lxmert-base-uncased`. 
\r\n- Not performing any preprocessing on the caption (not even lower casing) as this did not seem required based on the demo and tokenizer source.\r\n\r\nWould appreciate any help trying to figure out what is causing these differences? So far I have not tested on VQA/ GQA.\r\n\r\nThanks!", "Hello @LetiP @eltoto1219 @ecekt\r\n\r\nReminder if any of you have thoughts/ suggestions about my question.\r\n\r\nThanks!\r\n", "Hello @aishwaryap ,\r\n\r\nI could **exactly** reproduce the numbers of @phiyodr 🥳 (here the relevant excerpt):\r\n \r\n> Without considering #8333:\r\n> \r\n> * Using the second entry as `is_match` receives an accuracy of 78 %. Recall=56 %, Specificity=99.7 %, TP=2830, FN=2183, FP=14, TN=5002).\r\n> \r\n> Considering #8333:\r\n> \r\n> * Using the second entry as `is_match` receives an accuracy of 85 %.\r\n\r\n[The script](https://drive.google.com/file/d/1er2axVyGj8eW84QBGrV0dqTmKbxyS8F7/view) you are referring to for image feature extraction is unknown to me, therefore I did not use it. For reading in the images I closely followed the original LXMERT demo in [this Colab Notebook](https://colab.research.google.com/drive/18TyuMfZYlgQ_nXo-tr8LCnzUaoX0KS-h?usp=sharing).\r\n\r\nCan you reproduce my and @phiyodr 's numbers with the data loading code from that Notebook as well?\r\n\r\nSorry for the late answer, I had too much going on.", "Hi @LetiP,\r\n\r\nThanks a lot for sharing your script! \r\n\r\nUnfortunately, I am not able to reproduce those numbers using that notebook. Using the first 1000 val images on MSCOCO with all their paired captions as positive examples, and one randomly sampled caption from a different image as negative examples, I get \r\nTotal number of examples tested = 10004\r\ntp = 3546\r\nfp = 3546\r\ntn = 1456\r\nfn = 1456\r\nAccuracy = 0.5\r\nPrecision = 0.5\r\nRecall = 0.7089164334266294\r\nSpecificity = 0.29108356657337064\r\nThis is a significantly higher recall than what the two of you got but a much lower specificity. 
Note that this did require me to change the transformers source to prevent color format conversion for local images ([#8333](https://github.com/huggingface/transformers/issues/8333)). Without that change, recall was 55.6% and specificity was 44.37%.\r\n\r\nI did have to modify the script you provided in order to run on a remote server, load MSCOCO images and sample negatives, but I don't think I changed anything that should result in different numbers. Just in case, here are [the python script](https://drive.google.com/file/d/1ToyeDOIrCWZkBhF6RaKWcEPX5rR6jnRW/view?usp=sharing) used, and [the bash script](https://drive.google.com/file/d/1IT82A5S2RyI36pOsbWa5g-ulycKVAeTr/view?usp=sharing) which shows other steps. \r\n\r\nOverall I'm still confused about why I'm unable to reproduce your and @phiyodr 's results and would appreciate any suggestions.\r\nTagging @eltoto1219 as well in case he can provide further insight. \r\n\r\nThanks a lot! \r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,600
1,622
1,622
NONE
null
# ❓ Questions & Help Hello, congrats to all contributors for the awesome work with LXMERT! It is exciting to see multimodal transformers coming to hugginface/transformers. Of course, I immediately tried it out and played with the demo. ## LXMERT pre-trained model, trained on what exactly? __Question:__ Does the line `lxmert_base = LxmertForPreTraining.from_pretrained("unc-nlp/lxmert-base-uncased")` load an already pre-trained LXMERT model on the tasks enumerated in the original paper _“(1) masked crossmodality language modeling, (2) masked object prediction via RoI-feature regression, (3) masked object prediction via detected-label classification, (4) cross-modality matching, and (5) image question answering.”_ [(Tan & Bansal, 2019)](https://arxiv.org/pdf/1908.07490.pdf)? If the pre-training tasks are not all the ones from the paper, would that line load pre-trained weights at all and if yes, on what? Thanks in advance! 🤗 <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**: Here is the link to the [hugginface forum](https://discuss.huggingface.co/t/lxmert-pre-trained-model/1195).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7266/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7266/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7265
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7265/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7265/comments
https://api.github.com/repos/huggingface/transformers/issues/7265/events
https://github.com/huggingface/transformers/issues/7265
705,066,362
MDU6SXNzdWU3MDUwNjYzNjI=
7,265
Feature Request: Support Longformer 3D attention mask ?
{ "login": "Maybewuss", "id": 38156589, "node_id": "MDQ6VXNlcjM4MTU2NTg5", "avatar_url": "https://avatars.githubusercontent.com/u/38156589?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Maybewuss", "html_url": "https://github.com/Maybewuss", "followers_url": "https://api.github.com/users/Maybewuss/followers", "following_url": "https://api.github.com/users/Maybewuss/following{/other_user}", "gists_url": "https://api.github.com/users/Maybewuss/gists{/gist_id}", "starred_url": "https://api.github.com/users/Maybewuss/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Maybewuss/subscriptions", "organizations_url": "https://api.github.com/users/Maybewuss/orgs", "repos_url": "https://api.github.com/users/Maybewuss/repos", "events_url": "https://api.github.com/users/Maybewuss/events{/privacy}", "received_events_url": "https://api.github.com/users/Maybewuss/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @Maybewuss,\r\ncould you post a code snippet example, where you show the use case of using a 3D attention mask? :-) \r\nWhat would you use it for?", "> Hey @Maybewuss,\r\n> could you post a code snippet example, where you show the use case of using a 3D attention mask? :-)\r\n> What would you use it for?\r\n\r\n@patrickvonplaten \r\nI want each word to foucs different words in an sentence, for example, in the generate task, the attntion mask is a lower triangular 3d matrix. Besides, other models like Bert and Roberta support 3d mask.\r\n\r\n```\r\nmodel = Lonformer.from_pretrained('model_path')\r\ninput_ids = torch.randint(20, (3, 5), dtype=int)\r\nattn_masks = torch.randint(2, (3, 5, 5), dtype=int)\r\nmodel(input_ids, attn_masks)\r\n```", "@Maybewuss , I think this feature should actually already be implemented for Longformer.\r\n\r\nCan you provide me with a code snippet with the error message?\r\nThe code snippet should include a public model and functional code so that I can copy-paste into my console to see the error. Otherwise, it will be difficult for me to help you. Thanks! ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I used 3d attention mask in the LongFormer, but also failed. I find that \r\nthe code \r\nattention_mask = nn.functional.pad(\r\n attention_mask, (0, padding_len), value=0\r\n ) # no attention on the padding tokens \r\nin line 1626 in modeling_longformer may be not support the 3D attention mask. Please correct me if I am wrong" ]
1,600
1,682
1,607
NONE
null
I try to use 3d attntion mask, but failed.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7265/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7265/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7264
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7264/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7264/comments
https://api.github.com/repos/huggingface/transformers/issues/7264/events
https://github.com/huggingface/transformers/issues/7264
705,060,880
MDU6SXNzdWU3MDUwNjA4ODA=
7,264
Changing learning rate for BertModelforTokenClassification
{ "login": "YojanaGadiya", "id": 45199062, "node_id": "MDQ6VXNlcjQ1MTk5MDYy", "avatar_url": "https://avatars.githubusercontent.com/u/45199062?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YojanaGadiya", "html_url": "https://github.com/YojanaGadiya", "followers_url": "https://api.github.com/users/YojanaGadiya/followers", "following_url": "https://api.github.com/users/YojanaGadiya/following{/other_user}", "gists_url": "https://api.github.com/users/YojanaGadiya/gists{/gist_id}", "starred_url": "https://api.github.com/users/YojanaGadiya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YojanaGadiya/subscriptions", "organizations_url": "https://api.github.com/users/YojanaGadiya/orgs", "repos_url": "https://api.github.com/users/YojanaGadiya/repos", "events_url": "https://api.github.com/users/YojanaGadiya/events{/privacy}", "received_events_url": "https://api.github.com/users/YojanaGadiya/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "You can specify the learning rate in the optimizer, see the example using SGD below (ref: https://discuss.pytorch.org/t/different-learning-rate-for-a-specific-layer/33670)\r\n\r\n```\r\noptim.SGD([\r\n {'params': model.base.parameters()},\r\n {'params': model.classifier.parameters(), 'lr': 1e-3}\r\n ], lr=1e-2)\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,607
1,607
NONE
null
Dear all, I wanted to set a different learning rate for the linear layer and the Bert model for a BertModelforTokenClassification. How can I do so? This change should be done after loading a locally saved BertModelforTokenClassification model. Thank You.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7264/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7264/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7263
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7263/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7263/comments
https://api.github.com/repos/huggingface/transformers/issues/7263/events
https://github.com/huggingface/transformers/pull/7263
705,050,524
MDExOlB1bGxSZXF1ZXN0NDg5ODYwOTMx
7,263
[s2s] adjust finetune + test to work with fsmt
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 2357479466, "node_id": "MDU6TGFiZWwyMzU3NDc5NDY2", "url": "https://api.github.com/repos/huggingface/transformers/labels/fsmt", "name": "fsmt", "color": "d0e884", "default": false, "description": "" } ]
closed
false
null
[]
[ "Looks good. Will wait to merge.\r\nI bet there is a clever way to use `nn.Module.apply` to freeze all submodules that match `isinstance(module, nn.Embedding)` and delete all the conditionals.\r\n", "Whichever way we do it it'd be good to abstract it into some helper function perhaps in `testing_utils.py`, that gets as the argument either `module_name` or the model object so it can return `model_type` (t5/bart/fsmt/etc.) auto-magically and then another helper function that will return the corresponding to `model_type` `task` and whatever other unique things we need that are currently being derived in conditionals. ", "PR #7224 has been merged, this PR has been adjusted, so it's now good to go.", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7263?src=pr&el=h1) Report\n> Merging [#7263](https://codecov.io/gh/huggingface/transformers/pull/7263?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/63276b76d4fb54d096b491e89632859aed6b4364?el=desc) will **increase** coverage by `0.42%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7263/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7263?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7263 +/- ##\n==========================================\n+ Coverage 79.72% 80.14% +0.42% \n==========================================\n Files 174 174 \n Lines 33452 33452 \n==========================================\n+ Hits 26668 26810 +142 \n+ Misses 6784 6642 -142 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7263?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7263/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| 
[src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7263/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.40% <0.00%> (-42.32%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7263/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.96% <0.00%> (-30.18%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/7263/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.97% <0.00%> (-24.78%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7263/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7263/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `60.81% <0.00%> (-22.62%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7263/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `40.00% <0.00%> (-18.89%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7263/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |\n| [src/transformers/tokenization\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7263/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `81.66% <0.00%> (-13.34%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7263/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.04% <0.00%> 
(-12.69%)` | :arrow_down: |\n| ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/7263/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7263?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7263?src=pr&el=footer). Last update [63276b7...a793794](https://codecov.io/gh/huggingface/transformers/pull/7263?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
integrate FSMT into finetune + add the test Please note I started tweaking `finetune.py` to make explicit conditionals based on model name/type, rather than try/except. Might be a good idea to make `self.config.model_type` more easily accessible by some quick attribute alias or something... <!-- This line specifies which issue to close after the pull request is merged. --> Fixes #7230 Once PR https://github.com/huggingface/transformers/pull/7224 is merged I will have to tweak `s/.weights/weight/` in the test (As we are moving to `nn.Embedding` subclass). So perhaps let's hold off on merging this until after #7224 is in and I pushed the adjustment. @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7263/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7263", "html_url": "https://github.com/huggingface/transformers/pull/7263", "diff_url": "https://github.com/huggingface/transformers/pull/7263.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7263.patch", "merged_at": 1600715600000 }
https://api.github.com/repos/huggingface/transformers/issues/7262
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7262/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7262/comments
https://api.github.com/repos/huggingface/transformers/issues/7262/events
https://github.com/huggingface/transformers/issues/7262
705,048,188
MDU6SXNzdWU3MDUwNDgxODg=
7,262
When I updated my transformers to the latest, the previously trained model loaded with an error
{ "login": "wulaoshi", "id": 27938964, "node_id": "MDQ6VXNlcjI3OTM4OTY0", "avatar_url": "https://avatars.githubusercontent.com/u/27938964?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wulaoshi", "html_url": "https://github.com/wulaoshi", "followers_url": "https://api.github.com/users/wulaoshi/followers", "following_url": "https://api.github.com/users/wulaoshi/following{/other_user}", "gists_url": "https://api.github.com/users/wulaoshi/gists{/gist_id}", "starred_url": "https://api.github.com/users/wulaoshi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wulaoshi/subscriptions", "organizations_url": "https://api.github.com/users/wulaoshi/orgs", "repos_url": "https://api.github.com/users/wulaoshi/repos", "events_url": "https://api.github.com/users/wulaoshi/events{/privacy}", "received_events_url": "https://api.github.com/users/wulaoshi/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "![image](https://user-images.githubusercontent.com/27938964/93694930-3f812e80-fb44-11ea-938e-6e36e51bf183.png)\r\n", "Duplicate of https://github.com/huggingface/transformers/issues/6882 I think", "> Duplicate of #6882 I think\r\n\r\nThanks. I'll look into it.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,606
1,606
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version:'3.1.0' - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7262/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7262/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7261
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7261/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7261/comments
https://api.github.com/repos/huggingface/transformers/issues/7261/events
https://github.com/huggingface/transformers/issues/7261
705,039,228
MDU6SXNzdWU3MDUwMzkyMjg=
7,261
LXMERT visual feature extraction during training/fine-tuning phase
{ "login": "mmiakashs", "id": 5861942, "node_id": "MDQ6VXNlcjU4NjE5NDI=", "avatar_url": "https://avatars.githubusercontent.com/u/5861942?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mmiakashs", "html_url": "https://github.com/mmiakashs", "followers_url": "https://api.github.com/users/mmiakashs/followers", "following_url": "https://api.github.com/users/mmiakashs/following{/other_user}", "gists_url": "https://api.github.com/users/mmiakashs/gists{/gist_id}", "starred_url": "https://api.github.com/users/mmiakashs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mmiakashs/subscriptions", "organizations_url": "https://api.github.com/users/mmiakashs/orgs", "repos_url": "https://api.github.com/users/mmiakashs/repos", "events_url": "https://api.github.com/users/mmiakashs/events{/privacy}", "received_events_url": "https://api.github.com/users/mmiakashs/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes, I also came up with this error. It would be great if the feature gets published. TIA.", "Tagging LXMERT's implementation author @eltoto1219 ", "Haha, yes we only added the FRCNN for evaluation to accommodate lxmert in the demo. I'll add the training code sometime this week, and then post back here once it is done, in the future it may be useable as a publicly available model following the HF api, but for the time being ill just push the changes to where it is now. ", "Thanks for the prompt feedback. \r\nLooking forward to it. \r\n@eltoto1219 ", "@eltoto1219 thanks, that will be quite a help.", "@eltoto1219 Looking forward to it.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Hello, any updates on this? 😃 ", "Hi @LetiP, \r\n\r\nMy apologies for the delay! I actually have a couple of conference deadlines mid-January and also some other projects after that, so my free time to implement training code for the FRCNN is unfortunately very limited. I think if I can still manage to add this functionality, it may not be ready until sometime in May. However, the code used for the FRCNN here was majorly adapted from Facebook's detectron2 library. 
I can point you to the source training code incase you need this functionality sooner!\r\n\r\nhere is the file for the region proposal network: https://github.com/facebookresearch/detectron2/blob/e0e166d864a2021a15a2bc2c9234d04938066265/detectron2/modeling/proposal_generator/rpn.py#L402\r\n\r\nhere is the file for the box matcher: https://github.com/facebookresearch/detectron2/blob/master/detectron2/modeling/matcher.py\r\n\r\nsome utils for the rpn: https://github.com/facebookresearch/detectron2/blob/master/detectron2/modeling/proposal_generator/proposal_utils.py\r\n\r\ncode for the frcnn output predictions: https://github.com/facebookresearch/detectron2/blob/e0e166d864a2021a15a2bc2c9234d04938066265/detectron2/modeling/roi_heads/fast_rcnn.py#L433\r\n\r\nnot completely sure if changes are needed in this file for training: https://github.com/facebookresearch/detectron2/blob/master/detectron2/modeling/roi_heads/box_head.py\r\n\r\nroi head logic: https://github.com/facebookresearch/detectron2/blob/e0e166d864a2021a15a2bc2c9234d04938066265/detectron2/modeling/roi_heads/roi_heads.py#L307\r\n\r\n\r\nI may be able to provide some quick pointers if you run into anything that seems impossible to get working by replying more to this thread!", "Rather than trying to \"add training functionality\" to the custom copy of an old subset of detectron2 in this repo, I can't see why you cannot just use detectron2 directly. That would not only provide the training functionality out of the box, but also probably reduce the 3000 lines of duplicated unmaintained code here into like 50 lines.", "I want to use a frcnn model that is trained on a custom dataset. I followed the tutorials in the original detectron2 repo (Colab Notebooks in https://github.com/facebookresearch/detectron2). However, I noticed that the config file architecture for your pretrained model is different from mine. 
\r\nFor example, this is the model part in your config file\r\n\"model :\r\n load_proposals: false\r\n device: cpu\r\n max_pool: true\r\n chkpoint: \"\"\r\n pixel_mean: [102.9801, 115.9465, 122.7717]\r\n pixel_std: [1.0, 1.0, 1.0]\" \r\n\r\nAnd this is mine: \r\n\"MODEL:\r\n ANCHOR_GENERATOR:\r\n ANGLES:\r\n - - -90\r\n - 0\r\n - 90\r\n ASPECT_RATIOS:\r\n - - 0.5\r\n - 1.0\r\n - 2.0\r\n NAME: DefaultAnchorGenerator\r\n OFFSET: 0.0\r\n SIZES:\r\n - - 32\r\n - - 64\r\n - - 128\r\n - - 256\r\n - - 512\r\n BACKBONE:\r\n FREEZE_AT: 2\r\n NAME: build_resnet_fpn_backbone\r\n DEVICE: cuda\r\n FPN:\r\n FUSE_TYPE: sum\r\n IN_FEATURES:\r\n - res2\r\n - res3\r\n - res4\r\n - res5\r\n NORM: ''\r\n OUT_CHANNELS: 256\r\n KEYPOINT_ON: false\r\n LOAD_PROPOSALS: false\r\n MASK_ON: true\r\n META_ARCHITECTURE: GeneralizedRCNN\r\n PANOPTIC_FPN:\r\n COMBINE:\r\n ENABLED: true\r\n INSTANCES_CONFIDENCE_THRESH: 0.5\r\n OVERLAP_THRESH: 0.5\r\n STUFF_AREA_LIMIT: 4096\r\n INSTANCE_LOSS_WEIGHT: 1.0\r\n PIXEL_MEAN:\r\n - 103.53\r\n - 116.28\r\n - 123.675\r\n PIXEL_STD:\r\n - 1.0\r\n - 1.0\r\n - 1.0\r\n PROPOSAL_GENERATOR:\r\n MIN_SIZE: 0\r\n NAME: RPN\r\n RESNETS:\r\n DEFORM_MODULATED: false\r\n DEFORM_NUM_GROUPS: 1\r\n DEFORM_ON_PER_STAGE:\r\n - false\r\n - false\r\n - false\r\n - false\r\n DEPTH: 50\r\n NORM: FrozenBN\r\n NUM_GROUPS: 1\r\n OUT_FEATURES:\r\n - res2\r\n - res3\r\n - res4\r\n - res5\r\n RES2_OUT_CHANNELS: 256\r\n RES5_DILATION: 1\r\n STEM_OUT_CHANNELS: 64\r\n STRIDE_IN_1X1: true\r\n WIDTH_PER_GROUP: 64\r\n RETINANET:\r\n BBOX_REG_LOSS_TYPE: smooth_l1\r\n BBOX_REG_WEIGHTS: &id001\r\n - 1.0\r\n - 1.0\r\n - 1.0\r\n - 1.0\r\n FOCAL_LOSS_ALPHA: 0.25\r\n FOCAL_LOSS_GAMMA: 2.0\r\n IN_FEATURES:\r\n - p3\r\n - p4\r\n - p5\r\n - p6\r\n - p7\r\n IOU_LABELS:\r\n - 0\r\n - -1\r\n - 1\r\n IOU_THRESHOLDS:\r\n - 0.4\r\n - 0.5\r\n NMS_THRESH_TEST: 0.5\r\n NORM: ''\r\n NUM_CLASSES: 80\r\n NUM_CONVS: 4\r\n PRIOR_PROB: 0.01\r\n SCORE_THRESH_TEST: 0.05\r\n SMOOTH_L1_LOSS_BETA: 0.1\r\n 
TOPK_CANDIDATES_TEST: 1000\r\n ROI_BOX_CASCADE_HEAD:\r\n BBOX_REG_WEIGHTS:\r\n - - 10.0\r\n - 10.0\r\n - 5.0\r\n - 5.0\r\n - - 20.0\r\n - 20.0\r\n - 10.0\r\n - 10.0\r\n - - 30.0\r\n - 30.0\r\n - 15.0\r\n - 15.0\r\n IOUS:\r\n - 0.5\r\n - 0.6\r\n - 0.7\r\n ROI_BOX_HEAD:\r\n BBOX_REG_LOSS_TYPE: smooth_l1\r\n BBOX_REG_LOSS_WEIGHT: 1.0\r\n BBOX_REG_WEIGHTS:\r\n - 10.0\r\n - 10.0\r\n - 5.0\r\n - 5.0\r\n CLS_AGNOSTIC_BBOX_REG: false\r\n CONV_DIM: 256\r\n FC_DIM: 1024\r\n NAME: FastRCNNConvFCHead\r\n NORM: ''\r\n NUM_CONV: 0\r\n NUM_FC: 2\r\n POOLER_RESOLUTION: 7\r\n POOLER_SAMPLING_RATIO: 0\r\n POOLER_TYPE: ROIAlignV2\r\n SMOOTH_L1_BETA: 0.0\r\n TRAIN_ON_PRED_BOXES: false\r\n ROI_HEADS:\r\n BATCH_SIZE_PER_IMAGE: 128\r\n IN_FEATURES:\r\n - p2\r\n - p3\r\n - p4\r\n - p5\r\n IOU_LABELS:\r\n - 0\r\n - 1\r\n IOU_THRESHOLDS:\r\n - 0.5\r\n NAME: StandardROIHeads\r\n NMS_THRESH_TEST: 0.5\r\n NUM_CLASSES: 1\r\n POSITIVE_FRACTION: 0.25\r\n PROPOSAL_APPEND_GT: true\r\n SCORE_THRESH_TEST: 0.7\r\n ROI_KEYPOINT_HEAD:\r\n CONV_DIMS:\r\n - 512\r\n - 512\r\n - 512\r\n - 512\r\n - 512\r\n - 512\r\n - 512\r\n - 512\r\n LOSS_WEIGHT: 1.0\r\n MIN_KEYPOINTS_PER_IMAGE: 1\r\n NAME: KRCNNConvDeconvUpsampleHead\r\n NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS: true\r\n NUM_KEYPOINTS: 17\r\n POOLER_RESOLUTION: 14\r\n POOLER_SAMPLING_RATIO: 0\r\n POOLER_TYPE: ROIAlignV2\r\n ROI_MASK_HEAD:\r\n CLS_AGNOSTIC_MASK: false\r\n CONV_DIM: 256\r\n NAME: MaskRCNNConvUpsampleHead\r\n NORM: ''\r\n NUM_CONV: 4\r\n POOLER_RESOLUTION: 14\r\n POOLER_SAMPLING_RATIO: 0\r\n POOLER_TYPE: ROIAlignV2\r\n RPN:\r\n BATCH_SIZE_PER_IMAGE: 256\r\n BBOX_REG_LOSS_TYPE: smooth_l1\r\n BBOX_REG_LOSS_WEIGHT: 1.0\r\n BBOX_REG_WEIGHTS: *id001\r\n BOUNDARY_THRESH: -1\r\n HEAD_NAME: StandardRPNHead\r\n IN_FEATURES:\r\n - p2\r\n - p3\r\n - p4\r\n - p5\r\n - p6\r\n IOU_LABELS:\r\n - 0\r\n - -1\r\n - 1\r\n IOU_THRESHOLDS:\r\n - 0.3\r\n - 0.7\r\n LOSS_WEIGHT: 1.0\r\n NMS_THRESH: 0.7\r\n POSITIVE_FRACTION: 0.5\r\n POST_NMS_TOPK_TEST: 1000\r\n 
POST_NMS_TOPK_TRAIN: 1000\r\n PRE_NMS_TOPK_TEST: 1000\r\n PRE_NMS_TOPK_TRAIN: 2000\r\n SMOOTH_L1_BETA: 0.0\r\n SEM_SEG_HEAD:\r\n COMMON_STRIDE: 4\r\n CONVS_DIM: 128\r\n IGNORE_VALUE: 255\r\n IN_FEATURES:\r\n - p2\r\n - p3\r\n - p4\r\n - p5\r\n LOSS_WEIGHT: 1.0\r\n NAME: SemSegFPNHead\r\n NORM: GN\r\n NUM_CLASSES: 54\r\n WEIGHTS: ./output/model_final.pth\"\r\n\r\nCould you please provide any resources how we can use our own trained frcnn models ? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Any update on this 🙂 ?" ]
1,600
1,632
1,619
NONE
null
# 🚀 Feature request Thanks a lot for releasing LXMERT model. In the LXMERT model code samples, the visual feature extraction code (using generalized faster-rcnn: [modeling_frcnn](https://github.com/huggingface/transformers/blob/master/examples/lxmert/modeling_frcnn.py)) only in the inference step is given. However, the visual feature extraction during the training phase is not given. For this reason if we use the same code for fine-tuning, it raises NotImplementedError as the visual feature extraction during training is not implemented. Is it possible to share the visual feature extraction during training?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7261/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7261/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7260
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7260/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7260/comments
https://api.github.com/repos/huggingface/transformers/issues/7260/events
https://github.com/huggingface/transformers/issues/7260
705,038,084
MDU6SXNzdWU3MDUwMzgwODQ=
7,260
A confusion about mrc model
{ "login": "Maybewuss", "id": 38156589, "node_id": "MDQ6VXNlcjM4MTU2NTg5", "avatar_url": "https://avatars.githubusercontent.com/u/38156589?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Maybewuss", "html_url": "https://github.com/Maybewuss", "followers_url": "https://api.github.com/users/Maybewuss/followers", "following_url": "https://api.github.com/users/Maybewuss/following{/other_user}", "gists_url": "https://api.github.com/users/Maybewuss/gists{/gist_id}", "starred_url": "https://api.github.com/users/Maybewuss/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Maybewuss/subscriptions", "organizations_url": "https://api.github.com/users/Maybewuss/orgs", "repos_url": "https://api.github.com/users/Maybewuss/repos", "events_url": "https://api.github.com/users/Maybewuss/events{/privacy}", "received_events_url": "https://api.github.com/users/Maybewuss/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,606
1,606
NONE
null
# ❓ Questions & Help - I found all of pretrain models using for mrc use softmax to pred start and end probability. - If a paragraph has multi-answers, softmax not worked, otherwise, adding some postprocess to combine start and end probability to sort that we can choose n-best answer, but the numbert is fixed for all paragaph. - Squad 2.0 has unanswerable questions and i found data processor define start index == end index == 0 to handle this situation. - So my confusion is why not use sigmoid instead of softmax in the last layer, if i use sigmoid, i can deal with muti-answers or non-answer easily....
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7260/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7260/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7259
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7259/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7259/comments
https://api.github.com/repos/huggingface/transformers/issues/7259/events
https://github.com/huggingface/transformers/pull/7259
705,012,932
MDExOlB1bGxSZXF1ZXN0NDg5ODMzNTU3
7,259
[broken] quantization util for CPU
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,600
1,601
1,601
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> + completely destroys bart + Marian off by one or two words, but not faster. I wonder why!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7259/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7259/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7259", "html_url": "https://github.com/huggingface/transformers/pull/7259", "diff_url": "https://github.com/huggingface/transformers/pull/7259.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7259.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7258
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7258/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7258/comments
https://api.github.com/repos/huggingface/transformers/issues/7258/events
https://github.com/huggingface/transformers/issues/7258
705,011,140
MDU6SXNzdWU3MDUwMTExNDA=
7,258
[save/load model] authorized keys, no save keys, etc.
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "@LysandreJik, @sgugger - sorry didn't think to tag you when I initially posted this. Thoughts? Or just leave it as is?", "For consistency's sake it would indeed be better. For backwards compatibility's sake, it would not be better.\r\n\r\nHowever, I'm thinking that these attributes should be private, to some extent. They're not meant to be read/written to by users. \r\n\r\nI think we can rename these, and take the opportunity to prepend an underscore to say it's a private attribute. What do you think? However, keeping breaking changes in mind, it would be great to include this in the v4.0.0, which we should release tomorrow morning. Do you think you could propose a fix by then? If not, then let's aim for v5.0.0 instead.", "Definitely no rush here, but if you want it now: https://github.com/huggingface/transformers/pull/8737\r\n\r\nGood call on making those private!!!" ]
1,600
1,606
1,606
CONTRIBUTOR
null
We already have: ``` # modeling_bart: authorized_missing_keys = [r"final_logits_bias", r"encoder\.version", r"decoder\.version"] ``` Once https://github.com/huggingface/transformers/pull/7224 is merged we will have: ``` class SomeModel(PretrainedFSMTModel): base_model_prefix = "model" authorized_missing_keys = [ "model.encoder.embed_positions.weight", "model.decoder.embed_positions.weight", ] keys_to_never_save= [ "model.encoder.embed_positions.weight", "model.decoder.embed_positions.weight", ] ``` I'd like to discuss several things: 1. let's pick consistent intuitive names for the group of these class variables - e.g. both ending with `_keys` or starting with `keys_` and use more descriptive mnemonics? I suggest: * `keys_to_ignore_on_load` * `keys_to_ignore_on_save` --------- 2. why is the current implementation for `authorized_missing_keys` uses a regex search? ``` # modeling_bart: authorized_missing_keys = [r"final_logits_bias", r"encoder\.version", r"decoder\.version"] [...] # modeling_utils: if cls.authorized_missing_keys is not None: for pat in cls.authorized_missing_keys: missing_keys = [k for k in missing_keys if re.search(pat, k) is None] ``` when a simple direct comparison `k in list` would work just fine? i.e.: (untested) ``` if cls.authorized_missing_keys missing_keys = [k for k in missing_keys if k is not in cls.authorized_missing_keys] ``` it'd make the listing of the keys easier to write/read. ``` authorized_missing_keys = ["final_logits_bias", "encoder.version", "decoder.version"] ``` ------------------- 3. I think we may have an issue with `model.` prefix present in some saved model `state_dict`s and lacking in others - or is it a save/load issue where it has the prefix on the way out, but not back in? Note, that `authorized_missing_keys` doesn't have the `model.` prefix. Perhaps that's why the regex was used - to catch w/ and w/o the prefix? So perhaps the normalization could happen in the core libs (load?) and the models listing special "needs" keys should all either have the prefix or not. ---- Thank you for your input.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7258/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7258/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7257
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7257/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7257/comments
https://api.github.com/repos/huggingface/transformers/issues/7257/events
https://github.com/huggingface/transformers/pull/7257
705,004,334
MDExOlB1bGxSZXF1ZXN0NDg5ODI3NTE3
7,257
[fsmt] build/test scripts
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 2357479466, "node_id": "MDU6TGFiZWwyMzU3NDc5NDY2", "url": "https://api.github.com/repos/huggingface/transformers/labels/fsmt", "name": "fsmt", "color": "d0e884", "default": false, "description": "" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7257?src=pr&el=h1) Report\n> Merging [#7257](https://codecov.io/gh/huggingface/transformers/pull/7257?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4f6e52574248636352a746cfe6cc0b13cf3eb7f9?el=desc) will **decrease** coverage by `0.25%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7257/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7257?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7257 +/- ##\n==========================================\n- Coverage 78.63% 78.38% -0.26% \n==========================================\n Files 174 174 \n Lines 33446 33446 \n==========================================\n- Hits 26300 26216 -84 \n- Misses 7146 7230 +84 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7257?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7257/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |\n| [src/transformers/activations\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7257/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% <0.00%> (-20.84%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7257/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7257/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.27% <0.00%> (-0.17%)` | :arrow_down: |\n| 
[src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7257/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <0.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7257/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.20% <0.00%> (+0.27%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7257/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+4.76%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7257/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+10.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7257/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.97% <0.00%> (+23.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7257/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `93.23% <0.00%> (+74.20%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7257?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7257?src=pr&el=footer). Last update [4f6e525...47ef585](https://codecov.io/gh/huggingface/transformers/pull/7257?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Push all your stuff here and then we can merge it on Thursday. Trying to avoid git log pollution. If you need a merge sooner for another reason let me know!", "Good call, Sam. There is no rush." ]
1,600
1,600
1,600
CONTRIBUTOR
null
A few more essential building + testing scripts @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7257/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7257/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7257", "html_url": "https://github.com/huggingface/transformers/pull/7257", "diff_url": "https://github.com/huggingface/transformers/pull/7257.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7257.patch", "merged_at": 1600981827000 }
https://api.github.com/repos/huggingface/transformers/issues/7256
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7256/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7256/comments
https://api.github.com/repos/huggingface/transformers/issues/7256/events
https://github.com/huggingface/transformers/issues/7256
704,999,489
MDU6SXNzdWU3MDQ5OTk0ODk=
7,256
[fsmt] Expanding Positional Embeddings
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 2357479466, "node_id": "MDU6TGFiZWwyMzU3NDc5NDY2", "url": "https://api.github.com/repos/huggingface/transformers/labels/fsmt", "name": "fsmt", "color": "d0e884", "default": false, "description": "" } ]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I was hoping that someone would have a need for that and then investigate, but since nobody asked this is still a may be.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "as there has been no interest in this, closing this for now." ]
1,600
1,616
1,616
CONTRIBUTOR
null
splitting the comment from @sshleifer in https://github.com/huggingface/transformers/pull/7224#pullrequestreview-492155076 into this separate issue, as the PR hasn't introduced this potential issue. He writes: > The expansion of embeddings may require a bit more care, but the comment below doesn't prevent merging this PR. You can just delete that logic later if it is bad. # Expanding Positional Embeddings ``` if max_pos > self.weight.size(0): # recompute/expand embeddings if needed ``` > The reason I haven't autoexpanded the bart positional embeddings so far is that I wanted an error for long sequences that the model would translate poorly, instead of just poor performance. But if you can like concatenate a few en-ru examples and see that performance doesn't plummet it would be good. There is also a theoretical O(seq_len^2) theoretical cost associated with passing longer documents through transformers, so we may not want to encourage longer docs/instead write tooling that, for example, uses moses SentenceSplitter to chunk documents, pass them through the model, and rejoin the results correctly. If it just works with the auto expansion hack then I'm all for it. @stas00: I shall investigate and will follow up when I got a chance to do so
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7256/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7256/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7255
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7255/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7255/comments
https://api.github.com/repos/huggingface/transformers/issues/7255/events
https://github.com/huggingface/transformers/pull/7255
704,994,038
MDExOlB1bGxSZXF1ZXN0NDg5ODIwMTEx
7,255
Add "Fine-tune ALBERT for sentence-pair classification" notebook to the community notebooks
{ "login": "NadirEM", "id": 58773102, "node_id": "MDQ6VXNlcjU4NzczMTAy", "avatar_url": "https://avatars.githubusercontent.com/u/58773102?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NadirEM", "html_url": "https://github.com/NadirEM", "followers_url": "https://api.github.com/users/NadirEM/followers", "following_url": "https://api.github.com/users/NadirEM/following{/other_user}", "gists_url": "https://api.github.com/users/NadirEM/gists{/gist_id}", "starred_url": "https://api.github.com/users/NadirEM/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NadirEM/subscriptions", "organizations_url": "https://api.github.com/users/NadirEM/orgs", "repos_url": "https://api.github.com/users/NadirEM/repos", "events_url": "https://api.github.com/users/NadirEM/events{/privacy}", "received_events_url": "https://api.github.com/users/NadirEM/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,600
1,600
1,600
CONTRIBUTOR
null
Hello, I'm adding to the community notebooks a tutorial on fine-tuning ALBERT and other BERT-based models for sentence-pair classification. The main features of this tutorial are : [1] End-to-end ML implementation (training, validation, prediction, evaluation) [2] Easy adaptability to your own datasets [3] Facilitation of quick experiments with other BERT-based models (BERT, ALBERT, ...) [4] Quick training with limited computational resources (mixed-precision, gradient accumulation, ...) [5] Multi-GPU execution [6] Threshold choice for the classification decision (not necessarily 0.5) [7] Freeze BERT layers and only update the classification layer weights or update all the weights [8] Reproducible results with seed settings
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7255/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7255/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7255", "html_url": "https://github.com/huggingface/transformers/pull/7255", "diff_url": "https://github.com/huggingface/transformers/pull/7255.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7255.patch", "merged_at": 1600676722000 }
https://api.github.com/repos/huggingface/transformers/issues/7254
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7254/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7254/comments
https://api.github.com/repos/huggingface/transformers/issues/7254/events
https://github.com/huggingface/transformers/pull/7254
704,984,570
MDExOlB1bGxSZXF1ZXN0NDg5ODEzMjc3
7,254
[s2s] distributed eval allows num_return_sequences > 1
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7254?src=pr&el=h1) Report\n> Merging [#7254](https://codecov.io/gh/huggingface/transformers/pull/7254?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0ccb6f5c6da9e703766e8053581fddfc6dcc71a9?el=desc) will **increase** coverage by `1.08%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7254/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7254?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7254 +/- ##\n==========================================\n+ Coverage 78.20% 79.29% +1.08% \n==========================================\n Files 181 181 \n Lines 35751 35751 \n==========================================\n+ Hits 27959 28347 +388 \n+ Misses 7792 7404 -388 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7254?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `17.16% <0.00%> (-81.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.06% <0.00%> (-69.40%)` | :arrow_down: |\n| [src/transformers/configuration\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xheW91dGxtLnB5) | `80.00% <0.00%> (-20.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.13% <0.00%> (-15.42%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.22% <0.00%> (-10.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.36% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.91% <0.00%> (+0.27%)` | :arrow_up: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.44% <0.00%> (+0.50%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `84.03% <0.00%> (+1.40%)` | :arrow_up: |\n| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7254/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7254?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7254?src=pr&el=footer). Last update [0ccb6f5...2960c40](https://codecov.io/gh/huggingface/transformers/pull/7254?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
For more complicated pseudolabeling procedures.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7254/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7254/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7254", "html_url": "https://github.com/huggingface/transformers/pull/7254", "diff_url": "https://github.com/huggingface/transformers/pull/7254.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7254.patch", "merged_at": 1600983010000 }
https://api.github.com/repos/huggingface/transformers/issues/7253
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7253/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7253/comments
https://api.github.com/repos/huggingface/transformers/issues/7253/events
https://github.com/huggingface/transformers/pull/7253
704,981,977
MDExOlB1bGxSZXF1ZXN0NDg5ODExNTQ4
7,253
[wip] layernorm eps config
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,600
1,602
1,602
CONTRIBUTOR
null
+ Allow config.layernorm_eps to control layernorm epsilon for Bart and all its children. + This is consistent with other models, and makes the code more extensible, at the cost of added complexity.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7253/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7253/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7253", "html_url": "https://github.com/huggingface/transformers/pull/7253", "diff_url": "https://github.com/huggingface/transformers/pull/7253.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7253.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7252
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7252/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7252/comments
https://api.github.com/repos/huggingface/transformers/issues/7252/events
https://github.com/huggingface/transformers/pull/7252
704,981,814
MDExOlB1bGxSZXF1ZXN0NDg5ODExNDIy
7,252
[s2s] add supported architecures to MD
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7252?src=pr&el=h1) Report\n> Merging [#7252](https://codecov.io/gh/huggingface/transformers/pull/7252?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d6bc72c469c38a611fb99c3d61807f59b43fe2c9?el=desc) will **increase** coverage by `2.09%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7252/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7252?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7252 +/- ##\n==========================================\n+ Coverage 77.40% 79.49% +2.09% \n==========================================\n Files 181 174 -7 \n Lines 34827 33446 -1381 \n==========================================\n- Hits 26958 26589 -369 \n+ Misses 7869 6857 -1012 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7252?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/7252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnNtdC5weQ==) | `20.34% <0.00%> (-74.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.62% <0.00%> (-69.31%)` | :arrow_down: |\n| [src/transformers/tokenization\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/7252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `46.03% <0.00%> (-49.21%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `75.91% <0.00%> (-20.85%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `71.60% <0.00%> (-20.44%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `82.81% <0.00%> (-9.73%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `88.37% <0.00%> (-4.87%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `58.88% <0.00%> (-1.75%)` | :arrow_down: |\n| [src/transformers/testing\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7252/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `66.88% <0.00%> (-1.64%)` | :arrow_down: |\n| ... and [48 more](https://codecov.io/gh/huggingface/transformers/pull/7252/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7252?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7252?src=pr&el=footer). Last update [d6bc72c...4a7282b](https://codecov.io/gh/huggingface/transformers/pull/7252?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7252/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7252/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7252", "html_url": "https://github.com/huggingface/transformers/pull/7252", "diff_url": "https://github.com/huggingface/transformers/pull/7252.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7252.patch", "merged_at": 1600794576000 }
https://api.github.com/repos/huggingface/transformers/issues/7251
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7251/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7251/comments
https://api.github.com/repos/huggingface/transformers/issues/7251/events
https://github.com/huggingface/transformers/pull/7251
704,974,737
MDExOlB1bGxSZXF1ZXN0NDg5ODA2Mjky
7,251
[testing doc] @slow has to be last
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7251?src=pr&el=h1) Report\n> Merging [#7251](https://codecov.io/gh/huggingface/transformers/pull/7251?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1d90d0f386af2af52017d51c421e71a51ec94de0?el=desc) will **decrease** coverage by `3.14%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7251/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7251?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7251 +/- ##\n==========================================\n- Coverage 81.81% 78.66% -3.15% \n==========================================\n Files 174 174 \n Lines 33446 33446 \n==========================================\n- Hits 27364 26311 -1053 \n- Misses 6082 7135 +1053 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7251?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7251/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7251/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7251/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `19.02% <0.00%> (-74.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7251/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| 
[src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7251/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.26%)` | :arrow_down: |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/7251/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `92.94% <0.00%> (-1.18%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7251/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7251/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.84% <0.00%> (-0.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7251/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.44% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7251/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `91.31% <0.00%> (+2.54%)` | :arrow_up: |\n| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/7251/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7251?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7251?src=pr&el=footer). Last update [1d90d0f...d143710](https://codecov.io/gh/huggingface/transformers/pull/7251?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks for pointing it out!" ]
1,600
1,600
1,600
CONTRIBUTOR
null
Found an issue when `@slow` isn't the last decorator (gets ignored!), so documenting this significance. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7251/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7251/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7251", "html_url": "https://github.com/huggingface/transformers/pull/7251", "diff_url": "https://github.com/huggingface/transformers/pull/7251.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7251.patch", "merged_at": 1600607850000 }
https://api.github.com/repos/huggingface/transformers/issues/7250
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7250/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7250/comments
https://api.github.com/repos/huggingface/transformers/issues/7250/events
https://github.com/huggingface/transformers/issues/7250
704,956,079
MDU6SXNzdWU3MDQ5NTYwNzk=
7,250
[testing] when to @slow and when not to? (huge models download)
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I guess I haven't tagged that one so nobody looked at it - @LysandreJik, would you please have a look?", "Hi, sorry for getting back to you so late. I believe this was due to the pipeline tests, but that should not be the case anymore since the refactor of the pipeline tests by Thom.\r\n\r\nIf some tests still download large files, then that's an error which we should resolve.", "I had a quick look - we no longer have the job that runs all tests, so had to look in several places. I haven't check them all - here are some samples:\r\n\r\n* This one downloads a handful of 250-440MB files, if you look at the top of the output for the test suite run:\r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/15941/workflows/1f3fc641-82f3-4d34-a45d-30ebc2a04292/jobs/121749\r\n\r\n* this one downloads a 500MB file\r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/15941/workflows/1f3fc641-82f3-4d34-a45d-30ebc2a04292/jobs/121754\r\n", "It'd be useful to have documented a MB threshold agreed on at which point the developer knows to add `@slow` on the test.", "Okay, we should take a look and see if we can't replace some of these by smaller checkpoints, and if we can't just set these tests as slow.\r\n\r\nYes, that would be very useful. What do you think of a 100MB limit? 
(maybe even smaller - we can set it to 100MB now and come back later if it's too high)", "It all depends on how much it'll add up to - if now we are going to hypothetically enable 100 tests each downloading 100MB - that's 10TB which is probably too much overall.\r\n\r\nThey key is to keep the total run-time of the test suite to a reasonable length of time, taking into an account its constant growth.\r\n\r\nIn general we have mostly 2 types of tests (network io-wise):\r\n\r\n1) dumb tests that only check the mechanics and require either downloaded tiny models or created on the fly, with tiny datasets - all <10MB.\r\n2) quality-checking tests that typically require a download of full models, which are typically 400MB+\r\n\r\nI don't think 100MB threshold fits into either group - it's too big for dumb tests and too small for most quality-checking tests. So it's more like 20MB and anything bigger is already type 2, which are currently mostly '@slow`.\r\n\r\nThe following is a bit of a brain dump on how to keep the total CI job runtime to the minimum in the context of network overhead impacting the total duration of the CI job run:\r\n\r\n* In general to make an intelligent decision about how much download we should accommodate if we want CI jobs to be faster, we need to know an average download speed - which we can then translate to the time overhead added by the test being blocked while waiting for the download. For example, knowing this we perhaps could measure download demands of each test and then sort by the slowest and set them to slow if it's not an essential test or rewrite the slow test to be faster. 
Similar to duration, except here we are going more finegrained and look at the download aspect that impacts the total duration.\r\n\r\n* A possible bruteforce solution, is to find a magical way to fork off a process as soon as feasible (probably as soon as pip installed `transformers`) that will start downloading all the largish models used by fast tests (and it'd do those in parallel if it's faster than doing sequential downloads - needs to check which side of the download is the bottleneck) so that when the test comes around the model is already there.\r\n\r\n Note that `pip install` takes about 45-60 secs to run and it's not doing any downloads (local cache) - we have a whooping minute to pre-download a lot of models. So I'd install some bare version of `transformers` that all it knows is to run `from_pretrained` and the caching/downloading code and fork it off to already start downloading models.\r\n\r\n But perhaps while some of the tests are blocking on IO, there are more computer resources for other tests to complete faster. For this we need to watch the CPU utilization/ load / swapping and see whether the system struggles or stays cool. For example, through measuring with `time(1)` I found that `pytest -n3` leads to the fastest completion of the test suite on my machine, and higher `-n` leads to a slower total walltime (the bottleneck is gpu in my case). I have 12 CPU cores so `make test` starts -n12, which is a really bad idea. It'd require at least 6GPUs to handle 12 pytest workers.\r\n\r\n\r\n* Also I wonder whether the caching software could somehow be made even more advanced so that it can be made aware that another program is already in the process of downloading this model and not start a new one. 
That way in the worst case scenario, the test will block as before if the model download hasn't started, in the better case it'll just have to wait for a fraction of time for the model to complete to download when it started the download by another process and in the best case it'd be already downloaded. ( `pytest --dist=loadfile` at the moment mostly prevents from multiple attempts of downloading the same model by different pytest workers)\r\n\r\n* I checked that circle CI has a huge pip cache - only a few newer pip packages get downloaded during the `pip install` step, most come from a local cache.\r\n\r\n Would it be possible to request to create a local cache for essential `transformers` models? Then this whole issue of `@slow` because download is too big will be moot. Then `@slow` will be only bound by execution time.\r\n\r\n I think it'll have to be a local cache dir and not a cache to download from since even on a fast network it'll still add up to a large overheard.\r\n\r\nI hope my communication was clear. \r\n\r\n\r\n", "Thank you for your comment, I think you make some very good points. The last one is especially important, as caching large model files would be very helpful. We did consider this at one point, but refrained from it as it could have added another layer for the detection of errors.\r\n\r\nI think we should revisit this option, for two reasons:\r\n- It would speed up the CI, as you've mentioned here.\r\n- We have a lot of connection issues since using the new git-based system because the CI is bursting the website (cc @julien-c).\r\n\r\nWe can simply cache the `.cache/huggingface/transformers` folder for this.\r\n\r\n---\r\n\r\nRegarding your other points, indeed, it would be interesting to explore all of these options. Downloading the models while we are doing the docker instantiation/pip install/setup would be nice, but a download run in parallel to several commands is out of my knowledge. 
It may be a bit overkill, too, and require too much development time.\r\n\r\n---\r\n\r\nAll in all, the simplest answer would be to simply put the limit to a threshold, like you've mentioned earlier. I chose 100MB specifically because it's not in either of the two categories. However, thinking about it, maybe some of the smaller distilled models are 80-90MBs large, so an integration test using those would still be under the limit - which ignores the purpose of the 100MB limit.", "Thank you for reading my ideas and following up, @LysandreJik.\r\n\r\nI made a tentative 50MB suggestion in https://github.com/huggingface/transformers/pull/8824\r\n\r\nWe can tweak it if it's not right down the road.\r\n" ]
1,600
1,606
1,606
CONTRIBUTOR
null
Looking at the [CI logs](https://app.circleci.com/pipelines/github/huggingface/transformers/12476/workflows/b4ca9141-e992-4c00-95c6-a6ae7fff63b3/jobs/88844) we do have huge models downloaded (i.e. not @slow): ``` Downloading: 100% 1.16G/1.16G [00:52<00:00, 22.3MB/s] Downloading: 100% 433M/433M [00:08<00:00, 48.4MB/s]s] Downloading: 43% 369M/863M [00:08<00:10, 45.4MB/s] ``` so it's very inconsistent. Why not have a whole bunch more of tests not be `@slow` then if we are downloading huge files anyway? A lot of those tests are very fast, other than the download overhead. Or, perhaps, those currently doing huge downloads should be `@slow` in first place? I'm asking since I was told not to run any fsmt tests with the full model unless it's `@slow` (size ~1.1GB). So it's unclear when it's OK to include huge models in the non-slow test suite and when not to. Also, here is an alternative approach to think about - why not download large weights while other tests not needing them are running? i.e. fork a process early on on CI after pip installs are done and let it cache the models - then they will be ready to be used by the time the tests that need them get to run. This is an unpolished idea, since one needs to figure out how to re-sort the tests so that these large-model tests aren't run first...
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7250/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7250/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7249
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7249/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7249/comments
https://api.github.com/repos/huggingface/transformers/issues/7249/events
https://github.com/huggingface/transformers/issues/7249
704,910,239
MDU6SXNzdWU3MDQ5MTAyMzk=
7,249
very poor performance of Longformer on SQuAD-like question-answering tasks
{ "login": "xixiaoyao", "id": 24541791, "node_id": "MDQ6VXNlcjI0NTQxNzkx", "avatar_url": "https://avatars.githubusercontent.com/u/24541791?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xixiaoyao", "html_url": "https://github.com/xixiaoyao", "followers_url": "https://api.github.com/users/xixiaoyao/followers", "following_url": "https://api.github.com/users/xixiaoyao/following{/other_user}", "gists_url": "https://api.github.com/users/xixiaoyao/gists{/gist_id}", "starred_url": "https://api.github.com/users/xixiaoyao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xixiaoyao/subscriptions", "organizations_url": "https://api.github.com/users/xixiaoyao/orgs", "repos_url": "https://api.github.com/users/xixiaoyao/repos", "events_url": "https://api.github.com/users/xixiaoyao/events{/privacy}", "received_events_url": "https://api.github.com/users/xixiaoyao/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @xixiaoyao ,\r\n\r\nWe have a couple of Longformer models that have been fine-tuned on Squad and that yield good results, see here: https://huggingface.co/models?search=longformer .\r\n\r\nAlso, it might be helpful to take a look at this notebook: https://github.com/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb . The notebook shows how to train Longformer for QA tasks.", "I mean, when I train longformer based on the script `example/run_squad.py` you provided, the poor performance can be re-produced. And I have located that `squad_convert_example_to_features` function in `transformers/data/processors/squad.py` will split the document into word-level tokens, and then the longformer tokenizer will tokenize for each token rather than the whole document text, which behavior is not consistent with the pre-trained process of Longformer. \r\n\r\nAs a general purpose script, I think this is a bug in `examples/run_squad.py`, and users are easy to be bothered with the poor performance of Longformer on `run_squad.py`, except for the professional users thar are quite familiar with the implementation details of `transformers/data/processors/squad.py`.\r\n\r\nIs there any plan to compact with Longformer on `examples/run_squad.py`? Or can I find any instructions to finetune Longformer Models on my own SQuAD-like dataset?", "Hey @xixiaoyao , feel free to open a PR to better integrate Longformer into `run_squad.py` :-) Otherwise I think @patil-suraj's notebook here: https://github.com/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb should be very helpful :-) ", "The bug is in `squad_convert_examples_to_features` not in `run_squad` see #4615", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,606
1,606
NONE
null
## Environment info any environment can easily re-produce the awful results - `transformers` version: newest - Platform: linux - Python version: 3.7 - PyTorch version (GPU?): 1.6 - Using GPU in script?: Y - Using distributed or parallel set-up in script?: parallel ### Who can help Longformer/Reformer: @patrickvonplaten ## Information Model I am using (Longformer): The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Run examples/run_squad.py with Longformer-base-4096 Model (with fp16 and gradient_checkpointing enabled) This results in very awful results than BERT, RoBERTa and any pre-trained models... E.g., I run it on MARCO data, nothing modify the script. Then the BERT and RoBERTa can easily achieved 50+ F1 score on dev set, while Longformer can only reach **18** points! I believe there are severe bugs for the support of Longformer on SQuAD-like QA tasks. Please confirm that. ## Expected behavior At least superior than RoBERTa, and marginal superior than BERT models
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7249/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/7249/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7248
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7248/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7248/comments
https://api.github.com/repos/huggingface/transformers/issues/7248/events
https://github.com/huggingface/transformers/pull/7248
704,882,307
MDExOlB1bGxSZXF1ZXN0NDg5NzI2ODUy
7,248
[example/glue] fix compute_metrics_fn for bart like models
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Merged too fast, commited @sgugger's suggestion here: aae4edb\r\n", "Thanks @LysandreJik " ]
1,600
1,600
1,600
MEMBER
null
Fixes #7247 @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7248/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7248/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7248", "html_url": "https://github.com/huggingface/transformers/pull/7248", "diff_url": "https://github.com/huggingface/transformers/pull/7248.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7248.patch", "merged_at": 1600680860000 }
https://api.github.com/repos/huggingface/transformers/issues/7247
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7247/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7247/comments
https://api.github.com/repos/huggingface/transformers/issues/7247/events
https://github.com/huggingface/transformers/issues/7247
704,882,242
MDU6SXNzdWU3MDQ4ODIyNDI=
7,247
[example/glue] run_glue compute metrics fail for bart like models
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I didn't expect that side effect, thanks for fixing it so quickly!" ]
1,600
1,600
1,600
MEMBER
null
This PR #7126 introduced multiple predictions for trainer. This breaks the `compute_metrics_fn` of `run_glue.py` for `bart` like models which return multiple predictions. For `BartForSequenceClassfication` `p.predictions` is a `tuple`, so following code fails https://github.com/huggingface/transformers/blob/1d90d0f386af2af52017d51c421e71a51ec94de0/examples/text-classification/run_glue.py#L154 @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7247/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7247/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7246
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7246/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7246/comments
https://api.github.com/repos/huggingface/transformers/issues/7246/events
https://github.com/huggingface/transformers/issues/7246
704,859,894
MDU6SXNzdWU3MDQ4NTk4OTQ=
7,246
How to get cross attention weights of decoder when using 'encoderdecodermodel'
{ "login": "kimmo1019", "id": 18159017, "node_id": "MDQ6VXNlcjE4MTU5MDE3", "avatar_url": "https://avatars.githubusercontent.com/u/18159017?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kimmo1019", "html_url": "https://github.com/kimmo1019", "followers_url": "https://api.github.com/users/kimmo1019/followers", "following_url": "https://api.github.com/users/kimmo1019/following{/other_user}", "gists_url": "https://api.github.com/users/kimmo1019/gists{/gist_id}", "starred_url": "https://api.github.com/users/kimmo1019/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kimmo1019/subscriptions", "organizations_url": "https://api.github.com/users/kimmo1019/orgs", "repos_url": "https://api.github.com/users/kimmo1019/repos", "events_url": "https://api.github.com/users/kimmo1019/events{/privacy}", "received_events_url": "https://api.github.com/users/kimmo1019/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "There isn't a method yet, you need to modify the source code.\r\nHere is how I did it for for bart https://github.com/huggingface/transformers/pull/6967", "Hey @kimmo1019, yeah we should probably add all those cross-attention weights to the output as well. Might be a good idea to do this for all Seq2Seq models at once in one PR.", "> Hey @kimmo1019, yeah we should probably add all those cross-attention weights to the output as well. Might be a good idea to do this for all Seq2Seq models at once in one PR.\r\n\r\nThat would be great it this can be added!! Thanks in advance.", "Looking forward to this feature, too. How is it going now?", "It's on a ToDo-List!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,610
1,610
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> I'm recently building a encoder-decoder model (Bert2Bert) using `encoderdecodermodel`. But I found that it is really hard to get cross attention weights of the decoder. The document of this API said the return of the `forward` function will be the following - loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Languaged modeling loss. - logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) – Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). - past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) – List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). - Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see past_key_values input) to speed up sequential decoding. 
- decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). - Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. - **decoder_attentions** (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). - Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. - encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Sequence of hidden-states at the output of the last layer of the encoder of the model. - encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). - Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. - encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). - Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The `decoder_attentions` in the above only can return the self-attention weight `(batch_size, num_heads, sequence_length, sequence_length)`. 
However I want the cross attention weight `(batch_size, num_heads, input_sequence_length, target_sequence_length)`. I checked the source code of `encoderdecodermodel` and found the decoder is instantiated from the `AutoModelForCausalLM ` class. I also checked the source code of `AutoModelForCausalLM` and found no way to get cross-attention weight. Is there any method I can get this cross-attention weight? @patrickvonplaten Thanks very much! Would be really appreciated if you can help me. It bothers me for a while.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7246/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7246/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7245
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7245/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7245/comments
https://api.github.com/repos/huggingface/transformers/issues/7245/events
https://github.com/huggingface/transformers/issues/7245
704,813,377
MDU6SXNzdWU3MDQ4MTMzNzc=
7,245
[s2s] distributed_eval edge case
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "wontfix" ]
1,600
1,602
1,602
CONTRIBUTOR
null
two separate invocations of the script can write to _mp at the same time. Fix: Can use pid for tmp filename or even just a random number.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7245/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7245/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7244
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7244/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7244/comments
https://api.github.com/repos/huggingface/transformers/issues/7244/events
https://github.com/huggingface/transformers/pull/7244
704,791,025
MDExOlB1bGxSZXF1ZXN0NDg5NjU2MzYy
7,244
fsmt tiny model card + script
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7244?src=pr&el=h1) Report\n> Merging [#7244](https://codecov.io/gh/huggingface/transformers/pull/7244?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/83dba10b8fbaa3f16e82b5725dcceaf044dd6817?el=desc) will **increase** coverage by `0.81%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7244/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7244?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7244 +/- ##\n==========================================\n+ Coverage 78.99% 79.80% +0.81% \n==========================================\n Files 174 174 \n Lines 33446 33446 \n==========================================\n+ Hits 26419 26690 +271 \n+ Misses 7027 6756 -271 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7244?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7244/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <0.00%> (-71.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7244/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7244/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7244/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| 
[src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7244/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7244/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-1.26%)` | :arrow_down: |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/7244/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `92.94% <0.00%> (-1.18%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7244/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.68% <0.00%> (-0.65%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7244/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7244/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.27% <0.00%> (-0.17%)` | :arrow_down: |\n| ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/7244/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7244?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7244?src=pr&el=footer). Last update [83dba10...ed59733](https://codecov.io/gh/huggingface/transformers/pull/7244?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Tell me if you want me to merge!", "yes, please" ]
1,600
1,600
1,600
CONTRIBUTOR
null
- a tiny model card (this is used in testing only, so only a very brief doc) - a script that created that model
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7244/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7244/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7244", "html_url": "https://github.com/huggingface/transformers/pull/7244", "diff_url": "https://github.com/huggingface/transformers/pull/7244.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7244.patch", "merged_at": 1600540632000 }
https://api.github.com/repos/huggingface/transformers/issues/7243
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7243/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7243/comments
https://api.github.com/repos/huggingface/transformers/issues/7243/events
https://github.com/huggingface/transformers/pull/7243
704,663,054
MDExOlB1bGxSZXF1ZXN0NDg5NTQ1MTA2
7,243
Enable pegasus fp16 by clamping large activations
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nAny chance of implementing the same for the \"pegasus-cnn_dailymail\" (distilled) model?\r\n\r\nRegards,\r\nKarthik", "@karthikgali this PR implements for all models that inherit from Bart. This includes pretty much every `sshleifer/` checkpoint and every `pegasus/` checkpoint.\r\n\r\nPegasus variants with 16 encoder layers still do not work well in fp16, but I will push a 12 layer distilled version soon that does!" ]
1,600
1,601
1,601
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> + If encoder layer is about to return inf, clamp to a float **near** the boundary of fp16 range + This improves rouge for distil-pegasus-xsum-12-12 in fp16 by 0.1 ROUGE 2 + I cannot think of how this change could make things worse. + Other models' metrics are unaffected!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7243/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7243/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7243", "html_url": "https://github.com/huggingface/transformers/pull/7243", "diff_url": "https://github.com/huggingface/transformers/pull/7243.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7243.patch", "merged_at": 1601542117000 }
https://api.github.com/repos/huggingface/transformers/issues/7242
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7242/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7242/comments
https://api.github.com/repos/huggingface/transformers/issues/7242/events
https://github.com/huggingface/transformers/pull/7242
704,613,852
MDExOlB1bGxSZXF1ZXN0NDg5NTA3MTY1
7,242
[s2s] distributed_eval.py saves better speed info
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,600
1,600
1,600
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7242/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7242/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7242", "html_url": "https://github.com/huggingface/transformers/pull/7242", "diff_url": "https://github.com/huggingface/transformers/pull/7242.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7242.patch", "merged_at": 1600458361000 }
https://api.github.com/repos/huggingface/transformers/issues/7241
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7241/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7241/comments
https://api.github.com/repos/huggingface/transformers/issues/7241/events
https://github.com/huggingface/transformers/issues/7241
704,569,220
MDU6SXNzdWU3MDQ1NjkyMjA=
7,241
KeyError: 'squeezebert' which from model zoo
{ "login": "ORlGlN", "id": 35373089, "node_id": "MDQ6VXNlcjM1MzczMDg5", "avatar_url": "https://avatars.githubusercontent.com/u/35373089?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ORlGlN", "html_url": "https://github.com/ORlGlN", "followers_url": "https://api.github.com/users/ORlGlN/followers", "following_url": "https://api.github.com/users/ORlGlN/following{/other_user}", "gists_url": "https://api.github.com/users/ORlGlN/gists{/gist_id}", "starred_url": "https://api.github.com/users/ORlGlN/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ORlGlN/subscriptions", "organizations_url": "https://api.github.com/users/ORlGlN/orgs", "repos_url": "https://api.github.com/users/ORlGlN/repos", "events_url": "https://api.github.com/users/ORlGlN/events{/privacy}", "received_events_url": "https://api.github.com/users/ORlGlN/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! We're still in the process of merging squeezebert. You can see the development [here](https://github.com/huggingface/transformers/pull/7083).", "Maybe try again now? SqueezeBERT support was recently merged into the master branch." ]
1,600
1,602
1,600
NONE
null
This error occurs when using the pretrained model from https://huggingface.co/squeezebert/squeezebert-mnli
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7241/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7241/timeline
completed
null
null
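The `KeyError: 'squeezebert'` in the issue above comes from the `AutoConfig` machinery: the `model_type` read from the checkpoint's `config.json` is looked up in a mapping, and an installed version that predates the model has no entry for it. A simplified illustration (the mapping contents and function name here are illustrative, not the library's actual code):

```python
# illustrative subset of an AutoConfig-style model_type registry
CONFIG_MAPPING = {"bert": "BertConfig", "roberta": "RobertaConfig"}

def resolve_config(model_type):
    """Look up a config class by model_type; an unknown type raises
    KeyError, which is what happens when the installed transformers
    version predates the model (the fix is to upgrade)."""
    try:
        return CONFIG_MAPPING[model_type]
    except KeyError:
        raise KeyError(
            f"{model_type!r} is not a recognized model_type; "
            "upgrade transformers to a version that includes it"
        )
```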
https://api.github.com/repos/huggingface/transformers/issues/7240
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7240/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7240/comments
https://api.github.com/repos/huggingface/transformers/issues/7240/events
https://github.com/huggingface/transformers/pull/7240
704,514,831
MDExOlB1bGxSZXF1ZXN0NDg5NDI2NzM2
7,240
Add title to model card
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7240?src=pr&el=h1) Report\n> Merging [#7240](https://codecov.io/gh/huggingface/transformers/pull/7240?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9397436ea57aed8b24cbe72143422358ca98d236?el=desc) will **decrease** coverage by `2.78%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7240/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7240?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7240 +/- ##\n==========================================\n- Coverage 81.54% 78.76% -2.79% \n==========================================\n Files 172 172 \n Lines 33089 33089 \n==========================================\n- Hits 26984 26061 -923 \n- Misses 6105 7028 +923 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7240?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7240/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7240/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7240/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `19.02% <0.00%> (-74.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7240/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7240/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `71.60% <0.00%> (-20.44%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7240/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-8.71%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7240/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.83% <0.00%> (-0.36%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7240/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7240/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.28% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7240/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.08% <0.00%> (+0.24%)` | :arrow_up: |\n| ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/7240/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7240?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7240?src=pr&el=footer). Last update [9397436...b858e84](https://codecov.io/gh/huggingface/transformers/pull/7240?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7240/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7240/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7240", "html_url": "https://github.com/huggingface/transformers/pull/7240", "diff_url": "https://github.com/huggingface/transformers/pull/7240.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7240.patch", "merged_at": 1600495845000 }
https://api.github.com/repos/huggingface/transformers/issues/7239
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7239/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7239/comments
https://api.github.com/repos/huggingface/transformers/issues/7239/events
https://github.com/huggingface/transformers/pull/7239
704,513,572
MDExOlB1bGxSZXF1ZXN0NDg5NDI1Njc2
7,239
Create README.md
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7239?src=pr&el=h1) Report\n> Merging [#7239](https://codecov.io/gh/huggingface/transformers/pull/7239?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9397436ea57aed8b24cbe72143422358ca98d236?el=desc) will **decrease** coverage by `2.77%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7239/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7239?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7239 +/- ##\n==========================================\n- Coverage 81.54% 78.77% -2.78% \n==========================================\n Files 172 172 \n Lines 33089 33089 \n==========================================\n- Hits 26984 26067 -917 \n- Misses 6105 7022 +917 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7239?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7239/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7239/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7239/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `19.02% <0.00%> (-74.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7239/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7239/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `71.60% <0.00%> (-20.44%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7239/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-8.71%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7239/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7239/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.28% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7239/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.08% <0.00%> (+0.24%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7239/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.23% <0.00%> (+0.35%)` | :arrow_up: |\n| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/7239/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7239?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7239?src=pr&el=footer). Last update [9397436...904f3d3](https://codecov.io/gh/huggingface/transformers/pull/7239?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7239/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7239/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7239", "html_url": "https://github.com/huggingface/transformers/pull/7239", "diff_url": "https://github.com/huggingface/transformers/pull/7239.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7239.patch", "merged_at": 1600495769000 }
https://api.github.com/repos/huggingface/transformers/issues/7238
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7238/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7238/comments
https://api.github.com/repos/huggingface/transformers/issues/7238/events
https://github.com/huggingface/transformers/pull/7238
704,479,935
MDExOlB1bGxSZXF1ZXN0NDg5Mzk3OTA0
7,238
Update the TF models to remove their interdependencies
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I know this PR is in a draft state, but we could (and should!) leverage @sgugger's [script](https://github.com/huggingface/transformers/pull/7219) to ensure that the copy/pasted code does not diverge once it's merged.", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7238?src=pr&el=h1) Report\n> Merging [#7238](https://codecov.io/gh/huggingface/transformers/pull/7238?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f5518e56318a79056ba3c80cbece29d9fe98558c?el=desc) will **increase** coverage by `1.28%`.\n> The diff coverage is `94.17%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7238/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7238?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7238 +/- ##\n==========================================\n+ Coverage 79.30% 80.59% +1.28% \n==========================================\n Files 181 181 \n Lines 34828 35437 +609 \n==========================================\n+ Hits 27620 28559 +939 \n+ Misses 7208 6878 -330 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7238?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7238/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `58.52% <25.00%> (-34.71%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7238/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.93% <90.90%> (-0.14%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7238/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `89.67% <91.57%> (+65.13%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7238/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `97.81% <93.57%> (+72.48%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7238/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `95.32% <97.76%> (+2.07%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7238/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `98.58% <98.36%> (-0.09%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7238/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `97.34% <100.00%> (+6.41%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7238/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.90% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/7238/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `99.28% <100.00%> (+<0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7238/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.03% <0.00%> (-73.03%)` | :arrow_down: |\n| ... and [27 more](https://codecov.io/gh/huggingface/transformers/pull/7238/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7238?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7238?src=pr&el=footer). Last update [f5518e5...5b6ad05](https://codecov.io/gh/huggingface/transformers/pull/7238?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "@LysandreJik @sgugger @thomwolf This PR is ok to be reviewed, this one was quite tricky... I have tested with as much models as I could (my ADSL connection is still burning 😄 ) and I don't get any issue with the last commit. Please let me know if this properly follow the philosophy of the lib.\r\n\r\nI have also fixed few tiny bugs in Flaubert and XLM.\r\n\r\nAs far as I understand @sgugger's script takes care only of the PyTorch implementation. @sgugger How much your script can be applied to TF and all the models? (Not only Bert to Roberta).", "I believe @sgugger's script can be applied to all python files, with no limitation on PyTorch. It was just merged to `master`!", "Can't run it:\r\n```\r\nTraceback (most recent call last):\r\n File \"utils/check_copies.py\", line 181, in <module>\r\n check_copies(args.fix_and_overwrite)\r\n File \"utils/check_copies.py\", line 164, in check_copies\r\n consistent = is_copy_consistent(filename, overwrite)\r\n File \"utils/check_copies.py\", line 98, in is_copy_consistent\r\n lines = f.readlines()\r\n File \"C:\\Users\\snake\\miniconda3\\envs\\transformers\\lib\\encodings\\cp1252.py\", line 23, in decode\r\n return codecs.charmap_decode(input,self.errors,decoding_table)[0]\r\nUnicodeDecodeError: 'charmap' codec can't decode byte 0x8d in position 7178: character maps to <undefined>\r\n```", "That's weird, it runs fine on my laptop, desktop and the CI. What's your env?\r\n\r\n**Edit:** Oh Windows. But how do you run the make commands on Windows?", "I can run make with http://gnuwin32.sourceforge.net/packages/make.htm\r\n\r\nI can run it under WSL if it is an OS issue.", "I'll add `encoding=\"utf-8\"` a bit everywhere and see if it fixes the issue for Windows.", "It works under WSL.\r\n\r\nWhen I run `python utils/check_copies.py` I get no output. Does it means everything is ok?", "You would get an error if there was a problem (since the script is supposed to run under `make quality`).", "Nice then LGTM :)", "Flaubert and DistilBert was failing the last test I have added today on the training pipeline. It is now fixed :)", "Great, thanks for fixing them! The changes look good.\r\nSylvain's script patch for Windows was merged yesterday, do you think you could apply the comments where they apply? This way `make quality` checks that these will not diverge.\r\n\r\nYou can see the way it works on the original PR: https://github.com/huggingface/transformers/pull/7219", "Will do my best :)", "WHOA this script is so dangerous, I have tested it on Albert with `TFAlbertAttention`, and it updates the `__init__` with a strict copy of `TFBertSelfAttention` which is wrong, there are slight updates that have been removed. I also had an indent issue but this is something else.\r\n\r\nIs-it only for **STRICT** copies?", "Yes, it's only for strict copies. If your copy is different (other than the naming that changes), I don't think the script can be applied to a module-level. \r\n\r\nYou can apply it to select methods if these methods are identical to the initial implementation. By the way, you can use the script without the `--fix_and_overwrite` argument and it won't replace anything, just let you know if some copies diverge.", "> Yes, it's only for strict copies. If your copy is different (other than the naming that changes), I don't think the script can be applied to a module-level.\r\n\r\nOK, fortunately there are not much strict copies in what I have changed so it should be easier than I thought :)\r\n\r\n>You can apply it to select methods if these methods are identical to the initial implementation. By the way, you can use the script without the --fix_and_overwrite argument and it won't replace anything, just let you know if some copies diverge.\r\n\r\nI know, it was just for testing to know what was the error raised, because no diff is displayed in order to know what diverge from the original version, might be a useful new feature.", "Ok! I think we are finally good :)" ]
1,600
1,600
1,600
CONTRIBUTOR
null
This PR removes all the interdependencies across the TF models: every `.modeling_tf_X import TFXX`, where `X` is the name of a model and `XX` is the name of a layer. The dependent layers have been rewritten directly in the corresponding model file so that these imports are no longer needed. The only remaining imports are cases of strict reuse without modification, such as `TFCamembert` and `TFXLMRoberta`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7238/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7238/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7238", "html_url": "https://github.com/huggingface/transformers/pull/7238", "diff_url": "https://github.com/huggingface/transformers/pull/7238.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7238.patch", "merged_at": 1600950660000 }
https://api.github.com/repos/huggingface/transformers/issues/7237
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7237/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7237/comments
https://api.github.com/repos/huggingface/transformers/issues/7237/events
https://github.com/huggingface/transformers/pull/7237
704,457,507
MDExOlB1bGxSZXF1ZXN0NDg5Mzc5NTUz
7,237
[wip] summarization dataset downloader
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,600
1,602
1,602
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7237/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7237/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7237", "html_url": "https://github.com/huggingface/transformers/pull/7237", "diff_url": "https://github.com/huggingface/transformers/pull/7237.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7237.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7236
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7236/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7236/comments
https://api.github.com/repos/huggingface/transformers/issues/7236/events
https://github.com/huggingface/transformers/pull/7236
704,421,315
MDExOlB1bGxSZXF1ZXN0NDg5MzQ5NTQ3
7,236
is_pretokenized -> is_split_into_words
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think that’s a better name too!", "@n1t0 We could also fix the name (with deprecation warnings) in tokenizers before the next release? Will then need three replacements in `tokenization_utils_fast.py`.", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7236?src=pr&el=h1) Report\n> Merging [#7236](https://codecov.io/gh/huggingface/transformers/pull/7236?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3a03bab9db342ff0e7104880326c68cc83bda010?el=desc) will **increase** coverage by `1.36%`.\n> The diff coverage is `57.77%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7236/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7236?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7236 +/- ##\n==========================================\n+ Coverage 77.32% 78.68% +1.36% \n==========================================\n Files 172 172 \n Lines 33089 33120 +31 \n==========================================\n+ Hits 25585 26061 +476 \n+ Misses 7504 7059 -445 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7236?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <ø> (ø)` | |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/7236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `91.83% <42.85%> (-2.45%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `88.01% <56.25%> (-1.87%)` | :arrow_down: |\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `93.50% <62.50%> (-3.72%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `96.10% <66.66%> (-2.53%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `19.02% <0.00%> (-74.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `81.70% <0.00%> (-4.77%)` | :arrow_down: |\n| ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/7236/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7236?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7236?src=pr&el=footer). Last update [3a03bab...489bb9a](https://codecov.io/gh/huggingface/transformers/pull/7236?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
COLLABORATOR
null
As discussed offline, replacing `is_pretokenized` in `is_split_into_words` because the users were confused by what the argument represents. Added deprecation warnings in the functions where the code ends, can add more if you feel I have forgotten some!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7236/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 1, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7236/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7236", "html_url": "https://github.com/huggingface/transformers/pull/7236", "diff_url": "https://github.com/huggingface/transformers/pull/7236.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7236.patch", "merged_at": 1600781676000 }
https://api.github.com/repos/huggingface/transformers/issues/7235
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7235/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7235/comments
https://api.github.com/repos/huggingface/transformers/issues/7235/events
https://github.com/huggingface/transformers/pull/7235
704,350,251
MDExOlB1bGxSZXF1ZXN0NDg5MjkyOTE0
7,235
[Bug Fix] The actual batch_size is inconsistent with the settings.
{ "login": "mojave-pku", "id": 26648528, "node_id": "MDQ6VXNlcjI2NjQ4NTI4", "avatar_url": "https://avatars.githubusercontent.com/u/26648528?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mojave-pku", "html_url": "https://github.com/mojave-pku", "followers_url": "https://api.github.com/users/mojave-pku/followers", "following_url": "https://api.github.com/users/mojave-pku/following{/other_user}", "gists_url": "https://api.github.com/users/mojave-pku/gists{/gist_id}", "starred_url": "https://api.github.com/users/mojave-pku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mojave-pku/subscriptions", "organizations_url": "https://api.github.com/users/mojave-pku/orgs", "repos_url": "https://api.github.com/users/mojave-pku/repos", "events_url": "https://api.github.com/users/mojave-pku/events{/privacy}", "received_events_url": "https://api.github.com/users/mojave-pku/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sgugger Hi, I'm a little confused when I reformat the codes by `make style` and `make quality`.\r\nIt seems that the files in those directories are not tested by isort. \r\n```\r\nmake style\r\nblack --line-length 119 --target-version py35 examples templates tests src utils\r\nAll done! ✨ 🍰 ✨\r\n417 files left unchanged.\r\nisort examples templates tests src utils\r\nWARNING: Unable to parse file examples due to [Errno 21] Is a directory: '/Users/hlz/my-bert-nsp/examples'\r\nWARNING: Unable to parse file templates due to [Errno 21] Is a directory: '/Users/hlz/my-bert-nsp/templates'\r\nWARNING: Unable to parse file tests due to [Errno 21] Is a directory: '/Users/hlz/my-bert-nsp/tests'\r\nWARNING: Unable to parse file src due to [Errno 21] Is a directory: '/Users/hlz/my-bert-nsp/src'\r\nWARNING: Unable to parse file utils due to [Errno 21] Is a directory: '/Users/hlz/my-bert-nsp/utils'\r\n```\r\nWhen I add the parameter `-rc`, it works.\r\n```\r\nmake style\r\nblack --line-length 119 --target-version py35 examples templates tests src utils\r\nAll done! ✨ 🍰 ✨\r\n417 files left unchanged.\r\nisort -rc examples templates tests src utils\r\n```\r\n\r\nAnd When I use `make quality`, it reports:\r\n```\r\nmake quality\r\nblack --check --line-length 119 --target-version py35 examples templates tests src utils\r\nAll done! 
✨ 🍰 ✨\r\n417 files would be left unchanged.\r\nisort --check-only -rc examples templates tests src utils\r\nflake8 examples templates tests src utils\r\nexamples/seq2seq/test_datasets.py:25:75: E231 missing whitespace after ','\r\nexamples/seq2seq/test_fsmt_bleu_score.py:48:76: E231 missing whitespace after ','\r\ntests/test_modeling_fsmt.py:411:52: E231 missing whitespace after ','\r\nsrc/transformers/modeling_lxmert.py:229:110: E231 missing whitespace after ','\r\nsrc/transformers/modeling_tf_lxmert.py:1129:94: E231 missing whitespace after ','\r\nmake: *** [quality] Error 1\r\n```\r\nThose errors are not from the files I modify this time, but the CircleCI report the file `language_modeling.py` need to be reformatted.\r\n\r\n", "Hi @HuangLianzhe, it seems you have the wrong black/isort versions. The error shown on the CI is a different one. Which versions are you running on?", "> Hi @HuangLianzhe, it seems you have the wrong black/isort versions. The error shown on the CI is a different one. Which versions are you running on?\r\n\r\nisort, version 4.3.21\r\nblack, version 19.10b0", "These are not the correct versions. Please run `pip install -e \".[quality]\"` when in your `transformers` directory. You should have `[\"black >= 20.8b1\", \"isort >= 5\", \"flake8 >= 3.8.3\"]`", "Sorry, I forgot to update these libraries when I changed the work directory. 
Thanks for the hint!", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7235?src=pr&el=h1) Report\n> Merging [#7235](https://codecov.io/gh/huggingface/transformers/pull/7235?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/244e1b5ba331cb4c1ed96d88d0895c252567f7f3?el=desc) will **decrease** coverage by `0.32%`.\n> The diff coverage is `97.53%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7235/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7235?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7235 +/- ##\n==========================================\n- Coverage 78.81% 78.48% -0.33% \n==========================================\n Files 174 172 -2 \n Lines 33670 33079 -591 \n==========================================\n- Hits 26537 25963 -574 \n+ Misses 7133 7116 -17 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7235?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/7235/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `93.63% <96.49%> (+0.69%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7235/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.64% <100.00%> (-0.63%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7235/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |\n| [src/transformers/activations\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7235/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% <0.00%> (-20.84%)` | 
:arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7235/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/7235/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mc210LnB5) | `93.58% <0.00%> (-0.39%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7235/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `91.93% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7235/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.27% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7235/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.91% <0.00%> (-0.14%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7235/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.23% <0.00%> (-0.10%)` | :arrow_down: |\n| ... and [11 more](https://codecov.io/gh/huggingface/transformers/pull/7235/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7235?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7235?src=pr&el=footer). Last update [01f0fd0...e00c4bb](https://codecov.io/gh/huggingface/transformers/pull/7235?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks @sgugger for your careful revision!", "Can you just take care of the merge conflicts? Then we should be good to merge.", "Test failure has been fixed on master, so should be safe to merge. Thanks again!", "Thanks! I am just wondering why the test did not get passed. :)" ]
1,600
1,600
1,600
CONTRIBUTOR
null
In the previous version, the generation of **negative examples** was placed in `Datacollator`, which would cause `batch_size` to be **inconsistent** with the setting during training, resulting in **OOM errors**. Now I move the negative sample generation process to `TextDataset`, although `TextDataset` will need larger storage space and the reading procedure is more time-consuming, the training will not be interrupted due to OOM errors. In fact, in my own project, I have used the `Datasets` library you developed, which is very impressive, especially for scenarios with large data scales such as pre-training tasks. I am not sure if it's welcomed to use the `Datasets` library by default in `TextDatasetForNextSentencePrediction`. I can provide a version that depends on the the library, and try to use the library when it is available.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7235/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7235/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7235", "html_url": "https://github.com/huggingface/transformers/pull/7235", "diff_url": "https://github.com/huggingface/transformers/pull/7235.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7235.patch", "merged_at": 1600792282000 }
https://api.github.com/repos/huggingface/transformers/issues/7234
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7234/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7234/comments
https://api.github.com/repos/huggingface/transformers/issues/7234/events
https://github.com/huggingface/transformers/issues/7234
704,207,227
MDU6SXNzdWU3MDQyMDcyMjc=
7,234
TypeError: 'ByteLevelBPETokenizer' object is not callable
{ "login": "yadavpp", "id": 68846939, "node_id": "MDQ6VXNlcjY4ODQ2OTM5", "avatar_url": "https://avatars.githubusercontent.com/u/68846939?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yadavpp", "html_url": "https://github.com/yadavpp", "followers_url": "https://api.github.com/users/yadavpp/followers", "following_url": "https://api.github.com/users/yadavpp/following{/other_user}", "gists_url": "https://api.github.com/users/yadavpp/gists{/gist_id}", "starred_url": "https://api.github.com/users/yadavpp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yadavpp/subscriptions", "organizations_url": "https://api.github.com/users/yadavpp/orgs", "repos_url": "https://api.github.com/users/yadavpp/repos", "events_url": "https://api.github.com/users/yadavpp/events{/privacy}", "received_events_url": "https://api.github.com/users/yadavpp/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I have the same problem following the same example. Is it possible to adapt the LineByLineTextDataset class?", "Same issue here", "Hi, could you provide your software versions? `transformers` and `tokenizers`?", " print(transformers.__version__)\r\n 3.3.1\r\n\r\n print(tokenizers.__version__)\r\n 0.9.0.rc1", "`transformers==3.3.1` has a strict dependency on `tokenizers==0.8.1.rc2`. Using that version on the [colab for that scrip](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb)t I don't get that error.\r\n\r\nDo you get the same error when running the colab?", "![image](https://user-images.githubusercontent.com/12830451/94751724-0f087400-033e-11eb-864a-6d9ce2672499.png)\r\nI have the right versions but I still couldn't get it running @LysandreJik \r\nI am trying to use a BERT model. Hence I used the BPE tokenizer as is, instead of running a RobertaTokenizerFast command. \r\n\r\nAlternatively if I use BertTokenizerFast, I get an error saying 'sep_token' is missing. How can I successfully change this code for a Bert model instead of Roberta? \r\n\r\nWhat am I missing here? 
Thanks for helping out.", "they same for me, error persists with\r\n\r\n print(transformers.__version__)\r\n 3.3.1\r\n\r\n print(tokenizers.__version__)\r\n 0.8.1.rc2\r\n ", "Same thing here with `transformers` `3.3.1` and `tokenizers` `0.8.1.rc2`.", "I have the same issue with transformers 3.4.0 and tokenizers 0.9.2\r\n", "Same issue with \r\n\r\n```\r\ntokenizers==0.9.2\r\ntransformers==3.4.0\r\n```\r\n\r\nSelf-contained script to reproduce the error\r\n\r\n```\r\nfile_path = \"test.txt\"\r\nwith open(file_path, \"w\") as f:\r\n lorem_ipsum = \"Lorem ipsum dolor sit amet, consectetur adipiscing elit.\\n \" \\\r\n \"Pellentesque ultrices scelerisque sem, lobortis laoreet nisi semper eget.\\n \" \\\r\n \"Curabitur egestas hendrerit neque, et rhoncus enim vulputate blandit.\\n Nunc efficitur \" \\\r\n \"posuere neque id ornare.\\n Sed viverra nisi nec pulvinar accumsan. Nulla faucibus arcu \" \\\r\n \"nisl, non bibendum libero congue eu.\\n Mauris eget dignissim arcu, sed porttitor nunc. 
\" \\\r\n \"Vivamus venenatis nisl ac leo maximus, in aliquam risus auctor.\\n \"\r\n f.write(lorem_ipsum)\r\nfrom tokenizers import ByteLevelBPETokenizer\r\ntokenizer = ByteLevelBPETokenizer()\r\ntokenizer.train(files=file_path, vocab_size=20, min_frequency=2, special_tokens=[\r\n \"<s>\",\r\n \"<pad>\",\r\n \"</s>\",\r\n \"<unk>\",\r\n \"<mask>\",\r\n])\r\nfrom datasets import load_dataset\r\ndatasets = load_dataset('text', data_files=file_path)\r\ndatasets = datasets.map(lambda e: tokenizer(e['text']))\r\n```", "@n1t0 do you have some advice here?", "Hmm that right, we have not yet incorporated a way to load a custom tokenizer from `tokenizers` in `transformers`.\r\n\r\nI'll work on this in the coming days/weeks when @n1t0 has some time for #8073.\r\n\r\nI guess the simplest way to use a custom tokenizer from the `tokenizers` library in `transformers` would be to add a new `CustomTokenizer` specific class with some doc.", "Another way would be to have a `__call__` method identical to the one of `transformers` in `tokenizers` but here @n1t0 is the master, not me.", "I think the only viable way to really support the tokenizers from `tokenizers`, is to wrap them in what is expected throughout `transformers`: a `PreTrainedTokenizerBase`. \r\nI used to be able to advise some way to do it (cf [here](https://github.com/huggingface/tokenizers/issues/424#issuecomment-697974244)) but this recently changed, so it doesn't seem possible anymore without changing the private `_tokenizer`.\r\n\r\nWe could also add a `__call__` to `tokenizers`, and we probably will at some point, but that wouldn't fix this problem. 
Also, I think it's important to note that even with a `__call__` method, the input/outputs would most probably still be different and it couldn't be used as a drop-in replacement.", "It should be possible to do something like this for now:\n```python\n# Save the tokenizer you trained\ntokenizer.save(\"byte-level-BPE.tokenizer.json\")\n\n# Load it using transformers\ntokenizer = PreTrainedTokenizerFast(tokenizer_file=\"byte-level-BPE.tokenizer.json\")\n```\nAnd then you should be able to use it with the `LineByLineTextDataset`", "> It should be possible to do something like this for now:\r\n> \r\n> ```python\r\n> # Save the tokenizer you trained\r\n> tokenizer.save(\"byte-level-BPE.tokenizer.json\")\r\n> \r\n> # Load it using transformers\r\n> tokenizer = PreTrainedTokenizerFast(tokenizer_file=\"byte-level-BPE.tokenizer.json\")\r\n> ```\r\n> \r\n> And then you should be able to use it with the `LineByLineTextDataset`\r\n\r\nYes, it works! There is no error when I use the TextDataSet or LineByLineTextDataSet.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "When I do this \r\n\r\n#Save the tokenizer you trained\r\ntokenizer.save(\"byte-level-BPE.tokenizer.json\")\r\n\r\n#Load it using transformers\r\ntokenizer = PreTrainedTokenizerFast(tokenizer_file=\"byte-level-BPE.tokenizer.json\")\r\n\r\nmy tokenizer loads as a list and hence I cannot apply it to my data. \r\nDoes anybody know what could be the issue? " ]
1,600
1,636
1,610
NONE
null
I was trying to train a language model by following this link https://huggingface.co/blog/how-to-train , tokenizer trained successfully but when I'm loading calling this trained tokenizer in LineByLineTextDataset, it showing below error ![image](https://user-images.githubusercontent.com/68846939/93577895-3fedbe00-f9ba-11ea-8de7-70e542744dd6.png)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7234/reactions", "total_count": 5, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 1, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7234/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7233
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7233/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7233/comments
https://api.github.com/repos/huggingface/transformers/issues/7233/events
https://github.com/huggingface/transformers/pull/7233
704,205,194
MDExOlB1bGxSZXF1ZXN0NDg5MTcxNDA4
7,233
Fixed typo in README
{ "login": "sunnyville01", "id": 33743210, "node_id": "MDQ6VXNlcjMzNzQzMjEw", "avatar_url": "https://avatars.githubusercontent.com/u/33743210?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sunnyville01", "html_url": "https://github.com/sunnyville01", "followers_url": "https://api.github.com/users/sunnyville01/followers", "following_url": "https://api.github.com/users/sunnyville01/following{/other_user}", "gists_url": "https://api.github.com/users/sunnyville01/gists{/gist_id}", "starred_url": "https://api.github.com/users/sunnyville01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sunnyville01/subscriptions", "organizations_url": "https://api.github.com/users/sunnyville01/orgs", "repos_url": "https://api.github.com/users/sunnyville01/repos", "events_url": "https://api.github.com/users/sunnyville01/events{/privacy}", "received_events_url": "https://api.github.com/users/sunnyville01/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,600
1,600
1,600
CONTRIBUTOR
null
Found a small typo in README.md and fixed it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7233/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7233/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7233", "html_url": "https://github.com/huggingface/transformers/pull/7233", "diff_url": "https://github.com/huggingface/transformers/pull/7233.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7233.patch", "merged_at": 1600419164000 }
https://api.github.com/repos/huggingface/transformers/issues/7232
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7232/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7232/comments
https://api.github.com/repos/huggingface/transformers/issues/7232/events
https://github.com/huggingface/transformers/issues/7232
704,150,449
MDU6SXNzdWU3MDQxNTA0NDk=
7,232
trainer.evaluate() aggregates predictions on GPU and causes CUDA out of memory issues for large datasets
{ "login": "eugeneware", "id": 38154, "node_id": "MDQ6VXNlcjM4MTU0", "avatar_url": "https://avatars.githubusercontent.com/u/38154?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eugeneware", "html_url": "https://github.com/eugeneware", "followers_url": "https://api.github.com/users/eugeneware/followers", "following_url": "https://api.github.com/users/eugeneware/following{/other_user}", "gists_url": "https://api.github.com/users/eugeneware/gists{/gist_id}", "starred_url": "https://api.github.com/users/eugeneware/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eugeneware/subscriptions", "organizations_url": "https://api.github.com/users/eugeneware/orgs", "repos_url": "https://api.github.com/users/eugeneware/repos", "events_url": "https://api.github.com/users/eugeneware/events{/privacy}", "received_events_url": "https://api.github.com/users/eugeneware/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false } ]
[ "Yes that is a known issue. The workaround you suggest does not work for distributed training, as the tensors need to be kept on GPU/TPU to be reduced together. Our plans to address this problem are to use the same kind of code as the metrics in `datasets`, but this is a bit of work and will take us a few weeks to fix.\r\n\r\nIn the meantime, I suggest you use the `predict` method and feed your dataset in slices there to avoid the OOM (then concatenate them on the CPU before running your metrics) or if you don't care about distributed eval, you can just subclass `Trainer` and replace the problematic code you identified by your own.", "@eugeneware for context, what kind of task are you training on, and what's the approximate size/dimensions of the aggregated predictions tensor?", "@sgugger thanks for the update and getting back to me. I'm currently running just a single GPU for this task, so the subclassing will work OK at the moment. Given your work in the fast.ai trainer, I'm excited to see where the `Learner` goes! 
:-)\r\n\r\n---\r\n\r\n@julien-c - I'm training a classifier with a large number of output classes (9997 classes).\r\n\r\nI have 469530 items in my validation dataset.\r\n\r\nSo, by math, it would need to accumulate 469530 * 9997 * 4 = 17.5 GB of VRAM at fp32 or 8.8GB at fp16?\r\n\r\nI'm training on a Titan RTX with 24GB of RAM, and I've found with the `bert-base-uncased` model that evaluation seems to die a bit over 1/2 way through training.\r\n\r\n---\r\n\r\nFor others who might stumble across this issue, and are not using distributed training, here's my subclass of the `Trainer` class - I just changed the last line to return the tensors moved onto the `eval_device` argument which I'm defaulting to the cpu:\r\n\r\n``` py\r\nclass CustomTrainer(Trainer):\r\n def __init__(self, *args, eval_device=torch.device('cpu'), **kwargs):\r\n super().__init__(*args, **kwargs)\r\n self.eval_device = eval_device\r\n\r\n def prediction_step(\r\n self, model: nn.Module, inputs: Dict[str, Union[torch.Tensor, Any]], prediction_loss_only: bool\r\n ) -> Tuple[Optional[float], Optional[torch.Tensor], Optional[torch.Tensor]]:\r\n has_labels = any(inputs.get(k) is not None for k in [\"labels\", \"lm_labels\", \"masked_lm_labels\"])\r\n\r\n inputs = self._prepare_inputs(inputs)\r\n\r\n with torch.no_grad():\r\n outputs = model(**inputs)\r\n if has_labels:\r\n loss, logits = outputs[:2]\r\n loss = loss.mean().item()\r\n else:\r\n loss = None\r\n logits = outputs[0]\r\n if self.args.past_index >= 0:\r\n self._past = outputs[self.args.past_index if has_labels else self.args.past_index - 1]\r\n\r\n if prediction_loss_only:\r\n return (loss, None, None)\r\n\r\n labels = inputs.get(\"labels\")\r\n if labels is not None:\r\n labels = labels.detach()\r\n # move tensors to evaluation device\r\n ret = (loss, logits.detach().to(self.eval_device), labels.to(self.eval_device))\r\n return ret\r\n```", "That *is* a large number of output classes.\r\n\r\nI hope you post it to huggingface.co after 
training so that it breaks my inference widget 😅\r\nhttps://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal", "Thanks for creating this issue @eugeneware\r\nI just hit this issue too, but in a distributed setting.\r\n\r\nAs a quick fix, I thought about calling `distributed_concat` after every batch and keep `preds`/`label_ids` in cpu. My env has much larger CPU RAM compared to GPU RAM. \r\n\r\n@sgugger Could you point to the corresponding part in datasets? Curious about this approach too.\r\n> Our plans to address this problem are to use the same kind of code as the metrics in datasets, but this is a bit of work and will take us a few weeks to fix.\r\n\r\n\r\n\r\n", "There is a workaround (chunking your dataset in smaller parts) which is why we didn't implement a quick fix.\r\nI'll start working on the right fix next week and link the PR here once it's ready @usuyama, so you can see the corresponding part in datasets (I have not yet investigated the exact part in datasets so can't answer your question right now, I've just been told it should be possible ;-) )", "In case this solution is not suitable for someone like me, I had the same problem: `out of memory` error using cuda. But for my dataset, I could stay on the CPU for training. So I wrote my own training function and converted the dataset to a pytorch dataloader and it's done. Then I can totally control the processor 😃.", "Is this problem solved?", "I am still facing the error, Is there some code which would not get this error if executed?" ]
1,600
1,675
1,602
NONE
null
## Environment info - `transformers` version: 3.1.0 - Platform: Linux-4.15.0-112-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help Trainer: @sgugger ## Information Model I am using (Bert, XLNet ...): Bert The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Use the Trainer for evaluation (`.evaluate()`, `.predict()`) on the GPU with BERT with a large evaluation DataSet where the size of the returned prediction Tensors + Model exceed GPU RAM. (In my case I had an evaluation dataset of 469,530 sentences). 2. Trainer will crash with a CUDA Memory Exception ## Expected behavior - I would expect the predictions in `predict()` or `evaluate()` from each step would be moved to the CPU device (off the GPU) and then concatenated later. However, the tensors are concatenated while still be on the GPU device, and only converted to CPU numpy arrays after the whole dataset has been predicted/evaluated. This means that for large evaluation datasets you'll run out of CUDA memory. It also makes it difficult to pick the batch size to optimize the batch size for the GPU, as you need to allow space for not only the model and inputs, but also all the predictions, which can add up when dealing with large evaluation datasets. - ie. 
The problem in the trainer code is that the predictions stay on the GPU [here](https://github.com/huggingface/transformers/blob/67d9fc50d917c63cf67281106214e1d9ae018dff/src/transformers/trainer.py#L1315) - These tensors get concatenated but stay on the GPU [here](https://github.com/huggingface/transformers/blob/67d9fc50d917c63cf67281106214e1d9ae018dff/src/transformers/trainer_utils.py#L143) - and then the predictions eventually end up on the CPU [here](https://github.com/huggingface/transformers/blob/67d9fc50d917c63cf67281106214e1d9ae018dff/src/transformers/trainer.py#L1343-L1347) What this means, is that for larger evaluation datasets, *all* the predictions stay on the GPU, which is a function of how long your evaluation dataset is, and that you often run out of GPU RAM. A work around is to something like this and run the loop yourself, and predict by batch: ``` py preds = [] for i in tqdm(range(0, len(ds_valid), step)): ds_valid = KeywordDataset(tokenizer, df_valid[i:i+step], targets) batch_preds = trainer.predict(ds_valid) preds.append(batch_preds) batch_accuracy = (batch_preds.predictions.argmax(-1) == batch_preds.label_ids).mean() np_preds = np.concatenate([pred.predictions for pred in preds], axis=0) np_label_ids = np.concatenate([pred.label_ids for pred in preds], axis=0) acc = (np_preds.argmax(-1) == np_label_ids).mean() print('eval_accuracy = ', acc) ``` It would be nice if it defaulted the cpu (so you don't have to worry about it), and that there is another trainer argument to give the device to aggregate predictions that can be used to override that behaviour if it's important for certain use cases. Thanks for the amazing work on an amazing library. Working with transformers has never been easier due to the hard work of your team!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7232/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7232/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7231
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7231/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7231/comments
https://api.github.com/repos/huggingface/transformers/issues/7231/events
https://github.com/huggingface/transformers/pull/7231
704,107,406
MDExOlB1bGxSZXF1ZXN0NDg5MDkxMDA2
7,231
[draft][s2s]reload-dl
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,600
1,601
1,601
CONTRIBUTOR
null
Placeholder: first test whether you can just send `--reload_dataloaders_every_epoch` from the command line!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7231/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7231/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7231", "html_url": "https://github.com/huggingface/transformers/pull/7231", "diff_url": "https://github.com/huggingface/transformers/pull/7231.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7231.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7230
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7230/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7230/comments
https://api.github.com/repos/huggingface/transformers/issues/7230/events
https://github.com/huggingface/transformers/issues/7230
704,105,402
MDU6SXNzdWU3MDQxMDU0MDI=
7,230
test fsmt finetuning
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 2357479466, "node_id": "MDU6TGFiZWwyMzU3NDc5NDY2", "url": "https://api.github.com/repos/huggingface/transformers/labels/fsmt", "name": "fsmt", "color": "d0e884", "default": false, "description": "" } ]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "I did a quick:\r\n\r\n```\r\ndef test_finetune():\r\n model = \"stas/tiny-wmt19-en-de\"\r\n task = \"translation\" \r\n args_d: dict = CHEAP_ARGS.copy()\r\n args_d[\"label_smoothing\"] = 0.1 if task == \"translation\" else 0\r\n [...]\r\n```\r\ngetting:\r\n```\r\n torch.nn.modules.module.ModuleAttributeError: 'FSMTModel' object has no attribute 'shared'\r\n/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:778: ModuleAttributeError\r\n```\r\n`fsmt` doesn't have `.shared` as most of the models aren't shared embeds and dicts. So it won't work as is. Will need to do some tweaking.\r\n\r\nbut please assign this to me, as I need to learn these parts.", "Done: https://github.com/huggingface/transformers/pull/7263" ]
1,600
1,600
1,600
CONTRIBUTOR
null
Would be interested to know whether this works with finetune.py. Just add your tiny guy to https://github.com/huggingface/transformers/blob/master/examples/seq2seq/test_seq2seq_examples.py#L378 and edit l382 to set task='translation'. If it doesn't pass, we can investigate, but I expect it to pass. Happy for either of us to take this one @stas00 (this is behind tiny fsmt in the queue, but writing it down so I don't forget.)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7230/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7230/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7229
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7229/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7229/comments
https://api.github.com/repos/huggingface/transformers/issues/7229/events
https://github.com/huggingface/transformers/issues/7229
704,095,809
MDU6SXNzdWU3MDQwOTU4MDk=
7,229
a possible hack for FSMT's SinusoidalPositionalEmbedding peculiarity
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "### Pegasus Strategy\r\n\r\n1) `SinusoidalPositionalEmbedding` should inherit from `nn.Embedding`. This will fix your `to` problem as the weight will be an actual parameter.\r\n2) put \r\n```\r\nauthorized_missing_keys = [\"model.encoder.embed_positions\", \"model.decoder.embed_positions\",]\r\n```\r\non the user facing class to avoid warnings.\r\n3) If somebody calls `save_pretrained` again, they will save the weights, but they are pretty small so who cares, and it won't error on `from_pretrained`. I also don't expect a lot of people to retrain FSMT at the moment.\r\n4) we wouldn't have these problems if we shared code ;)", "Thank you for your feedback, @sshleifer \r\n\r\n> SinusoidalPositionalEmbedding should inherit from nn.Embedding. This will fix your to problem as the weight will be an actual parameter.\r\n\r\nTrue. Another easy fix is to:\r\n\r\n```\r\n+ from torch.nn.parameter import Parameter\r\n- self.weights = SinusoidalPositionalEmbedding.get_embedding(init_size, embedding_dim, padding_idx)\r\n+ self.weights = Parameter(SinusoidalPositionalEmbedding.get_embedding(init_size, embedding_dim, padding_idx))\r\n```\r\nbut yes, re-making it into a `nn.Embedding` sub-class would be cleaner.\r\n\r\n> 3) If somebody calls save_pretrained again, they will save the weights, but they are pretty small so who cares, and it won't error on from_pretrained.\r\n\r\nI have just measured - It's 250MB (125MB each encoder/decoder). Far from being small.\r\n```\r\nstate_dict = model.state_dict()\r\ntorch.save(state_dict[\"model.encoder.embed_positions.weights\"], \"output\")\r\n```\r\n```\r\n-rw-rw-r-- 1 stas stas 123M Sep 17 22:47 output\r\n```\r\n\r\nI looked into overloading `save_pretrained`, but it works directly with model's state_dict, so it will have to be hacked to make a copy of state_dict, remove these weights and then forward to `super.save_pretrained`, save. 
\r\n\r\n> I also don't expect a lot of people to retrain FSMT at the moment.\r\n\r\nHa, if only it were so: https://github.com/huggingface/transformers/issues/7228 :) I guess one is not many, but it didn't take long. \r\n\r\n> 4) we wouldn't have these problems if we shared code ;)\r\n\r\nAgreed!", "wow bad math on my part. \nfeel free to do either solution in your most recent post.", "> wow bad math on my part.\r\n\r\nI had no clue it was so big.\r\n\r\nActually, that reinforces my guess at why they tried to hide this embedding from `state_dict`. It's an interesting concept, perhaps one day in addition to buffers and params we will have a third type of `nn.Module` variables that are like params, but which automatically don't get saved/loaded. \r\n\r\nI suggested this feature at pytorch: https://github.com/pytorch/pytorch/issues/44935\r\n", "Another hack different than solution 1 is to check the device of the inputs in the forward pass and maybe move the matrices to the right device.", "> Another hack different than solution 1 is to check the device of the inputs in the forward pass and maybe move the matrices to the right device.\r\n\r\nThank you for the idea, @sgugger.\r\n\r\nAlas, if I understood your suggestion correctly, I already tried it and it doesn't work. torchscript wants the vars to be on the same device before *any* `forward` call.", "Via my feature request, @mruberry helped to highlight a new pytorch feature - non-perisistent buffer\r\nhttps://github.com/pytorch/pytorch/issues/44935#issuecomment-694926094\r\nhttps://pytorch.org/docs/master/generated/torch.nn.Module.html?highlight=buffer#torch.nn.Module.register_buffer\r\n\r\n> register_buffer(name: str, tensor: Optional[torch.Tensor], persistent: bool = True) → None\r\n> Adds a buffer to the module.\r\n> This is typically used to register a buffer that should not to be considered a model parameter. For example, BatchNorm’s running_mean is not a parameter, but is part of the module’s state. 
Buffers, by default, are persistent and will be saved alongside parameters. This behavior can be changed by setting persistent to False. The only difference between a persistent buffer and a non-persistent buffer is that the latter will not be a part of this module’s state_dict.\r\n\r\nI tested it and it works for normal functions.\r\n\r\nBut unfortunately:\r\n\r\n1) it was added just recently - I don't think `transformers` will be willing to require a minimal torch version that is very recent\r\n2) it's not working with torchscript (yet) - the latter saves the non-persistent buffer keys, which it shouldn't\r\n\r\nSo for now it seems that making the variable a normal parameter and modifying `save_pretrained` to skip some keys, and `from_pretrained` to ignore some keys, seems to be the most solid approach so far.", "Implemented the discussed changes here: https://github.com/huggingface/transformers/pull/7224", "We actually do use buffers in our code already, as you can see [here](https://github.com/huggingface/transformers/search?q=register_buffer&unscoped_q=register_buffer).\r\n\r\nThis method was at least present in torch version 1.0.1 as it is visible in the docs for [that version](https://pytorch.org/docs/1.0.1/nn.html?highlight=register_buffer#torch.nn.Module.register_buffer), and torch 1.0.1 is the minimal requirement for `transformers`.", "Indeed, the buffers have been around since a long time, but the need here is different. We want a non-persistent buffer, a functionality which [was added just a few months ago](https://github.com/pytorch/pytorch/pull/37191) and [it doesn't yet work with torchscript](https://github.com/pytorch/pytorch/issues/45012), so it doesn't help to solve the problem at hand.", "Resolved by https://github.com/huggingface/transformers/pull/7224" ]
1,600
1,600
1,600
CONTRIBUTOR
null
(with normal CIs not running USE_CUDA=1 I completely missed testing this, so found one issue with torchscript tests that I need help with.) We are talking about FSMT - ported fairseq transformers model. If I understand correctly their `SinusoidalPositionalEmbedding` was designed so that it won't be part of the model params https://github.com/pytorch/fairseq/blob/master/fairseq/modules/sinusoidal_positional_embedding.py#L25 most likely so that it won't be part of the `state_dict`, and save space in their already huge 3.3GB model dump (well 13GB actually as they use an ensemble of 4 models). I could be wrong about the reason for this design choice. I had to copy their implementation, and not use Bart's version, since the pretrained weights rely on it, and the positions it produces are different. So their `SinusoidalPositionalEmbedding`'s `self.weights` is a normal variable (not a buffer and not a `nn.parameter.Parameter`). They create a dummy buffer `self._float_tensor` to hold the `device`. So when `model.to()` is called, `self._float_tensor` gets the right `device`. During `forward` `self.weights` gets `to(self._float_tensor)` and all is good. So `self.weights` is kind of a ghost variable. Now you see me and now you don't. This approach works just fine until we get to `torchscript` - in particular 2 common tests: ``` def test_torchscript_output_attentions(self): def test_torchscript_output_hidden_state(self): ``` which blow up under `USE_CUDA=1`, with: ``` Comparison exception: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! ``` Everything is on `cuda:0` but `SinusoidalPositionalEmbedding`'s `self.weights` are on `cpu` still at this point. The first time it encounters `self.weights`inside `forward`, before it gets a chance to be moved to the device, torchscript blows up. It wants all variables to be on the same device before `forward`. 
## Solution 1 So, I solved this problem with the following hack: ``` class FSMTForConditionalGeneration(PretrainedFSMTModel): def to(self, *args, **kwargs): super().to(*args, **kwargs) self.base_model.to(*args, **kwargs) return self class FSMTModel(PretrainedFSMTModel): def to(self, *args, **kwargs): super().to(*args, **kwargs) self.encoder.embed_positions.to(*args, **kwargs) self.decoder.embed_positions.to(*args, **kwargs) return self class SinusoidalPositionalEmbedding(nn.Module): def to(self, *args, **kwargs): super().to(*args, **kwargs) self.weights = self.weights.to(*args, **kwargs) return self ``` It's absolutely crazy, but it works. Basically it forwards `model.to()` call to `SinusoidalPositionalEmbedding`'s `self.weights`, via 3 "bridges". I thought that each torch module got `to()` called but that doesn't seem to be the case, I think it traverses the model structure instead and doesn't call `to` for each module. Hence the 2 classes are involved to bridge it on. (and there is also `half()` that needs to be dealt with too, since `model.half()` won't get forwarded to this non-parameter variable either.) ## Solution 2 The second solution is to make `SinusoidalPositionalEmbedding`'s `self.weights` a parameter, but then we have to hack save/load to not save/ignore-on-load `model.encoder.embed_positions.*` and `model.decoder.embed_positions.*` keys. ## Solution 3 The third solution is to save the useless weights (useless as they aren't trained and get calculated deterministically). Perhaps you can think of other solutions. Thank you. @sgugger, @patrickvonplaten, @sshleifer, @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7229/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7229/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7228
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7228/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7228/comments
https://api.github.com/repos/huggingface/transformers/issues/7228/events
https://github.com/huggingface/transformers/issues/7228
704,067,192
MDU6SXNzdWU3MDQwNjcxOTI=
7,228
FSMT Training scripts
{ "login": "wangyong1122", "id": 20316692, "node_id": "MDQ6VXNlcjIwMzE2Njky", "avatar_url": "https://avatars.githubusercontent.com/u/20316692?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wangyong1122", "html_url": "https://github.com/wangyong1122", "followers_url": "https://api.github.com/users/wangyong1122/followers", "following_url": "https://api.github.com/users/wangyong1122/following{/other_user}", "gists_url": "https://api.github.com/users/wangyong1122/gists{/gist_id}", "starred_url": "https://api.github.com/users/wangyong1122/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wangyong1122/subscriptions", "organizations_url": "https://api.github.com/users/wangyong1122/orgs", "repos_url": "https://api.github.com/users/wangyong1122/repos", "events_url": "https://api.github.com/users/wangyong1122/events{/privacy}", "received_events_url": "https://api.github.com/users/wangyong1122/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Thank you for the kind words, @wangyong1122.\r\n\r\nThe initial task was to port the weights so that it could be used for Translation as is, which is mostly done. I'm still working out some kinks under CUDA - I forgot to test with USE_CUDA=1 :( But I'm almost there.\r\n\r\nI'd say, how about you start the training - like you'd any other `transformers` model that you already have the instructions for and I'm 100% sure you will encounter problems since that part wasn't worked on/tested at all. So when you run into problems please post the code so that I could reproduce the issues and slowly slowly we will sort it out. How does that sound?\r\n\r\nIt's my first model ported to `transformers` so things take much longer as I'm learning the nuances of this platform.", "Thanks very much for your kind reply, @stas00.\r\nIt sounds great! If I run into problems, I will post the code.", "To explain why I expect problems: `transformers` has been written with a single merged vocab in mind, whereas fairseq's transformer has 2 distinct vocabs of different size (for en-ru/ru-en, but merged for en-de/de-en). So I had to do all kinds of hacks to make things work - for example to overcome `self.vocab_size` usages in core functions, as here we have `self.src_vocab_size` and `self.tgt_vocab_size`. \r\n\r\nSo for the initial task of making translation work - almost everything should be fine, as most of it is well tested. \r\n\r\nBut for training from scratch and finetuning I expect problems - so we will solve them gradually as you uncover them. \r\n\r\nThe best help you can provide is to code those problems in new tests. If you can't, that's ok too, this is just if you'd like an extra learning challenge. But in any case, please, make small simple reproducible examples - train on 3 lines of text that you can hardcode here, and not 3GB that need to be set up :)", "Thanks very much for your details. 
@stas00.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,606
1,606
NONE
null
Hi, thank you very much for your implementation and it helps a lot. I want to train the MT models from scratch. Could you provide some instructions about the training details for FSMT? Thanks a lot. @stas00
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7228/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7228/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7227
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7227/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7227/comments
https://api.github.com/repos/huggingface/transformers/issues/7227/events
https://github.com/huggingface/transformers/issues/7227
704,022,910
MDU6SXNzdWU3MDQwMjI5MTA=
7,227
Suggestion: Better to change the task name of "sentiment-analysis" to "text-classification"
{ "login": "shirayu", "id": 963961, "node_id": "MDQ6VXNlcjk2Mzk2MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/963961?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shirayu", "html_url": "https://github.com/shirayu", "followers_url": "https://api.github.com/users/shirayu/followers", "following_url": "https://api.github.com/users/shirayu/following{/other_user}", "gists_url": "https://api.github.com/users/shirayu/gists{/gist_id}", "starred_url": "https://api.github.com/users/shirayu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shirayu/subscriptions", "organizations_url": "https://api.github.com/users/shirayu/orgs", "repos_url": "https://api.github.com/users/shirayu/repos", "events_url": "https://api.github.com/users/shirayu/events{/privacy}", "received_events_url": "https://api.github.com/users/shirayu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "These are tasks that can be done with a specific pipeline, with a model and tokenizer as default. Having the `sentiment-analysis` task supported with a `TextClassificationPipeline` is the same as having a `ner` task supported with a `TokenClassificationPipeline`. These tasks are not generic text-classification or token-classification, but specific tasks that users can use.\r\n\r\nIf you want a general pipeline for text classification, then you can simply use the text-classification pipeline directly:\r\n```py\r\nfrom transformers import TextClassificationPipeline\r\n```" ]
1,600
1,600
1,600
CONTRIBUTOR
null
There is a task named ``sentiment-analysis`` in``SUPPORTED_TASKS`` used in piepelines. https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines.py#L2509 However, it can be used not only for sentiment analysis but also fo more generic tasks. Therefore, I think it is better to rename it to ``text-classification`` (or add a task named ``text-classification`` for backward compatibility).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7227/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7227/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7226
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7226/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7226/comments
https://api.github.com/repos/huggingface/transformers/issues/7226/events
https://github.com/huggingface/transformers/issues/7226
704,018,536
MDU6SXNzdWU3MDQwMTg1MzY=
7,226
The task name sentiment-analysis
{ "login": "shirayu", "id": 963961, "node_id": "MDQ6VXNlcjk2Mzk2MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/963961?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shirayu", "html_url": "https://github.com/shirayu", "followers_url": "https://api.github.com/users/shirayu/followers", "following_url": "https://api.github.com/users/shirayu/following{/other_user}", "gists_url": "https://api.github.com/users/shirayu/gists{/gist_id}", "starred_url": "https://api.github.com/users/shirayu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shirayu/subscriptions", "organizations_url": "https://api.github.com/users/shirayu/orgs", "repos_url": "https://api.github.com/users/shirayu/repos", "events_url": "https://api.github.com/users/shirayu/events{/privacy}", "received_events_url": "https://api.github.com/users/shirayu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Duplication of #7227" ]
1,600
1,600
1,600
CONTRIBUTOR
null
https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines.py#L2509
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7226/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7226/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7225
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7225/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7225/comments
https://api.github.com/repos/huggingface/transformers/issues/7225/events
https://github.com/huggingface/transformers/pull/7225
704,010,086
MDExOlB1bGxSZXF1ZXN0NDg5MDExNTEz
7,225
Fix a typo
{ "login": "shirayu", "id": 963961, "node_id": "MDQ6VXNlcjk2Mzk2MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/963961?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shirayu", "html_url": "https://github.com/shirayu", "followers_url": "https://api.github.com/users/shirayu/followers", "following_url": "https://api.github.com/users/shirayu/following{/other_user}", "gists_url": "https://api.github.com/users/shirayu/gists{/gist_id}", "starred_url": "https://api.github.com/users/shirayu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shirayu/subscriptions", "organizations_url": "https://api.github.com/users/shirayu/orgs", "repos_url": "https://api.github.com/users/shirayu/repos", "events_url": "https://api.github.com/users/shirayu/events{/privacy}", "received_events_url": "https://api.github.com/users/shirayu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7225?src=pr&el=h1) Report\n> Merging [#7225](https://codecov.io/gh/huggingface/transformers/pull/7225?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/67d9fc50d917c63cf67281106214e1d9ae018dff?el=desc) will **increase** coverage by `0.73%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7225/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7225?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7225 +/- ##\n==========================================\n+ Coverage 79.01% 79.74% +0.73% \n==========================================\n Files 172 172 \n Lines 33077 33077 \n==========================================\n+ Hits 26135 26377 +242 \n+ Misses 6942 6700 -242 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7225?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/7225/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `81.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7225/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.46% <0.00%> (-72.60%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7225/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `83.74% <0.00%> (-14.14%)` | :arrow_down: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7225/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `79.03% <0.00%> (-7.80%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7225/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `84.17% <0.00%> (-3.06%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7225/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `82.95% <0.00%> (-2.76%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7225/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `95.16% <0.00%> (-2.42%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7225/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.91% <0.00%> (-0.14%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7225/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.44% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7225/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `92.04% <0.00%> (+20.43%)` | :arrow_up: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/7225/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7225?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7225?src=pr&el=footer). Last update [67d9fc5...320d1a3](https://codecov.io/gh/huggingface/transformers/pull/7225?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7225/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7225/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7225", "html_url": "https://github.com/huggingface/transformers/pull/7225", "diff_url": "https://github.com/huggingface/transformers/pull/7225.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7225.patch", "merged_at": 1600417413000 }
https://api.github.com/repos/huggingface/transformers/issues/7224
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7224/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7224/comments
https://api.github.com/repos/huggingface/transformers/issues/7224/events
https://github.com/huggingface/transformers/pull/7224
703,950,689
MDExOlB1bGxSZXF1ZXN0NDg4OTY0MzMz
7,224
[fsmt] rewrite SinusoidalPositionalEmbedding + USE_CUDA test fixes + new TranslationPipeline test
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7224?src=pr&el=h1) Report\n> Merging [#7224](https://codecov.io/gh/huggingface/transformers/pull/7224?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1d90d0f386af2af52017d51c421e71a51ec94de0?el=desc) will **decrease** coverage by `0.24%`.\n> The diff coverage is `96.77%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7224/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7224?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7224 +/- ##\n==========================================\n- Coverage 81.81% 81.57% -0.25% \n==========================================\n Files 174 174 \n Lines 33446 33448 +2 \n==========================================\n- Hits 27364 27285 -79 \n- Misses 6082 6163 +81 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7224?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7224/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `86.96% <83.33%> (-0.27%)` | :arrow_down: |\n| [src/transformers/modeling\\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/7224/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mc210LnB5) | `93.99% <100.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7224/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.32% <0.00%> (-73.63%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7224/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `45.63% <0.00%> (-47.62%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7224/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `83.74% <0.00%> (-14.14%)` | :arrow_down: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7224/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `79.03% <0.00%> (-7.80%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7224/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `82.95% <0.00%> (-3.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7224/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `95.16% <0.00%> (-2.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7224/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.03% <0.00%> (-1.30%)` | :arrow_down: |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/7224/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `92.94% <0.00%> (-1.18%)` | :arrow_down: |\n| ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/7224/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7224?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7224?src=pr&el=footer). Last update [1d90d0f...1b80514](https://codecov.io/gh/huggingface/transformers/pull/7224?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "> The expansion of embeddings may require a bit more care, but the comment below doesn't prevent merging this PR. You can just delete that logic later if it is bad.\r\n\r\nThis is how fairseq does it - this PR doesn't change the original behavior, so yes, it can be merged, as it fixes USE_CUDA=1 with torchscript situation. And further algorithmic changes will require a separate care. I moved the issue you raised to its own ticket: https://github.com/huggingface/transformers/issues/7256\r\n\r\nSo let's continue discussing that suggestion there. Thank you for bringing it up, @sshleifer!", "Great, I will let the 2nd approving reviewer merge.", "Oh, your comment made me discover a bug in my porting - somehow I used vocab sizes as the number of positional embeddings - so it's not surprising the weights were 250MB - fixed now. \r\n\r\nI rechecked - fairseq inits them to the `config.max_position_embeddings + self.padding_idx + 1` - so we have 2 extras there. \r\n\r\nOr should I sync it with bart and just have `config.max_position_embeddings`, so it's consistent across `transformers`?\r\n\r\ncontext:\r\n\r\n```\r\n self.embed_positions = SinusoidalPositionalEmbedding(\r\n config.max_position_embeddings + self.padding_idx + 1, embed_dim, self.padding_idx\r\n )\r\n```\r\n\r\n@sshleifer?", "I split off the key naming discussion to https://github.com/huggingface/transformers/issues/7258 - this is again not a show-stopper for this PR, as it impacts `modeling_bart.py` too.", "Or, as mentioned here: https://github.com/huggingface/transformers/issues/6700#issuecomment-695859542, change the `persistent` attribute to `False` for the registered buffers.", "> Or, as mentioned here: #6700 (comment), change the persistent attribute to False for the registered buffers.\r\n\r\nIt won't work at the moment, since\r\n1. 
this functionality [was added just a few months ago](https://github.com/pytorch/pytorch/pull/37191) (can't require recent `torch`)\r\n2. [it doesn't yet work with torchscript](https://github.com/pytorch/pytorch/issues/45012)\r\n\r\n", "Indeed, thanks for clarifying!" ]
1,600
1,600
1,600
CONTRIBUTOR
null
These changes are in one PR as they all fix problems for `USE_CUDA=1` * fix a few `USE_CUDA=1` tests that got skipped previously (was missing `.to(device)`) * rewrite `SinusoidalPositionalEmbedding` to be a normal `nn.Embedding` subclass with a normal `self.weight` param, but exclude this param from being saved with the `state_dict` since it's not trained, but deterministic * adjust `PreTrainedModel.save_pretrained` to support models that don't want all of their params saved. (needed for fsmt's `SinusoidalPositionalEmbedding`) * add a new test using `TranslationPipeline` (well, this one is just a new test) @sshleifer This includes fixing: https://github.com/huggingface/transformers/issues/7229
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7224/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7224/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7224", "html_url": "https://github.com/huggingface/transformers/pull/7224", "diff_url": "https://github.com/huggingface/transformers/pull/7224.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7224.patch", "merged_at": 1600694016000 }
https://api.github.com/repos/huggingface/transformers/issues/7223
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7223/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7223/comments
https://api.github.com/repos/huggingface/transformers/issues/7223/events
https://github.com/huggingface/transformers/pull/7223
703,948,648
MDExOlB1bGxSZXF1ZXN0NDg4OTYyNjM0
7,223
[s2s] remove double assert
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,600
1,600
1,600
CONTRIBUTOR
null
Sortish sampler works on multigpu!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7223/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7223/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7223", "html_url": "https://github.com/huggingface/transformers/pull/7223", "diff_url": "https://github.com/huggingface/transformers/pull/7223.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7223.patch", "merged_at": 1600381952000 }
https://api.github.com/repos/huggingface/transformers/issues/7222
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7222/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7222/comments
https://api.github.com/repos/huggingface/transformers/issues/7222/events
https://github.com/huggingface/transformers/issues/7222
703,946,055
MDU6SXNzdWU3MDM5NDYwNTU=
7,222
tokenizer.add_tokens conflict with MBart's tokenizer
{ "login": "znculee", "id": 15342165, "node_id": "MDQ6VXNlcjE1MzQyMTY1", "avatar_url": "https://avatars.githubusercontent.com/u/15342165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/znculee", "html_url": "https://github.com/znculee", "followers_url": "https://api.github.com/users/znculee/followers", "following_url": "https://api.github.com/users/znculee/following{/other_user}", "gists_url": "https://api.github.com/users/znculee/gists{/gist_id}", "starred_url": "https://api.github.com/users/znculee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/znculee/subscriptions", "organizations_url": "https://api.github.com/users/znculee/orgs", "repos_url": "https://api.github.com/users/znculee/repos", "events_url": "https://api.github.com/users/znculee/events{/privacy}", "received_events_url": "https://api.github.com/users/znculee/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834056635, "node_id": "MDU6TGFiZWwxODM0MDU2NjM1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization", "name": "Core: Tokenization", "color": "FF4446", "default": false, "description": "Internals of the library; Tokenization." } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "I love your workaround and would definitely be interested in a clean contribution that made in accessible to everyone.", "@sshleifer Thanks! I've created a pull request #7353 with this workaround.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,606
1,606
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.1.0 - Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> @patil-suraj @mfuntowicz ## To reproduce Steps to reproduce the behavior: 1. import MBartTokenizer 2. try `tokenizer.add_tokens('__something_new__')` 3. `tokenizer('__something_new__')` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> `add_tokens()` should add new tokens after all existing tokens in the vocabulary, however, the `lang_code` will conflict with the added tokens. ## Current Workaround ``` new_token_to_id = {tok: vocab_size + i for i, tok in enumerate(new_tokens)} self.tokenizer.fairseq_tokens_to_ids.update(new_token_to_id) self.tokenizer.fairseq_ids_to_tokens = {v: k for k, v in self.tokenizer.fairseq_tokens_to_ids.items()} self.tokenizer.unique_no_split_tokens.extend(new_tokens) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7222/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7222/timeline
completed
null
null