| Field | Type | Stats |
|---|---|---|
| url | stringlengths | 62–66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76–80 |
| comments_url | stringlengths | 71–75 |
| events_url | stringlengths | 69–73 |
| html_url | stringlengths | 50–56 |
| id | int64 | 377M–2.15B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–29.2k |
| title | stringlengths | 1–487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0–234k |
| reactions | dict | |
| timeline_url | stringlengths | 71–75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
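In this schema, issue rows and pull-request rows share the same shape; the `pull_request` field holds a dict for PRs and null for plain issues (as in issue 7217 below). A minimal sketch of splitting rows on that basis — the `rows` literal contains illustrative, heavily truncated records, not complete rows from this dump:

```python
# Illustrative records mirroring the schema above (most fields omitted).
# GitHub's issues endpoint returns a dict in `pull_request` for PRs and
# null (None once loaded into Python) for plain issues.
rows = [
    {"number": 7221, "state": "closed", "pull_request": {"merged_at": 1600549325000}},
    {"number": 7217, "state": "closed", "pull_request": None},
]

def is_pull_request(row: dict) -> bool:
    """True when the record describes a pull request rather than an issue."""
    return row.get("pull_request") is not None

prs = [r["number"] for r in rows if is_pull_request(r)]
issues = [r["number"] for r in rows if not is_pull_request(r)]
```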
https://api.github.com/repos/huggingface/transformers/issues/7221
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7221/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7221/comments
https://api.github.com/repos/huggingface/transformers/issues/7221/events
https://github.com/huggingface/transformers/pull/7221
703,910,638
MDExOlB1bGxSZXF1ZXN0NDg4OTMyNDUx
7,221
model card improvements
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7221?src=pr&el=h1) Report\n> Merging [#7221](https://codecov.io/gh/huggingface/transformers/pull/7221?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/edbaad2c5c94b88cc21b349a403e886bd0b2f156?el=desc) will **decrease** coverage by `0.37%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7221/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7221?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7221 +/- ##\n==========================================\n- Coverage 79.05% 78.67% -0.38% \n==========================================\n Files 172 172 \n Lines 33077 33077 \n==========================================\n- Hits 26148 26024 -124 \n- Misses 6929 7053 +124 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7221?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7221/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/7221/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.24% <0.00%> (-55.76%)` | :arrow_down: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/7221/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `34.28% <0.00%> (-48.00%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7221/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7221/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `57.28% <0.00%> (-15.08%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/7221/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7221/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7221/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.04% <0.00%> (-12.69%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7221/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.47% <0.00%> (-11.91%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7221/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.70% <0.00%> (-3.01%)` | :arrow_down: |\n| ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/7221/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7221?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7221?src=pr&el=footer). 
Last update [edbaad2...d875346](https://codecov.io/gh/huggingface/transformers/pull/7221?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
Tweaks to the cards to make them fit the evolving requirements. @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7221/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7221/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7221", "html_url": "https://github.com/huggingface/transformers/pull/7221", "diff_url": "https://github.com/huggingface/transformers/pull/7221.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7221.patch", "merged_at": 1600549325000 }
https://api.github.com/repos/huggingface/transformers/issues/7220
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7220/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7220/comments
https://api.github.com/repos/huggingface/transformers/issues/7220/events
https://github.com/huggingface/transformers/pull/7220
703,881,750
MDExOlB1bGxSZXF1ZXN0NDg4OTA4NjMy
7,220
skip failing FSMT CUDA tests until investigated
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7220?src=pr&el=h1) Report\n> Merging [#7220](https://codecov.io/gh/huggingface/transformers/pull/7220?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/51c4adf54c4183f8eec25665b15456262a48f827?el=desc) will **increase** coverage by `0.55%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7220/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7220?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7220 +/- ##\n==========================================\n+ Coverage 78.57% 79.13% +0.55% \n==========================================\n Files 172 172 \n Lines 33077 33077 \n==========================================\n+ Hits 25990 26174 +184 \n+ Misses 7087 6903 -184 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7220?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7220/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.32% <0.00%> (-73.63%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/7220/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.24% <0.00%> (-55.76%)` | :arrow_down: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/7220/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `34.28% <0.00%> (-48.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7220/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.38% <0.00%> (-29.59%)` | :arrow_down: |\n| 
[src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7220/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7220/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `57.28% <0.00%> (-15.08%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/7220/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7220/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7220/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.04% <0.00%> (-12.69%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7220/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.47% <0.00%> (-11.91%)` | :arrow_down: |\n| ... and [20 more](https://codecov.io/gh/huggingface/transformers/pull/7220/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7220?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7220?src=pr&el=footer). 
Last update [51c4adf...b2b0c1d](https://codecov.io/gh/huggingface/transformers/pull/7220?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #7217
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7220/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7220/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7220", "html_url": "https://github.com/huggingface/transformers/pull/7220", "diff_url": "https://github.com/huggingface/transformers/pull/7220.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7220.patch", "merged_at": 1600375994000 }
https://api.github.com/repos/huggingface/transformers/issues/7219
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7219/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7219/comments
https://api.github.com/repos/huggingface/transformers/issues/7219/events
https://github.com/huggingface/transformers/pull/7219
703,879,858
MDExOlB1bGxSZXF1ZXN0NDg4OTA3MDM4
7,219
Copy code from Bert to Roberta and add safeguard script
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7219?src=pr&el=h1) Report\n> Merging [#7219](https://codecov.io/gh/huggingface/transformers/pull/7219?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7a88ed6c2a740c45cafb2009a124ba056506d6a1?el=desc) will **increase** coverage by `1.53%`.\n> The diff coverage is `89.06%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7219/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7219?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7219 +/- ##\n==========================================\n+ Coverage 78.00% 79.54% +1.53% \n==========================================\n Files 174 174 \n Lines 33452 33671 +219 \n==========================================\n+ Hits 26095 26784 +689 \n+ Misses 7357 6887 -470 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7219?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.17% <89.06%> (-15.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |\n| [src/transformers/activations\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% <0.00%> (-20.84%)` | :arrow_down: |\n| 
[src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/7219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `88.77% <0.00%> (-2.55%)` | :arrow_down: |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/7219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `94.11% <0.00%> (+1.17%)` | :arrow_up: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <0.00%> (+25.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `71.60% <0.00%> (+51.89%)` | :arrow_up: |\n| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/7219/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7219?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7219?src=pr&el=footer). Last update [7a88ed6...dba9e29](https://codecov.io/gh/huggingface/transformers/pull/7219?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "- For the tests, I'm not sure what you had in mind. 
I could create a dumb file that I then compare to the lib.\r\n- It does not matter if this is run before or after `black`: if there is a very long model name, the script will detect differences as soon as black has indeed been run (so it may pass if run before black once but fail at the next CI). Running it after makes sure we don't catch trailing spaces or bad empty lines (empty lines with spaces) in the diff. But this all won't work for replacement of short model names to very long model names (so sadly, we can't have `BertForGenerationAndComputerVisionBoundingBoxesForVisualQuestionAnswering` in the lib).", "- Regarding the tests, I think it would be a requirement given that the script modifies the code when run with `overwrite`. Testing to ensure it works correctly seems necessary.\r\n- I think it could work with a bit of plumbing: given that the script registers the name change `Bert->Roberta`, the script could change the copied file from `Roberta` to `Bert`, run `black` to correctly format, and check equivalence then. I haven't thought much about edge cases but it seems robust enough to me. What do you think?\r\n", "This is great! Very much in favor of this PR :-)" ]
1,600
1,600
1,600
COLLABORATOR
null
This PR proposes some tooling to automatically check copied code from one of the files in transformers to another is consistent with the original. To showcase an example, it uses `modeling_roberta.py` where I copied all code from `modeling_bert.py` instead of importing the `BertModel` (as a result only activations are imported from `modeling_bert` and an additional refactor to put them in `activations` would make the file fully independent from `modeling_bert` but that is not the scope of this PR). A new script called `check_copies` is introduced in the utils folder. It will check all code files for comments looking like: ```python # Copied from transformers.modeling_bert.BertEmbeddings ``` When it encounters such a comment, it will recover the code of the object (here `BertEmbeddings`) inside its module (here `modeling_bert`) and compare it to the code in the class/function where this comment is. If there is a difference, the script will replace and overwrite (in overwrite mode) or raise an error (in checking mode). This way we can be aware of any difference when a fix is made in bert and should be ported to roberta, and we can automatically copy that difference. The script works for classes, functions and methods. It can be called inside a function/method (if we want to add some specific initial code, see some examples below). I've tried to make the comments readable while ensuring some non-related comments could be caught (hence the `transformers.` that is not useful information), It also has a functionality where it can replace one pattern by another, for instance: ```python # Copied from transformers.modeling_bert.BertEmbeddings with Bert->Roberta ``` will look for a diff between the original code and the observed code after replacing all instance of Bert in the original code by Roberta (this is because we have `BertLayerNorm`, `BertPooler`, `BertEncoder`... that becomes `RobertaLayerNorm`, `RobertaPooler`, `RobertaEncoder`...). 
Again I tried to aim for readable comments that could still be used for the script. I added the script in overwriting mode in `make style` and in checking mode in `make quality`. Users have to be made aware that `make style` can make destructive changes when it's run (since it will erase any modification in copied parts). I can create a new alias if we think this is too dangerous for `make style`.
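The `# Copied from` mechanism described in this PR body can be sketched as a small consistency check. This is a hypothetical, simplified reconstruction, not the actual `utils/check_copies.py`: the in-memory `ORIGINALS` registry and the `check_copy` helper are illustrative stand-ins for the script's real file lookup and overwrite logic.

```python
import re

# Stand-in for reading the original definition out of its source file;
# maps "module.Object" paths to their source code.
ORIGINALS = {
    "transformers.modeling_bert.BertEmbeddings": (
        "class BertEmbeddings:\n"
        "    def forward(self, x):\n"
        "        return x + 1\n"
    ),
}

# Matches "# Copied from transformers.module.Object" with an optional
# "with Name->Name" replacement clause.
COPY_RE = re.compile(
    r"#\s*Copied from\s+(?P<target>[\w.]+)"
    r"(?:\s+with\s+(?P<src>\w+)->(?P<dst>\w+))?"
)

def check_copy(comment: str, observed_code: str) -> bool:
    """Compare copied code against its original, applying the optional
    Name->Name replacement (e.g. Bert->Roberta) first."""
    match = COPY_RE.search(comment)
    if match is None:
        raise ValueError("not a '# Copied from' comment")
    reference = ORIGINALS[match.group("target")]
    if match.group("src"):
        reference = reference.replace(match.group("src"), match.group("dst"))
    return reference == observed_code
```

In checking mode the real script raises on a mismatch; in overwrite mode it would replace `observed_code` with the transformed reference instead of returning a boolean.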
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7219/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7219/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7219", "html_url": "https://github.com/huggingface/transformers/pull/7219", "diff_url": "https://github.com/huggingface/transformers/pull/7219.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7219.patch", "merged_at": 1600765347000 }
https://api.github.com/repos/huggingface/transformers/issues/7218
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7218/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7218/comments
https://api.github.com/repos/huggingface/transformers/issues/7218/events
https://github.com/huggingface/transformers/pull/7218
703,870,732
MDExOlB1bGxSZXF1ZXN0NDg4ODk5NDc0
7,218
[model cards] fix metadata - 3rd attempt
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7218?src=pr&el=h1) Report\n> Merging [#7218](https://codecov.io/gh/huggingface/transformers/pull/7218?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/51c4adf54c4183f8eec25665b15456262a48f827?el=desc) will **increase** coverage by `3.49%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7218/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7218?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7218 +/- ##\n==========================================\n+ Coverage 78.57% 82.07% +3.49% \n==========================================\n Files 172 172 \n Lines 33077 33077 \n==========================================\n+ Hits 25990 27147 +1157 \n+ Misses 7087 5930 -1157 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7218?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/7218/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.52% <0.00%> (-34.77%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7218/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7218/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `70.19% <0.00%> (-23.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7218/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `72.31% <0.00%> (-14.52%)` | :arrow_down: |\n| 
[src/transformers/configuration\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7218/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `91.89% <0.00%> (-5.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7218/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `88.37% <0.00%> (-4.87%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7218/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `84.17% <0.00%> (-3.06%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7218/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.91% <0.00%> (-0.14%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7218/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.44% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7218/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.18% <0.00%> (+0.35%)` | :arrow_up: |\n| ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/7218/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7218?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7218?src=pr&el=footer). Last update [51c4adf...5b87296](https://codecov.io/gh/huggingface/transformers/pull/7218?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
Kindly one more attempt
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7218/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7218", "html_url": "https://github.com/huggingface/transformers/pull/7218", "diff_url": "https://github.com/huggingface/transformers/pull/7218.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7218.patch", "merged_at": 1600376227000 }
https://api.github.com/repos/huggingface/transformers/issues/7217
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7217/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7217/comments
https://api.github.com/repos/huggingface/transformers/issues/7217/events
https://github.com/huggingface/transformers/issues/7217
703,865,990
MDU6SXNzdWU3MDM4NjU5OTA=
7,217
FSMT Cuda CI Failures
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thank you for the heads up, @sshleifer \r\n\r\nI'm not able to reproduce those, but let's first thing first let's skip them so that CI is not affected. \r\nhttps://github.com/huggingface/transformers/pull/7220\r\nI will investigate and we can try to enable them again later.", "I've found some other tests failing with USE_CUDA=1, but which were fine on CI - so I will attend to those first.\r\n\r\nHope we could finalize this issue https://github.com/huggingface/transformers/issues/6349 and get USE_CUDA on by default." ]
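The USE_CUDA=1 workflow mentioned in these comments can be sketched as an environment-gated skip. `require_use_cuda` here is a hypothetical helper for illustration, not the test suite's actual decorator (the real tests use their own `require_*` utilities).

```python
import os
import unittest

def require_use_cuda(test_func):
    """Skip a test unless USE_CUDA=1 is set in the environment."""
    # Decision happens at decoration time, matching how env-gated
    # skips typically behave in unittest-based suites.
    if os.environ.get("USE_CUDA") != "1":
        return unittest.skip("set USE_CUDA=1 to run CUDA tests")(test_func)
    return test_func
```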
1,600
1,600
1,600
CONTRIBUTOR
null
https://github.com/huggingface/transformers/runs/1130127719?check_suite_focus=true I'm not sure whether these are spurious, but I wanted to bring them to your attention @stas00 .
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7217/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7217/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7216
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7216/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7216/comments
https://api.github.com/repos/huggingface/transformers/issues/7216/events
https://github.com/huggingface/transformers/pull/7216
703,850,768
MDExOlB1bGxSZXF1ZXN0NDg4ODgyNjk2
7,216
[model cards] fix dataset yaml
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,600
1,600
1,600
CONTRIBUTOR
null
Sorting out the expected format - can't tell if I'm doing something wrong until it's uploaded to the site :(
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7216/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7216/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7216", "html_url": "https://github.com/huggingface/transformers/pull/7216", "diff_url": "https://github.com/huggingface/transformers/pull/7216.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7216.patch", "merged_at": 1600370979000 }
https://api.github.com/repos/huggingface/transformers/issues/7215
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7215/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7215/comments
https://api.github.com/repos/huggingface/transformers/issues/7215/events
https://github.com/huggingface/transformers/issues/7215
703,820,180
MDU6SXNzdWU3MDM4MjAxODA=
7,215
Links seem to be expired for German NER example
{ "login": "Santosh-Gupta", "id": 5524261, "node_id": "MDQ6VXNlcjU1MjQyNjE=", "avatar_url": "https://avatars.githubusercontent.com/u/5524261?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Santosh-Gupta", "html_url": "https://github.com/Santosh-Gupta", "followers_url": "https://api.github.com/users/Santosh-Gupta/followers", "following_url": "https://api.github.com/users/Santosh-Gupta/following{/other_user}", "gists_url": "https://api.github.com/users/Santosh-Gupta/gists{/gist_id}", "starred_url": "https://api.github.com/users/Santosh-Gupta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Santosh-Gupta/subscriptions", "organizations_url": "https://api.github.com/users/Santosh-Gupta/orgs", "repos_url": "https://api.github.com/users/Santosh-Gupta/repos", "events_url": "https://api.github.com/users/Santosh-Gupta/events{/privacy}", "received_events_url": "https://api.github.com/users/Santosh-Gupta/received_events", "type": "User", "site_admin": false }
[ { "id": 1834060867, "node_id": "MDU6TGFiZWwxODM0MDYwODY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Named%20Entity%20Recognition", "name": "Ex: Named Entity Recognition", "color": "06FFD8", "default": false, "description": "" } ]
closed
false
{ "login": "stefan-it", "id": 20651387, "node_id": "MDQ6VXNlcjIwNjUxMzg3", "avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stefan-it", "html_url": "https://github.com/stefan-it", "followers_url": "https://api.github.com/users/stefan-it/followers", "following_url": "https://api.github.com/users/stefan-it/following{/other_user}", "gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}", "starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions", "organizations_url": "https://api.github.com/users/stefan-it/orgs", "repos_url": "https://api.github.com/users/stefan-it/repos", "events_url": "https://api.github.com/users/stefan-it/events{/privacy}", "received_events_url": "https://api.github.com/users/stefan-it/received_events", "type": "User", "site_admin": false }
[ { "login": "stefan-it", "id": 20651387, "node_id": "MDQ6VXNlcjIwNjUxMzg3", "avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stefan-it", "html_url": "https://github.com/stefan-it", "followers_url": "https://api.github.com/users/stefan-it/followers", "following_url": "https://api.github.com/users/stefan-it/following{/other_user}", "gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}", "starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions", "organizations_url": "https://api.github.com/users/stefan-it/orgs", "repos_url": "https://api.github.com/users/stefan-it/repos", "events_url": "https://api.github.com/users/stefan-it/events{/privacy}", "received_events_url": "https://api.github.com/users/stefan-it/received_events", "type": "User", "site_admin": false } ]
[ "@stefan-it might know what's up!", "Oh, I think this PR should fix it:\r\n\r\nhttps://github.com/huggingface/transformers/pull/6571\r\n\r\nI will resolve some merge conflicts, then it should be ready to merge!", "Updated, it should work now :+1: \r\n\r\n@Santosh-Gupta Please try to use the new urls for the GermEval dataset in your Colab :)", "The links have been updated, thanks @stefan-it " ]
1,600
1,600
1,600
CONTRIBUTOR
null
There is a German Ner example here https://github.com/huggingface/transformers/tree/master/examples/token-classification It seems that the links for the training data are no longer working ``` curl -L 'https://sites.google.com/site/germeval2014ner/data/NER-de-train.tsv?attredirects=0&d=1' \ | grep -v "^#" | cut -f 2,3 | tr '\t' ' ' > train.txt.tmp curl -L 'https://sites.google.com/site/germeval2014ner/data/NER-de-dev.tsv?attredirects=0&d=1' \ | grep -v "^#" | cut -f 2,3 | tr '\t' ' ' > dev.txt.tmp curl -L 'https://sites.google.com/site/germeval2014ner/data/NER-de-test.tsv?attredirects=0&d=1' \ | grep -v "^#" | cut -f 2,3 | tr '\t' ' ' > test.txt.tmp ``` I tried going to the links directly and got 404. For convenience, here's a colab notebook that runs the whole example. The contents of the created downloaded and processed data files do not seem correct to me. https://colab.research.google.com/drive/1ox23ZuWNwh3pim8OQAW99TF2sTVHzKqH?usp=sharing
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7215/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7215/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7214
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7214/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7214/comments
https://api.github.com/repos/huggingface/transformers/issues/7214/events
https://github.com/huggingface/transformers/issues/7214
703,803,947
MDU6SXNzdWU3MDM4MDM5NDc=
7,214
Does the default weight_decay of 0.0 in transformers.AdamW make sense?
{ "login": "julioasotodv", "id": 20630819, "node_id": "MDQ6VXNlcjIwNjMwODE5", "avatar_url": "https://avatars.githubusercontent.com/u/20630819?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julioasotodv", "html_url": "https://github.com/julioasotodv", "followers_url": "https://api.github.com/users/julioasotodv/followers", "following_url": "https://api.github.com/users/julioasotodv/following{/other_user}", "gists_url": "https://api.github.com/users/julioasotodv/gists{/gist_id}", "starred_url": "https://api.github.com/users/julioasotodv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julioasotodv/subscriptions", "organizations_url": "https://api.github.com/users/julioasotodv/orgs", "repos_url": "https://api.github.com/users/julioasotodv/repos", "events_url": "https://api.github.com/users/julioasotodv/events{/privacy}", "received_events_url": "https://api.github.com/users/julioasotodv/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Too bad you didn't get an answer on SO. I think you would multiply your chances of getting a good answer if you asked it over at https://discuss.huggingface.co!", "Sure, I will ask there. Thank you!", "I use weight decay and not use weight and surprisingly find that they are the same, why?" ]
1,600
1,617
1,600
NONE
null
# ❓ Questions & Help ## Details Hi, I tried to ask in SO before, but apparently the question seems to be irrelevant. Anyways, here it is: In the [Docs](https://huggingface.co/transformers/main_classes/optimizer_schedules.html#transformers.AdamW) we can clearly see that the AdamW optimizer sets the default weight decay to 0.0. Given that the whole purpose of AdamW is to decouple the weight decay regularization, is my understanding that the results anyone can get with AdamW and Adam if both are used with `weight_decay=0.0` (this is, without weight decay) should be exactly the same. Therefore, shouldn't make more sense to have the default weight decay for AdamW > 0? Thank you so much!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7214/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7214/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7213
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7213/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7213/comments
https://api.github.com/repos/huggingface/transformers/issues/7213/events
https://github.com/huggingface/transformers/pull/7213
703,798,850
MDExOlB1bGxSZXF1ZXN0NDg4ODQwODA1
7,213
Return cross-attention from T5 models
{ "login": "noahtren", "id": 32682811, "node_id": "MDQ6VXNlcjMyNjgyODEx", "avatar_url": "https://avatars.githubusercontent.com/u/32682811?v=4", "gravatar_id": "", "url": "https://api.github.com/users/noahtren", "html_url": "https://github.com/noahtren", "followers_url": "https://api.github.com/users/noahtren/followers", "following_url": "https://api.github.com/users/noahtren/following{/other_user}", "gists_url": "https://api.github.com/users/noahtren/gists{/gist_id}", "starred_url": "https://api.github.com/users/noahtren/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/noahtren/subscriptions", "organizations_url": "https://api.github.com/users/noahtren/orgs", "repos_url": "https://api.github.com/users/noahtren/repos", "events_url": "https://api.github.com/users/noahtren/events{/privacy}", "received_events_url": "https://api.github.com/users/noahtren/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello,\r\n\r\nThank you for solution.\r\n\r\nI just think the change will only return the cross-attention scores for the first layer of the decoder, since len(layer_outputs) will be bigger than 5 only in that case, (because has_relative_attention_bias will be true for i=0 and length of layer_outputs will be 6.)\r\n\r\n if output_attentions:\r\n all_attentions = all_attentions + (layer_outputs[2],) # We keep only self-attention weights for now\r\n if len(layer_outputs) >= 5:\r\n all_attentions = all_attentions + (layer_outputs[2],) + (layer_outputs[4],)\r\n else:\r\n all_attentions = all_attentions + (layer_outputs[2],)\r\n\r\nFor the other layers of the decoder the len(layer_outputs) will be 4, so the cross-attention scores will be in 3rd index in those cases.\r\n\r\nSo I used the code like this, even I am not sure it's the nicest way to do this:\r\n\r\n if output_attentions:\r\n all_attentions = all_attentions + (layer_outputs[2],) # We keep only self-attention weights for now\r\n if self.is_decoder:\r\n if i==0:\r\n all_attentions = all_attentions + (layer_outputs[4],) # add cross-attention weights\r\n else:\r\n all_attentions = all_attentions + (layer_outputs[3],) # add cross-attention weights\r\n\r\n", "Thanks for catching that! With that fix we are able to get the cross-attention weights for each decoder layer.\r\n\r\nIt appears some tests need to be updated as well, but I'm not sure how best to do that.", "Hi, any updates on this issue? Or at least some recommended quick fixes to the code that I can make directly to the code to get the cross-attention? For the fix proposed by @mg9, what exactly did you add to modeling_tf.py? (Do you mind sharing your updated modeling_tf.py?) With @mg9's fix, will the following code return the cross-attention? Thanks for your help in advance.\r\n\r\n```\r\nfrom transformers import T5Tokenizer, T5ForConditionalGeneration\r\ntokenizer = T5Tokenizer.from_pretrained('t5-small')\r\ninput_ids = tokenizer(\"summarize: studies have shown that owning a dog is good for you \", return_tensors=\"pt\").input_ids\r\ndecoder_input_ids = tokenizer(\"<pad>\", add_special_tokens=False, return_tensors=\"pt\").input_ids\r\nmodel = T5ForConditionalGeneration.from_pretrained('t5-small', return_dict=True)\r\noutputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, output_attentions=True)\r\noutputs.encoder_attentions\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,600
1,619
1,619
CONTRIBUTOR
null
Small change to the PyTorch and TF models to support returning cross-attention alignment scores. Previously it would only return self-attention alignment scores. This should solve the problem mentioned here: https://discuss.huggingface.co/t/how-to-get-cross-attention-values-of-t5/970 The code in this PR checks to see if the outputs of T5Block contains cross-attention, and if it does, it is returned from the T5Stack.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7213/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7213/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7213", "html_url": "https://github.com/huggingface/transformers/pull/7213", "diff_url": "https://github.com/huggingface/transformers/pull/7213.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7213.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7212
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7212/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7212/comments
https://api.github.com/repos/huggingface/transformers/issues/7212/events
https://github.com/huggingface/transformers/pull/7212
703,784,155
MDExOlB1bGxSZXF1ZXN0NDg4ODI4OTQz
7,212
Create README.md
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7212?src=pr&el=h1) Report\n> Merging [#7212](https://codecov.io/gh/huggingface/transformers/pull/7212?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e643a297228c8cb2c189fe4c93e11125f938d20b?el=desc) will **increase** coverage by `0.29%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7212/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7212?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7212 +/- ##\n==========================================\n+ Coverage 81.47% 81.77% +0.29% \n==========================================\n Files 172 172 \n Lines 33077 33077 \n==========================================\n+ Hits 26949 27048 +99 \n+ Misses 6128 6029 -99 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7212?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7212/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.32% <0.00%> (-73.63%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7212/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.10% <0.00%> (-29.80%)` | :arrow_down: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7212/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `79.03% <0.00%> (-7.80%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7212/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `86.87% <0.00%> (-0.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7212/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.01% <0.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7212/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7212/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.28% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7212/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.08% <0.00%> (+0.24%)` | :arrow_up: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7212/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.20% <0.00%> (+0.27%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7212/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.18% <0.00%> (+0.35%)` | :arrow_up: |\n| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7212/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7212?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7212?src=pr&el=footer). Last update [e643a29...9da9d68](https://codecov.io/gh/huggingface/transformers/pull/7212?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7212/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7212/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7212", "html_url": "https://github.com/huggingface/transformers/pull/7212", "diff_url": "https://github.com/huggingface/transformers/pull/7212.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7212.patch", "merged_at": 1600413830000 }
https://api.github.com/repos/huggingface/transformers/issues/7211
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7211/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7211/comments
https://api.github.com/repos/huggingface/transformers/issues/7211/events
https://github.com/huggingface/transformers/pull/7211
703,783,647
MDExOlB1bGxSZXF1ZXN0NDg4ODI4NTQw
7,211
Rewrites BERT in Flax to the new Linen API
{ "login": "marcvanzee", "id": 180100, "node_id": "MDQ6VXNlcjE4MDEwMA==", "avatar_url": "https://avatars.githubusercontent.com/u/180100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/marcvanzee", "html_url": "https://github.com/marcvanzee", "followers_url": "https://api.github.com/users/marcvanzee/followers", "following_url": "https://api.github.com/users/marcvanzee/following{/other_user}", "gists_url": "https://api.github.com/users/marcvanzee/gists{/gist_id}", "starred_url": "https://api.github.com/users/marcvanzee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marcvanzee/subscriptions", "organizations_url": "https://api.github.com/users/marcvanzee/orgs", "repos_url": "https://api.github.com/users/marcvanzee/repos", "events_url": "https://api.github.com/users/marcvanzee/events{/privacy}", "received_events_url": "https://api.github.com/users/marcvanzee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,600
1,600
1,600
CONTRIBUTOR
null
One small caveat: we renamed `param` in Module to `params`, so we may have to update this once we created a new pypi wheel, but I will make sure that this is the case. Otherwise it seems good to go!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7211/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7211/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7211", "html_url": "https://github.com/huggingface/transformers/pull/7211", "diff_url": "https://github.com/huggingface/transformers/pull/7211.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7211.patch", "merged_at": 1600420500000 }
https://api.github.com/repos/huggingface/transformers/issues/7210
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7210/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7210/comments
https://api.github.com/repos/huggingface/transformers/issues/7210/events
https://github.com/huggingface/transformers/pull/7210
703,780,443
MDExOlB1bGxSZXF1ZXN0NDg4ODI1OTYz
7,210
Create README.md
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7210?src=pr&el=h1) Report\n> Merging [#7210](https://codecov.io/gh/huggingface/transformers/pull/7210?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e643a297228c8cb2c189fe4c93e11125f938d20b?el=desc) will **decrease** coverage by `0.81%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7210/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7210?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7210 +/- ##\n==========================================\n- Coverage 81.47% 80.65% -0.82% \n==========================================\n Files 172 172 \n Lines 33077 33077 \n==========================================\n- Hits 26949 26679 -270 \n- Misses 6128 6398 +270 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7210?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7210/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7210/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `62.15% <0.00%> (-36.75%)` | :arrow_down: |\n| [src/transformers/activations\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7210/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% <0.00%> (-20.84%)` | :arrow_down: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7210/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `79.03% <0.00%> (-7.80%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7210/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `84.17% <0.00%> (-3.06%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7210/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.36% <0.00%> (-0.98%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7210/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7210/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.28% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7210/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.08% <0.00%> (+0.24%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7210/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <0.00%> (+0.40%)` | :arrow_up: |\n| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/7210/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7210?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7210?src=pr&el=footer). Last update [e643a29...63d2ea4](https://codecov.io/gh/huggingface/transformers/pull/7210?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7210/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7210/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7210", "html_url": "https://github.com/huggingface/transformers/pull/7210", "diff_url": "https://github.com/huggingface/transformers/pull/7210.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7210.patch", "merged_at": 1600413838000 }
https://api.github.com/repos/huggingface/transformers/issues/7209
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7209/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7209/comments
https://api.github.com/repos/huggingface/transformers/issues/7209/events
https://github.com/huggingface/transformers/pull/7209
703,777,372
MDExOlB1bGxSZXF1ZXN0NDg4ODIzNjIw
7,209
Create README.md
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7209?src=pr&el=h1) Report\n> Merging [#7209](https://codecov.io/gh/huggingface/transformers/pull/7209?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e643a297228c8cb2c189fe4c93e11125f938d20b?el=desc) will **increase** coverage by `0.45%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7209/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7209?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7209 +/- ##\n==========================================\n+ Coverage 81.47% 81.92% +0.45% \n==========================================\n Files 172 172 \n Lines 33077 33077 \n==========================================\n+ Hits 26949 27099 +150 \n+ Misses 6128 5978 -150 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7209?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/7209/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.52% <0.00%> (-34.77%)` | :arrow_down: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7209/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `72.31% <0.00%> (-14.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7209/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7209/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `82.25% <0.00%> (-9.68%)` | :arrow_down: |\n| 
[src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7209/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.87% <0.00%> (-6.77%)` | :arrow_down: |\n| [src/transformers/configuration\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7209/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `91.89% <0.00%> (-5.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7209/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `84.17% <0.00%> (-3.06%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7209/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/7209/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7209/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.00% <0.00%> (-0.67%)` | :arrow_down: |\n| ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/7209/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7209?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7209?src=pr&el=footer). 
Last update [e643a29...5701aee](https://codecov.io/gh/huggingface/transformers/pull/7209?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7209/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7209/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7209", "html_url": "https://github.com/huggingface/transformers/pull/7209", "diff_url": "https://github.com/huggingface/transformers/pull/7209.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7209.patch", "merged_at": 1600413851000 }
https://api.github.com/repos/huggingface/transformers/issues/7208
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7208/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7208/comments
https://api.github.com/repos/huggingface/transformers/issues/7208/events
https://github.com/huggingface/transformers/issues/7208
703,755,004
MDU6SXNzdWU3MDM3NTUwMDQ=
7,208
[model cards] yaml specs and related questions
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Addressed in https://github.com/huggingface/model_card/pull/3 so closing this one." ]
1,600
1,602
1,602
CONTRIBUTOR
null
Is it possible to write a spec for how model cards should declare languages when it's more than one? I sorted it out: ``` language: - ru - en ``` could we add this somewhere? I tried this https://github.com/huggingface/model_card/pull/2 - definitely easier to mimic, but perhaps not ideal. ------------ The array of datasets is unclear too. https://huggingface.co/allenai/wmt16-en-de-dist-12-1 I added: ``` - datasets: http://www.statmt.org/wmt16/ ([test-set](http://matrix.statmt.org/test_sets/newstest2016.tgz?1504722372)) ``` but it rendered them as one entry - how can I make those into proper dataset links? I think it'd have helped a lot with having the template demo a few entries and let the user modify those. I found a hint doing: ``` grep -r -A2 datasets: model_cards ``` edit: ok, now I know it expects just a label for a dataset that is already in the transformers list, and not the urls to the datasets. probably a guide could say, pick one or more dataset entries from https://huggingface.co/datasets? -------------- Do we have a tool that validates model_card's yaml section? This can probably be automated, right? ``` make card-check ``` ? ------------------------- Why at https://huggingface.co/allenai/wmt19-de-en-6-6-big it added a bunch of tags that are wrong and weren't in the yaml spec? `lm-head` and `masked-lm` shouldn't be there -------------------------- Also how can I tell the models site that the demo should be of a translation type and not masking? somehow it picked up the correct demo type for https://huggingface.co/allenai/wmt16-en-de-dist-12-1 but for https://huggingface.co/facebook/wmt19-en-ru it wants to do fill-mask ---- Related thread: https://discuss.huggingface.co/t/tips-for-debugging-model-cards/814 --- @julien-c
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7208/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7208/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7207
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7207/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7207/comments
https://api.github.com/repos/huggingface/transformers/issues/7207/events
https://github.com/huggingface/transformers/pull/7207
703,753,171
MDExOlB1bGxSZXF1ZXN0NDg4ODA0NTM4
7,207
[model cards] fix yaml in cards
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7207?src=pr&el=h1) Report\n> Merging [#7207](https://codecov.io/gh/huggingface/transformers/pull/7207?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0fe6e435b6890a5993047e13a3a38a2e5e6e4dde?el=desc) will **increase** coverage by `1.86%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7207/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7207?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7207 +/- ##\n==========================================\n+ Coverage 79.74% 81.60% +1.86% \n==========================================\n Files 172 172 \n Lines 33077 33077 \n==========================================\n+ Hits 26376 26994 +618 \n+ Misses 6701 6083 -618 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7207?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/7207/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mc210LnB5) | `93.58% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7207/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.32% <0.00%> (-73.63%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7207/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `62.15% <0.00%> (-36.75%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7207/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.03% <0.00%> (-1.30%)` | :arrow_down: |\n| 
[src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7207/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.84% <0.00%> (-0.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7207/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.28% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7207/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.20% <0.00%> (+0.27%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7207/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.45% <0.00%> (+1.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7207/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.58% <0.00%> (+2.41%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7207/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `86.87% <0.00%> (+2.69%)` | :arrow_up: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/7207/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7207?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7207?src=pr&el=footer). Last update [0fe6e43...13a097a](https://codecov.io/gh/huggingface/transformers/pull/7207?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
- Fix the language section of yaml. - Correct link in a comment that links to all fsmt models
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7207/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7207/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7207", "html_url": "https://github.com/huggingface/transformers/pull/7207", "diff_url": "https://github.com/huggingface/transformers/pull/7207.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7207.patch", "merged_at": 1600366277000 }
https://api.github.com/repos/huggingface/transformers/issues/7206
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7206/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7206/comments
https://api.github.com/repos/huggingface/transformers/issues/7206/events
https://github.com/huggingface/transformers/issues/7206
703,743,681
MDU6SXNzdWU3MDM3NDM2ODE=
7,206
[models website] search UI issues
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Unstale - would be great to have some of these resolved.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,616
1,616
CONTRIBUTOR
null
@sshleifer showed me a secret API for finding all fsmt models: https://huggingface.co/models?filter=fsmt I have several why/how questions: 1. Why can't I do the same from the models page? Where does a user find access to the `filter=` key? It's the `Tags` dropdown, but there is no way to choose the tag I want - `fsmt` is not in the options. 2. How can I use the UI to combine keywords/tags so that I could get sub-sets, like https://huggingface.co/models?filter=fsmt&filter=allenai or perhaps https://huggingface.co/models?search=allenai&filter=fsmt? 3. I don't think `filter` can be repeated. If I reverse the order: https://huggingface.co/models?filter=allenai&filter=fsmt I get different results from https://huggingface.co/models?filter=fsmt&filter=allenai - so it ignores the second key (this is sub-set related) 4. The left upper corner widget appears very unintuitive - it looks like a search input, it even says "Search", but it only does something (shows a dropdown list of choices) if the input matches something it finds behind the scenes. So if I type `fsmt` it just sits there quietly. I should be able to hit enter and have it tell me that it found nothing or something like that. Silence is not a great UI. 5. Why doesn't that search widget in the left upper corner find `fsmt`? What fields does it search? Thank you. @julien-c
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7206/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7205
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7205/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7205/comments
https://api.github.com/repos/huggingface/transformers/issues/7205/events
https://github.com/huggingface/transformers/pull/7205
703,724,502
MDExOlB1bGxSZXF1ZXN0NDg4NzgwNjc1
7,205
Create README.md
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,600
1,600
1,600
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7205/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7205/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7205", "html_url": "https://github.com/huggingface/transformers/pull/7205", "diff_url": "https://github.com/huggingface/transformers/pull/7205.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7205.patch", "merged_at": 1600413871000 }
https://api.github.com/repos/huggingface/transformers/issues/7204
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7204/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7204/comments
https://api.github.com/repos/huggingface/transformers/issues/7204/events
https://github.com/huggingface/transformers/pull/7204
703,717,702
MDExOlB1bGxSZXF1ZXN0NDg4Nzc0Nzcy
7,204
Add customized text to widget
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7204?src=pr&el=h1) Report\n> Merging [#7204](https://codecov.io/gh/huggingface/transformers/pull/7204?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0fe6e435b6890a5993047e13a3a38a2e5e6e4dde?el=desc) will **decrease** coverage by `1.14%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7204/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7204?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7204 +/- ##\n==========================================\n- Coverage 79.74% 78.59% -1.15% \n==========================================\n Files 172 172 \n Lines 33077 33077 \n==========================================\n- Hits 26376 25996 -380 \n- Misses 6701 7081 +380 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7204?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7204/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7204/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `20.38% <0.00%> (-67.72%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/7204/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `33.57% <0.00%> (-65.72%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/7204/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.24% <0.00%> (-55.76%)` | :arrow_down: |\n| 
[src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/7204/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `34.28% <0.00%> (-48.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7204/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.38% <0.00%> (-29.59%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7204/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7204/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `57.28% <0.00%> (-15.08%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/7204/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7204/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.47% <0.00%> (-11.91%)` | :arrow_down: |\n| ... and [17 more](https://codecov.io/gh/huggingface/transformers/pull/7204/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7204?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7204?src=pr&el=footer). Last update [0fe6e43...43ae8f0](https://codecov.io/gh/huggingface/transformers/pull/7204?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7204/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7204/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7204", "html_url": "https://github.com/huggingface/transformers/pull/7204", "diff_url": "https://github.com/huggingface/transformers/pull/7204.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7204.patch", "merged_at": 1600413863000 }
https://api.github.com/repos/huggingface/transformers/issues/7203
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7203/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7203/comments
https://api.github.com/repos/huggingface/transformers/issues/7203/events
https://github.com/huggingface/transformers/issues/7203
703,663,894
MDU6SXNzdWU3MDM2NjM4OTQ=
7,203
[s2s] reload dataloaders every epoch?
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,606
1,606
CONTRIBUTOR
null
We should make this default to True.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7203/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7203/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7202
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7202/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7202/comments
https://api.github.com/repos/huggingface/transformers/issues/7202/events
https://github.com/huggingface/transformers/pull/7202
703,518,123
MDExOlB1bGxSZXF1ZXN0NDg4NjEyNjY0
7,202
Change to use relative imports in some files & Add python prompt symbols to example codes
{ "login": "soheeyang", "id": 28291528, "node_id": "MDQ6VXNlcjI4MjkxNTI4", "avatar_url": "https://avatars.githubusercontent.com/u/28291528?v=4", "gravatar_id": "", "url": "https://api.github.com/users/soheeyang", "html_url": "https://github.com/soheeyang", "followers_url": "https://api.github.com/users/soheeyang/followers", "following_url": "https://api.github.com/users/soheeyang/following{/other_user}", "gists_url": "https://api.github.com/users/soheeyang/gists{/gist_id}", "starred_url": "https://api.github.com/users/soheeyang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/soheeyang/subscriptions", "organizations_url": "https://api.github.com/users/soheeyang/orgs", "repos_url": "https://api.github.com/users/soheeyang/repos", "events_url": "https://api.github.com/users/soheeyang/events{/privacy}", "received_events_url": "https://api.github.com/users/soheeyang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7202?src=pr&el=h1) Report\n> Merging [#7202](https://codecov.io/gh/huggingface/transformers/pull/7202?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0203ad43bcd0b29423dec6ca1a58ed58300f0d61?el=desc) will **decrease** coverage by `2.45%`.\n> The diff coverage is `71.42%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7202/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7202?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7202 +/- ##\n==========================================\n- Coverage 80.86% 78.41% -2.46% \n==========================================\n Files 169 169 \n Lines 32293 32293 \n==========================================\n- Hits 26114 25322 -792 \n- Misses 6179 6971 +792 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7202?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `82.29% <ø> (ø)` | |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.44% <ø> (+0.67%)` | :arrow_up: |\n| [src/transformers/modeling\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/7202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kcHIucHk=) | `97.83% <ø> (ø)` | |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <ø> (-10.00%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `90.92% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `72.36% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.90% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.33% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `92.25% <ø> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.23% <0.00%> (ø)` | |\n| ... and [26 more](https://codecov.io/gh/huggingface/transformers/pull/7202/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7202?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7202?src=pr&el=footer). Last update [0203ad4...da63e64](https://codecov.io/gh/huggingface/transformers/pull/7202?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
- Moved transformers package import statements to relative imports in some files as in https://github.com/huggingface/transformers/pull/5796 - Added python prompt symbols in front of the example codes as in most of the codebase This is a resubmitted version of https://github.com/huggingface/transformers/pull/7188; I closed the previous PR because I used the wrong versions of style libraries.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7202/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7202/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7202", "html_url": "https://github.com/huggingface/transformers/pull/7202", "diff_url": "https://github.com/huggingface/transformers/pull/7202.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7202.patch", "merged_at": 1600360246000 }
https://api.github.com/repos/huggingface/transformers/issues/7201
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7201/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7201/comments
https://api.github.com/repos/huggingface/transformers/issues/7201/events
https://github.com/huggingface/transformers/pull/7201
703,427,646
MDExOlB1bGxSZXF1ZXN0NDg4NTM3NDQx
7,201
added multilabel text classification notebook using distilbert to community notebooks
{ "login": "DhavalTaunk08", "id": 31320833, "node_id": "MDQ6VXNlcjMxMzIwODMz", "avatar_url": "https://avatars.githubusercontent.com/u/31320833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DhavalTaunk08", "html_url": "https://github.com/DhavalTaunk08", "followers_url": "https://api.github.com/users/DhavalTaunk08/followers", "following_url": "https://api.github.com/users/DhavalTaunk08/following{/other_user}", "gists_url": "https://api.github.com/users/DhavalTaunk08/gists{/gist_id}", "starred_url": "https://api.github.com/users/DhavalTaunk08/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DhavalTaunk08/subscriptions", "organizations_url": "https://api.github.com/users/DhavalTaunk08/orgs", "repos_url": "https://api.github.com/users/DhavalTaunk08/repos", "events_url": "https://api.github.com/users/DhavalTaunk08/events{/privacy}", "received_events_url": "https://api.github.com/users/DhavalTaunk08/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7201?src=pr&el=h1) Report\n> Merging [#7201](https://codecov.io/gh/huggingface/transformers/pull/7201?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/45b0b1ff2f28d83c734b7505e1705b799ce2ff84?el=desc) will **increase** coverage by `2.45%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7201/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7201?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7201 +/- ##\n==========================================\n+ Coverage 78.41% 80.87% +2.45% \n==========================================\n Files 169 169 \n Lines 32293 32293 \n==========================================\n+ Hits 25323 26116 +793 \n+ Misses 6970 6177 -793 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7201?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/activations\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% <0.00%> (-20.84%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| 
[src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.77% <0.00%> (-0.68%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.18% <0.00%> (-0.36%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.77% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.20% <0.00%> (+0.27%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+10.00%)` | :arrow_up: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/7201/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7201?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7201?src=pr&el=footer). Last update [45b0b1f...6a1e8df](https://codecov.io/gh/huggingface/transformers/pull/7201?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7201/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7201/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7201", "html_url": "https://github.com/huggingface/transformers/pull/7201", "diff_url": "https://github.com/huggingface/transformers/pull/7201.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7201.patch", "merged_at": 1600336738000 }
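The record above concerns a community notebook on multi-label text classification with DistilBERT. The defining difference from single-label classification — each label gets an independent sigmoid probability instead of competing in one softmax — can be sketched without any framework dependency. This is an illustrative sketch only, not code from the notebook; the threshold value and function names are assumptions.

```python
import math

def sigmoid(x: float) -> float:
    """Numerically stable logistic function."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

def multilabel_predict(logits, threshold=0.5):
    """Turn raw per-label logits into independent 0/1 predictions.

    Unlike a multi-class softmax, each label is decided on its own,
    so any number of labels (including zero) can be active at once.
    """
    probs = [sigmoid(logit) for logit in logits]
    return [1 if p >= threshold else 0 for p in probs]

# Three labels, two of which clear the (assumed) 0.5 threshold:
print(multilabel_predict([2.0, -1.5, 0.3]))  # [1, 0, 1]
```

In the notebook's setting the logits would come from a classification head over DistilBERT's pooled output, trained with a per-label binary cross-entropy loss rather than cross-entropy over classes.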
https://api.github.com/repos/huggingface/transformers/issues/7200
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7200/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7200/comments
https://api.github.com/repos/huggingface/transformers/issues/7200/events
https://github.com/huggingface/transformers/pull/7200
703,399,271
MDExOlB1bGxSZXF1ZXN0NDg4NTE0MDgy
7,200
[RAG] PR to save status of previous RAG code
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,600
1,651
1,601
MEMBER
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7200/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7200/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7200", "html_url": "https://github.com/huggingface/transformers/pull/7200", "diff_url": "https://github.com/huggingface/transformers/pull/7200.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7200.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7199
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7199/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7199/comments
https://api.github.com/repos/huggingface/transformers/issues/7199/events
https://github.com/huggingface/transformers/issues/7199
703,379,328
MDU6SXNzdWU3MDMzNzkzMjg=
7,199
Why does RoBERTa not label custom tokens as special tokens?
{ "login": "rcap107", "id": 7548232, "node_id": "MDQ6VXNlcjc1NDgyMzI=", "avatar_url": "https://avatars.githubusercontent.com/u/7548232?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rcap107", "html_url": "https://github.com/rcap107", "followers_url": "https://api.github.com/users/rcap107/followers", "following_url": "https://api.github.com/users/rcap107/following{/other_user}", "gists_url": "https://api.github.com/users/rcap107/gists{/gist_id}", "starred_url": "https://api.github.com/users/rcap107/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcap107/subscriptions", "organizations_url": "https://api.github.com/users/rcap107/orgs", "repos_url": "https://api.github.com/users/rcap107/repos", "events_url": "https://api.github.com/users/rcap107/events{/privacy}", "received_events_url": "https://api.github.com/users/rcap107/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false } ]
[ "Hi! That's a very detailed issue, very cool! Thanks for providing so much information. I believe you have a misconception about how to use `add_special_tokens` and how the special token mask is created.\r\n\r\n## Adding special tokens\r\n\r\nFirstly, if you use the `add_special_tokens=True` flag, that means you expect the tokenizer to handle the special tokens on its own: you should not provide your special tokens. As you've shown in the example, if you encode a sequence with this flag, you'll get the encoded sequence surrounded by the `<bos>` and `<eos>` tokens:\r\n\r\n```py\r\n>>> from transformers import RobertaTokenizer\r\n>>> tokenizer = RobertaTokenizer.from_pretrained(\"roberta-base\")\r\n>>> input_ids_single_sequence = tokenizer.encode(\"This is a sequence\", add_special_tokens=True)\r\n>>> tokenizer.decode(input_ids_single_sequence)\r\n\"<s>This is a sequence</s>\"\r\n```\r\n\r\nIn your case, you're actually using a `<sep>` token as you're encoding two sequences. You can use the tokenizer to do the same, by passing two sequences to it:\r\n\r\n```py\r\n>>> input_ids_sequence_pair = tokenizer.encode(\"This is a sequence\", \"This is another\", add_special_tokens=True)\r\n>>> tokenizer.decode(input_ids_sequence_pair)\r\n'<s>This is a sequence</s></s>This is another</s>'\r\n```\r\n\r\n⚠️ : Please note that the RoBERTa tokenizer is built using only `<s>` (the BOS token) and `</s>` (the SEP token), with two `</s></s>` as the separator.\r\n\r\n## Special token mask\r\n\r\nIf you try generating the special token mask here, you'll see that it has no issue generating it, __as long as you use the `already_has_special_tokens=True` flag__. 
You've already built the sequences with special tokens, so you should set this flag to `True`.\r\n\r\n```py\r\n>>> tokenizer.get_special_tokens_mask(input_ids_single_sequence, already_has_special_tokens=True)\r\n[1, 0, 0, 0, 0, 1]\r\n>>> tokenizer.get_special_tokens_mask(input_ids_sequence_pair, already_has_special_tokens=True)\r\n[1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1]\r\n```\r\n\r\n## Custom special tokens\r\n\r\nIn your case you want to use different special tokens than what is done with the original RoBERTa implementation. That's okay, but then you should specify it to your tokenizer:\r\n\r\n```py\r\n>>> tokenizer.add_special_tokens({'sep_token': '<i>'})\r\n1 # Returns the number of added tokens\r\n```\r\n\r\nWhen calling the `encode` method, you'll now correctly have the specified separator token where it should be:\r\n\r\n```py\r\n>>> input_ids_sequence_pair = tokenizer.encode(\"This is a sequence\", \"This is another\", add_special_tokens=True)\r\n>>> tokenizer.decode(input_ids_sequence_pair)\r\n'<s>This is a sequence <i> <i> This is another <i>'\r\n```\r\n\r\n## Setting a custom encoding behavior\r\n\r\nIn your use-case you show different behavior than the original RoBERTa tokenizer implementation, where you want your sequences encoded as:\r\n\r\n`<s> [SEQ_A] <i> [SEQ_B] </s>`\r\n\r\nIf you still want to rely on the `add_special_tokens=True` flag in `encode` and the accurate special tokens mask even without respecting the original behavior, you can do so by subclassing the `RobertaTokenizer` (or the fast version) and overriding the following methods `build_inputs_with_special_tokens`, `get_special_tokens_mask`, `create_token_type_ids_from_sequences`:\r\n\r\n```py\r\nclass MyRobertaTokenizer(RobertaTokenizer):\r\n def build_inputs_with_special_tokens(\r\n self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None\r\n ) -> List[int]:\r\n if token_ids_1 is None:\r\n return [self.cls_token_id] + token_ids_0 + [self.eos_token_id]\r\n cls = 
[self.cls_token_id]\r\n sep = [self.sep_token_id]\r\n eos = [self.eos_token_id]\r\n return cls + token_ids_0 + sep + token_ids_1 + eos\r\n\r\n def get_special_tokens_mask(\r\n self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False\r\n ) -> List[int]:\r\n if already_has_special_tokens:\r\n if token_ids_1 is not None:\r\n raise ValueError(\r\n \"You should not supply a second sequence if the provided sequence of \"\r\n \"ids is already formatted with special tokens for the model.\"\r\n )\r\n return list(map(lambda x: 1 if x in [self.sep_token_id, self.cls_token_id] else 0, token_ids_0))\r\n\r\n if token_ids_1 is None:\r\n return [1] + ([0] * len(token_ids_0)) + [1]\r\n return [1] + ([0] * len(token_ids_0)) + [1] + ([0] * len(token_ids_1)) + [1]\r\n\r\n def create_token_type_ids_from_sequences(\r\n self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None\r\n ) -> List[int]:\r\n sep = [self.sep_token_id]\r\n cls = [self.cls_token_id]\r\n eos = [self.eos_token_id]\r\n\r\n if token_ids_1 is None:\r\n return len(cls + token_ids_0 + eos) * [0]\r\n return len(cls + token_ids_0 + sep + token_ids_1 + eos) * [0]\r\n```\r\n\r\nThis will then correctly encode your methods (given you initialize the correct special token as is done in the following snippet):\r\n\r\n```py\r\ntokenizer = MyRobertaTokenizer.from_pretrained(\"roberta-base\")\r\ntokenizer.add_special_tokens({'cls_token': '<s>', 'sep_token': '<i>', 'eos_token': '</s>'})\r\n\r\nprint(tokenizer.decode(tokenizer.encode(\"This is a sequence\", add_special_tokens=True)))\r\n# <s>This is a sequence</s>\r\n\r\nprint(tokenizer.decode(tokenizer.encode(\"This is a sequence\", \"This is another\", add_special_tokens=True)))\r\n# <s>This is a sequence <i> This is another</s>\r\n```\r\n\r\n## Conclusion\r\n\r\nI hope this helps you! 
If you need a better understanding of what's happening, I encourage you to read the following documentation pages:\r\n- [RobertaTokenizer](https://huggingface.co/transformers/model_doc/roberta.html#transformers.RobertaTokenizer)\r\n- [Tokenizer main class](https://huggingface.co/transformers/main_classes/tokenizer.html)", "Thanks a lot for the extremely detailed answer! That cleared up a lot of doubts. I worked with some of the code you provided and got to the conclusion that RoBERTa as it is right now is probably not the best fit for my problem, so I'll have to work around that. ", "What a wonderful question and answer!" ]
1,600
1,661
1,600
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> Recently I’ve been working on training from scratch (with no pre-training whatsoever) a RoBERTa model starting from the code in [this tutorial][1]. I am working with a specific corpus I prepared according to my own format <s> ID 10 <i> COUNTRY USA <i> CAPITAL Washington DC </s> Here, `<s>` is supposed to be the `<bos>` token, `<i>` is my own `<sep>` token and `</s>` is the `<eos>` token. I noticed that one of the parameters that can be passed to the `tokenizer.encoder_plus` function is `add_special_tokens`. If `add_special_tokens=True`, the encoding of the sentence <s> COUNTRY USA <i> CAPITAL Washington DC </s> becomes <s> <s> COUNTRY USA <i> CAPITAL Washington DC </s> </s> and the `special_tokens_mask` is `1 0 0 0 0 0 0 0 0 1` (1, 8 zeros, 1). When I tried `add_special_tokens=False` on the same sentence `<s> COUNTRY USA <i> CAPITAL Washington DC </s>` the result of the encoding was correct: `<s> COUNTRY USA <i> CAPITAL Washington DC </s>` However, the `special_tokens_mask` is `0 0 0 0 0 0 0 0` (8 zeros). I therefore suspected that (for some reason) my token `<i>` were not recognized as a special token, so this time I tried to encode a sentence setting `add_special_tokens=False` and `True`, using only `<s>` and `</s>`. Token 4 is `<s>`, which is both the `<cls>` token and the `<bos>` token, while token 6 is `</s>`, which acts as `sep_token` and `eos_token`. 
The first case has `add_special_tokens=False` and its special token mask is full of 0’s, the first case has `add_special_tokens=True` and as expected the `<bos>` and `<eos>` tokens were added by the algorithm. The special tokens mask only shows the first and last tokens as “1”, while all the other 4’s and 6’s are missing. # original sentence <s> ID 10 </s><s> NAME Trevor </s> <s> COUNTRY USA </s><s> CAPITAL Washington DC </s> # encoding with add_special_tokens=False [4, 0, 232, 28, 27, 6, 4, 1, 232, 63, 93, 80, 97, 90, 93, 6, 4, 2, 232, 64, 62, 44, 6, 4, 3, 232, 66, 76, 94, 83, 84, 89, 82, 95, 90, 89, 232, 47, 46, 6] # encoding with add_special_tokens=True [4, 4, 0, 232, 28, 27, 6, 4, 1, 232, 63, 93, 80, 97, 90, 93, 6, 4, 2, 232, 64, 62, 44, 6, 4, 3, 232, 66, 76, 94, 83, 84, 89, 82, 95, 90, 89, 232, 47, 46, 6, 6] # special_tokens_mask with add_special_tokens=True [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1] After testing both versions, the result I got with `add_special_tokens=True` was very good, while the second with `add_special_tokens=False` failed. This raises a few issues that I wasn’t able to solve on my own: - How can I access the `special_tokens_mask` to correct it to what it should be? - Where does RoBERTa make use of that mask, if it does? - Is there a method for setting the mask to something I want? e.g. the mask for `<s> ID 10 <i> COUNTRY USA </s>` should be `1 0 0 1 0 0 1` if `<s>`, `</s>` and `<i>` are special tokens. - If RoBERTa is not the correct model to do this, what model should I go for? Thanks a lot! [1]: https://huggingface.co/blog/how-to-train <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. 
--> **A link to original question on the forum/Stack Overflow**: https://stackoverflow.com/questions/63917142/why-does-roberta-not-label-custom-tokens-as-special-tokens https://discuss.huggingface.co/t/why-does-roberta-behave-differently-if-i-provide-a-corpus-that-contains-special-tokens/1083/2
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7199/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7199/timeline
completed
null
null
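The answer quoted in the record above builds the special-tokens mask with plain list operations. The same logic can be reproduced as a standalone sketch, which makes the `already_has_special_tokens` distinction easy to see. The token IDs here are illustrative placeholders, not RoBERTa's actual vocabulary IDs.

```python
from typing import List, Optional

# Hypothetical IDs: <s> = 0, </s> = 2 (illustrative, not RoBERTa's real vocab).
SPECIAL_IDS = frozenset({0, 2})

def special_tokens_mask(
    token_ids_0: List[int],
    token_ids_1: Optional[List[int]] = None,
    already_has_special_tokens: bool = False,
) -> List[int]:
    """1 marks a special token, 0 a regular token.

    Mirrors the logic from the quoted answer: when the caller already
    inserted special tokens, just flag the known special IDs; otherwise
    describe the layout the tokenizer WOULD build, i.e.
    <s> ids_0 </s> for one sequence and <s> ids_0 </s></s> ids_1 </s>
    for a pair (note RoBERTa's doubled separator).
    """
    if already_has_special_tokens:
        return [1 if t in SPECIAL_IDS else 0 for t in token_ids_0]
    if token_ids_1 is None:
        return [1] + [0] * len(token_ids_0) + [1]
    return [1] + [0] * len(token_ids_0) + [1, 1] + [0] * len(token_ids_1) + [1]

print(special_tokens_mask([10, 11, 12]))                 # [1, 0, 0, 0, 1]
print(special_tokens_mask([10, 11], [20, 21]))           # [1, 0, 0, 1, 1, 0, 0, 1]
print(special_tokens_mask([0, 10, 2], already_has_special_tokens=True))  # [1, 0, 1]
```

The pair-sequence branch matches the mask shown in the quoted answer (`[1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1]` for two four-token sequences), including the two adjacent `</s></s>` separators that are specific to RoBERTa's input layout.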
https://api.github.com/repos/huggingface/transformers/issues/7198
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7198/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7198/comments
https://api.github.com/repos/huggingface/transformers/issues/7198/events
https://github.com/huggingface/transformers/issues/7198
703,234,070
MDU6SXNzdWU3MDMyMzQwNzA=
7,198
how to continue training from a checkpoint with Trainer?
{ "login": "fumpe", "id": 37223285, "node_id": "MDQ6VXNlcjM3MjIzMjg1", "avatar_url": "https://avatars.githubusercontent.com/u/37223285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fumpe", "html_url": "https://github.com/fumpe", "followers_url": "https://api.github.com/users/fumpe/followers", "following_url": "https://api.github.com/users/fumpe/following{/other_user}", "gists_url": "https://api.github.com/users/fumpe/gists{/gist_id}", "starred_url": "https://api.github.com/users/fumpe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fumpe/subscriptions", "organizations_url": "https://api.github.com/users/fumpe/orgs", "repos_url": "https://api.github.com/users/fumpe/repos", "events_url": "https://api.github.com/users/fumpe/events{/privacy}", "received_events_url": "https://api.github.com/users/fumpe/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false } ]
[ "Hi there, you have to pass the checkpoint path to the method `Trainer.train` to resume training:\r\n```\r\ntrainer.train(\"checkpoint-9500\")\r\n```\r\nIf you set your logging verbosity to the INFO level (`transformers.logging.set_verbosity_info()`) you should then see information about the training resuming and the number of steps skipped.", "Great, thanks a lot for your help Sylvain.\r\n\r\nWorks perfect.\r\n", "Hey, @fumpe which version of transformers were you using here? Please do let me know.", "> Hi there, you have to pass the checkpoint path to the method `Trainer.train` to resume training:\r\n> \r\n> ```\r\n> trainer.train(\"checkpoint-9500\")\r\n> ```\r\n> \r\n> If you set your logging verbosity to the INFO level (`transformers.logging.set_verbosity_info()`) you should then see information about the training resuming and the number of steps skipped.\r\n\r\n@sgugger if training is over, `num_train_epochs`, is reached, how do you load the checkpoint and train say for the next n epochs? when I load the checkpoint, it only trains until that epoch is over \r\n\r\n```\r\ntrainer.num_train_epochs = trainer.num_train_epochs + 5\r\ntrainer.train(\"checkpoint-9500\")\r\n```\r\n", "`Trainer` does not have a `num_train_epochs` attribute that is used, you need to set `trainer.args.num_train_epochs`. To be sure everything is updated, you probably need to re-instantiate the `Trainer` with the new `TrainingArguments` though.", "@sgugger: I wanted to fine tune a language model using ```--resume_from_checkpoint``` since I had sharded the text file into multiple pieces. I noticed that the ```_save()``` in Trainer doesn't save the optimizer & the scheduler state dicts and so I added a couple of lines to save the state dicts. 
\r\nAnd I printed the learning rate from scheduler using ```lr_scheduler.get_last_lr()``` in ```_load_optimizer_and_scheduler()``` right after this [line](https://github.com/huggingface/transformers/blob/ac12a5ae47b352458694194ae7c8b971310015ee/src/transformers/trainer.py#L1678). It always prints 0 (as the last learning rate) although I initialize the learning rate with a non-zero value during the first run and I am also leaving the warmup args as default (which is 0). Here's the command I am using to run the script: \r\n\r\n```\r\nCUDA_VISIBLE_DEVICES=2 python lm_scripts/lm_wwm.py --model_name_or_path \"./output_dir/demo1/\" --train_file \\\r\n\"input_data_part2.csv\" --validation_file \"validation.csv\" --logging_dir \"./output_dir/demo_log2\" --logging_steps 1000 \\\r\n--save_steps 1000 --save_total_limit 2 --seed 42 --output_dir \"./output_dir/demo2\" --evaluation_strategy \"epoch\" \\\r\n--per_device_train_batch_size 5 --per_device_eval_batch_size 10 --do_train --do_eval --resume_from_checkpoint True \\\r\n--ignore_data_skip --num_train_epochs 2\r\n``` \r\n\r\nHere's the [gist](https://gist.github.com/adithya8/7823d111efec9f3ff5313f2438068c2e) of the changes I made to the trainer code (Lines 904, 905, 1602, 1603). It would be super helpful if you could point me in the right direction to fix this?", "@adithya8 The optimizer, learning rate scheduler, state of the scalers and all the RNGs are saved when doing a checkpoint, you should check the list of files you have in your `checkpoint-xxx` folder.", "Gotcha! Thanks", "When I resume training from a checkpoint, I use a new batch size different from the previous training and it seems that the number of the skipped epoch is wrong.\r\n\r\nFor example, I trained a model for 10 epochs with `per_device_train_batch_size=10` and generate a checkpoint. 
Then I resume training from this checkpoint with `per_device_train_batch_size=10`, the trainer will skip 12 epochs.\r\n\r\nWhat should I do if I want to resume training with a different batch size?", "That feature is not supported, you should resume training with the exact same hyperparameters or start a new training if you want to change them.", "I have trained a model and saved the checkpoint,Now I want to increase the number of epochs to continue training, but I have a problem.My training loss is not able to match the previous training, there is a very big difference(First up, then down, but not down to the original level).I increased the number of epochs from 20 to 30.\r\n`training_args = TrainingArguments( \r\n output_dir= models_path + '/' + chain + '_' + modeltype + 'bert/two/log',\r\n save_total_limit=5,\r\n do_train=True,\r\n do_eval=True,\r\n evaluation_strategy='epoch',\r\n #eval_steps=1,\r\n overwrite_output_dir=True,\r\n num_train_epochs=30,\r\n per_device_train_batch_size=128,\r\n learning_rate=9.282980634054497e-05,\r\n weight_decay=0.1531103573951345,\r\n warmup_ratio=0.05549995455206759,\r\n logging_first_step=True,\r\n ignore_data_skip=True,\r\n save_steps=100,\r\n logging_dir = models_path + '/' + chain + '_' + modeltype + 'bert/two/runs/' + datetime.datetime.strftime(datetime.datetime.now(),'%Y-%m-%d_%H:%M:%S'),\r\n logging_steps=10,\r\n seed = 19930606\r\n )`\r\n`trainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=train_dataset,\r\n eval_dataset = eva_dataset\r\n )`\r\n` trainer.train(resume_from_checkpoint=True)`\r\nThanks a lot for the help.", "> have trained a model and saved the checkpoint,Now I want to increase the number of epochs to continue training, but I have a problem.My training loss is not able to match the previous training, there is a very big difference(First up, then down, but not down to the original level).I increased the number of epochs\r\n\r\n\r\n\r\n> That feature 
is not supported, you should resume training with the exact same hyperparameters or start a new training if you want to change them.\r\n\r\nFor example, I want to freeze some layers in the checkpointed model. Is it possible?", "I have two bin files like pytorch_model-00001-of-00002.bin and pytorch_model-00002-of-00002.bin in the checkpoint-1000 folder but it won't load! It still says that \"ValueError: Can't find a valid checkpoint at checkpoint-1000.\"", "Make sure you are using a source install of Transformers. This is a bug that was recently fixed @oscar-eaglewatch ", "How do I differentiate between,\r\n\r\n1. continue pre-training from a checkpoint with the previous optimizer/scheduler state\r\n2. continue pre-training from a checkpoint but reset the optimizer and the scheduler \r\n\r\n", "@sgugger I have been changing my hyperparameters and got `Can't find a valid checkpoint`. Now it seems I cannot find the original hyperparameter settings (esp. for `per_device_train_batch_size`, `gradient_accumulation_steps`, and `max_steps`). I did find the `max_steps` in some json file in the checkpoint, but not the other hyperparameter settings. How can I find out the settings I should pick?", "Hi there, I got the same error with deepspeed. 
training normally and resume got `Can't find a valid checkpoint`\r\n\r\nfirst I try resume_from_checkpoint with `out/lora-Vicuna-chat` (output_path) got `Can't find a valid checkpoint`\r\nthen I send `out/lora-Vicuna-chat/checkpoint-6000` I can not load the lora weights........\r\n\r\n```\r\n\"base_model.model.model.layers.31.self_attn.k_proj.lora_B.default.weight\", \"base_model.model.model.layers.31.self_attn.v_proj.weight\", \"base_model.model.model.layers.31.self_attn.v_proj.lora_A.default.weight\", \r\n\"base_model.model.model.layers.31.self_attn.v_proj.lora_B.default.weight\", \"base_model.model.model.layers.31.self_attn.o_proj.weight\", \"base_model.model.model.layers.31.self_attn.o_proj.lora_A.default.weight\", \r\n\"base_model.model.model.layers.31.self_attn.o_proj.lora_B.default.weight\", \"base_model.model.model.layers.31.self_attn.rotary_emb.inv_freq\", \"base_model.model.model.layers.31.mlp.gate_proj.weight\", \r\n\"base_model.model.model.layers.31.mlp.gate_proj.lora_A.default.weight\", \"base_model.model.model.layers.31.mlp.gate_proj.lora_B.default.weight\", \"base_model.model.model.layers.31.mlp.down_proj.weight\", \r\n\"base_model.model.model.layers.31.mlp.down_proj.lora_A.default.weight\", \"base_model.model.model.layers.31.mlp.down_proj.lora_B.default.weight\", \"base_model.model.model.layers.31.mlp.up_proj.weight\", \r\n\"base_model.model.model.layers.31.mlp.up_proj.lora_A.default.weight\", \"base_model.model.model.layers.31.mlp.up_proj.lora_B.default.weight\", \"base_model.model.model.layers.31.input_layernorm.weight\", \r\n Unexpected key(s) in state_dict:\"base_model.model.model.layers.31.self_attn.q_proj.lora_A.weight\", \"base_model.model.model.layers.31.self_attn.q_proj.lora_B.weight\", \"base_model.model.model.layers.31.self_attn.k_proj.lora_A.weight\", \r\n\"base_model.model.model.layers.31.self_attn.k_proj.lora_B.weight\", \"base_model.model.model.layers.31.self_attn.v_proj.lora_A.weight\", 
\"base_model.model.model.layers.31.self_attn.v_proj.lora_B.weight\", \r\n\"base_model.model.model.layers.31.self_attn.o_proj.lora_A.weight\", \"base_model.model.model.layers.31.self_attn.o_proj.lora_B.weight\", \"base_model.model.model.layers.31.mlp.gate_proj.lora_A.weight\", \r\n\"base_model.model.model.layers.31.mlp.gate_proj.lora_B.weight\", \"base_model.model.model.layers.31.mlp.down_proj.lora_A.weight\", \"base_model.model.model.layers.31.mlp.down_proj.lora_B.weight\", \r\n```\r\n\r\nthe model with some suffix `default`, but the same model didn't have......\r\n\r\nI am confused so much", "Transformers have been a few years old, I thought that it should be mature by now, at least 1 core version that works flawlessly all the way. \r\n\r\nBut man, basic and simple issues like saving checkpoints, weight reload, resume, skip dataset etc... with pure transformers or deepspeed...still bumping around, real pain in the b** :( ... I must say that I admired Yolov5 most in the way they described things, announcing news, bugs fixed. My 2 cents for this great repo, it deserves much more than this.", "Falling back to 4.28.1 :((", "I am having the same problem with Deepspeed, it seems that a different format of checkpoint is expected for Deepspeed. Did you manage to resolve the issue? @lucasjinreal " ]
1,600
1,699
1,600
NONE
null
# ❓ Questions & Help ## Details I am trying to continue training my model (gpt-2) from a checkpoint, using Trainer. However when I try to do it the model starts training from 0, not from the checkpoint. I share my code because I don't know where I'm making the mistake. ``` import torch device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') from transformers import TextDataset,DataCollatorForLanguageModeling, AutoTokenizer, GPT2LMHeadModel, Trainer, TrainingArguments tokenizer = AutoTokenizer.from_pretrained("gpt2-large") train_dataset = TextDataset( tokenizer=tokenizer, file_path='textfile (1).txt', block_size=128) data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=False, ) model = GPT2LMHeadModel.from_pretrained("checkpoint-9500").to(device) ##HERE I LOAD FROM CHECKPOINT training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=4, # total # of training epochs per_device_train_batch_size=1, # batch size per device during training per_device_eval_batch_size=64, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='./logs', # directory for storing logs ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, #eval_dataset=validation_dataset, prediction_loss_only=True, ) trainer.train() ``` Thanks a lot for the help.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7198/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7198/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7197
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7197/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7197/comments
https://api.github.com/repos/huggingface/transformers/issues/7197/events
https://github.com/huggingface/transformers/issues/7197
703,221,117
MDU6SXNzdWU3MDMyMjExMTc=
7,197
[s2s]add wandb summary metric that tracks best val bleu/rouge
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,606
1,606
CONTRIBUTOR
null
Right now, wandb just shows the last value, not the best value (associated with the checkpoint). So if you overtrained something, you can't see that it was good on the wandb page.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7197/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7197/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7196
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7196/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7196/comments
https://api.github.com/repos/huggingface/transformers/issues/7196/events
https://github.com/huggingface/transformers/pull/7196
703,205,121
MDExOlB1bGxSZXF1ZXN0NDg4MzUzOTgx
7,196
[s2s] fix kwarg typo
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,600
1,600
1,600
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7196/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7196/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7196", "html_url": "https://github.com/huggingface/transformers/pull/7196", "diff_url": "https://github.com/huggingface/transformers/pull/7196.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7196.patch", "merged_at": 1600307938000 }
https://api.github.com/repos/huggingface/transformers/issues/7195
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7195/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7195/comments
https://api.github.com/repos/huggingface/transformers/issues/7195/events
https://github.com/huggingface/transformers/issues/7195
703,197,136
MDU6SXNzdWU3MDMxOTcxMzY=
7,195
GPT2 Heatmap Error: 'Parameter' object has no attribute 'get_shape'
{ "login": "BigSalmon2", "id": 61605789, "node_id": "MDQ6VXNlcjYxNjA1Nzg5", "avatar_url": "https://avatars.githubusercontent.com/u/61605789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BigSalmon2", "html_url": "https://github.com/BigSalmon2", "followers_url": "https://api.github.com/users/BigSalmon2/followers", "following_url": "https://api.github.com/users/BigSalmon2/following{/other_user}", "gists_url": "https://api.github.com/users/BigSalmon2/gists{/gist_id}", "starred_url": "https://api.github.com/users/BigSalmon2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BigSalmon2/subscriptions", "organizations_url": "https://api.github.com/users/BigSalmon2/orgs", "repos_url": "https://api.github.com/users/BigSalmon2/repos", "events_url": "https://api.github.com/users/BigSalmon2/events{/privacy}", "received_events_url": "https://api.github.com/users/BigSalmon2/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "As the error says, `get_shape` does not exist in PyTorch parameters. Use `[...].shape` instead of `[...].get_shape()`." ]
1,600
1,600
1,600
NONE
null
``` import numpy as np import tensorflow as tf def compute_textual_saliency(model, embedding_matrix, tokenizer, text): token_ids = tokenizer.encode(text, add_special_tokens=True) vocab_size = embedding_matrix.get_shape()[0] heatmap_data = [] for masked_token_index in range(len(token_ids)): # print(f'processing token {masked_token_index + 1} / {len(token_ids)}') if masked_token_index == 0: heatmap_data.append({ 'token': '[CLR]', 'meta': ['', '', ''], 'heat': [1] + [0] * (len(token_ids) - 1) }) elif masked_token_index == len(token_ids) - 1: heatmap_data.append({ 'token': ' ', 'format': True }) heatmap_data.append({ 'token': '[SEP]', 'meta': ['', '', ''], 'heat': [0] * (len(token_ids) - 1) + [1] }) else: # Get the actual token target_token = tokenizer.convert_ids_to_tokens( token_ids[masked_token_index]) if target_token[0:2] == '##': target_token = target_token[2:] else: heatmap_data.append({ 'token': ' ', 'format': True }) # integers are not differentable, so use a one-hot encoding # of the intput token_ids_tensor = tf.constant([ token_ids[0:masked_token_index] + [tokenizer.mask_token_id] + token_ids[masked_token_index + 1:] ], dtype='int32') token_ids_tensor_one_hot = tf.one_hot(token_ids_tensor, vocab_size) # To select, the correct output witch is what the importance # measure targets, create a masking tensor. tf.gather_nd could also # be used, but this is easier. output_mask = np.zeros((1, len(token_ids), vocab_size)) output_mask[0, masked_token_index, token_ids[masked_token_index]] = 1 output_mask_tensor = tf.constant(output_mask, dtype='float32') # Compute gradient of the logits of the correct target, w.r.t. 
the # input with tf.GradientTape(watch_accessed_variables=False) as tape: tape.watch(token_ids_tensor_one_hot) inputs_embeds = tf.matmul(token_ids_tensor_one_hot,embedding_matrix) predict, = model({"inputs_embeds": inputs_embeds}) predict_mask_correct_token = tf.reduce_sum(predict * output_mask_tensor) # Get the top-3 predictions (_, top_3_indices) = tf.math.top_k(predict[0, masked_token_index, :], 3) top_3_predicted_tokens = tokenizer.convert_ids_to_tokens(top_3_indices) # compute the connectivity connectivity_non_normalized = tf.norm( tape.gradient(predict_mask_correct_token, token_ids_tensor_one_hot), axis=2) connectivity_tensor = ( connectivity_non_normalized / tf.reduce_max(connectivity_non_normalized) ) connectivity = connectivity_tensor[0].numpy().tolist() heatmap_data.append({ 'token': target_token, 'meta': top_3_predicted_tokens, 'heat': connectivity }) return heatmap_data ``` ``` text = ("context the formal study of grammar is an important part of education" " from a young age through advanced learning though the rules taught" " in schools are not a grammar in the sense most linguists use") from transformers import TFDistilBertForMaskedLM, DistilBertTokenizer dbert_model = TFDistilBertForMaskedLM.from_pretrained('distilbert-base-uncased') dbert_tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased') dbert_embmat = dbert_model.distilbert.embeddings.word_embeddings from transformers import AutoTokenizer, AutoModelWithLMHead gpt2_model = AutoModelWithLMHead.from_pretrained("gpt2") gpt2_tokenizer = AutoTokenizer.from_pretrained("gpt2") gpt2_embmat = gpt2_model.transformer.wte.weight from textualheatmap import TextualHeatmap heatmap = TextualHeatmap(facet_titles = ['BERT', 'Distil BERT'], show_meta=True) heatmap.set_data([ compute_textual_saliency(dbert_model, dbert_embmat, dbert_tokenizer, text), compute_textual_saliency(gpt2_model, gpt2_embmat, gpt2_tokenizer, text) ]) ``` `AttributeError: 'Parameter' object has no attribute 'get_shape'`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7195/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7195/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7194
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7194/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7194/comments
https://api.github.com/repos/huggingface/transformers/issues/7194/events
https://github.com/huggingface/transformers/pull/7194
703,185,894
MDExOlB1bGxSZXF1ZXN0NDg4MzM4NTE5
7,194
examples/seq2seq/__init__.py mutates sys.path
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7194?src=pr&el=h1) Report\n> Merging [#7194](https://codecov.io/gh/huggingface/transformers/pull/7194?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4f6e52574248636352a746cfe6cc0b13cf3eb7f9?el=desc) will **increase** coverage by `2.14%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7194/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7194?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7194 +/- ##\n==========================================\n+ Coverage 78.63% 80.78% +2.14% \n==========================================\n Files 174 174 \n Lines 33446 33446 \n==========================================\n+ Hits 26300 27018 +718 \n+ Misses 7146 6428 -718 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7194?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.10% <0.00%> (-29.80%)` | :arrow_down: |\n| [src/transformers/activations\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% <0.00%> (-20.84%)` | :arrow_down: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `79.03% <0.00%> (-7.80%)` | :arrow_down: |\n| 
[src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `86.87% <0.00%> (-0.36%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.84% <0.00%> (-0.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.27% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.28% <0.00%> (ø)` | |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <0.00%> (+0.13%)` | :arrow_up: |\n| ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/7194/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7194?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7194?src=pr&el=footer). Last update [4f6e525...2c2aa9e](https://codecov.io/gh/huggingface/transformers/pull/7194?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Have you tested invoking run_eval.py/finetune.py from examples/seq2seq? \r\n\r\nThat is the workflow documented in `seq2seq/README.md`\r\n\r\nThis command is cheap and if it starts running you know you are fine:\r\n\r\nhttps://discuss.huggingface.co/t/good-command-to-test-examples-seq2seq-refactors/982", "Ah, we need tests that aren't tests too :) \r\n\r\nGood catch!\r\n```\r\ncd examples/seq2seq/\r\npython ./run_eval.py -h\r\nTraceback (most recent call last):\r\n File \"./run_eval.py\", line 15, in <module>\r\n from .utils import calculate_bleu, calculate_rouge, parse_numeric_n_bool_cl_kwargs, use_task_specific_params\r\nImportError: attempted relative import with no known parent package\r\n```\r\n\r\nback to the drawing board.\r\n\r\nThanks for your vigilance, @sshleifer ", "OK, so while `conftest.py` worked for tests, it left the scripts broken. So in the 2nd attempt I replaced `conftest.py` with `__init__.py` that does the same and works for both types. \r\n\r\n@sshleifer, @LysandreJik \r\n\r\n\r\n#### Header added by sam because what is below is tangentially related to this PR :)\r\n\r\np.s. it looks like importing a local module from a script is a very controversial thing from reading SO. If using the retry approach, it's still probably more readable to do:\r\n\r\n```\r\nif __name__ == '__main__':\r\n from mymodule import aaa\r\nelse:\r\n from .mymodule import aaa\r\n```\r\nwhich is more telling.\r\n", "> This command is cheap and if it starts running you know you are fine:\r\n\r\nMuch faster to test is to:\r\n```\r\npython finetune.py -h\r\n```", "Pycharm users who develop on seq2seq a lot (I can think of 0 others haha) may need to change Preferences/Project Structure/Content Root to accommodate this. 
I posted [instructions](https://discuss.huggingface.co/t/pycharm-project-structure-seq2seq/1206) on the forums that I'll delete if this doesn't get merged.", "Merging this with the idea that there is a 10% chance we will have to revert because of some unforeseen dealbreaking issue." ]
1,600
1,600
1,600
CONTRIBUTOR
null
(**edit**: the PR went through a change - I updated the description here). Fixes part of the issue raised here: https://github.com/huggingface/transformers/pull/7109#issuecomment-693213029 This PR replaces the hack of retrying to import modules via `foo` and if failed via `.foo` depending on where the test is invoked from, so now it's possible to do: ``` pytest ./examples/seq2seq/test_seq2seq_examples.py --collect-only -q cd examples pytest ./seq2seq/test_seq2seq_examples.py --collect-only -q cd seq2seq/ pytest ./test_seq2seq_examples.py --collect-only -q ``` and all the imports just work. (I used `--collect-only -q` just to validate that it works, as it's a fast run) For scripts, invocations work too: ``` python ./examples/seq2seq/run_eval.py -h cd examples python ./seq2seq/run_eval.py -h cd seq2seq/ python ./run_eval.py -h ``` While `conftest.py` worked for tests, it left the scripts broken. So in the 2nd attempt I replaced `conftest.py` with `__init__.py` that does the same and works for both types. If it's fitting, we can do the same for the rest of the `examples` sub-dirs. @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7194/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7194/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7194", "html_url": "https://github.com/huggingface/transformers/pull/7194", "diff_url": "https://github.com/huggingface/transformers/pull/7194.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7194.patch", "merged_at": 1600635283000 }
https://api.github.com/repos/huggingface/transformers/issues/7193
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7193/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7193/comments
https://api.github.com/repos/huggingface/transformers/issues/7193/events
https://github.com/huggingface/transformers/pull/7193
703,176,217
MDExOlB1bGxSZXF1ZXN0NDg4MzMwNjUw
7,193
Add bos and eos to gpt2 tokenizer
{ "login": "zhujl1991", "id": 1834838, "node_id": "MDQ6VXNlcjE4MzQ4Mzg=", "avatar_url": "https://avatars.githubusercontent.com/u/1834838?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhujl1991", "html_url": "https://github.com/zhujl1991", "followers_url": "https://api.github.com/users/zhujl1991/followers", "following_url": "https://api.github.com/users/zhujl1991/following{/other_user}", "gists_url": "https://api.github.com/users/zhujl1991/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhujl1991/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhujl1991/subscriptions", "organizations_url": "https://api.github.com/users/zhujl1991/orgs", "repos_url": "https://api.github.com/users/zhujl1991/repos", "events_url": "https://api.github.com/users/zhujl1991/events{/privacy}", "received_events_url": "https://api.github.com/users/zhujl1991/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,606
1,606
NONE
null
Fix https://github.com/huggingface/transformers/issues/3311 Test: ``` tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') tokenizer.add_special_tokens({'pad_token': '<pad>'}) inputs = tokenizer.batch_encode_plus(["My cute dog", "My cute dog is is"], add_special_tokens=True, padding=True, return_tensors="pt") ``` output: before: ``` {'input_ids': tensor([[ 3666, 13779, 3290, 50257, 50257], [ 3666, 13779, 3290, 318, 318]]), 'attention_mask': tensor([[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]])} ``` after: ``` {'input_ids': tensor([[50256, 3666, 13779, 3290, 50256, 50257, 50257], [50256, 3666, 13779, 3290, 318, 318, 50256]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 1, 1]])} ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7193/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7193/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7193", "html_url": "https://github.com/huggingface/transformers/pull/7193", "diff_url": "https://github.com/huggingface/transformers/pull/7193.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7193.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7192
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7192/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7192/comments
https://api.github.com/repos/huggingface/transformers/issues/7192/events
https://github.com/huggingface/transformers/pull/7192
703,174,386
MDExOlB1bGxSZXF1ZXN0NDg4MzI5MTMx
7,192
[s2s] run_eval/run_eval_search tweaks
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "sadly I think `pytest.param` was needed or each entry needs to be a list or something:\r\n\r\nhttps://circle-production-customer-artifacts.s3.amazonaws.com/picard/forks/5bdabdd888af1f000130874a/278214696/5f62bdadb33de21283aa7519-0-build/artifacts/test_output.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20200917T014628Z&X-Amz-SignedHeaders=host&X-Amz-Expires=59&X-Amz-Credential=AKIAJR3Q6CR467H7Z55A%2F20200917%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=4ec6c5af819aab91ed8103865ad61af3617a43c8ffa34fe5192a45e3067e3cc0\r\n\r\n\r\n```\r\n__________ ERROR collecting examples/seq2seq/test_seq2seq_examples.py __________\r\nexamples/seq2seq/test_seq2seq_examples.py::test_finetune: in \"parametrize\" the number of names (1):\r\n ['model']\r\nmust be equal to the number of values (31):\r\n patrickvonplaten/t5-tiny-random\r\n__________ ERROR collecting examples/seq2seq/test_seq2seq_examples.py __________\r\n```", "> sadly I think `pytest.param` was needed or each entry needs to be a list or something:\r\nright I didn't run all tests, didn't see the other entries used an inefficient use of `parametrize`.\r\n\r\nInstead of:\r\n```\r\[email protected](\"model\", [BART_TINY, MBART_TINY])\r\n```\r\nThey were using:\r\n```\r\[email protected]([\"model\"], ...\r\n```\r\nnotice the argument names are normally a string and not a list. 
Since it was made into the list, that's why all the extra code.\r\n\r\na recent addition: https://huggingface.co/transformers/master/testing.html#parametrization\r\n\r\nBTW, if we switch to `parametrized` (which is now part of the dependencies for `dev`) it works identically with `unittest` and `pytest` tests - same API.\r\n\r\nI fixed it and now:\r\n\r\n```\r\npytest --disable-warnings examples/seq2seq/test_seq2seq_examples.py --collect-only -q\r\n[...]\r\nexamples/seq2seq/test_seq2seq_examples.py::test_run_eval_slow[sshleifer/bart-tiny-random]\r\nexamples/seq2seq/test_seq2seq_examples.py::test_run_eval\r\nexamples/seq2seq/test_seq2seq_examples.py::test_run_eval_slow[sshleifer/tiny-mbart]\r\n[...]\r\n```\r\n", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7192?src=pr&el=h1) Report\n> Merging [#7192](https://codecov.io/gh/huggingface/transformers/pull/7192?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/45b0b1ff2f28d83c734b7505e1705b799ce2ff84?el=desc) will **increase** coverage by `1.08%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7192/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7192?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7192 +/- ##\n==========================================\n+ Coverage 78.41% 79.50% +1.08% \n==========================================\n Files 169 169 \n Lines 32293 32293 \n==========================================\n+ Hits 25323 25673 +350 \n+ Misses 6970 6620 -350 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7192?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7192/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% 
<0.00%> (-76.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7192/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <0.00%> (-71.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7192/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7192/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7192/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.83% <0.00%> (-0.72%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7192/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.68% <0.00%> (-0.65%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7192/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.84% <0.00%> (-0.25%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7192/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.91% <0.00%> (-0.14%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7192/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+63.80%)` | :arrow_up: |\n| 
[src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7192/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `93.23% <0.00%> (+74.20%)` | :arrow_up: |\n| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/7192/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7192?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7192?src=pr&el=footer). Last update [45b0b1f...d329cf2](https://codecov.io/gh/huggingface/transformers/pull/7192?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
working with @sshleifer on making run_eval/run_eval_search and tests better.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7192/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7192/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7192", "html_url": "https://github.com/huggingface/transformers/pull/7192", "diff_url": "https://github.com/huggingface/transformers/pull/7192.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7192.patch", "merged_at": 1600367199000 }
https://api.github.com/repos/huggingface/transformers/issues/7191
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7191/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7191/comments
https://api.github.com/repos/huggingface/transformers/issues/7191/events
https://github.com/huggingface/transformers/pull/7191
703,097,723
MDExOlB1bGxSZXF1ZXN0NDg4MjYzNzc2
7,191
Trainer multi label
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7191?src=pr&el=h1) Report\n> Merging [#7191](https://codecov.io/gh/huggingface/transformers/pull/7191?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3babef815c5372d039202b2247a4ed6afb2a410a?el=desc) will **decrease** coverage by `1.45%`.\n> The diff coverage is `70.83%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7191/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7191?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7191 +/- ##\n==========================================\n- Coverage 80.86% 79.41% -1.46% \n==========================================\n Files 169 169 \n Lines 32293 32322 +29 \n==========================================\n- Hits 26115 25668 -447 \n- Misses 6178 6654 +476 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7191?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `58.88% <56.52%> (-1.41%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `55.72% <80.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/7191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `91.74% <100.00%> (+0.07%)` | :arrow_up: |\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <0.00%> (-71.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/7191/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7191?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7191?src=pr&el=footer). Last update [3babef8...29ce623](https://codecov.io/gh/huggingface/transformers/pull/7191?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
COLLABORATOR
null
This is a follow-up from #7126. The same kinds of models that can output multiple predictions expect multiple labels (not named "labels") so the evaluation code needs to be changed for this. To support models built by users, I added a `label_names` field in the `TrainingArguments` which contain the label names. It then defaults to `["labels"]` for most models, `["start_positions", "end_positions"]` for question answering models if the user does not set it to work seamlessly for all Transformers models. I ended up writing a few util functions that concat/numpify for tensors or nested lists/tuples of tensors to avoid testing everywhere in `Trainer`, I think the design is cleaner this way and it also supports model with crazy outputs (if we set `output_attentions=True` for instance). I also added a test for the multiple labels predictions.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7191/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7191/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7191", "html_url": "https://github.com/huggingface/transformers/pull/7191", "diff_url": "https://github.com/huggingface/transformers/pull/7191.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7191.patch", "merged_at": 1600344937000 }
https://api.github.com/repos/huggingface/transformers/issues/7190
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7190/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7190/comments
https://api.github.com/repos/huggingface/transformers/issues/7190/events
https://github.com/huggingface/transformers/pull/7190
703,093,227
MDExOlB1bGxSZXF1ZXN0NDg4MjU5OTgy
7,190
[build scripts] update
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7190?src=pr&el=h1) Report\n> Merging [#7190](https://codecov.io/gh/huggingface/transformers/pull/7190?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0203ad43bcd0b29423dec6ca1a58ed58300f0d61?el=desc) will **decrease** coverage by `2.21%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7190/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7190?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7190 +/- ##\n==========================================\n- Coverage 80.86% 78.65% -2.22% \n==========================================\n Files 169 169 \n Lines 32293 32293 \n==========================================\n- Hits 26114 25399 -715 \n- Misses 6179 6894 +715 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7190?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/7190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `33.57% <0.00%> (-65.72%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/7190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.85% <0.00%> (-55.15%)` | :arrow_down: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/7190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `57.14% <0.00%> (-39.69%)` | :arrow_down: |\n| 
[src/transformers/tokenization\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnVubmVsLnB5) | `62.79% <0.00%> (-34.89%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `57.28% <0.00%> (-15.08%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/7190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |\n| ... and [19 more](https://codecov.io/gh/huggingface/transformers/pull/7190/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7190?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7190?src=pr&el=footer). 
Last update [0203ad4...dd95009](https://codecov.io/gh/huggingface/transformers/pull/7190?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I'm going to add it here https://github.com/huggingface/transformers/pull/7153 where it really belongs" ]
1,600
1,600
1,600
CONTRIBUTOR
null
fill-in the missing info for the build script as provided by the searcher. (this is also an experiment in re-using the branch for a follow up PR, it shows the old commits that have been merged already, but the diff is new. not sure if there is a better way.)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7190/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7190/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7190", "html_url": "https://github.com/huggingface/transformers/pull/7190", "diff_url": "https://github.com/huggingface/transformers/pull/7190.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7190.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7189
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7189/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7189/comments
https://api.github.com/repos/huggingface/transformers/issues/7189/events
https://github.com/huggingface/transformers/issues/7189
703,080,710
MDU6SXNzdWU3MDMwODA3MTA=
7,189
When trying to train LongformerModel got **forward() got an unexpected keyword argument 'labels'**
{ "login": "MutuLawyer", "id": 48943483, "node_id": "MDQ6VXNlcjQ4OTQzNDgz", "avatar_url": "https://avatars.githubusercontent.com/u/48943483?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MutuLawyer", "html_url": "https://github.com/MutuLawyer", "followers_url": "https://api.github.com/users/MutuLawyer/followers", "following_url": "https://api.github.com/users/MutuLawyer/following{/other_user}", "gists_url": "https://api.github.com/users/MutuLawyer/gists{/gist_id}", "starred_url": "https://api.github.com/users/MutuLawyer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MutuLawyer/subscriptions", "organizations_url": "https://api.github.com/users/MutuLawyer/orgs", "repos_url": "https://api.github.com/users/MutuLawyer/repos", "events_url": "https://api.github.com/users/MutuLawyer/events{/privacy}", "received_events_url": "https://api.github.com/users/MutuLawyer/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "The base models for all architectures are not usable with `Trainer` since they don't have a training objective. You need to pick the model class adapted to your task.", "> The base models for all architectures are not usable with `Trainer` since they don't have a training objective. You need to pick the model class adapted to your task.\r\n\r\nThank you!\r\nI replaced the model class \"LongformerModel\" with the \"LongformerForSequenceClassification\".\r\nand got an error. \r\nValueError: Expected input batch_size (2) to match target batch_size (256).\r\nWhich class of Longformer model is suitable for me (if any)?\r\nActually I'd like to perform a multi-classification task with my dataset.\r\n\r\n```\r\nValueError Traceback (most recent call last)\r\n<ipython-input-76-0c647bc3a8b8> in <module>()\r\n----> 1 get_ipython().run_cell_magic('time', '', 'trainer.train()')\r\n\r\n11 frames\r\n<decorator-gen-60> in time(self, line, cell, local_ns)\r\n\r\n<timed eval> in <module>()\r\n\r\n/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)\r\n 2214 if input.size(0) != target.size(0):\r\n 2215 raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).'\r\n-> 2216 .format(input.size(0), target.size(0)))\r\n 2217 if dim == 2:\r\n 2218 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)\r\n\r\nValueError: Expected input batch_size (2) to match target batch_size (256).\r\n```", "You can't use a `DataCollatorForLanguageModeling` for a classification task, this data collator prepares the data for language modeling (with masking since you set that option to `True`).", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,606
1,606
NONE
null
@sgugger @patrickvonplaten ## Environment info - `transformers` version: 3.1.0 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: True - Using distributed or parallel set-up in script?: False The model I am using: Longformer The problem arises when using my own modified scripts (based on [How to train a new language model from scratch using Transformers and Tokenizers](https://github.com/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb) ) ``` !pip uninstall -y tensorflow !pip install git+https://github.com/huggingface/transformers !pip list | grep -E 'transformers|tokenizers' from pathlib import Path from tokenizers import (ByteLevelBPETokenizer, CharBPETokenizer, SentencePieceBPETokenizer, BertWordPieceTokenizer) tokenizer = ByteLevelBPETokenizer() tokenizer.train(files="/content/drive/My Drive/summa/news_text.txt", vocab_size=5000, min_frequency=5, special_tokens=[ "<s>", "<pad>", "</s>", "<unk>", "<mask>", ]) !mkdir LongformerL tokenizer.save_model("./LongformerL") from tokenizers.implementations import CharBPETokenizer from tokenizers.processors import BertProcessing tokenizer = ByteLevelBPETokenizer( "./LongformerL/vocab.json", "./LongformerL/merges.txt", ) tokenizer._tokenizer.post_processor = BertProcessing( ("</s>", tokenizer.token_to_id("</s>")), ("<s>", tokenizer.token_to_id("<s>")), ) tokenizer.enable_truncation(max_length=512) from transformers import LongformerConfig, LongformerModel config = LongformerConfig( vocab_size=5_000, max_position_embeddings=514, num_attention_heads=12, num_hidden_layers=6, type_vocab_size=1, ) from transformers import LongformerTokenizerFast tokenizer = LongformerTokenizerFast.from_pretrained("./LongformerL", max_len=512) from transformers import LongformerModel model = LongformerModel(config=config) model.num_parameters() from transformers import 
LineByLineTextDataset dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path="/content/drive/My Drive/summa/news_text.txt", block_size=128, ) from transformers import DataCollatorForLanguageModeling data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=True, mlm_probability=0.15 ) from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir="/content/drive/My Drive/dataset_contract", overwrite_output_dir=True, per_device_train_batch_size=2, num_train_epochs=5, save_steps=1_000, save_total_limit=2, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, prediction_loss_only=True, ) trainer.train() ``` The tasks I am working on is my own task or dataset. I'd like to train a Longformer from scratch. ``` Epoch: 0% 0/5 [00:00<?, ?it/s] Iteration: 0% 0/50 [00:00<?, ?it/s] --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-47-0c647bc3a8b8> in <module>() ----> 1 get_ipython().run_cell_magic('time', '', 'trainer.train()') 6 frames <decorator-gen-60> in time(self, line, cell, local_ns) <timed eval> in <module>() /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), **TypeError: forward() got an unexpected keyword argument 'labels'** ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7189/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7189/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7188
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7188/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7188/comments
https://api.github.com/repos/huggingface/transformers/issues/7188/events
https://github.com/huggingface/transformers/pull/7188
703,060,916
MDExOlB1bGxSZXF1ZXN0NDg4MjM0NTAw
7,188
Change to use relative imports in some files & Add python prompt symbols to example codes
{ "login": "soheeyang", "id": 28291528, "node_id": "MDQ6VXNlcjI4MjkxNTI4", "avatar_url": "https://avatars.githubusercontent.com/u/28291528?v=4", "gravatar_id": "", "url": "https://api.github.com/users/soheeyang", "html_url": "https://github.com/soheeyang", "followers_url": "https://api.github.com/users/soheeyang/followers", "following_url": "https://api.github.com/users/soheeyang/following{/other_user}", "gists_url": "https://api.github.com/users/soheeyang/gists{/gist_id}", "starred_url": "https://api.github.com/users/soheeyang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/soheeyang/subscriptions", "organizations_url": "https://api.github.com/users/soheeyang/orgs", "repos_url": "https://api.github.com/users/soheeyang/repos", "events_url": "https://api.github.com/users/soheeyang/events{/privacy}", "received_events_url": "https://api.github.com/users/soheeyang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thank you for the review!\r\n\r\nNow the PR passes all the tests, but I think I messed up many commits by using a wrong version of style libraries.\r\n\r\nWould it be okay to close and resubmit this PR? (https://github.com/huggingface/transformers/pull/7202)", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7188?src=pr&el=h1) Report\n> Merging [#7188](https://codecov.io/gh/huggingface/transformers/pull/7188?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0203ad43bcd0b29423dec6ca1a58ed58300f0d61?el=desc) will **increase** coverage by `0.46%`.\n> The diff coverage is `71.42%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7188/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7188?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7188 +/- ##\n==========================================\n+ Coverage 80.86% 81.33% +0.46% \n==========================================\n Files 169 169 \n Lines 32293 32293 \n==========================================\n+ Hits 26114 26264 +150 \n+ Misses 6179 6029 -150 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7188?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `82.29% <ø> (ø)` | |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.44% <ø> (+0.67%)` | :arrow_up: |\n| [src/transformers/modeling\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/7188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kcHIucHk=) | `97.83% <ø> (ø)` | |\n| 
[src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `90.92% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `72.36% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.90% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.33% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `92.25% <ø> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.23% <0.00%> (ø)` | |\n| ... and [22 more](https://codecov.io/gh/huggingface/transformers/pull/7188/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7188?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7188?src=pr&el=footer). 
Last update [0203ad4...4bc0e5f](https://codecov.io/gh/huggingface/transformers/pull/7188?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
- Moved transformers package import statements to relative imports in some files as in https://github.com/huggingface/transformers/pull/5796 - Added python prompt symbols in front of the example codes as in most of the codebase
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7188/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7188/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7188", "html_url": "https://github.com/huggingface/transformers/pull/7188", "diff_url": "https://github.com/huggingface/transformers/pull/7188.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7188.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7187
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7187/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7187/comments
https://api.github.com/repos/huggingface/transformers/issues/7187/events
https://github.com/huggingface/transformers/pull/7187
703,047,177
MDExOlB1bGxSZXF1ZXN0NDg4MjIzNTU3
7,187
[s2s-wip] ray instructions
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,606
1,606
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #7170 ### TODO - test - finish instructions - modify ray_tune_cli to not hardcode all possible lightning args. - probably more things
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7187/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7187/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7187", "html_url": "https://github.com/huggingface/transformers/pull/7187", "diff_url": "https://github.com/huggingface/transformers/pull/7187.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7187.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7186
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7186/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7186/comments
https://api.github.com/repos/huggingface/transformers/issues/7186/events
https://github.com/huggingface/transformers/pull/7186
703,012,120
MDExOlB1bGxSZXF1ZXN0NDg4MTk0MzYx
7,186
[s2s] distributed eval cleanup
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @stas00 " ]
1,600
1,600
1,600
CONTRIBUTOR
null
Fixes #7176 - [x] Fix determinism issue caused by hardcoding num_beams - [x] local_rank 0 logging - [x] local_rank 0 tqdm - [x] same results as `run_eval.py` - [x] deeper investigation of non-determinism. Gens the same. Labels different? - [x] save json to one line.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7186/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7186/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7186", "html_url": "https://github.com/huggingface/transformers/pull/7186", "diff_url": "https://github.com/huggingface/transformers/pull/7186.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7186.patch", "merged_at": 1600285118000 }
https://api.github.com/repos/huggingface/transformers/issues/7185
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7185/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7185/comments
https://api.github.com/repos/huggingface/transformers/issues/7185/events
https://github.com/huggingface/transformers/pull/7185
702,996,115
MDExOlB1bGxSZXF1ZXN0NDg4MTgxMTM1
7,185
Create README.md for indobert-lite-large-p2
{ "login": "gentaiscool", "id": 2089264, "node_id": "MDQ6VXNlcjIwODkyNjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2089264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gentaiscool", "html_url": "https://github.com/gentaiscool", "followers_url": "https://api.github.com/users/gentaiscool/followers", "following_url": "https://api.github.com/users/gentaiscool/following{/other_user}", "gists_url": "https://api.github.com/users/gentaiscool/gists{/gist_id}", "starred_url": "https://api.github.com/users/gentaiscool/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gentaiscool/subscriptions", "organizations_url": "https://api.github.com/users/gentaiscool/orgs", "repos_url": "https://api.github.com/users/gentaiscool/repos", "events_url": "https://api.github.com/users/gentaiscool/events{/privacy}", "received_events_url": "https://api.github.com/users/gentaiscool/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7185?src=pr&el=h1) Report\n> Merging [#7185](https://codecov.io/gh/huggingface/transformers/pull/7185?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3babef815c5372d039202b2247a4ed6afb2a410a?el=desc) will **decrease** coverage by `1.43%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7185/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7185?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7185 +/- ##\n==========================================\n- Coverage 80.86% 79.43% -1.44% \n==========================================\n Files 169 169 \n Lines 32293 32293 \n==========================================\n- Hits 26115 25653 -462 \n- Misses 6178 6640 +462 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7185?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <0.00%> (-71.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | 
:arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.70% <0.00%> (-5.77%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.68% <0.00%> (-0.65%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: |\n| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/7185/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7185?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7185?src=pr&el=footer). Last update [3babef8...57bf96a](https://codecov.io/gh/huggingface/transformers/pull/7185?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
Create README.md for indobert-lite-large-p2
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7185/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7185/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7185", "html_url": "https://github.com/huggingface/transformers/pull/7185", "diff_url": "https://github.com/huggingface/transformers/pull/7185.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7185.patch", "merged_at": 1600413700000 }
https://api.github.com/repos/huggingface/transformers/issues/7184
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7184/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7184/comments
https://api.github.com/repos/huggingface/transformers/issues/7184/events
https://github.com/huggingface/transformers/pull/7184
702,995,635
MDExOlB1bGxSZXF1ZXN0NDg4MTgwNzQ3
7,184
Create README.md for indobert-lite-large-p1
{ "login": "gentaiscool", "id": 2089264, "node_id": "MDQ6VXNlcjIwODkyNjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2089264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gentaiscool", "html_url": "https://github.com/gentaiscool", "followers_url": "https://api.github.com/users/gentaiscool/followers", "following_url": "https://api.github.com/users/gentaiscool/following{/other_user}", "gists_url": "https://api.github.com/users/gentaiscool/gists{/gist_id}", "starred_url": "https://api.github.com/users/gentaiscool/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gentaiscool/subscriptions", "organizations_url": "https://api.github.com/users/gentaiscool/orgs", "repos_url": "https://api.github.com/users/gentaiscool/repos", "events_url": "https://api.github.com/users/gentaiscool/events{/privacy}", "received_events_url": "https://api.github.com/users/gentaiscool/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7184?src=pr&el=h1) Report\n> Merging [#7184](https://codecov.io/gh/huggingface/transformers/pull/7184?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3babef815c5372d039202b2247a4ed6afb2a410a?el=desc) will **decrease** coverage by `1.08%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7184/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7184?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7184 +/- ##\n==========================================\n- Coverage 80.86% 79.78% -1.09% \n==========================================\n Files 169 169 \n Lines 32293 32293 \n==========================================\n- Hits 26115 25764 -351 \n- Misses 6178 6529 +351 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7184?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `19.71% <0.00%> (-72.34%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert\\_generation.py](https://codecov.io/gh/huggingface/transformers/pull/7184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9nZW5lcmF0aW9uLnB5) | `39.28% <0.00%> (-55.36%)` | :arrow_down: |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `60.39% <0.00%> (-34.66%)` | :arrow_down: |\n| 
[src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `67.79% <0.00%> (-31.36%)` | :arrow_down: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `75.00% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `75.91% <0.00%> (-21.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `88.17% <0.00%> (-5.02%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.45% <0.00%> (-3.01%)` | :arrow_down: |\n| ... and [11 more](https://codecov.io/gh/huggingface/transformers/pull/7184/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7184?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7184?src=pr&el=footer). Last update [3babef8...a932393](https://codecov.io/gh/huggingface/transformers/pull/7184?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
Create README.md for indobert-lite-large-p1
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7184/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7184/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7184", "html_url": "https://github.com/huggingface/transformers/pull/7184", "diff_url": "https://github.com/huggingface/transformers/pull/7184.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7184.patch", "merged_at": 1600413732000 }
https://api.github.com/repos/huggingface/transformers/issues/7183
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7183/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7183/comments
https://api.github.com/repos/huggingface/transformers/issues/7183/events
https://github.com/huggingface/transformers/pull/7183
702,994,993
MDExOlB1bGxSZXF1ZXN0NDg4MTgwMjIx
7,183
Create README.md for indobert-lite-base phase 2 model card
{ "login": "gentaiscool", "id": 2089264, "node_id": "MDQ6VXNlcjIwODkyNjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2089264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gentaiscool", "html_url": "https://github.com/gentaiscool", "followers_url": "https://api.github.com/users/gentaiscool/followers", "following_url": "https://api.github.com/users/gentaiscool/following{/other_user}", "gists_url": "https://api.github.com/users/gentaiscool/gists{/gist_id}", "starred_url": "https://api.github.com/users/gentaiscool/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gentaiscool/subscriptions", "organizations_url": "https://api.github.com/users/gentaiscool/orgs", "repos_url": "https://api.github.com/users/gentaiscool/repos", "events_url": "https://api.github.com/users/gentaiscool/events{/privacy}", "received_events_url": "https://api.github.com/users/gentaiscool/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7183?src=pr&el=h1) Report\n> Merging [#7183](https://codecov.io/gh/huggingface/transformers/pull/7183?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3babef815c5372d039202b2247a4ed6afb2a410a?el=desc) will **decrease** coverage by `1.41%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7183/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7183?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7183 +/- ##\n==========================================\n- Coverage 80.86% 79.45% -1.42% \n==========================================\n Files 169 169 \n Lines 32293 32293 \n==========================================\n- Hits 26115 25657 -458 \n- Misses 6178 6636 +458 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7183?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <0.00%> (-71.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | 
:arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `81.70% <0.00%> (-4.77%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.68% <0.00%> (-0.65%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: |\n| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7183/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7183?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7183?src=pr&el=footer). Last update [3babef8...bac886f](https://codecov.io/gh/huggingface/transformers/pull/7183?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
Create README.md for indobert-lite-base phase 2 model card
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7183/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7183/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7183", "html_url": "https://github.com/huggingface/transformers/pull/7183", "diff_url": "https://github.com/huggingface/transformers/pull/7183.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7183.patch", "merged_at": 1600413714000 }
https://api.github.com/repos/huggingface/transformers/issues/7182
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7182/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7182/comments
https://api.github.com/repos/huggingface/transformers/issues/7182/events
https://github.com/huggingface/transformers/pull/7182
702,994,297
MDExOlB1bGxSZXF1ZXN0NDg4MTc5NjU2
7,182
Create README.md for indobert-lite-base-p1
{ "login": "gentaiscool", "id": 2089264, "node_id": "MDQ6VXNlcjIwODkyNjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2089264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gentaiscool", "html_url": "https://github.com/gentaiscool", "followers_url": "https://api.github.com/users/gentaiscool/followers", "following_url": "https://api.github.com/users/gentaiscool/following{/other_user}", "gists_url": "https://api.github.com/users/gentaiscool/gists{/gist_id}", "starred_url": "https://api.github.com/users/gentaiscool/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gentaiscool/subscriptions", "organizations_url": "https://api.github.com/users/gentaiscool/orgs", "repos_url": "https://api.github.com/users/gentaiscool/repos", "events_url": "https://api.github.com/users/gentaiscool/events{/privacy}", "received_events_url": "https://api.github.com/users/gentaiscool/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7182?src=pr&el=h1) Report\n> Merging [#7182](https://codecov.io/gh/huggingface/transformers/pull/7182?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3babef815c5372d039202b2247a4ed6afb2a410a?el=desc) will **decrease** coverage by `3.82%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7182/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7182?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7182 +/- ##\n==========================================\n- Coverage 80.86% 77.04% -3.83% \n==========================================\n Files 169 169 \n Lines 32293 32293 \n==========================================\n- Hits 26115 24880 -1235 \n- Misses 6178 7413 +1235 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7182?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert\\_generation.py](https://codecov.io/gh/huggingface/transformers/pull/7182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9nZW5lcmF0aW9uLnB5) | `39.28% <0.00%> (-55.36%)` | :arrow_down: |\n| 
[src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.37% <0.00%> (-42.10%)` | :arrow_down: |\n| [src/transformers/tokenization\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnVubmVsLnB5) | `62.79% <0.00%> (-34.89%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/7182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.66% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `71.60% <0.00%> (-20.44%)` | :arrow_down: |\n| [src/transformers/tokenization\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `81.66% <0.00%> (-13.34%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `47.05% <0.00%> (-13.24%)` | :arrow_down: |\n| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/7182/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7182?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7182?src=pr&el=footer). Last update [3babef8...9f70c36](https://codecov.io/gh/huggingface/transformers/pull/7182?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
Create README.md for indobert-lite-base-p1
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7182/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7182/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7182", "html_url": "https://github.com/huggingface/transformers/pull/7182", "diff_url": "https://github.com/huggingface/transformers/pull/7182.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7182.patch", "merged_at": 1600413752000 }
https://api.github.com/repos/huggingface/transformers/issues/7181
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7181/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7181/comments
https://api.github.com/repos/huggingface/transformers/issues/7181/events
https://github.com/huggingface/transformers/pull/7181
702,992,788
MDExOlB1bGxSZXF1ZXN0NDg4MTc4NDQz
7,181
Create README.md for indobert-large-p2 model card
{ "login": "gentaiscool", "id": 2089264, "node_id": "MDQ6VXNlcjIwODkyNjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2089264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gentaiscool", "html_url": "https://github.com/gentaiscool", "followers_url": "https://api.github.com/users/gentaiscool/followers", "following_url": "https://api.github.com/users/gentaiscool/following{/other_user}", "gists_url": "https://api.github.com/users/gentaiscool/gists{/gist_id}", "starred_url": "https://api.github.com/users/gentaiscool/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gentaiscool/subscriptions", "organizations_url": "https://api.github.com/users/gentaiscool/orgs", "repos_url": "https://api.github.com/users/gentaiscool/repos", "events_url": "https://api.github.com/users/gentaiscool/events{/privacy}", "received_events_url": "https://api.github.com/users/gentaiscool/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7181?src=pr&el=h1) Report\n> Merging [#7181](https://codecov.io/gh/huggingface/transformers/pull/7181?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3babef815c5372d039202b2247a4ed6afb2a410a?el=desc) will **decrease** coverage by `1.42%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7181/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7181?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7181 +/- ##\n==========================================\n- Coverage 80.86% 79.44% -1.43% \n==========================================\n Files 169 169 \n Lines 32293 32293 \n==========================================\n- Hits 26115 25656 -459 \n- Misses 6178 6637 +459 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7181?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7181/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7181/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <0.00%> (-71.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7181/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7181/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | 
:arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7181/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7181/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7181/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7181/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `81.70% <0.00%> (-4.77%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7181/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.68% <0.00%> (-0.65%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7181/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: |\n| ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/7181/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7181?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7181?src=pr&el=footer). Last update [3babef8...9d0a80a](https://codecov.io/gh/huggingface/transformers/pull/7181?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
Create README.md for indobert-large-p2 model card
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7181/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7181/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7181", "html_url": "https://github.com/huggingface/transformers/pull/7181", "diff_url": "https://github.com/huggingface/transformers/pull/7181.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7181.patch", "merged_at": 1600413689000 }
https://api.github.com/repos/huggingface/transformers/issues/7180
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7180/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7180/comments
https://api.github.com/repos/huggingface/transformers/issues/7180/events
https://github.com/huggingface/transformers/pull/7180
702,991,303
MDExOlB1bGxSZXF1ZXN0NDg4MTc3MjQz
7,180
Create README.md for indobert-large-p1 model card
{ "login": "gentaiscool", "id": 2089264, "node_id": "MDQ6VXNlcjIwODkyNjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2089264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gentaiscool", "html_url": "https://github.com/gentaiscool", "followers_url": "https://api.github.com/users/gentaiscool/followers", "following_url": "https://api.github.com/users/gentaiscool/following{/other_user}", "gists_url": "https://api.github.com/users/gentaiscool/gists{/gist_id}", "starred_url": "https://api.github.com/users/gentaiscool/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gentaiscool/subscriptions", "organizations_url": "https://api.github.com/users/gentaiscool/orgs", "repos_url": "https://api.github.com/users/gentaiscool/repos", "events_url": "https://api.github.com/users/gentaiscool/events{/privacy}", "received_events_url": "https://api.github.com/users/gentaiscool/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7180?src=pr&el=h1) Report\n> Merging [#7180](https://codecov.io/gh/huggingface/transformers/pull/7180?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3babef815c5372d039202b2247a4ed6afb2a410a?el=desc) will **increase** coverage by `1.04%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7180/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7180?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7180 +/- ##\n==========================================\n+ Coverage 80.86% 81.91% +1.04% \n==========================================\n Files 169 169 \n Lines 32293 32293 \n==========================================\n+ Hits 26115 26453 +338 \n+ Misses 6178 5840 -338 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7180?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7180/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `70.19% <0.00%> (-23.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7180/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `71.60% <0.00%> (-20.44%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7180/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7180/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7180/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `86.69% <0.00%> (-0.54%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7180/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.28% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7180/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.54% <0.00%> (+0.35%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7180/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7180/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.44% <0.00%> (+0.67%)` | :arrow_up: |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/7180/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `94.11% <0.00%> (+1.17%)` | :arrow_up: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/7180/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7180?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7180?src=pr&el=footer). Last update [3babef8...61fe03d](https://codecov.io/gh/huggingface/transformers/pull/7180?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
Create README.md for indobert-large-p1 model card
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7180/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7180/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7180", "html_url": "https://github.com/huggingface/transformers/pull/7180", "diff_url": "https://github.com/huggingface/transformers/pull/7180.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7180.patch", "merged_at": 1600413677000 }
https://api.github.com/repos/huggingface/transformers/issues/7179
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7179/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7179/comments
https://api.github.com/repos/huggingface/transformers/issues/7179/events
https://github.com/huggingface/transformers/pull/7179
702,990,099
MDExOlB1bGxSZXF1ZXN0NDg4MTc2MjMw
7,179
Create README.md for indobert-base-p1 model card
{ "login": "gentaiscool", "id": 2089264, "node_id": "MDQ6VXNlcjIwODkyNjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2089264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gentaiscool", "html_url": "https://github.com/gentaiscool", "followers_url": "https://api.github.com/users/gentaiscool/followers", "following_url": "https://api.github.com/users/gentaiscool/following{/other_user}", "gists_url": "https://api.github.com/users/gentaiscool/gists{/gist_id}", "starred_url": "https://api.github.com/users/gentaiscool/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gentaiscool/subscriptions", "organizations_url": "https://api.github.com/users/gentaiscool/orgs", "repos_url": "https://api.github.com/users/gentaiscool/repos", "events_url": "https://api.github.com/users/gentaiscool/events{/privacy}", "received_events_url": "https://api.github.com/users/gentaiscool/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7179?src=pr&el=h1) Report\n> Merging [#7179](https://codecov.io/gh/huggingface/transformers/pull/7179?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3babef815c5372d039202b2247a4ed6afb2a410a?el=desc) will **decrease** coverage by `1.42%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7179/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7179?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7179 +/- ##\n==========================================\n- Coverage 80.86% 79.44% -1.43% \n==========================================\n Files 169 169 \n Lines 32293 32293 \n==========================================\n- Hits 26115 25656 -459 \n- Misses 6178 6637 +459 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7179?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7179/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7179/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <0.00%> (-71.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7179/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7179/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | 
:arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7179/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7179/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7179/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7179/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `81.70% <0.00%> (-4.77%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7179/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.68% <0.00%> (-0.65%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7179/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: |\n| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/7179/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7179?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7179?src=pr&el=footer). Last update [3babef8...c5157d5](https://codecov.io/gh/huggingface/transformers/pull/7179?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks for sharing – Next time please open just one PR it will be easier to fix things!" ]
1,600
1,600
1,600
CONTRIBUTOR
null
Create README.md for indobert-base-p1 model card
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7179/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7179/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7179", "html_url": "https://github.com/huggingface/transformers/pull/7179", "diff_url": "https://github.com/huggingface/transformers/pull/7179.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7179.patch", "merged_at": 1600413660000 }
https://api.github.com/repos/huggingface/transformers/issues/7178
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7178/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7178/comments
https://api.github.com/repos/huggingface/transformers/issues/7178/events
https://github.com/huggingface/transformers/pull/7178
702,989,818
MDExOlB1bGxSZXF1ZXN0NDg4MTc2MDAx
7,178
Create README.md for indobert-base-p2
{ "login": "gentaiscool", "id": 2089264, "node_id": "MDQ6VXNlcjIwODkyNjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2089264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gentaiscool", "html_url": "https://github.com/gentaiscool", "followers_url": "https://api.github.com/users/gentaiscool/followers", "following_url": "https://api.github.com/users/gentaiscool/following{/other_user}", "gists_url": "https://api.github.com/users/gentaiscool/gists{/gist_id}", "starred_url": "https://api.github.com/users/gentaiscool/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gentaiscool/subscriptions", "organizations_url": "https://api.github.com/users/gentaiscool/orgs", "repos_url": "https://api.github.com/users/gentaiscool/repos", "events_url": "https://api.github.com/users/gentaiscool/events{/privacy}", "received_events_url": "https://api.github.com/users/gentaiscool/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,600
1,600
1,600
CONTRIBUTOR
null
Create README.md for indobert-base-p2 model card
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7178/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7178/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7178", "html_url": "https://github.com/huggingface/transformers/pull/7178", "diff_url": "https://github.com/huggingface/transformers/pull/7178.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7178.patch", "merged_at": 1600413629000 }
https://api.github.com/repos/huggingface/transformers/issues/7177
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7177/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7177/comments
https://api.github.com/repos/huggingface/transformers/issues/7177/events
https://github.com/huggingface/transformers/issues/7177
702,986,106
MDU6SXNzdWU3MDI5ODYxMDY=
7,177
RuntimeError: CUDA out of memory.
{ "login": "jasonyliang", "id": 19767870, "node_id": "MDQ6VXNlcjE5NzY3ODcw", "avatar_url": "https://avatars.githubusercontent.com/u/19767870?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jasonyliang", "html_url": "https://github.com/jasonyliang", "followers_url": "https://api.github.com/users/jasonyliang/followers", "following_url": "https://api.github.com/users/jasonyliang/following{/other_user}", "gists_url": "https://api.github.com/users/jasonyliang/gists{/gist_id}", "starred_url": "https://api.github.com/users/jasonyliang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jasonyliang/subscriptions", "organizations_url": "https://api.github.com/users/jasonyliang/orgs", "repos_url": "https://api.github.com/users/jasonyliang/repos", "events_url": "https://api.github.com/users/jasonyliang/events{/privacy}", "received_events_url": "https://api.github.com/users/jasonyliang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Found solution:\r\n --per_gpu_train_batch_size=1 \\\r\nI will be closing this issue now, thank you so much!\r\nSorry for the trouble!" ]
1,600
1,600
1,600
NONE
null
I'm trying to run run_language_modeling.py and fine-tune the GPT-2 model with my own text dataset with the following command: %%bash export TRAIN_FILE=Models/Data/train.txt export TEST_FILE=Models/Data/valid.txt export MODEL_NAME=gpt2 export OUTPUT_DIR=output python run_language_modeling.py \ --output_dir=output \ --model_type=gpt2 \ --model_name_or_path=gpt2 \ --do_train \ --train_data_file=$TRAIN_FILE \ --do_eval \ --eval_data_file=$TEST_FILE \ --cache_dir=None I am currently using the Colab environment to run the script on GPU but encountered the following error: RuntimeError: CUDA out of memory. Tried to allocate 384.00 MiB (GPU 0; 15.90 GiB total capacity; 14.75 GiB already allocated; 225.81 MiB free; 14.81 GiB reserved in total by PyTorch) I noticed that it might be possible to resolve this issue by changing the batch but am unsure how I can do this? Would love any pointers available ( --train_batch_size 16 wasn't an option with this file)! Thank you so much!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7177/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7177/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7176
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7176/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7176/comments
https://api.github.com/repos/huggingface/transformers/issues/7176/events
https://github.com/huggingface/transformers/issues/7176
702,955,667
MDU6SXNzdWU3MDI5NTU2Njc=
7,176
distributed eval cleanup
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[]
1,600
1,600
1,600
CONTRIBUTOR
null
- [x] local_rank 0 logging - [x] local_rank 0 tqdm - [x] same scores as `run_eval.py` - [x] deeper investigation of non-determinism. Gens the same. Labels different? - [x] save json to one line.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7176/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7176/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7175
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7175/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7175/comments
https://api.github.com/repos/huggingface/transformers/issues/7175/events
https://github.com/huggingface/transformers/pull/7175
702,952,003
MDExOlB1bGxSZXF1ZXN0NDg4MTQ0MTU1
7,175
Fix a few countings (steps / epochs) in trainer_tf.py
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7175?src=pr&el=h1) Report\n> Merging [#7175](https://codecov.io/gh/huggingface/transformers/pull/7175?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/df165065c37329880f490025972f81b4daa6e5bd?el=desc) will **increase** coverage by `0.27%`.\n> The diff coverage is `0.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7175/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7175?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7175 +/- ##\n==========================================\n+ Coverage 80.71% 80.99% +0.27% \n==========================================\n Files 169 169 \n Lines 32293 32305 +12 \n==========================================\n+ Hits 26066 26165 +99 \n+ Misses 6227 6140 -87 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7175?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.01% <0.00%> (-0.45%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.73% <0.00%> (-19.35%)` | :arrow_down: |\n| 
[src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.83% <0.00%> (-0.72%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/7175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `80.75% <0.00%> (-0.25%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <0.00%> (+0.27%)` | :arrow_up: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.27% <0.00%> (+0.33%)` | :arrow_up: |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/7175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `94.11% <0.00%> (+1.17%)` | :arrow_up: |\n| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/7175/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7175?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7175?src=pr&el=footer). Last update [df16506...b72f0d9](https://codecov.io/gh/huggingface/transformers/pull/7175?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Nice catch!! LGTM.", "@LysandreJik I fully agree on adding tests for the trainer. I put this immediately on top of my todo list.", "> \r\n> \r\n> @LysandreJik I fully agree on adding tests for the trainer. I put this immediately on top of my todo list.\r\n\r\n@LysandreJik @jplu If you agree, may I help build tests for the TFTrainer? If so, I probably need to discuss with @jplu how to proceed.", "@chiapas I will definitely be glad to have your help on this! Can you send me an email (it is displayed on my Github profile) please, and I will let you know how we can proceed.", "If you could copy and adapt the tests in trainer (so that they run quickly) on a regression problem, it would be awesome!", "> \r\n> \r\n> If you could copy and adapt the tests in trainer (so that they run quickly) on a regression problem, it would be awesome!\r\n\r\nWe are going to build the tests for TFTrainer soon. But the HF team can decide whether to merge this PR now or after the tests are built.", "LGTM for merge, what do you think @sgugger?", "Yes, good to merge. Thanks @chiapas!" ]
1,600
1,651
1,600
COLLABORATOR
null
Mainly for @jplu , but @sgugger might have some comments, especially for issue 2. ## Description This PR fixes a few counting issues (steps / epochs) in [trainer_tf.py](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_tf.py). Since these issues are tied together, I decided to fix them in one PR. BTW, I didn't find a `test_tf_trainer.py` similar to `test_trainer.py`, so no test added. I just relaunched the commands to make sure the fixed code gives the desired outputs. I put some comments in the code --> Some of them might be removed after the code is reviewed. Here is the list of issues and their causes: 1. Potential `ZeroDivisionError` for lines 498 - 501 (similar to PR #7125 ) epochs_trained = self.global_step // (self.num_train_examples // self.args.gradient_accumulation_steps) steps_trained_in_current_epoch = self.global_step % ( self.num_train_examples // self.args.gradient_accumulation_steps ) 2. The calculation of `epochs_trained` and `steps_trained_in_current_epoch` in 1. should be something like self.global_step // (self.num_train_examples // self.total_train_batch_size) instead, rather than using `self.args.gradient_accumulation_steps`. In the PyTorch trainer, we have (line 609) num_update_steps_per_epoch = len(train_dataloader) // self.args.gradient_accumulation_steps but there, `len(train_dataloader)` is the number of batches, not the number of examples. 3. Lines 492, 498 and 499 self.global_step = iterations.numpy() .... epochs_trained = self.global_step // (self.num_train_examples // self.args.gradient_accumulation_steps) steps_trained_in_current_epoch = self.global_step % ( self.num_train_examples // self.args.gradient_accumulation_steps ) count some training progress before the checkpoint is restored (line 511) ckpt.restore(self.model.ckpt_manager.latest_checkpoint).expect_partial() This counting should be done after the optimizer is restored, otherwise it will always be zero at those lines. 4. 
Line 517 epochs = 1 if self.args.max_steps > 0 else self.args.num_train_epochs makes epochs `1` if `max_steps` is specified. This isn't a real bug, but when `max_steps` is larger than the optimization steps per epoch, it is somewhat confusing, especially when the log is shown. 5. Extra epoch issue: Line 542 for epoch_iter in range(epochs_trained, int(epochs + 1)): Suppose `max_steps` is not specified, `num_train_epochs` is set to 1, and in a first run the model is trained for 1 step. Then when we resume the training, we get `epochs_trained = 0` (see issue 2. above) and `epochs = 1`, therefore the range becomes `range(0, 2)`, which gives 2 epochs. ## Code showing the bugs 1. First download the GLUE tasks python utils/download_glue_data.py --data_dir ./examples/text-classification/glue/ --tasks all 2. This code python3 run_tf_glue.py \ --task_name wnli \ --data_dir ./glue/WNLI \ --model_name_or_path distilbert-base-uncased \ --output_dir ./glue/WNLI/ \ --max_seq_length 16 \ --per_device_train_batch_size 16 \ --gradient_accumulation_steps 4 \ --max_steps 10 \ --logging_steps 1 \ --save_steps 5 \ --seed 1 \ --do_train \ --do_eval \ --do_predict \ --overwrite_output_dir gives logs like {'loss': 0.69778687, 'learning_rate': 4.4999997e-05, 'epoch': 0.2, 'step': 1} {'loss': 0.6953752, 'learning_rate': 4e-05, 'epoch': 0.3, 'step': 2} where `'epoch': 0.2, 'step': 1` is not correct for `epoch`, because there are only 10 steps. 3. Remove the checkpoints in the output_dir. Launch python3 run_tf_glue.py \ --task_name wnli \ --data_dir ./glue/WNLI \ --model_name_or_path distilbert-base-uncased \ --output_dir ./glue/WNLI/ \ --max_seq_length 16 \ --num_train_epochs 1 \ --per_device_train_batch_size 16 \ --gradient_accumulation_steps 4 \ --max_steps 2 \ --logging_steps 1 \ --save_steps 1 \ --seed 1 \ --do_train \ --do_eval \ --do_predict \ --overwrite_output_dir but stop it after 1 step. 
Then resume the training; we get logs like {'loss': 0.7005814, 'learning_rate': 0.0, 'epoch': 2.0, 'step': 1} and {'loss': 0.66841304, 'learning_rate': 0.0, 'epoch': -0.5, 'step': 2} Saving checkpoint for step 2 at ./glue/WNLI/checkpoint/ckpt-2 {'loss': 0.7016809, 'learning_rate': 0.0, 'epoch': 0.5, 'step': 3} Saving checkpoint for step 3 at ./glue/WNLI/checkpoint/ckpt-3 {'loss': 0.693136, 'learning_rate': 0.0, 'epoch': 1.0, 'step': 4} The final step count became 4, and we got something like `'epoch': -0.5`. 4. There is other code showing the issues, but I hope the above two examples already make it clear that something is wrong in the current code. If you want, I can put more code snippets here.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7175/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7175/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7175", "html_url": "https://github.com/huggingface/transformers/pull/7175", "diff_url": "https://github.com/huggingface/transformers/pull/7175.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7175.patch", "merged_at": 1600435737000 }
https://api.github.com/repos/huggingface/transformers/issues/7174
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7174/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7174/comments
https://api.github.com/repos/huggingface/transformers/issues/7174/events
https://github.com/huggingface/transformers/pull/7174
702,948,815
MDExOlB1bGxSZXF1ZXN0NDg4MTQxNTgz
7,174
use the correct add_start_docstrings
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7174?src=pr&el=h1) Report\n> Merging [#7174](https://codecov.io/gh/huggingface/transformers/pull/7174?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/df165065c37329880f490025972f81b4daa6e5bd?el=desc) will **decrease** coverage by `0.40%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7174/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7174?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7174 +/- ##\n==========================================\n- Coverage 80.71% 80.31% -0.41% \n==========================================\n Files 169 169 \n Lines 32293 32293 \n==========================================\n- Hits 26066 25936 -130 \n- Misses 6227 6357 +130 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7174?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `99.15% <100.00%> (ø)` | |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/7174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `96.82% <100.00%> (ø)` | |\n| [src/transformers/tokenization\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/7174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `95.23% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.32% <0.00%> (-73.63%)` | :arrow_down: |\n| 
[src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.83% <0.00%> (-0.72%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.01% <0.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.77% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/7174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.45% <0.00%> (+10.24%)` | :arrow_up: |\n| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/7174/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7174?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7174?src=pr&el=footer). Last update [df16506...e73ba1f](https://codecov.io/gh/huggingface/transformers/pull/7174?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
As discussed at https://github.com/huggingface/transformers/pull/6940#discussion_r489266992 fixing to use the correct decorator add_start_docstrings @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7174/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7174/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7174", "html_url": "https://github.com/huggingface/transformers/pull/7174", "diff_url": "https://github.com/huggingface/transformers/pull/7174.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7174.patch", "merged_at": 1600281636000 }
https://api.github.com/repos/huggingface/transformers/issues/7173
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7173/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7173/comments
https://api.github.com/repos/huggingface/transformers/issues/7173/events
https://github.com/huggingface/transformers/pull/7173
702,946,613
MDExOlB1bGxSZXF1ZXN0NDg4MTM5Nzc1
7,173
remove duplicated code
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7173?src=pr&el=h1) Report\n> Merging [#7173](https://codecov.io/gh/huggingface/transformers/pull/7173?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/df165065c37329880f490025972f81b4daa6e5bd?el=desc) will **decrease** coverage by `0.21%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7173/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7173?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7173 +/- ##\n==========================================\n- Coverage 80.71% 80.50% -0.22% \n==========================================\n Files 169 169 \n Lines 32293 32291 -2 \n==========================================\n- Hits 26066 25996 -70 \n- Misses 6227 6295 +68 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7173?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7173/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `99.13% <ø> (-0.02%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7173/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.37% <0.00%> (-42.10%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7173/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.10% <0.00%> (-29.80%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/7173/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.66% <0.00%> (-25.00%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7173/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7173/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `47.05% <0.00%> (-13.24%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7173/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/7173/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `83.58% <0.00%> (-2.99%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7173/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7173/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.84% <0.00%> (-0.25%)` | :arrow_down: |\n| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7173/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7173?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7173?src=pr&el=footer). Last update [df16506...66eb6ea](https://codecov.io/gh/huggingface/transformers/pull/7173?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7173/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7173/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7173", "html_url": "https://github.com/huggingface/transformers/pull/7173", "diff_url": "https://github.com/huggingface/transformers/pull/7173.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7173.patch", "merged_at": 1600336301000 }
https://api.github.com/repos/huggingface/transformers/issues/7172
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7172/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7172/comments
https://api.github.com/repos/huggingface/transformers/issues/7172/events
https://github.com/huggingface/transformers/issues/7172
702,944,173
MDU6SXNzdWU3MDI5NDQxNzM=
7,172
Bug in finetuning ALBERT on text-classification in GLUE
{ "login": "ReyonRen", "id": 46014149, "node_id": "MDQ6VXNlcjQ2MDE0MTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/46014149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ReyonRen", "html_url": "https://github.com/ReyonRen", "followers_url": "https://api.github.com/users/ReyonRen/followers", "following_url": "https://api.github.com/users/ReyonRen/following{/other_user}", "gists_url": "https://api.github.com/users/ReyonRen/gists{/gist_id}", "starred_url": "https://api.github.com/users/ReyonRen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ReyonRen/subscriptions", "organizations_url": "https://api.github.com/users/ReyonRen/orgs", "repos_url": "https://api.github.com/users/ReyonRen/repos", "events_url": "https://api.github.com/users/ReyonRen/events{/privacy}", "received_events_url": "https://api.github.com/users/ReyonRen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I have the same problem, if you have solved, cound you tell me how? thx!" ]
1,600
1,601
1,600
NONE
null
Thanks for your beautiful work! I use transformers-3.1.0, and when I run `run_glue.py` in `examples/text-classification/`, it returns an error: ``` Traceback (most recent call last): File "run_glue.py", line 246, in <module> main() File "run_glue.py", line 127, in main cache_dir=model_args.cache_dir, File "/mnt/dqa/yinheju/env/python3.7_torch1.1/lib/python3.7/site-packages/transformers/tokenization_auto.py", line 220, in from_pretrained return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "/mnt/dqa/yinheju/env/python3.7_torch1.1/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1425, in from_pretrained return cls._from_pretrained(*inputs, **kwargs) File "/mnt/dqa/yinheju/env/python3.7_torch1.1/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1531, in _from_pretrained list(cls.vocab_files_names.values()), OSError: Model name 'albert-base-v2' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'albert-base-v2' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url. ``` Please help me to solve this problem, thank you!!!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7172/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7172/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7171
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7171/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7171/comments
https://api.github.com/repos/huggingface/transformers/issues/7171/events
https://github.com/huggingface/transformers/pull/7171
702,932,994
MDExOlB1bGxSZXF1ZXN0NDg4MTI4NDM1
7,171
remove deprecated flag
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7171?src=pr&el=h1) Report\n> Merging [#7171](https://codecov.io/gh/huggingface/transformers/pull/7171?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/df165065c37329880f490025972f81b4daa6e5bd?el=desc) will **decrease** coverage by `1.25%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7171/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7171?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7171 +/- ##\n==========================================\n- Coverage 80.71% 79.46% -1.26% \n==========================================\n Files 169 169 \n Lines 32293 32293 \n==========================================\n- Hits 26066 25661 -405 \n- Misses 6227 6632 +405 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7171?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <0.00%> (-71.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | 
:arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `88.17% <0.00%> (-5.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.68% <0.00%> (-0.65%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.84% <0.00%> (-0.25%)` | :arrow_down: |\n| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/7171/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7171?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7171?src=pr&el=footer). Last update [df16506...fc2d6ab](https://codecov.io/gh/huggingface/transformers/pull/7171?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
``` /home/circleci/.local/lib/python3.6/site-packages/isort/main.py:915: UserWarning: W0501: The following deprecated CLI flags were used and ignored: --recursive! "W0501: The following deprecated CLI flags were used and ignored: " ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7171/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7171/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7171", "html_url": "https://github.com/huggingface/transformers/pull/7171", "diff_url": "https://github.com/huggingface/transformers/pull/7171.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7171.patch", "merged_at": 1600336333000 }
https://api.github.com/repos/huggingface/transformers/issues/7170
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7170/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7170/comments
https://api.github.com/repos/huggingface/transformers/issues/7170/events
https://github.com/huggingface/transformers/issues/7170
702,906,189
MDU6SXNzdWU3MDI5MDYxODk=
7,170
[s2s] Try to get ray/optuna + examples/seq2seq working
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "It works on a branch, will cleanup+share shortly!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,606
1,606
CONTRIBUTOR
null
I tried for 2h and failed. My initial attempt just hangs (I put it at `examples/seq2seq/run_ray_tune.py`) ```python from ray.tune.schedulers import ASHAScheduler, PopulationBasedTraining from ray import tune from ray.tune import CLIReporter from ray.tune.schedulers import ASHAScheduler, PopulationBasedTraining from functools import partial from durbango import * from finetune import main as ft_main from pathlib import Path import os def get_ray_slug(cfg): strang = '' for k,v in cfg.items(): strang += f'{k}_{v}' for i in range(10000): test = f'rayruns/run_{i}' try: Path(test).mkdir(exist_ok=True,parents=True) break except Exception: continue return os.path.expanduser(test) def ray_main(args, config): for k,v in config.items(): #assert hasattr(args, k), k setattr(args, k, v) args.n_train = 64 args.output_dir = get_ray_slug(config) args.num_train_epochs = 3 ft_main(args) def tune_helsinki_(args, num_samples=4, num_epochs=3): search_space = { "learning_rate": tune.sample_from(lambda spec: 10**(-10 * np.random.rand())), "gradient_accumulation_steps": tune.choice([1, 8, 32, 128, 256]), "dropout": tune.choice([0, 0.1, 0.2, 0.4]), } scheduler = ASHAScheduler( metric="val_avg_bleu", mode="min", max_t=3, grace_period=1, reduction_factor=2) reporter = CLIReporter( parameter_columns=list(search_space.keys()), metric_columns=["val_avg_loss", "val_avg_bleu", "global_step"]) tune.run( partial( ray_main, args, ), resources_per_trial={"cpu": 0, "gpu": 1}, config=search_space, num_samples=num_samples, scheduler=scheduler, progress_reporter=reporter, name="tune_helsinki_asha") # Make default args args = {'logger': True, 'checkpoint_callback': True, 'early_stop_callback': False, 'default_root_dir': None, 'gradient_clip_val': 0, 'process_position': 0, 'num_nodes': 1, 'num_processes': 1, 'gpus': 1, 'auto_select_gpus': False, 'tpu_cores': 0, 'log_gpu_memory': None, 'progress_bar_refresh_rate': 1, 'overfit_batches': 0.0, 'track_grad_norm': -1, 'check_val_every_n_epoch': 1, 'fast_dev_run': 
False, 'accumulate_grad_batches': 1, 'max_epochs': 1000, 'min_epochs': 1, 'max_steps': None, 'min_steps': None, 'limit_train_batches': 1.0, 'limit_val_batches': 1.0, 'limit_test_batches': 1.0, 'val_check_interval': 0.25, 'log_save_interval': 100, 'row_log_interval': 50, 'distributed_backend': None, 'precision': 32, 'print_nan_grads': False, 'weights_summary': 'top', 'weights_save_path': None, 'num_sanity_val_steps': 0, 'truncated_bptt_steps': None, 'resume_from_checkpoint': None, 'profiler': None, 'benchmark': False, 'deterministic': False, 'reload_dataloaders_every_epoch': False, 'auto_lr_find': False, 'replace_sampler_ddp': True, 'terminate_on_nan': False, 'auto_scale_batch_size': False, 'prepare_data_per_node': True, 'amp_level': 'O2', 'val_percent_check': None, 'test_percent_check': None, 'train_percent_check': None, 'overfit_pct': None, 'model_name_or_path': 'sshleifer/student_marian_en_ro_6_3', 'config_name': '', 'tokenizer_name': 'sshleifer/student_marian_en_ro_6_3', 'cache_dir': '', 'encoder_layerdrop': None, 'decoder_layerdrop': None, 'dropout': None, 'attention_dropout': None, 'learning_rate': 0.0003, 'lr_scheduler': 'linear', 'weight_decay': 0.0, 'adam_epsilon': 1e-08, 'warmup_steps': 500, 'num_workers': 4, 'train_batch_size': 32, 'eval_batch_size': 32, 'output_dir': 'tmp', 'fp16': True, 'fp16_opt_level': 'O1', 'do_train': True, 'do_predict': True, 'seed': 42, 'data_dir': '/home/shleifer/transformers_fork/examples/seq2seq//dbart/wmt_en_ro', 'max_source_length': 128, 'max_target_length': 128, 'val_max_target_length': 128, 'test_max_target_length': 128, 'freeze_encoder': True, 'freeze_embeds': True, 'sortish_sampler': True, 'logger_name': 'wandb', 'n_train': -1, 'n_val': 500, 'n_test': -1, 'task': 'translation', 'label_smoothing': 0.1, 'src_lang': '', 'tgt_lang': '', 'early_stopping_patience': -1} tune_helsinki_(args) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7170/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7170/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7169
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7169/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7169/comments
https://api.github.com/repos/huggingface/transformers/issues/7169/events
https://github.com/huggingface/transformers/issues/7169
702,898,102
MDU6SXNzdWU3MDI4OTgxMDI=
7,169
BERT Trainer.train() CUDA out of memory error
{ "login": "choidongyeon", "id": 54914459, "node_id": "MDQ6VXNlcjU0OTE0NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/54914459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/choidongyeon", "html_url": "https://github.com/choidongyeon", "followers_url": "https://api.github.com/users/choidongyeon/followers", "following_url": "https://api.github.com/users/choidongyeon/following{/other_user}", "gists_url": "https://api.github.com/users/choidongyeon/gists{/gist_id}", "starred_url": "https://api.github.com/users/choidongyeon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/choidongyeon/subscriptions", "organizations_url": "https://api.github.com/users/choidongyeon/orgs", "repos_url": "https://api.github.com/users/choidongyeon/repos", "events_url": "https://api.github.com/users/choidongyeon/events{/privacy}", "received_events_url": "https://api.github.com/users/choidongyeon/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Haven’t we fixed a memory leak on master recently?", "Yes we did, though this might be a different problem.\r\n\r\nTo make sure, can you please check if you have the bug with an install from source?", "@sgugger yeah, I'm installing from source (not sure why the script yielded 3.0.2)", "I encountered the problem, too, especially using NSP objective, the memory usage much higher than MLM.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I encounter this problem too when I try to train on multi-nodes and multi GPUs. \r\nAnd my transformers version is 4.15.0, so does this issue have the answer?\r\n@choidongyeon @julien-c @sgugger " ]
1,600
1,642
1,606
CONTRIBUTOR
null
## Environment info - `transformers` version: 3.0.2 - Platform: Linux-5.3.0 - Python version: 3.6.10 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Distributed ### Who can help @LysandreJik, @sgugger, @patrickvonplaten ## Information I am using 8xV100s (32GB). The script (`run_training.py`) works when running on a single machine but I am running into the ```CUDA out of memory``` when trying to run distributed training. The behavior is consistent whether or not `fp16` is `True`. I am using the publicly available wikitext data. Model I am using (Bert, XLNet ...): Bert The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. My code is `run_training.py`: ``` bert_config = BertConfig(hidden_size=768, num_attention_heads=12) tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") model = BertForPreTraining(config=bert_config) train_dataset = TextDatasetForNextSentencePrediction(tokenizer=tokenizer, file_path=args.train_data_file, block_size=args.max_length) data_collator = DataCollatorForNextSentencePrediction(tokenizer=tokenizer, mlm_probability=args.mlm_probability) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset ) global_steps, training_loss = trainer.train() ``` 2. 
Run the script in distributed mode: ``` python -m torch.distributed.launch \ --nnodes={num_nodes} \ --node_rank={rank} \ --nproc_per_node=8 \ --run_training.py \ --max_steps 50 " \ --train_data_file {train_data_file} " \ --eval_data_file {test_data_file} " \ --logging_dir {output_dir} \ --fp16 False " \ --gradient_accumulation_steps 1 " \ --per_gpu_train_batch_size 1 \ --per_gpu_eval_batch_size 1 ``` ## Expected behavior Given that the script works fine (i.e., not run into the out of memory issue) on a single machine, I would expect multi-node to be the same. Any insight into what might be going on is appreciated!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7169/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7169/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7168
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7168/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7168/comments
https://api.github.com/repos/huggingface/transformers/issues/7168/events
https://github.com/huggingface/transformers/issues/7168
702,846,934
MDU6SXNzdWU3MDI4NDY5MzQ=
7,168
DistilBERT for token classification (pytorch) predicts wrong classes for <PAD> tokens
{ "login": "vgrabovets", "id": 22475959, "node_id": "MDQ6VXNlcjIyNDc1OTU5", "avatar_url": "https://avatars.githubusercontent.com/u/22475959?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vgrabovets", "html_url": "https://github.com/vgrabovets", "followers_url": "https://api.github.com/users/vgrabovets/followers", "following_url": "https://api.github.com/users/vgrabovets/following{/other_user}", "gists_url": "https://api.github.com/users/vgrabovets/gists{/gist_id}", "starred_url": "https://api.github.com/users/vgrabovets/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vgrabovets/subscriptions", "organizations_url": "https://api.github.com/users/vgrabovets/orgs", "repos_url": "https://api.github.com/users/vgrabovets/repos", "events_url": "https://api.github.com/users/vgrabovets/events{/privacy}", "received_events_url": "https://api.github.com/users/vgrabovets/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The [PAD] tokens are not used for loss calculation if I remembered correctly, and therefore its predictions has no meaning. You should just ignore the [PAD] tokens results.\r\n\r\nPut [PAD] tokens into loss calculation will make the model biased toward [PAD] tokens, which is undesirable." ]
1,600
1,604
1,604
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.1.0 - Platform: Linux-5.4.0-47-generic-x86_64-with-debian-buster-sid - Python version: 3.7.4 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help @stefan-it @LysandreJik ## Information Model I am using: DistilBERT (pytorch), but I guess, the same is true for BERT. The problem arises when using: * my own modified scripts: (give details below) The tasks I am working on is: * my own task or dataset: (give details below) ## To reproduce This is NER task on specially compiled dataset which I cannot provide, but it is not that much different from CoNLL. I tried DistillBERT tensorflow and pytorch versions. Input is `inputs_ids` and `attention_mask` in both cases. Results for DistillBERT tensorflow: ![image](https://user-images.githubusercontent.com/22475959/93342115-9d0d3680-f837-11ea-8b0b-5001856bd1e7.png) model was compiled as: ``` model = TFDistilBertForTokenClassification.from_pretrained(MODEL, num_labels=len(LABELS)) model.layers[-1].activation = tf.keras.activations.softmax optimizer = tf.keras.optimizers.Adam(learning_rate=1e-5) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False) metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy') model.compile(optimizer=optimizer, loss=loss, metrics=[metric]) ``` code for report (I didn't remove prediction for padding from the model output as I assumed that model should not have any difficulties predicting it as `O`. 
And it does in this case) ``` predictions_classes = [LABELS_MAP[_id] for _id in predictions_normalized.reshape(-1)] true_classes = [LABELS_MAP[_id] for _id in label_ids_test.reshape(-1)] print( classification_report( true_classes, predictions_classes, labels=sorted(list(set(true_classes) - {'O'})) ) ) ``` Due to some issues with conversion to onnx format, I had to create pytorch version of the same model. Dataset split is the same in both versions. Here is output of DistillBERT pytorch: ![image](https://user-images.githubusercontent.com/22475959/93352453-879e0980-f843-11ea-883f-9207fbfe603a.png) Notice that support is the same as above, because it is the same data. Recall is comparable to tf. Precision is way off. Model is compiled as: ``` training_args = TrainingArguments( output_dir='results', num_train_epochs=4, per_device_train_batch_size=4, per_device_eval_batch_size=32, warmup_steps=0, weight_decay=0, logging_dir=None, logging_steps=100, eval_steps=500, save_steps=1e6, learning_rate=1e-5, seed=8, evaluate_during_training=True, do_eval=True, disable_tqdm=True, ) model = DistilBertForTokenClassification.from_pretrained('distilbert-base-multilingual-cased', num_labels=len(LABELS)) adam = Adam(model.parameters(), lr=1e-5) trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=test_dataset, optimizers=(adam, get_constant_schedule(adam)) ) trainer.train() ``` The reason for huge drop in precision is that model predicts classes other than `O` for pad tokens, because their loss is ignored in loss function: ![image](https://user-images.githubusercontent.com/22475959/93343751-8b2c9300-f839-11ea-949b-a7a36f4a1342.png) If I remove predictions for all special tokens then result looks like: ![image](https://user-images.githubusercontent.com/22475959/93352634-b7e5a800-f843-11ea-90a7-a1540c33cbad.png) Now, it looks like results from tf version. 
If I comment out these lines: https://github.com/huggingface/transformers/blob/9e376e156a78aa08f802d569d829064aff930c58/src/transformers/modeling_distilbert.py#L822-L830 and keep only `loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))` taking into account loss for pad tokens and still using attention mask as the second input than model doesn't predict PAD tokens as classes other than `O` anymore. Output: ![image](https://user-images.githubusercontent.com/22475959/93354973-40654800-f846-11ea-88a1-47203adcaa67.png) So, performance definitely didn't suffer and model stopped predicting classes other than `O` for PAD tokens. ![image](https://user-images.githubusercontent.com/22475959/93355153-6b4f9c00-f846-11ea-8f9a-f75e43da6fb3.png) ## Expected behavior I think, SOTA models shouldn't predict classes other than `O` for PAD tokens (or some special class).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7168/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7168/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7167
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7167/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7167/comments
https://api.github.com/repos/huggingface/transformers/issues/7167/events
https://github.com/huggingface/transformers/issues/7167
702,714,208
MDU6SXNzdWU3MDI3MTQyMDg=
7,167
weights partially missing for CamembertForMaskedLM
{ "login": "raphael0202", "id": 9609923, "node_id": "MDQ6VXNlcjk2MDk5MjM=", "avatar_url": "https://avatars.githubusercontent.com/u/9609923?v=4", "gravatar_id": "", "url": "https://api.github.com/users/raphael0202", "html_url": "https://github.com/raphael0202", "followers_url": "https://api.github.com/users/raphael0202/followers", "following_url": "https://api.github.com/users/raphael0202/following{/other_user}", "gists_url": "https://api.github.com/users/raphael0202/gists{/gist_id}", "starred_url": "https://api.github.com/users/raphael0202/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/raphael0202/subscriptions", "organizations_url": "https://api.github.com/users/raphael0202/orgs", "repos_url": "https://api.github.com/users/raphael0202/repos", "events_url": "https://api.github.com/users/raphael0202/events{/privacy}", "received_events_url": "https://api.github.com/users/raphael0202/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @raphael0202 , we converted the fairseq checkpoints using [this script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_roberta_original_pytorch_checkpoint_to_pytorch.py), maybe the language model head was not transfered during the conversion. \r\nYou can either use the model in fairseq as per [our website](https://camembert-model.fr/) or try to understand if and why the LM head hasn't transfered from fairseq to transformers.", "Check this tutorial as well for huggingface: https://huggingface.co/camembert-base", "After a bit of search I found out the warning comes from the way the decoder bias is defined in the code, the same warning is issued when loading the roberta model (see issue 6193). I checked that the LM head decoder bias is the same as the one in the original pytorch checkpoint.\r\nThank you for your help!", "Ok perfect, thanks for fixing this." ]
1,600
1,600
1,600
CONTRIBUTOR
null
## Environment info - `transformers` version: 3.1.0 - Platform: Darwin-18.7.0-x86_64-i386-64bit - Python version: 3.7.5 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help @louismartin ## Information When loading "camembert-base" with `CamembertForMaskedLM` with: from transformers import CamembertForMaskedLM model = CamembertForMaskedLM.from_pretrained("camembert-base") the bias of the LM head decoder is not loaded: Some weights of CamembertForMaskedLM were not initialized from the model checkpoint at camembert-base and are newly initialized: ['lm_head.decoder.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. As I understand `lm_head.decoder.bias` is therefore initialized randomly. I checked the original `camembert-base` model as published by the author, and the lm_head decoder bias is missing too, which is not discussed in the camembert or roberta publication.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7167/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7167/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7166
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7166/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7166/comments
https://api.github.com/repos/huggingface/transformers/issues/7166/events
https://github.com/huggingface/transformers/pull/7166
702,634,646
MDExOlB1bGxSZXF1ZXN0NDg3ODc4ODMz
7,166
Create README.md
{ "login": "ant-louis", "id": 32681432, "node_id": "MDQ6VXNlcjMyNjgxNDMy", "avatar_url": "https://avatars.githubusercontent.com/u/32681432?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ant-louis", "html_url": "https://github.com/ant-louis", "followers_url": "https://api.github.com/users/ant-louis/followers", "following_url": "https://api.github.com/users/ant-louis/following{/other_user}", "gists_url": "https://api.github.com/users/ant-louis/gists{/gist_id}", "starred_url": "https://api.github.com/users/ant-louis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ant-louis/subscriptions", "organizations_url": "https://api.github.com/users/ant-louis/orgs", "repos_url": "https://api.github.com/users/ant-louis/repos", "events_url": "https://api.github.com/users/ant-louis/events{/privacy}", "received_events_url": "https://api.github.com/users/ant-louis/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7166?src=pr&el=h1) Report\n> Merging [#7166](https://codecov.io/gh/huggingface/transformers/pull/7166?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/85ffda96fcadf70d2558ba0a59c84b9f5a2d6f0f?el=desc) will **increase** coverage by `0.98%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7166/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7166?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7166 +/- ##\n==========================================\n+ Coverage 78.44% 79.42% +0.98% \n==========================================\n Files 168 168 \n Lines 32309 32309 \n==========================================\n+ Hits 25346 25663 +317 \n+ Misses 6963 6646 -317 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7166?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | 
:arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.70% <0.00%> (-5.77%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.68% <0.00%> (-0.65%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.84% <0.00%> (-0.25%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <0.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.44% <0.00%> (+0.16%)` | :arrow_up: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/7166/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7166?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7166?src=pr&el=footer). Last update [b00cafb...8fe976a](https://codecov.io/gh/huggingface/transformers/pull/7166?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Can you add a language tag? `fr` I guess or `fr-BE` if you want to use BCP 47 (cc @yjernite 😛) which we are thinking of supporting", "Sure! I'm not sure exactly where to add it, could you enlighten me?", "see https://huggingface.co/docs#what-metadata-can-i-add-to-my-model-card and let me know if it's not clear!", "I guess in that case the BCP-47 code would still just be `fr` since the text it's trained on isn't specifically Belgian afaiu :) (Unless I'm missing something @antoiloui )", "Yes you're right @yjernite, I put the `fr`tag :)" ]
1,600
1,600
1,600
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7166/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7166/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7166", "html_url": "https://github.com/huggingface/transformers/pull/7166", "diff_url": "https://github.com/huggingface/transformers/pull/7166.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7166.patch", "merged_at": 1600272961000 }
https://api.github.com/repos/huggingface/transformers/issues/7165
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7165/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7165/comments
https://api.github.com/repos/huggingface/transformers/issues/7165/events
https://github.com/huggingface/transformers/issues/7165
702,623,015
MDU6SXNzdWU3MDI2MjMwMTU=
7,165
__init__() got an unexpected keyword argument 'cache_dir'
{ "login": "jasonyliang", "id": 19767870, "node_id": "MDQ6VXNlcjE5NzY3ODcw", "avatar_url": "https://avatars.githubusercontent.com/u/19767870?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jasonyliang", "html_url": "https://github.com/jasonyliang", "followers_url": "https://api.github.com/users/jasonyliang/followers", "following_url": "https://api.github.com/users/jasonyliang/following{/other_user}", "gists_url": "https://api.github.com/users/jasonyliang/gists{/gist_id}", "starred_url": "https://api.github.com/users/jasonyliang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jasonyliang/subscriptions", "organizations_url": "https://api.github.com/users/jasonyliang/orgs", "repos_url": "https://api.github.com/users/jasonyliang/repos", "events_url": "https://api.github.com/users/jasonyliang/events{/privacy}", "received_events_url": "https://api.github.com/users/jasonyliang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Do you have a link to the colab?", "I was following the tutorial by Joey S and tried to include my own dataset instead. This is the notebook that he has provided:\r\nhttps://colab.research.google.com/drive/1odn0VHb4SmQnUqtOwK6XQTLpoTSbneCf?usp=sharing\r\n\r\nThe notebook he provided already has the error.\r\nThank you so much for taking a look!", "The error seems to not be here anymore when doing `!pip3 install git+https://github.com/huggingface/transformers` instead of `pip3 install transformers` and resetting to factory settings before doing the run.", "Thank you so much! It's working now!", "Glad I could help!" ]
1,600
1,600
1,600
NONE
null
After reviewing issue #7006 and following the steps (updating Transformers to the latest version or even the 3.1.0 branch), I still get the error: TypeError: __init__() got an unexpected keyword argument 'cache_dir' when running the latest version for transformers (3.1.0). I'm also running on Colab environment: Command: !pip3 install transformers (also tried #! pip3 install git+git://github.com/huggingface/transformers/) !wget https://raw.githubusercontent.com/huggingface/transformers/master/examples/language-modeling/run_language_modeling.py %%bash export TRAIN_FILE=train_path export TEST_FILE=valid_path export MODEL_NAME=gpt2 export OUTPUT_DIR=output python run_language_modeling.py --output_dir=output --model_type=gpt2 --model_name_or_path=gpt2 --do_train --train_data_file=$TRAIN_FILE --do_eval --eval_data_file=$TEST_FILE --cache_dir=None Output: Traceback (most recent call last): File "run_language_modeling.py", line 313, in main() File "run_language_modeling.py", line 242, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None File "run_language_modeling.py", line 143, in get_dataset cache_dir=cache_dir, TypeError: __init__() got an unexpected keyword argument 'cache_dir'
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7165/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7165/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7164
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7164/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7164/comments
https://api.github.com/repos/huggingface/transformers/issues/7164/events
https://github.com/huggingface/transformers/issues/7164
702,583,139
MDU6SXNzdWU3MDI1ODMxMzk=
7,164
tf.keras.models.load_model() does not load saved model that includes TFOpenAIGPTLMHeadModel layer
{ "login": "wulikai1993", "id": 62692175, "node_id": "MDQ6VXNlcjYyNjkyMTc1", "avatar_url": "https://avatars.githubusercontent.com/u/62692175?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wulikai1993", "html_url": "https://github.com/wulikai1993", "followers_url": "https://api.github.com/users/wulikai1993/followers", "following_url": "https://api.github.com/users/wulikai1993/following{/other_user}", "gists_url": "https://api.github.com/users/wulikai1993/gists{/gist_id}", "starred_url": "https://api.github.com/users/wulikai1993/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wulikai1993/subscriptions", "organizations_url": "https://api.github.com/users/wulikai1993/orgs", "repos_url": "https://api.github.com/users/wulikai1993/repos", "events_url": "https://api.github.com/users/wulikai1993/events{/privacy}", "received_events_url": "https://api.github.com/users/wulikai1993/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I tried the resolution in #3627, but the output shape changed from 13088 to 768", "@jplu might be interested in this.", "I cannot really test because I don't have your `trans_model` but as far as I can say it is not working because you are using the high level API (with Keras) to create a saved_model. For models with custom layers it is recommended to use the low level way, like this:\r\n\r\n```\r\nfrom transformers import TFOpenAIGPTLMHeadModel\r\nimport tensorflow as tf\r\n\r\ntf_model = TFOpenAIGPTLMHeadModel.from_pretrained('openai-gpt')\r\nmax_len = None\r\n\r\ninput_ids = tf.keras.layers.Input(shape=(max_len,), name='input_ids_layer', dtype='int32')\r\ntoken_type_ids = tf.keras.layers.Input(shape=(max_len,), name='token_type_ids_layer', dtype='int32')\r\nkeras_input = [input_ids, token_type_ids]\r\n\r\nqa_output = tf_model(input_ids, token_type_ids=token_type_ids)[0]\r\nkeras_model = tf.keras.Model(inputs= keras_input, outputs = qa_output)\r\nkeras_model.summary()\r\ntf.saved_model.save(\"./saved_model\")\r\nprint('**************************')\r\nmodel = tf.saved_model.load(\"./saved_model\")\r\n```\r\n\r\nFor me this works well.", "> tf.saved_model.save(\"./saved_model\")\r\n\r\nyou mean `tf.saved_model.save(keras_model, \"./saved_model\")` ?\r\nI try it\r\n```python\r\ntf.saved_model.save(\"./saved_model\")\r\nprint('**************************')\r\nmodel = tf.saved_model.load(\"./saved_model\")\r\ntf_logits = model.predict([tf_input_ids, tf_token_type_ids])\r\n```\r\nand the error:\r\n\r\n```bash\r\ntf_logits = model.predict([tf_input_ids, tf_token_type_ids])\r\nAttributeError: '_UserObject' object has no attribute 'predict'\r\n```", "> you mean tf.saved_model.save(keras_model, \"./saved_model\")\r\n\r\nYes sorry.\r\n\r\nThe error you get is normal because you are not loading a Keras model, but a Tensorflow model. 
To get a prediction you have to do something like:\r\n```\r\nmodel([tf_input_ids, None, tf_token_type_ids])\r\n```\r\n", "I changed the code as following:\r\n```python\r\ntf_logits = model([tf_input_ids, tf_token_type_ids])\r\n```\r\nand the error:\r\n```bash\r\nValueError: Could not find matching function to call loaded from the SavedModel. Got:\r\n Positional arguments (3 total):\r\n * [<tf.Tensor 'inputs:0' shape=(1, 5) dtype=int64>, <tf.Tensor 'inputs_1:0' shape=(1, 5) dtype=int64>]\r\n * False\r\n * None\r\n Keyword arguments: {}\r\n\r\nExpected these arguments to match one of the following 4 option(s):\r\n\r\nOption 1:\r\n Positional arguments (3 total):\r\n * [TensorSpec(shape=(None, None), dtype=tf.int32, name='input_ids_layer'), TensorSpec(shape=(None, None), dtype=tf.int32, name='token_type_ids_layer')]\r\n * False\r\n * None\r\n Keyword arguments: {}\r\n\r\nOption 2:\r\n Positional arguments (3 total):\r\n * [TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs/0'), TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs/1')]\r\n * True\r\n * None\r\n Keyword arguments: {}\r\n\r\nOption 3:\r\n Positional arguments (3 total):\r\n * [TensorSpec(shape=(None, None), dtype=tf.int32, name='input_ids_layer'), TensorSpec(shape=(None, None), dtype=tf.int32, name='token_type_ids_layer')]\r\n * True\r\n * None\r\n Keyword arguments: {}\r\n\r\nOption 4:\r\n Positional arguments (3 total):\r\n * [TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs/0'), TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs/1')]\r\n * False\r\n * None\r\n Keyword arguments: {}\r\n```", "Can you try:\r\n```\r\nmodel([tf_input_ids, tf_token_type_ids], False, None)\r\n```", "I try it, and the same error.\r\n\r\nMore information: In face, my ultimate objective is to use the savedmodel for tfserving. 
But I get the error when querying the serving:\r\n```bash\r\nInput to reshape is a tensor with 3840 values, but the requested shape has 768\\n\\t [[{{node functional_1/tf_open_aigptlm_head_model/transformer/Reshape_3}}]]\\n\\t [[StatefulPartitionedCall/StatefulPartitionedCall]]'}\r\n```\r\nIt means getting 3840(768*5) values instead of 768.\r\nAnd considering the original error:\r\n```bash\r\nFirst structure: type=TensorSpec str=TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs')\r\n\r\nSecond structure: type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}\r\n\r\nMore specifically: Substructure \"type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}\" is a sequence, while substructure \"type=TensorSpec str=TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs')\" is not\r\nEntire first structure:\r\n.\r\nEntire second structure:\r\n{'input_ids': .}\r\n```\r\nDoes it mean the input is replicated 5 times? And where the shape (None, 5) come from?", "5 is the default input when a None input shape is given.\r\n\r\nIt means there is certainly a bug with handling Keras symbolic Tensors. 
In order to be sure can you run:\r\n\r\n```\r\nsaved_model_cli show --dir ./saved_model --tag_set serve --signature_def serving_default\r\n```", "```bash\r\nsaved_model_cli show --dir ./saved_model --tag_set serve --signature_def serving_default\r\n\r\nThe given SavedModel SignatureDef contains the following input(s):\r\n inputs['input_ids_layer'] tensor_info:\r\n dtype: DT_INT32\r\n shape: (-1, -1)\r\n name: serving_default_input_ids_layer:0\r\n inputs['token_type_ids_layer'] tensor_info:\r\n dtype: DT_INT32\r\n shape: (-1, -1)\r\n name: serving_default_token_type_ids_layer:0\r\nThe given SavedModel SignatureDef contains the following output(s):\r\n outputs['tf_open_aigptlm_head_model'] tensor_info:\r\n dtype: DT_FLOAT\r\n shape: (-1, -1, 13088)\r\n name: StatefulPartitionedCall:0\r\nMethod name is: tensorflow/serving/predict\r\n```\r\nIn the code\r\n```python\r\nkeras_model = tf.keras.Model(inputs= keras_input, outputs = qa_output)\r\nkeras_model.summary()\r\nkeras_model.save(\"./saved_model\")\r\nprint('**************************')\r\nmodel = tf.keras.models.load_model(\"./saved_model\")\r\n```\r\nWhen I use the `keras_model` directly, it works perfectly in a dialogue task. But after saving it to savedmodel, everything collapse.", "Ok, the first thing I see is that the names doesn't correspond to the name we use internally, it might be one of the cause that brings the issue.", "Sorry, which names do you mean? Is it a `transformers` library bug or my code mistake?", "> Sorry, which names do you mean? Is it a transformers library bug or my code mistake?\r\n\r\nIn the lib.\r\n\r\nAlso by name I mean that you are using `input_ids_layer` and `token_type_ids_layer` while internally they are `input_ids` and `token_type_ids`.", "So what should I do? Remove the `_layer` string?", "It won't solve your problem. 
For now there is not a \"practical\" solution to get a saved model unless you set everything yourself:\r\n```\r\nfrom transformers import TFOpenAIGPTLMHeadModel, OpenAIGPTTokenizer\r\nimport tensorflow as tf\r\n\r\ntokenizer = OpenAIGPTTokenizer.from_pretrained(\"openai-gpt\")\r\nmodel = TFOpenAIGPTLMHeadModel.from_pretrained(\"openai-gpt\")\r\ninputs = tokenizer(\"Put a sentence here\", return_tensors=\"tf\")\r\nmodel._saved_model_inputs_spec = None\r\nmodel._set_save_spec(dict(inputs))\r\ntf.saved_model.save(model, \"./saved_model\")\r\n``` \r\n\r\nAnd then using it normally with TF serving. This solution has a big constraint, you have to set manually the size of your input sequence. This is for now the only solution I can give you because we haven't make the TF models fully TF Serving compliant yet. This is planed for a future release.", "Thanks! Looking forward to the new release.", "Or this should do the trick:\r\n\r\n```\r\ntf_model = TFOpenAIGPTLMHeadModel.from_pretrained('openai-gpt')\r\ninput_ids = tf.keras.layers.Input(shape=(128,), name='input_ids', dtype='int32')\r\ntoken_type_ids = tf.keras.layers.Input(shape=(128,), name='token_type_ids', dtype='int32')\r\nkeras_input = [input_ids, token_type_ids]\r\n\r\nqa_output = tf_model(input_ids, token_type_ids=token_type_ids)[0]\r\nkeras_model = tf.keras.Model(inputs= keras_input, outputs = [qa_output])\r\nkeras_model.trainable = False\r\nkeras_model.summary()\r\nkeras_model.save(\"./saved_model\", save_format=\"tf\")\r\n```\r\nWith this I can run the saved_model inside the TF serving Docker image, but for all the cases you have to set yourself your sequence length.", "Sorry, I need a dynamic input length for the dialogue task.", "As a temporary solution you can set the size of your inputs to the max length of the tokenizer. 
As you won't be able to get bigger sequences from the tokenizer you can be safe.", "I use the following code:\r\n\r\n```python\r\ntf_model = TFOpenAIGPTLMHeadModel.from_pretrained('./trans_model', from_pt=True)\r\nmax_len = 128\r\ninput_ids = tf.keras.layers.Input(shape=(max_len,), name='input_ids', dtype='int32')\r\ntoken_type_ids = tf.keras.layers.Input(shape=(max_len,), name='token_type_ids', dtype='int32')\r\nkeras_input = [input_ids, token_type_ids]\r\n\r\nqa_output = tf_model(input_ids, token_type_ids=token_type_ids)[0]\r\nkeras_model = tf.keras.Model(inputs= keras_input, outputs = qa_output)\r\nkeras_model.trainable = False\r\nkeras_model.summary()\r\nkeras_model.save(\"./saved_model\", save_format=\"tf\")\r\n```\r\nI use `keras_model` directly to predict.\r\nThe strange thing is the first 2 predicting always worked fine, while the third predicting collapse (the model always uses the previous sequence to predict the next put, so the input shape will plus 1 each time).\r\n```bash\r\n>>> 你现在做什么工作呢\r\nWARNING:tensorflow:Model was constructed with shape (None, 128) for input Tensor(\"input_ids:0\", shape=(None, 128), dtype=int32), but it was called on an input with incompatible shape (None, 12).\r\nWARNING:tensorflow:Model was constructed with shape (None, 128) for input Tensor(\"input_ids:0\", shape=(None, 128), dtype=int32), but it was called on an input with incompatible shape (None, 12).\r\nWARNING:tensorflow:Model was constructed with shape (None, 128) for input Tensor(\"token_type_ids:0\", shape=(None, 128), dtype=int32), but it was called on an input with incompatible shape (None, 12).\r\nWARNING:tensorflow:Model was constructed with shape (None, 128) for input Tensor(\"token_type_ids:0\", shape=(None, 128), dtype=int32), but it was called on an input with incompatible shape (None, 12).\r\nlogits shape: (1, 12, 13088)\r\nWARNING:tensorflow:Model was constructed with shape (None, 128) for input Tensor(\"input_ids:0\", shape=(None, 128), dtype=int32), but 
it was called on an input with incompatible shape (None, 13).\r\nWARNING:tensorflow:Model was constructed with shape (None, 128) for input Tensor(\"input_ids:0\", shape=(None, 128), dtype=int32), but it was called on an input with incompatible shape (None, 13).\r\nWARNING:tensorflow:Model was constructed with shape (None, 128) for input Tensor(\"token_type_ids:0\", shape=(None, 128), dtype=int32), but it was called on an input with incompatible shape (None, 13).\r\nWARNING:tensorflow:Model was constructed with shape (None, 128) for input Tensor(\"token_type_ids:0\", shape=(None, 128), dtype=int32), but it was called on an input with incompatible shape (None, 13).\r\nlogits shape: (1, 13, 13088)\r\nTraceback (most recent call last):\r\n File \"interact_test.py\", line 233, in <module>\r\n run()\r\n File \"interact_test.py\", line 225, in run\r\n out_ids = sample_sequence(history, tokenizer, keras_model, args)\r\n File \"interact_test.py\", line 121, in sample_sequence\r\n tf_logits = model.predict([tf_input_ids, tf_token_type_ids])\r\n File \"/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py\", line 130, in _method_wrapper\r\n return method(self, *args, **kwargs)\r\n File \"/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py\", line 1599, in predict\r\n tmp_batch_outputs = predict_function(iterator)\r\n File \"/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py\", line 780, in __call__\r\n result = self._call(*args, **kwds)\r\n File \"/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py\", line 814, in _call\r\n results = self._stateful_fn(*args, **kwds)\r\n File \"/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py\", line 2829, in __call__\r\n return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access\r\n File 
\"/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py\", line 1848, in _filtered_call\r\n cancellation_manager=cancellation_manager)\r\n File \"/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py\", line 1924, in _call_flat\r\n ctx, args, cancellation_manager=cancellation_manager))\r\n File \"/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py\", line 550, in call\r\n ctx=ctx)\r\n File \"/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/execute.py\", line 60, in quick_execute\r\n inputs, attrs, num_outputs)\r\ntensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 14 values, but the requested shape requires a multiple of 128\r\n [[node functional_1/tf_open_aigptlm_head_model/transformer/Reshape_2 (defined at /home/t9kuser/.local/lib/python3.6/site-packages/transformers/modeling_tf_openai.py:342) ]] [Op:__inference_predict_function_15804]\r\n\r\nErrors may have originated from an input operation.\r\nInput Source operations connected to node functional_1/tf_open_aigptlm_head_model/transformer/Reshape_2:\r\n functional_1/Cast_1 (defined at interact_test.py:121)\r\n\r\nFunction call stack:\r\npredict_function\r\n```\r\n\r\n```bash\r\n>>> 你好\r\nWARNING:tensorflow:Model was constructed with shape (None, 128) for input Tensor(\"input_ids:0\", shape=(None, 128), dtype=int32), but it was called on an input with incompatible shape (None, 5).\r\nWARNING:tensorflow:Model was constructed with shape (None, 128) for input Tensor(\"input_ids:0\", shape=(None, 128), dtype=int32), but it was called on an input with incompatible shape (None, 5).\r\nWARNING:tensorflow:Model was constructed with shape (None, 128) for input Tensor(\"token_type_ids:0\", shape=(None, 128), dtype=int32), but it was called on an input with incompatible shape (None, 5).\r\nWARNING:tensorflow:Model was constructed with shape (None, 128) for input 
Tensor(\"token_type_ids:0\", shape=(None, 128), dtype=int32), but it was called on an input with incompatible shape (None, 5).\r\nlogits shape: (1, 5, 13088)\r\nWARNING:tensorflow:Model was constructed with shape (None, 128) for input Tensor(\"input_ids:0\", shape=(None, 128), dtype=int32), but it was called on an input with incompatible shape (None, 6).\r\nWARNING:tensorflow:Model was constructed with shape (None, 128) for input Tensor(\"input_ids:0\", shape=(None, 128), dtype=int32), but it was called on an input with incompatible shape (None, 6).\r\nWARNING:tensorflow:Model was constructed with shape (None, 128) for input Tensor(\"token_type_ids:0\", shape=(None, 128), dtype=int32), but it was called on an input with incompatible shape (None, 6).\r\nWARNING:tensorflow:Model was constructed with shape (None, 128) for input Tensor(\"token_type_ids:0\", shape=(None, 128), dtype=int32), but it was called on an input with incompatible shape (None, 6).\r\nlogits shape: (1, 6, 13088)\r\nTraceback (most recent call last):\r\n File \"interact_test.py\", line 233, in <module>\r\n run()\r\n File \"interact_test.py\", line 225, in run\r\n out_ids = sample_sequence(history, tokenizer, keras_model, args)\r\n File \"interact_test.py\", line 121, in sample_sequence\r\n tf_logits = model.predict([tf_input_ids, tf_token_type_ids])\r\n File \"/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py\", line 130, in _method_wrapper\r\n return method(self, *args, **kwargs)\r\n File \"/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py\", line 1599, in predict\r\n tmp_batch_outputs = predict_function(iterator)\r\n File \"/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py\", line 780, in __call__\r\n result = self._call(*args, **kwds)\r\n File \"/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py\", line 814, in _call\r\n results = 
self._stateful_fn(*args, **kwds)\r\n File \"/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py\", line 2829, in __call__\r\n return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access\r\n File \"/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py\", line 1848, in _filtered_call\r\n cancellation_manager=cancellation_manager)\r\n File \"/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py\", line 1924, in _call_flat\r\n ctx, args, cancellation_manager=cancellation_manager))\r\n File \"/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py\", line 550, in call\r\n ctx=ctx)\r\n File \"/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/execute.py\", line 60, in quick_execute\r\n inputs, attrs, num_outputs)\r\ntensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 7 values, but the requested shape requires a multiple of 128\r\n [[node functional_1/tf_open_aigptlm_head_model/transformer/Reshape_2 (defined at /home/t9kuser/.local/lib/python3.6/site-packages/transformers/modeling_tf_openai.py:342) ]] [Op:__inference_predict_function_15804]\r\n\r\nErrors may have originated from an input operation.\r\nInput Source operations connected to node functional_1/tf_open_aigptlm_head_model/transformer/Reshape_2:\r\n functional_1/Cast_1 (defined at interact_test.py:121)\r\n\r\nFunction call stack:\r\npredict_function\r\n```", "And the savedmodel error:\r\n\r\n```bash\r\n{'error': '{{function_node __inference__wrapped_model_10271}} {{function_node __inference__wrapped_model_10271}} Incompatible shapes: [1,128,768] vs. 
[1,5,768]\\n\\t [[{{node functional_1/tf_open_aigptlm_head_model/transformer/add}}]]\\n\\t [[StatefulPartitionedCall/StatefulPartitionedCall]]'}\r\n```", "The pretrained model can be downloaded here: https://drive.google.com/file/d/1Wyr-fD4KuF0gWMtZ7STF09O2Ebtly-yq/view?usp=sharing\r\nThis is my source code:\r\nYou can reproduce the error, thanks a lot!\r\n```python\r\n# # Copyright (c) 2019-present, HuggingFace Inc.\r\n# All rights reserved.\r\n# This source code is licensed under the BSD-style license found in the\r\n# LICENSE file in the root directory of this source tree.\r\nimport os\r\nimport logging\r\nimport random\r\nfrom itertools import chain\r\nfrom argparse import ArgumentParser\r\nfrom pprint import pformat\r\nimport torch\r\nimport tensorflow as tf\r\nimport torch.nn.functional as F\r\nimport sys\r\nimport numpy as np\r\n\r\nfrom transformers import OpenAIGPTLMHeadModel, GPT2LMHeadModel, BertTokenizer, TFOpenAIGPTLMHeadModel\r\n\r\nSPECIAL_TOKENS = [\"[CLS]\", \"[SEP]\", \"[PAD]\", \"[speaker1]\", \"[speaker2]\"]\r\n\r\n\r\ndef top_filtering(logits, top_k=0, top_p=0.0, threshold=-float('Inf'), filter_value=-float('Inf')):\r\n \"\"\" Filter a distribution of logits using top-k, top-p (nucleus) and/or threshold filtering\r\n Args:\r\n logits: logits distribution shape (vocabulary size)\r\n top_k: <=0: no filtering, >0: keep only top k tokens with highest probability.\r\n top_p: <=0.0: no filtering, >0.0: keep only a subset S of candidates, where S is the smallest subset\r\n whose total probability mass is greater than or equal to the threshold top_p.\r\n In practice, we select the highest probability tokens whose cumulative probability mass exceeds\r\n the threshold top_p.\r\n threshold: a minimal threshold to keep logits\r\n \"\"\"\r\n assert logits.dim() == 1 # Only work for batch size 1 for now - could update but it would obfuscate a bit the code\r\n top_k = min(top_k, logits.size(-1))\r\n if top_k > 0:\r\n # Remove all tokens with a probability 
less than the last token in the top-k tokens\r\n indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None]\r\n logits[indices_to_remove] = filter_value\r\n\r\n if top_p > 0.0:\r\n # Compute cumulative probabilities of sorted tokens\r\n sorted_logits, sorted_indices = torch.sort(logits, descending=True)\r\n cumulative_probabilities = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)\r\n\r\n # Remove tokens with cumulative probability above the threshold\r\n sorted_indices_to_remove = cumulative_probabilities > top_p\r\n # Shift the indices to the right to keep also the first token above the threshold\r\n sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()\r\n sorted_indices_to_remove[..., 0] = 0\r\n\r\n # Back to unsorted indices and set them to -infinity\r\n indices_to_remove = sorted_indices[sorted_indices_to_remove]\r\n logits[indices_to_remove] = filter_value\r\n\r\n indices_to_remove = logits < threshold\r\n logits[indices_to_remove] = filter_value\r\n\r\n return logits\r\n\r\n\r\ndef build_input_from_segments(history, reply, tokenizer, with_eos=True):\r\n \"\"\" Build a sequence of input from 3 segments: persona, history and last reply \"\"\"\r\n bos, eos, pad, speaker1, speaker2 = tokenizer.convert_tokens_to_ids(SPECIAL_TOKENS)\r\n sequence = [[bos]] + history + [reply + ([eos] if with_eos else [])]\r\n# print('sequence 1', sequence)\r\n sequence = [sequence[0]] + [[speaker2 if i % 2 else speaker1] + s for i, s in enumerate(sequence[1:])]\r\n# print('sequence 2', sequence)\r\n instance = {}\r\n instance[\"input_ids\"] = list(chain(*sequence))\r\n instance[\"token_type_ids\"] = [bos] + [speaker2 if i % 2 else speaker1 for i, s in enumerate(sequence[1:])\r\n for _ in s]\r\n return instance, sequence\r\n\r\n\r\ndef sample_sequence(history, tokenizer, model, args, current_output=None):\r\n special_tokens_ids = tokenizer.convert_tokens_to_ids(SPECIAL_TOKENS)\r\n# print(special_tokens_ids)\r\n if current_output is 
None:\r\n current_output = []\r\n\r\n for i in range(args.max_length):\r\n instance, sequence = build_input_from_segments(history, current_output, tokenizer, with_eos=False)\r\n input_ids = torch.tensor(instance[\"input_ids\"], dtype=torch.long, device=args.device).unsqueeze(0)\r\n token_type_ids = torch.tensor(instance[\"token_type_ids\"], dtype=torch.long, device=args.device).unsqueeze(0)\r\n# print(type(input_ids))\r\n# print(input_ids.shape)\r\n# print('input_ids', input_ids)\r\n# print('token_type_ids', token_type_ids)\r\n# logits, *_ = model(input_ids, token_type_ids=token_type_ids)\r\n# print(type(logits))\r\n# print(logits.shape)\r\n# print(logits)\r\n tf_input_ids = input_ids.numpy()\r\n tf_token_type_ids = token_type_ids.numpy()\r\n# tf_input_ids_pad = np.pad(tf_input_ids, ((0, 0), (0, 128 - tf_input_ids.shape[1])), 'constant')\r\n# print(tf_input_ids_pad.shape)\r\n# print(tf_input_ids_pad)\r\n tf_logits = model.predict([tf_input_ids, tf_token_type_ids])\r\n logits = torch.from_numpy(tf_logits)\r\n\r\n# tf_logits, *_ = model(tf_input_ids, token_type_ids=tf.constant(tf_token_type_ids))\r\n# logits = torch.from_numpy(tf_logits.numpy())\r\n# print(type(tf_logits))\r\n print('logits shape: ', tf_logits.shape)\r\n# print(tf_logits)\r\n \r\n logits = logits[0, -1, :] / args.temperature\r\n print('logits tmp shape: ', logits.shape)\r\n logits = top_filtering(logits, top_k=args.top_k, top_p=args.top_p)\r\n print('logits filter shape: ', logits.shape)\r\n probs = F.softmax(logits, dim=-1)\r\n print('probs shape: ', probs.shape)\r\n\r\n prev = torch.topk(probs, 1)[1] if args.no_sample else torch.multinomial(probs, 1)\r\n print('prev: ', prev)\r\n if i < args.min_length and prev.item() in special_tokens_ids:\r\n while prev.item() in special_tokens_ids:\r\n prev = torch.multinomial(probs, num_samples=1)\r\n\r\n if prev.item() in special_tokens_ids:\r\n break\r\n current_output.append(prev.item())\r\n\r\n return current_output\r\n\r\n\r\ndef run():\r\n parser = 
ArgumentParser()\r\n parser.add_argument('--gpt2', action='store_true', help=\"use gpt2\")\r\n parser.add_argument(\"--model_checkpoint\", type=str, default=\"./LCCD_GPT\", help=\"Path, url or short name of the model\")\r\n parser.add_argument(\"--max_history\", type=int, default=2, help=\"Number of previous utterances to keep in history\")\r\n parser.add_argument(\"--device\", type=str, default=\"cpu\",\r\n help=\"Device (cuda or cpu)\")\r\n\r\n parser.add_argument(\"--no_sample\", action='store_true', help=\"Set to use greedy decoding instead of sampling\")\r\n parser.add_argument(\"--max_length\", type=int, default=30, help=\"Maximum length of the output utterances\")\r\n parser.add_argument(\"--min_length\", type=int, default=1, help=\"Minimum length of the output utterances\")\r\n parser.add_argument(\"--seed\", type=int, default=42, help=\"Seed\")\r\n parser.add_argument(\"--temperature\", type=int, default=0.7, help=\"Sampling softmax temperature\")\r\n parser.add_argument(\"--top_k\", type=int, default=0, help=\"Filter top-k tokens before sampling (<=0: no filtering)\")\r\n parser.add_argument(\"--top_p\", type=float, default=0.9,\r\n help=\"Nucleus filtering (top-p) before sampling (<=0.0: no filtering)\")\r\n args = parser.parse_args()\r\n\r\n logging.basicConfig(level=logging.INFO)\r\n logger = logging.getLogger(__file__)\r\n logger.info(pformat(args))\r\n\r\n if args.model_checkpoint == \"\":\r\n logging.error(\"Checkpoint needed!\")\r\n return\r\n\r\n random.seed(args.seed)\r\n torch.random.manual_seed(args.seed)\r\n torch.cuda.manual_seed(args.seed)\r\n\r\n logger.info(\"Get pretrained model and tokenizer\")\r\n tokenizer_class = BertTokenizer\r\n tokenizer = tokenizer_class.from_pretrained(args.model_checkpoint, do_lower_case=True)\r\n tf_model = TFOpenAIGPTLMHeadModel.from_pretrained('./trans_model', from_pt=True)\r\n max_len = 128\r\n\r\n input_ids = tf.keras.layers.Input(shape=(max_len,), name='input_ids', dtype='int32')\r\n token_type_ids = 
tf.keras.layers.Input(shape=(max_len,), name='token_type_ids', dtype='int32')\r\n keras_input = [input_ids, token_type_ids]\r\n\r\n qa_output = tf_model(input_ids, token_type_ids=token_type_ids)[0]\r\n print('**************************')\r\n print(type(qa_output))\r\n print(qa_output)\r\n keras_model = tf.keras.Model(inputs= keras_input, outputs = [qa_output])\r\n keras_model.trainable = False\r\n keras_model.summary()\r\n# keras_model.save(\"./saved_model\", save_format=\"tf\")\r\n# tf.saved_model.save(keras_model, \"./saved_model\")\r\n# model = tf.saved_model.load(\"./saved_model\")\r\n# keras_model.save(\"./saved_model\")\r\n# print('**************************')\r\n# model = tf.keras.models.load_model(\"./saved_model\")\r\n\r\n def tokenize(obj):\r\n if isinstance(obj, str):\r\n return tokenizer.convert_tokens_to_ids(tokenizer.tokenize(obj))\r\n if isinstance(obj, dict):\r\n return dict((n, tokenize(o)) for n, o in obj.items())\r\n return list(tokenize(o) for o in obj)\r\n\r\n history = []\r\n while True:\r\n raw_text = input(\">>> \")\r\n while not raw_text:\r\n print('Prompt should not be empty!')\r\n raw_text = input(\">>> \")\r\n sys.stdout.flush()\r\n# print(raw_text)\r\n raw_text = \" \".join(list(raw_text.replace(\" \", \"\")))\r\n# print(raw_text)\r\n sys.stdout.flush()\r\n history.append(tokenize(raw_text))\r\n# print('history', history)\r\n with torch.no_grad():\r\n out_ids = sample_sequence(history, tokenizer, keras_model, args)\r\n history.append(out_ids)\r\n history = history[-(2 * args.max_history + 1):]\r\n out_text = tokenizer.decode(out_ids, skip_special_tokens=True)\r\n print(out_text)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n run()\r\n\r\n```", "For now I don't really have time to check this, but as far as I can see, the issue is that you are not giving a sequence of 128, your sequences have to be padded to 128. If at each step you get one more element to your sequence, you can remove one padding everytime. 
Example:\r\n\r\n1st iteration, shape [1, 128, 13088]:\r\n[[\r\n [embed char 1],\r\n [embed char 2],\r\n [embed char 3],\r\n [embed char 4],\r\n [embed char 5],\r\n [embed padding],\r\n [embed padding],\r\n [embed padding],\r\n ...\r\n [embed padding]\r\n]]\r\n\r\n2nd iteration, shape [1, 128, 13088]:\r\n[[\r\n [embed char 1],\r\n [embed char 2],\r\n [embed char 3],\r\n [embed char 4],\r\n [embed char 5],\r\n [embed char 6],\r\n [embed padding],\r\n [embed padding],\r\n ...\r\n [embed padding]\r\n]]\r\n\r\nAnd so on.\r\n", "Yes, I tried this before. But the performance declined a lot. Thanks for your patience!", "Same Issue in Electra.\r\n\r\ni think, the TensorSpec below seems to be a dummy input for building TF2 model in transformers library.\r\n\r\n`{'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}`\r\n\r\nbut, i don't know why, that dummy is alive after saving and loading.", "All the models are initialized with this input. If you want to change it you have to recompile it with your own input as I shown in my previous posts.", "@wulikai1993 \r\n> Yes, I tried this before. But the performance declined a lot. Thanks for your patience!\r\n\r\nIndeed the perf will decline, but you still get the same issue?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "@jplu any updates? 
Default exporting or dynamic shapes are still failing for me", "@jplu Even by using your approach, still get the error on XLM models (max_len set to 192 in the `Input`):\r\n```\r\nValueError: The two structures don't have the same nested structure.\r\n\r\nFirst structure: type=TensorSpec str=TensorSpec(shape=(None, 192), dtype=tf.int32, name='inputs')\r\n\r\nSecond structure: type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}\r\n\r\nMore specifically: Substructure \"type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}\" is a sequence, while substructure \"type=TensorSpec str=TensorSpec(shape=(None, 192), dtype=tf.int32, name='inputs')\" is not\r\n```\r\n\r\nHere is the model template, which works perfectly before exporting and trying to load it again:\r\n```\r\ndef get_model(transformer, num_classes=1, max_len=512):\r\n input_word_ids = Input(shape=(192,), dtype=tf.int32, name=\"input_word_ids\")\r\n sequence_output = transformer(input_word_ids)[0]\r\n cls_token = sequence_output[:, 0, :]\r\n out = Dense(num_classes, activation='sigmoid')(cls_token)\r\n \r\n model = Model(inputs=input_word_ids, outputs=out)\r\n model.compile(Adam(lr=1e-5), loss='binary_crossentropy', metrics=['accuracy'])\r\n \r\n return model\r\n```" ]
1,600
1,614
1,614
NONE
null
- `transformers` version: 3.1.0
- Platform: linux
- Python version: 3
- Tensorflow version: 2.3.0

## To reproduce

Steps to reproduce the behavior:

1. Load the model with TFOpenAIGPTLMHeadModel
2. Add input layers
3. Save the model
4. Load the saved model

```python
from transformers import TFOpenAIGPTLMHeadModel
import tensorflow as tf

# ./trans_model is the directory including the pre-trained model from pytorch
tf_model = TFOpenAIGPTLMHeadModel.from_pretrained('./trans_model', from_pt=True)
max_len = None

input_ids = tf.keras.layers.Input(shape=(max_len,), name='input_ids_layer', dtype='int32')
token_type_ids = tf.keras.layers.Input(shape=(max_len,), name='token_type_ids_layer', dtype='int32')
keras_input = [input_ids, token_type_ids]

qa_output = tf_model(input_ids, token_type_ids=token_type_ids)[0]
keras_model = tf.keras.Model(inputs=keras_input, outputs=qa_output)
keras_model.summary()
keras_model.save("./saved_model")
print('**************************')
model = tf.keras.models.load_model("./saved_model")
```

```bash
Traceback (most recent call last):
  File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/util/nest.py", line 395, in assert_same_structure
    expand_composites)
ValueError: The two structures don't have the same nested structure.

First structure: type=TensorSpec str=TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs')

Second structure: type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}

More specifically: Substructure "type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}" is a sequence, while substructure "type=TensorSpec str=TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs')" is not

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "interact_test.py", line 208, in <module>
    run()
  File "interact_test.py", line 180, in run
    model = tf.keras.models.load_model("./saved_model")
  File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/save.py", line 187, in load_model
    return saved_model_load.load(filepath, compile, options)
  File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 121, in load
    path, options=options, loader_cls=KerasObjectLoader)
  File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/saved_model/load.py", line 633, in load_internal
    ckpt_options)
  File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 194, in __init__
    super(KerasObjectLoader, self).__init__(*args, **kwargs)
  File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/saved_model/load.py", line 130, in __init__
    self._load_all()
  File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 221, in _load_all
    self._finalize_objects()
  File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 526, in _finalize_objects
    _finalize_saved_model_layers(layers_revived_from_saved_model)
  File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 706, in
```
_finalize_saved_model_layers inputs = infer_inputs_from_restored_call_function(call_fn) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 985, in infer_inputs_from_restored_call_function spec = nest.map_structure(common_spec, spec, spec2) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/util/nest.py", line 629, in map_structure expand_composites=expand_composites) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/util/nest.py", line 402, in assert_same_structure % (str(e), str1, str2)) ValueError: The two structures don't have the same nested structure. First structure: type=TensorSpec str=TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs') Second structure: type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')} More specifically: Substructure "type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}" is a sequence, while substructure "type=TensorSpec str=TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs')" is not Entire first structure: . Entire second structure: {'input_ids': .} ````
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7164/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7164/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7163
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7163/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7163/comments
https://api.github.com/repos/huggingface/transformers/issues/7163/events
https://github.com/huggingface/transformers/issues/7163
702,553,868
MDU6SXNzdWU3MDI1NTM4Njg=
7,163
Pegasus- Arxiv predicts random text
{ "login": "MichaelJanz", "id": 66110831, "node_id": "MDQ6VXNlcjY2MTEwODMx", "avatar_url": "https://avatars.githubusercontent.com/u/66110831?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MichaelJanz", "html_url": "https://github.com/MichaelJanz", "followers_url": "https://api.github.com/users/MichaelJanz/followers", "following_url": "https://api.github.com/users/MichaelJanz/following{/other_user}", "gists_url": "https://api.github.com/users/MichaelJanz/gists{/gist_id}", "starred_url": "https://api.github.com/users/MichaelJanz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MichaelJanz/subscriptions", "organizations_url": "https://api.github.com/users/MichaelJanz/orgs", "repos_url": "https://api.github.com/users/MichaelJanz/repos", "events_url": "https://api.github.com/users/MichaelJanz/events{/privacy}", "received_events_url": "https://api.github.com/users/MichaelJanz/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "Seems to be a problem with the 'google/pegasus-arxiv' model, when you use 'google/pegasus-xsum' you get:\r\n`Harry Potter and the Philosopher’s Stone is the seventh and final book in JK Rowling’s Harry Potter series.`\r\nas output", "Yes I tried different pegasus models (including alot of other models) and pegasus-large e.G. outputs this (which I think is really good1):\r\n_In this sequel to the phenomenally popular Harry Potter and the Sorcerer’s Stone, Harry returns to Hogwarts School of Witchcraft and Wizardry for his second year after a miserable summer with his Muggle (nonmagical) relatives. Rowling clearly hit on a winning formula with the first Harry Potter book; the second book — though still great fun — feels a tad, well, formulaic.'_\r\n\r\nwhile pegasus-multinews outputs pretty well generated texts, but unfortunately wrong in the content:\r\n_– The seventh and final book in the Harry Potter series, Harry Potter and the Sorcerer\\'s Stone, is out today. The sixth book in the series, Harry Potter and the Deathly Hallows, was released in the US in advance of tomorrow\\'s release in the UK. Here\\'s what critics are saying about the seventh and final book in the series: The plot is still compelling, but the book \"feels a tad, well, formulaic,\" writes James Poniewozik in Time. \"The atmosphere Rowling creates is unique; the story whizzes along; Harry is an unassuming and completely sympathetic hero. But, truth to tell, you may feel as if you\\'ve read it all before. Rowling clearly hit on a winning formula with the first Harry Potter book; the second book—though still great fun—feels a tad, well, formulaic.\"'_\r\n\r\nGigaword and billsum are both also outputting non useful texts. \r\n\r\nAlso another question, while pegasus-large and pegasus-cnn_dailymail both only return the most important sentences, pegasus-multinews generates even new text. 
I was hoping the same for the arxiv model, is there a reason that it differs in that way?", "`pegasus-arxiv` is trained on and expects scientific text.\r\n`pegasus-multinews` expects news I presume.\r\n\r\nIf you want to prove a bug, try running an evaluation on a public dataset from the datasets package, and posting the result #6844 .", "# Environment info\r\ntransformers version: 3.1.0\r\nPlatform: Windows - 10\r\nPython version: 3.7.6\r\nPyTorch version (GPU?): 1.5.0 (False)\r\nUsing GPU in script?: no\r\nUsing distributed or parallel set-up in script?: no\r\n\r\n\r\n## To Reproduce\r\n\r\nI found unexpected behaviour when using Pegasus-Pubmed on Pubmed document.\r\n\r\n```\r\nimport torch\r\nfrom transformers import PegasusForConditionalGeneration, PegasusTokenizer\r\n\r\nsrc_text =\"\"\"although the association is modest , it is important because of the increasing prevalence of metabolic syndrome and the effect that depression can have on the ability of patients to successfully make lifestyle changes and comply with medication required for hypertension and dyslipidemia . the association is demonstrated here in a general population to our knowledge for the first time , whereas earlier studies ( table 1 ) used subgroups of populations ( 813,17 ) . this distinction is important because many individuals with metabolic syndrome have diabetes , which itself is known to be associated with depression ( 5 ) . metabolic syndrome has been defined in several ways that involve quantitative anthropometric , clinical , and laboratory measurements ( 1,2 ) . 
for the primary assessment , we chose ncep atp iii ( 1 ) criteria , since these criteria were used in most of the previously reported studies ( 8,9,1113,17 ) .\"\"\"\r\n\r\nmodel_name = 'google/pegasus-pubmed'\r\ntorch_device = 'cuda' if torch.cuda.is_available() else 'cpu'\r\ntokenizer = PegasusTokenizer.from_pretrained(model_name)\r\nmodel = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)\r\n\r\nbatch = tokenizer.prepare_seq2seq_batch([src_text], truncation=True, padding='longest').to(torch_device)\r\ntranslated = model.generate(**batch)\r\ntgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)\r\nprint(tgt_text)\r\n```\r\n\r\n## Expected behaviour\r\nI expect a summary of the input but i received a longer (relative to the input) version of the text, input has length of 929 vs 1129 of predicted summary.\r\nIn particular Pegasus generate new knowledge (bold text) which isn't inside the input text.\r\n\r\nInput:\r\n\r\nalthough the association is modest , it is important because of the increasing prevalence of metabolic syndrome and the effect that depression can have on the ability of patients to successfully make lifestyle changes and comply with medication required for hypertension and dyslipidemia . the association is demonstrated here in a general population to our knowledge for the first time , whereas earlier studies ( table 1 ) used subgroups of populations ( 813,17 ) . this distinction is important because many individuals with metabolic syndrome have diabetes , which itself is known to be associated with depression ( 5 ) . metabolic syndrome has been defined in several ways that involve quantitative anthropometric , clinical , and laboratory measurements ( 1,2 ) . 
for the primary assessment , we chose ncep atp iii ( 1 ) criteria , since these criteria were used in most of the previously reported studies ( 8,9,1113,17 ) .\r\n\r\nOutput:\r\n\r\n['depression is known to be associated with metabolic syndrome, but its association with metabolic syndrome has not been studied in a general population. <n> we examined the association between depression and metabolic syndrome using ncep atp iii criteria in a population - based sample ( n = 3,018 ). <n> **metabolic syndrome was defined as having three or more of the following : body mass index 25 kg / m2, waist circumference 90 cm, and triglyceride 130 mg / dl.** <n> **depression was assessed using the center for epidemiologic studies depression scale ( cesds ).** <n> **multivariate logistic regression was used to estimate odds ratios ( ors ) and 95% confidence intervals ( cis ) for the association between depression and metabolic syndrome.** <n> we found a significant association between depression and metabolic syndrome in a general population. 
<n> **after adjustment for age, sex, race / ethnicity, education, smoking, physical activity, alcohol intake, and body mass index, metabolic syndrome was associated with increased odds of depression ( or = 1.16, 95% ci 1.041.32 ).** <n> **the association was stronger in women than in men.**']\r\n\r\nIs that behaviour correct?", "Output should be < 256 tokens (not characters).\r\nInput should probably be longer (closer to 1024 tokens).\r\nTry copying something from the leftmost column of the [dataset](https://huggingface.co/nlp/viewer/?dataset=scientific_papers&config=pubmed)", "We've now replicated that our pegasus port performs similarly well to the authors implementation on 11 datasets, including arxiv.\r\n\r\n![image](https://user-images.githubusercontent.com/6045025/96278960-21bcb300-0fa4-11eb-8ab3-8fb46c721818.png)\r\n\r\n\r\n[Link to Spreadsheet](\r\nhttps://docs.google.com/spreadsheets/d/1ODfoK-tXOV6TLXDMnujdGLtFhA8oVTy-Cv6Ib6qKgWk/edit#gid=0)", "\r\n\r\nhttps://docs.google.com/spreadsheets/d/1ODfoK-tXOV6TLXDMnujdGLtFhA8oVTy-Cv6Ib6qKgWk/edit#gid=0" ]
1,600
1,602
1,602
CONTRIBUTOR
null
## Environment info - `transformers` version: 3.1.0 - Platform: Linux-5.4.0-47-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.9 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help @sshleifer ## Information Model I am using (Pegasus-Arxiv): The problem arises when using: * [x ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Download the pegasus-arxiv model 2. Use the sample script below 3. You will get the result below: ``` import torch from transformers import PegasusForConditionalGeneration, PegasusTokenizer src_text ="""In this sequel to the phenomenally popular Harry Potter and the Sorcerer’s Stone, Harry returns to Hogwarts School of Witchcraft and Wizardry for his second year after a miserable summer with his Muggle (nonmagical) relatives. Once again, Harry’s school experiences are colored by encounters with genial ghosts and antagonistic teachers, by the rivalry between good-guy Gryffindor House and slimy Slytherin House, and by an ominous mystery to be solved involving Harry’s archenemy, the dark sorcerer Lord Voldemort. Once again, the attraction of Rowling’s traditional British school story is magnified tenfold by the fantasy elements superimposed upon it. The atmosphere Rowling creates is unique; the story whizzes along; Harry is an unassuming and completely sympathetic hero. But, truth to tell, you may feel as if you’ve read it all before. 
Rowling clearly hit on a winning formula with the first Harry Potter book; the second book — though still great fun — feels a tad, well, formulaic.""" model_name = 'google/pegasus-arxiv' torch_device = 'cuda' if torch.cuda.is_available() else 'cpu' tokenizer = PegasusTokenizer.from_pretrained(model_name) model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device) batch = tokenizer.prepare_seq2seq_batch([src_text], truncation=True, padding='longest').to(torch_device) translated = model.generate(**batch) tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) print(tgt_text) ``` ## Expected behavior I expect a clear summary of the text, but I receive a text with no connection to the input, written as a scientific paper: _['this is the first of a series of papers in which we address the question of whether or not the laws of thermodynamics are valid in the limit of infinitely many degrees of freedom. <n> we show that the laws of thermodynamics are valid in the limit of infinitely many degrees of freedom. <n> this is the first of a series of papers in which we address the question of whether or not the laws of thermodynamics are valid in the limit of infinitely many degrees of freedom. <n> we show that the laws of thermodynamics are valid in the limit of infinitely many degrees of freedom. <n> [ theorem]acknowledgement [ theorem]algorithm [ theorem]axiom [ theorem]claim [ theorem]conclusion [ theorem]condition [ theorem]conjecture [ theorem]corollary [ theorem]criterion [ theorem]definition [ theorem]example [ theorem]exercise [ theorem]lemma [ theorem]notation [ theorem]problem [ theorem]proposition [ theorem]remark [ theorem]solution [ theorem]summary this is the first of a series of papers in which we address the question of whether or not the laws of thermodynamics are valid in the limit of infinitely many degrees of freedom.']_ Am I doing something wrong or is it the model? Thanks
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7163/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7163/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7162
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7162/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7162/comments
https://api.github.com/repos/huggingface/transformers/issues/7162/events
https://github.com/huggingface/transformers/issues/7162
702,518,042
MDU6SXNzdWU3MDI1MTgwNDI=
7,162
Create larger summaries by using Summarization models like T5 or Pegasus
{ "login": "MichaelJanz", "id": 66110831, "node_id": "MDQ6VXNlcjY2MTEwODMx", "avatar_url": "https://avatars.githubusercontent.com/u/66110831?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MichaelJanz", "html_url": "https://github.com/MichaelJanz", "followers_url": "https://api.github.com/users/MichaelJanz/followers", "following_url": "https://api.github.com/users/MichaelJanz/following{/other_user}", "gists_url": "https://api.github.com/users/MichaelJanz/gists{/gist_id}", "starred_url": "https://api.github.com/users/MichaelJanz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MichaelJanz/subscriptions", "organizations_url": "https://api.github.com/users/MichaelJanz/orgs", "repos_url": "https://api.github.com/users/MichaelJanz/repos", "events_url": "https://api.github.com/users/MichaelJanz/events{/privacy}", "received_events_url": "https://api.github.com/users/MichaelJanz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Pinging the summarization expert here @sshleifer ", "Can we move [here](https://discuss.huggingface.co/t/summarization-on-long-documents/920/5)\r\n\r\nI am super happy to help, but want to help everybody at the same time. @ibeltagy is working hard on summarizing longer docs, but besides his efforts we don't have what you are looking for.", "Thank you for posting, I will have a look at it." ]
1,600
1,600
1,600
CONTRIBUTOR
null
Hi, I want to create summaries of book-reviews, which should be about half the size of the original text corpus. My goal is to compare how well the summaries are created and if the important content of the original texts are taken over. For this I just had a look at different models and saw, that the models have a very fixed max length (often about 256 tokens), which corresponds in creating only 1-2 sentences, while my goal is for example to have 8 sentences, when the original text corpus has 16. Of course I could split my text corpus in smaller ones, but then I am afraid that the context of the previous sentences gets lost. (Is there maybe a way to pass the context information on into the next predictions, as it could be done in stateful RNNs? Atleast I dont think so) For sure many others already had the issue, since there are tasks as summarizing whole documents or even books, which cannot be done in one sentence. I already [asked](https://stackoverflow.com/questions/63904821/using-transformer-for-text-summarization) on SO, but unfortunately I didnt get an answer there. So I would appreciate any hint on how I could continue on my task. Thank you!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7162/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7162/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7161
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7161/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7161/comments
https://api.github.com/repos/huggingface/transformers/issues/7161/events
https://github.com/huggingface/transformers/pull/7161
702,510,707
MDExOlB1bGxSZXF1ZXN0NDg3Nzc2ODcz
7,161
Add empty random document case to DataCollatorForNextSentencePrediction
{ "login": "choidongyeon", "id": 54914459, "node_id": "MDQ6VXNlcjU0OTE0NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/54914459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/choidongyeon", "html_url": "https://github.com/choidongyeon", "followers_url": "https://api.github.com/users/choidongyeon/followers", "following_url": "https://api.github.com/users/choidongyeon/following{/other_user}", "gists_url": "https://api.github.com/users/choidongyeon/gists{/gist_id}", "starred_url": "https://api.github.com/users/choidongyeon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/choidongyeon/subscriptions", "organizations_url": "https://api.github.com/users/choidongyeon/orgs", "repos_url": "https://api.github.com/users/choidongyeon/repos", "events_url": "https://api.github.com/users/choidongyeon/events{/privacy}", "received_events_url": "https://api.github.com/users/choidongyeon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7161?src=pr&el=h1) Report\n> Merging [#7161](https://codecov.io/gh/huggingface/transformers/pull/7161?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/85ffda96fcadf70d2558ba0a59c84b9f5a2d6f0f?el=desc) will **increase** coverage by `2.43%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7161/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7161?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7161 +/- ##\n==========================================\n+ Coverage 78.44% 80.88% +2.43% \n==========================================\n Files 168 168 \n Lines 32309 32309 \n==========================================\n+ Hits 25346 26134 +788 \n+ Misses 6963 6175 -788 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7161?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7161/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.54% <100.00%> (+0.35%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7161/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7161/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7161/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7161/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.93% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7161/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <0.00%> (-0.36%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7161/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.64% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7161/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.84% <0.00%> (-0.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7161/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.44% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7161/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.20% <0.00%> (+0.27%)` | :arrow_up: |\n| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/7161/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7161?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7161?src=pr&el=footer). Last update [b00cafb...021caf3](https://codecov.io/gh/huggingface/transformers/pull/7161?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks, @LysandreJik!" ]
1,600
1,600
1,600
CONTRIBUTOR
null
This PR proposes a simple fix for an issue in `DataCollatorForNextSentencePrediction` that occurs when a document chosen at random for Token B for the NSP job turns out to be an empty document. The issue that occurs in this case shows up here, where we look for a random starting place to start Token B. https://github.com/huggingface/transformers/blob/b00cafbde575b21ff21f2664e297c50b4c5bb63a/src/transformers/data/data_collator.py#L514-L516 Although this issue should not arise if the data has been cleaned and formatted perfectly, it seems like a good precautionary measure to have in place.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7161/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7161/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7161", "html_url": "https://github.com/huggingface/transformers/pull/7161", "diff_url": "https://github.com/huggingface/transformers/pull/7161.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7161.patch", "merged_at": 1600262111000 }
https://api.github.com/repos/huggingface/transformers/issues/7160
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7160/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7160/comments
https://api.github.com/repos/huggingface/transformers/issues/7160/events
https://github.com/huggingface/transformers/issues/7160
702,489,128
MDU6SXNzdWU3MDI0ODkxMjg=
7,160
distributed launch raise Error
{ "login": "xixiaoyao", "id": 24541791, "node_id": "MDQ6VXNlcjI0NTQxNzkx", "avatar_url": "https://avatars.githubusercontent.com/u/24541791?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xixiaoyao", "html_url": "https://github.com/xixiaoyao", "followers_url": "https://api.github.com/users/xixiaoyao/followers", "following_url": "https://api.github.com/users/xixiaoyao/following{/other_user}", "gists_url": "https://api.github.com/users/xixiaoyao/gists{/gist_id}", "starred_url": "https://api.github.com/users/xixiaoyao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xixiaoyao/subscriptions", "organizations_url": "https://api.github.com/users/xixiaoyao/orgs", "repos_url": "https://api.github.com/users/xixiaoyao/repos", "events_url": "https://api.github.com/users/xixiaoyao/events{/privacy}", "received_events_url": "https://api.github.com/users/xixiaoyao/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @xixiaoyao,\r\n\r\nCould you please copy paste the command you used to run squad so that I can be 100% sure we are running the same command? How did you enable `gradient_checkpointing` ? Did you change the `run_squad.py` script? \r\n\r\nWould be great if you can copy-paste a runnable code snippet here :-) ", "Getting the same issue when using Reformer with pytorch-lightning's distributeddataparallel, although not using one of the official training scripts.", "I am also getting this exact same error with Reformer, but only when I wrap it with DDP and then train across multiple GPUs on the same box. I do not get this error with Longformer. If I don't use DDP with Reformer, then it works fine. Am doing vanilla AR language model training using a custom script. But my script works fine when used on a single GPU with no DDP. The error seems to indicate that there's something about Reformer which DDP does not yet support:\r\n\r\n\"RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases yet.\"\r\n\r\nAm using transformers 3.5.1.", "> I am also getting this exact same error with Reformer, but only when I wrap it with DDP and then train across multiple GPUs on the same box. I do not get this error with Longformer. If I don't use DDP with Reformer, then it works fine. Am doing vanilla AR language model training using a custom script. But my script works fine when used on a single GPU with no DDP. 
The error seems to indicate that there's something about Reformer which DDP does not yet support:\r\n> \r\n> \"RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases yet.\"\r\n> \r\n> Am using transformers 3.5.1.\r\n\r\nsame with @trias702 ", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.", "> > I am also getting this exact same error with Reformer, but only when I wrap it with DDP and then train across multiple GPUs on the same box. I do not get this error with Longformer. If I don't use DDP with Reformer, then it works fine. Am doing vanilla AR language model training using a custom script. But my script works fine when used on a single GPU with no DDP. The error seems to indicate that there's something about Reformer which DDP does not yet support:\r\n> > \"RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes2) Reused parameters in multiple reentrant backward passes. 
For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases yet.\"\r\n> > Am using transformers 3.5.1.\r\n> \r\n> same with @trias702\r\n\r\nsame with @trias702 @yuanenming \r\nany updates? @xixiaoyao " ]
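The failure mode described in the error message can be sketched without PyTorch at all: DDP's reducer registers one autograd hook per parameter and expects each hook to fire exactly once per backward pass, while reentrant gradient checkpointing re-runs the forward pass during backward, so the same hook can fire twice. The following is a minimal illustrative sketch of that double-marking check (the `Reducer` class here is hypothetical, not the real torch C++ reducer):

```python
# Illustrative sketch only -- a toy stand-in for DDP's reducer, which
# raises when the same parameter is marked ready twice in one backward.
class Reducer:
    def __init__(self):
        self.ready = set()  # parameter ids already marked this pass

    def mark_variable_ready(self, param_id):
        # DDP's real reducer performs an equivalent check per parameter.
        if param_id in self.ready:
            raise RuntimeError(
                "Expected to mark a variable ready only once."
            )
        self.ready.add(param_id)


reducer = Reducer()
reducer.mark_variable_ready(0)      # normal backward: hook fires once, fine
try:
    # a reentrant checkpoint re-runs forward inside backward, so the
    # hook for the same parameter can fire a second time:
    reducer.mark_variable_ready(0)
except RuntimeError as e:
    print(e)  # prints: Expected to mark a variable ready only once.
```

This is why the error only appears when DDP and `gradient_checkpointing` are combined: either one alone never fires a parameter's hook more than once per backward pass.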
1,600
1,648
1,614
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: linux - Python version: 3.7 - PyTorch version (GPU?): 1.6.5 - Tensorflow version (GPU?): - Using GPU in script?: yes - Using distributed or parallel set-up in script?: distributed on a single node with 4 gpu cards ### Who can help Longformer/Reformer: @patrickvonplaten --> ## Information Model I am using (LongformerForQuestionAnswering): The problem arises when using: * [ Y ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ Y ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. run run_squad.py with longformer and enable fp16 and gradient_checkpointing. ``` 09/16/2020 06:04:57 - INFO - __main__ - ***** Running training ***** 09/16/2020 06:04:57 - INFO - __main__ - Num examples = 800 09/16/2020 06:04:57 - INFO - __main__ - Num Epochs = 2 09/16/2020 06:04:57 - INFO - __main__ - Instantaneous batch size per GPU = 6 09/16/2020 06:04:57 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 24 09/16/2020 06:04:57 - INFO - __main__ - Gradient Accumulation steps = 1 09/16/2020 06:04:57 - INFO - __main__ - Total optimization steps = 68 09/16/2020 06:04:57 - INFO - __main__ - Starting fine-tuning. Epoch: 0%| | 0/2 [00:00<?, ?it/s/opt/conda/lib/python3.7/site-packages/transformers/modeling_longformer.py:72: UserWarning: This overload of nonzero is deprecated::00<?, ?it/s] nonzero() Consider using one of the following signatures instead: nonzero(*, bool as_tuple) (Triggered internally at /opt/conda/conda-bld/pytorch_1595629403081/work/torch/csrc/utils/python_arg_parser.cpp:766.) 
sep_token_indices = (input_ids == sep_token_id).nonzero() (the same UserWarning is emitted by the other three ranks) Traceback (most recent call last): File "run_squad.py", line 839, in <module> main() File "run_squad.py", line 780, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "run_squad.py", line 213, in train scaled_loss.backward() File "/opt/conda/lib/python3.7/site-packages/torch/tensor.py", line 185, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/opt/conda/lib/python3.7/site-packages/torch/autograd/__init__.py", line 127, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function.
Please make sure model parameters are not shared across multiple concurrent forward-backward passes2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases yet. Exception raised from mark_variable_ready at /opt/conda/conda-bld/pytorch_1595629403081/work/torch/csrc/distributed/c10d/reducer.cpp:453 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x4d (0x7f7ace01177d in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so) frame #1: c10d::Reducer::mark_variable_ready(c10d::Reducer::VariableIndex) + 0x4cd (0x7f7b07e1239d in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #2: c10d::Reducer::autograd_hook(c10d::Reducer::VariableIndex) + 0xeb (0x7f7b07e12bdb in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #3: <unknown function> + 0xabdd16 (0x7f7b07e12d16 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #4: <unknown function> + 0xac4dc6 (0x7f7b07e19dc6 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #5: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&) + 0x4dd (0x7f7b0355693d in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #6: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) + 0x451 (0x7f7b03558401 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #7: torch::autograd::Engine::execute_with_graph_task(std::shared_ptr<torch::autograd::GraphTask> const&, std::shared_ptr<torch::autograd::Node>) 
+ 0x25c (0x7f7b035559fc in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #8: torch::autograd::python::PythonEngine::execute_with_graph_task(std::shared_ptr<torch::autograd::GraphTask> const&, std::shared_ptr<torch::autograd::Node>) + 0x3c (0x7f7b0787fdcc in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #9: torch::autograd::Engine::execute(std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, bool, bool, std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&) + 0x803 (0x7f7b03554e53 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #10: torch::autograd::python::PythonEngine::execute(std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, bool, bool, std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&) + 0x4e (0x7f7b0787fbbe in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #11: THPEngine_run_backward(THPEngine*, _object*, _object*) + 0xa29 (0x7f7b07880889 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #12: _PyMethodDef_RawFastCallKeywords + 0x306 (0x7f7b2b5e2d36 in /opt/conda/bin/python) frame #13: _PyCFunction_FastCallKeywords + 0x21 (0x7f7b2b5e2db1 in /opt/conda/bin/python) frame #14: _PyEval_EvalFrameDefault + 0x52b5 (0x7f7b2b64ea85 in /opt/conda/bin/python) frame #15: _PyEval_EvalCodeWithName + 0x2f9 (0x7f7b2b5922b9 in /opt/conda/bin/python) frame #16: _PyFunction_FastCallKeywords + 0x325 (0x7f7b2b5e2435 in /opt/conda/bin/python) frame #17: _PyEval_EvalFrameDefault + 0x4a59 (0x7f7b2b64e229 in /opt/conda/bin/python) frame #18: _PyEval_EvalCodeWithName + 0x2f9 (0x7f7b2b5922b9 in /opt/conda/bin/python) frame #19: _PyFunction_FastCallDict + 0x1d5 (0x7f7b2b5933e5 in /opt/conda/bin/python) frame #20: 
_PyEval_EvalFrameDefault + 0x1d4a (0x7f7b2b64b51a in /opt/conda/bin/python) frame #21: _PyEval_EvalCodeWithName + 0x2f9 (0x7f7b2b5922b9 in /opt/conda/bin/python) frame #22: _PyFunction_FastCallDict + 0x1d5 (0x7f7b2b5933e5 in /opt/conda/bin/python) frame #23: _PyObject_Call_Prepend + 0x63 (0x7f7b2b5b1b93 in /opt/conda/bin/python) frame #24: PyObject_Call + 0x6e (0x7f7b2b5a495e in /opt/conda/bin/python) frame #25: torch::autograd::PyNode::apply(std::vector<at::Tensor, std::allocator<at::Tensor> >&&) + 0x183 (0x7f7b07888033 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #26: <unknown function> + 0x30d1017 (0x7f7b0355c017 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #27: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&) + 0x1400 (0x7f7b03557860 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #28: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) + 0x451 (0x7f7b03558401 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #29: torch::autograd::Engine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x89 (0x7f7b03550579 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #30: torch::autograd::python::PythonEngine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x4a (0x7f7b0787f99a in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #31: <unknown function> + 0xc819d (0x7f7b0a3b619d in /opt/conda/lib/python3.7/site-packages/torch/lib/../../../.././libstdc++.so.6) frame #32: <unknown function> + 0x76db (0x7f7b2b03b6db in /lib/x86_64-linux-gnu/libpthread.so.0) frame #33: clone + 0x3f (0x7f7b2ad6488f in /lib/x86_64-linux-gnu/libc.so.6) Iteration: 0%| | 0/34 [00:00<?, ?it/s] 
Epoch: 0%| | 0/2 [00:00<?, ?it/s] (the identical RuntimeError and C++ backtrace are then printed by the remaining three DDP ranks) Traceback (most recent call last): File "/opt/conda/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/opt/conda/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/opt/conda/lib/python3.7/site-packages/torch/distributed/launch.py", line 261, in <module> main() File "/opt/conda/lib/python3.7/site-packages/torch/distributed/launch.py", line 257, in main cmd=cmd) subprocess.CalledProcessError:
Command '['/opt/conda/bin/python', '-u', 'run_squad.py', '--local_rank=3', '--model_type', 'longformer', '--do_train', '--model_name_or_path', 'longformer-base-len4K', '--do_eval', '--do_lower_case', '--threads', '30', '--fp16', '--eval_all_checkpoints', '--save_steps', '2500', '--train_file', './data/marco_v1.0/train.json.demo', '--predict_file', './data/marco_v1.0/dev.json', '--per_gpu_train_batch_size', '6', '--learning_rate', '3e-5', '--num_train_epochs', '2', '--max_seq_length', '2048', '--doc_stride', '1024', '--output_dir', 'output/marco_pyramidlocalatt']' returned non-zero exit status 1. ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7160/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7160/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7159
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7159/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7159/comments
https://api.github.com/repos/huggingface/transformers/issues/7159/events
https://github.com/huggingface/transformers/issues/7159
702,483,963
MDU6SXNzdWU3MDI0ODM5NjM=
7,159
I reduce the longformer's attention window from 512 to 256, but train speed not changed
{ "login": "xixiaoyao", "id": 24541791, "node_id": "MDQ6VXNlcjI0NTQxNzkx", "avatar_url": "https://avatars.githubusercontent.com/u/24541791?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xixiaoyao", "html_url": "https://github.com/xixiaoyao", "followers_url": "https://api.github.com/users/xixiaoyao/followers", "following_url": "https://api.github.com/users/xixiaoyao/following{/other_user}", "gists_url": "https://api.github.com/users/xixiaoyao/gists{/gist_id}", "starred_url": "https://api.github.com/users/xixiaoyao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xixiaoyao/subscriptions", "organizations_url": "https://api.github.com/users/xixiaoyao/orgs", "repos_url": "https://api.github.com/users/xixiaoyao/repos", "events_url": "https://api.github.com/users/xixiaoyao/events{/privacy}", "received_events_url": "https://api.github.com/users/xixiaoyao/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Hey @xixiaoyao, the main gain is memory savings for longformer - so I'm not really surprised if you say that training speed does not change when halving the attention window...", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,606
1,606
NONE
null
# ❓ Questions & Help According to the Longformer's design, it should be sensitive for training speed and memory consuming when the local window size is changed. However, I find nothing happened when I half the attention window size (from [512]*12 to [256]*12). Is this phenomenon under expectations? or there is something wrong?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7159/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7159/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7158
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7158/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7158/comments
https://api.github.com/repos/huggingface/transformers/issues/7158/events
https://github.com/huggingface/transformers/issues/7158
702,471,187
MDU6SXNzdWU3MDI0NzExODc=
7,158
OOM Issue when evaluating with Trainer
{ "login": "Skyy93", "id": 20758301, "node_id": "MDQ6VXNlcjIwNzU4MzAx", "avatar_url": "https://avatars.githubusercontent.com/u/20758301?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Skyy93", "html_url": "https://github.com/Skyy93", "followers_url": "https://api.github.com/users/Skyy93/followers", "following_url": "https://api.github.com/users/Skyy93/following{/other_user}", "gists_url": "https://api.github.com/users/Skyy93/gists{/gist_id}", "starred_url": "https://api.github.com/users/Skyy93/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Skyy93/subscriptions", "organizations_url": "https://api.github.com/users/Skyy93/orgs", "repos_url": "https://api.github.com/users/Skyy93/repos", "events_url": "https://api.github.com/users/Skyy93/events{/privacy}", "received_events_url": "https://api.github.com/users/Skyy93/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "We know there is a limitation on the size of dataset for evaluation/prediction because every tensors are concatenated together (this is necessary for distributed training). We plan to work on it using the work done in the `datasets` library to have the predictions cached in temporary files as the prediction loop go. However this is not going to be ready before a few weeks, so in the meantime, I encourage you to feed your dataset to predictions by small slices.", "Okay, thank you. \r\n\r\nCurrently i'm only debugging on my machine so the:\r\n` if logits is not None: preds = logits.argmax(-1) if preds is None else torch.cat((preds, logits.argmax(-1)), dim=0)`\r\n-hack does it for me. But yeah as soon as I train on multiple machines I will feed it in small slices.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "> We know there is a limitation on the size of dataset for evaluation/prediction because every tensors are concatenated together (this is necessary for distributed training). We plan to work on it using the work done in the `datasets` library to have the predictions cached in temporary files as the prediction loop go. However this is not going to be ready before a few weeks, so in the meantime, I encourage you to feed your dataset to predictions by small slices.\r\n\r\nHi, has this problem been solved :)" ]
1,600
1,647
1,606
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.1.0 - Platform: Ubuntu 20.04 - Python version: Python 3.8.2 - PyTorch version (GPU?): 1.5.1 - Tensorflow version (GPU?): - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help Trainer: @sgugger ## Information Model I am using BartForConditionalGeneration: The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) Using the normal Trainer with trainer.evaluate() on the BillSum Dataset The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) BillSum Dataset ## To reproduce Steps to reproduce the behavior: 1. Load Billsum 2. FineTune Bart on Billsum 3. try to evaluate with trainer.evaluate() ` Traceback (most recent call last): File "main.py", line 211, in <module> main() File "main.py", line 182, in main trainer.evaluate() File "/home/mypc/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1157, in evaluate output = self.prediction_loop(eval_dataloader, description="Evaluation") File "/home/mypc/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1244, in prediction_loop preds = logits if preds is None else torch.cat((preds, logits), dim=0) RuntimeError: CUDA out of memory. 
Tried to allocate 2.58 GiB (GPU 0; 7.79 GiB total capacity; 2.72 GiB already allocated; 1.94 GiB free; 4.88 GiB reserved in total by PyTorch) ` ## Expected behavior The Tensors for the logits are getting really big (x, 1024, 52000) and will be concatenated every prediction step in the trainer.py in line 1244: ` if logits is not None: preds = logits if preds is None else torch.cat((preds, logits), dim=0) ` One possible solution is to argmax them while iterating through the prediction loop
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7158/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7158/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7157
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7157/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7157/comments
https://api.github.com/repos/huggingface/transformers/issues/7157/events
https://github.com/huggingface/transformers/pull/7157
702,467,625
MDExOlB1bGxSZXF1ZXN0NDg3NzQwNDcw
7,157
ProphetNet
{ "login": "qiweizhen", "id": 23720856, "node_id": "MDQ6VXNlcjIzNzIwODU2", "avatar_url": "https://avatars.githubusercontent.com/u/23720856?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qiweizhen", "html_url": "https://github.com/qiweizhen", "followers_url": "https://api.github.com/users/qiweizhen/followers", "following_url": "https://api.github.com/users/qiweizhen/following{/other_user}", "gists_url": "https://api.github.com/users/qiweizhen/gists{/gist_id}", "starred_url": "https://api.github.com/users/qiweizhen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qiweizhen/subscriptions", "organizations_url": "https://api.github.com/users/qiweizhen/orgs", "repos_url": "https://api.github.com/users/qiweizhen/repos", "events_url": "https://api.github.com/users/qiweizhen/events{/privacy}", "received_events_url": "https://api.github.com/users/qiweizhen/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "I opened a wrong PR yesterday, please help me check this version, thanks!\r\n@JetRunner @patrickvonplaten ", "@qiweizhen - this looks great! Is this the complete PR? Can we close the \"old\" PR: https://github.com/huggingface/transformers/pull/6187 in favor of this one? ", "@qiweizhen the Integration tests look great! @JetRunner, I think we can take it from here :-) \r\n\r\nI saw that there are models, such \"xprophetnet-large-wiki100-cased-xglue-ntg\" that are both under microsoft and under weizhen - @qiweizhen are these models identical? ", "This PR is complete version, as I rebased this branch to the latest huggingface version with directions of @JetRunner .\r\n\r\nModels under Microsoft are what we actually used. Those under qiweizhen were used to debug. I will delete the models under qiweizhen.\r\n\r\nThank you for your helps @patrickvonplaten @JetRunner ", "> This PR is complete version, as I rebased this branch to the latest huggingface version with directions of @JetRunner .\r\n> \r\n> Models under Microsoft are what we actually used. Those under qiweizhen were used to debug. I will delete the models under qiweizhen.\r\n> \r\n> Thank you for your helps @patrickvonplaten @JetRunner\r\n\r\nAwesome! Thanks a million for your work! We will take it from here :-) ", "@patrickvonplaten Hi, may I ask when could ProphetNet be added into Transformers? Are there any jobs I can co-work to help it be integrated?", "Hey @qiweizhen ,\r\n\r\nSorry for the delay on this. Prophetnet is my no 1 priority next week. It should be merged by the end of next week. You have done your part - I might ping you for some further questions", "@qiweizhen - the integration tests are awesome! Thanks to that it should be quite straightforward to integrate the model", "@qiweizhen - would it be ok for you if we add a `ProphetNetModel` and a `XLMProphetNetModel`, each with their respective tokenizers. 
I think this would be cleaner and is also more in line with `Roberta` and `XLMRoberta` for example. I should be quite easy to do this. I can take care of it - would just be great to have your approval on it :-) ", "> @qiweizhen - would it be ok for you if we add a `ProphetNetModel` and a `XLMProphetNetModel`, each with their respective tokenizers. I think this would be cleaner and is also more in line with `Roberta` and `XLMRoberta` for example. I should be quite easy to do this. I can take care of it - would just be great to have your approval on it :-)\r\n\r\nSure! Thank you!" ]
1,600
1,603
1,603
CONTRIBUTOR
null
# Add [ProphetNet](https://arxiv.org/abs/2001.04063). This PR implements both ProphetNet and XLM-ProphetNet. The model architectures are identical, but each model uses a different tokenizer. ## Description: ProphetNet is a new pre-trained language model for sequence-to-sequence learning with a novel self-supervised objective called future n-gram prediction. ProphetNet is able to predict more future tokens with an n-stream decoder. The original implementation is Fairseq version at [github repo](https://github.com/microsoft/ProphetNet). xProphetNet has the same model structure but is pretrained with wikipedia 100 languages dataset as described in [xGLUE](https://arxiv.org/abs/2004.01401). xGLUE is a benchmark for cross-lingual NLU and NLG tasks. xProphetNet is also served as a baseline model for cross-lingual generation tasks in xGLUE NTG and QG. ## Usage: Take xGLUE NTG task as an example: The cross-lingual pretrained model is finetuned with English news title generation data, but inference with both English and other zero-shot language data. A quick usage is like: ``` from transformers import ProphetNetTokenizer, ProphetNetForConditionalGeneration, ProphetNetConfig model = ProphetNetForConditionalGeneration.from_pretrained('microsoft/xprophetnet-large-wiki100-cased-xglue-ntg') tokenizer = ProphetNetTokenizer.from_pretrained('microsoft/xprophetnet-large-wiki100-cased-xglue-ntg') EN_SENTENCE_TO_QUESTION = "Microsoft Corporation intends to officially end free support for the Windows 7 operating system after January 14, 2020, according to the official portal of the organization. From that day, users of this system will not be able to receive security updates, which could make their computers vulnerable to cyber attacks." RU_SENTENCE_TO_QUESTION = "орпорация Microsoft намерена официально прекратить бесплатную поддержку операционной системы Windows 7 после 14 января 2020 года, сообщается на официальном портале организации . 
С указанного дня пользователи этой системы не смогут получать обновления безопасности, из-за чего их компьютеры могут стать уязвимыми к кибератакам." ZH_SENTENCE_TO_QUESTION = "根据该组织的官方门户网站,微软公司打算在2020年1月14日之后正式终止对Windows 7操作系统的免费支持。从那时起,该系统的用户将无法接收安全更新,这可能会使他们的计算机容易受到网络攻击。" inputs = tokenizer([EN_SENTENCE_TO_QUESTION, RU_SENTENCE_TO_QUESTION, ZH_SENTENCE_TO_QUESTION], padding=True, max_length=256, return_tensors='pt') summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=100, early_stopping=True) print([tokenizer.decode(g) for g in summary_ids]) ``` Model will generate news titles like: ``` ['[SEP] Microsoft to end Windows 7 free support after January 14, 2020[SEP][PAD][PAD][PAD][PAD]', '[SEP] Microsoft намерена прекратить бесплатную поддержку Windows 7 после 14 января 2020 года[SEP]', '[SEP]微软打算终止对Windows 7操作系统的免费支持[SEP][PAD][PAD][PAD][PAD][PAD][PAD]'] ``` ## Released checkpoints: pretrained: ``` microsoft/prophetnet-large-uncased microsoft/xprophetnet-large-wiki100-cased ``` fine-tuned: ``` microsoft/prophetnet-large-uncased-cnndm microsoft/xprophetnet-large-wiki100-cased-xglue-ntg microsoft/xprophetnet-large-wiki100-cased-xglue-qg ``` ## Notes According to the outputs of original fairseq outputs, integration tests for prophetnet include: 1. encoder hidden states, decoder hidden states, model hidden states of pretrained Prophetnet, xProphetnet checkpoints 2. model hidden states of xProphetnet NTG finetuned model 3. Cross-lingual outputs of xProphetNet NTG finetuned model with different beam sizes 4. CNN/DM outputs of ProphetNet CNN/DM finetuned model with different input lengths The model was implemented so all of its parts can be used separately. This means that `ProphetNetEncoder` and `ProphetNetEncoder` can be used as stand-alone models. `ProphetNetForCausalLM` can be instantiated easily from pretrained checkpoints and can be used within the EncoderDecoderModel framework.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7157/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7157/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7157", "html_url": "https://github.com/huggingface/transformers/pull/7157", "diff_url": "https://github.com/huggingface/transformers/pull/7157.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7157.patch", "merged_at": 1603121769000 }
https://api.github.com/repos/huggingface/transformers/issues/7156
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7156/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7156/comments
https://api.github.com/repos/huggingface/transformers/issues/7156/events
https://github.com/huggingface/transformers/pull/7156
702,406,273
MDExOlB1bGxSZXF1ZXN0NDg3Njg4NzA4
7,156
[doc] [testing] improve/expand the Parametrization section
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7156?src=pr&el=h1) Report\n> Merging [#7156](https://codecov.io/gh/huggingface/transformers/pull/7156?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/85ffda96fcadf70d2558ba0a59c84b9f5a2d6f0f?el=desc) will **increase** coverage by `0.97%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7156/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7156?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7156 +/- ##\n==========================================\n+ Coverage 78.44% 79.42% +0.97% \n==========================================\n Files 168 168 \n Lines 32309 32309 \n==========================================\n+ Hits 25346 25662 +316 \n+ Misses 6963 6647 -316 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7156?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7156/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7156/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7156/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7156/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | 
:arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7156/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7156/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.70% <0.00%> (-5.77%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7156/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.68% <0.00%> (-0.65%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7156/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.84% <0.00%> (-0.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7156/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.44% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7156/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `92.04% <0.00%> (+20.43%)` | :arrow_up: |\n| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/7156/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7156?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7156?src=pr&el=footer). Last update [b00cafb...b347e1c](https://codecov.io/gh/huggingface/transformers/pull/7156?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,603
1,600
CONTRIBUTOR
null
Fixed some issues, plus documented `@pytest.mark.parametrize` which is used extensively in the `examples` tests. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7156/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7156/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7156", "html_url": "https://github.com/huggingface/transformers/pull/7156", "diff_url": "https://github.com/huggingface/transformers/pull/7156.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7156.patch", "merged_at": 1600260350000 }
https://api.github.com/repos/huggingface/transformers/issues/7155
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7155/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7155/comments
https://api.github.com/repos/huggingface/transformers/issues/7155/events
https://github.com/huggingface/transformers/pull/7155
702,322,399
MDExOlB1bGxSZXF1ZXN0NDg3NjE5OTEz
7,155
build/eval/gen-card scripts for fsmt
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7155?src=pr&el=h1) Report\n> Merging [#7155](https://codecov.io/gh/huggingface/transformers/pull/7155?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/85ffda96fcadf70d2558ba0a59c84b9f5a2d6f0f?el=desc) will **increase** coverage by `3.11%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7155/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7155?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7155 +/- ##\n==========================================\n+ Coverage 78.44% 81.56% +3.11% \n==========================================\n Files 168 168 \n Lines 32309 32309 \n==========================================\n+ Hits 25346 26352 +1006 \n+ Misses 6963 5957 -1006 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7155?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.10% <0.00%> (-12.67%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/7155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `86.69% <0.00%> (-0.54%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.84% <0.00%> (-0.25%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.18% <0.00%> (ø)` | |\n| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/7155/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7155?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7155?src=pr&el=footer). Last update [85ffda9...cdec73c](https://codecov.io/gh/huggingface/transformers/pull/7155?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Good for me too!" ]
1,600
1,600
1,600
CONTRIBUTOR
null
Here are the various build/eval/gen scripts used for fsmt. We will use them in the future should any updates/corrections need to be done. They are also a great starter for porting other similar architectures. Fixes #7092
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7155/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7155/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7155", "html_url": "https://github.com/huggingface/transformers/pull/7155", "diff_url": "https://github.com/huggingface/transformers/pull/7155.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7155.patch", "merged_at": 1600260087000 }
https://api.github.com/repos/huggingface/transformers/issues/7154
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7154/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7154/comments
https://api.github.com/repos/huggingface/transformers/issues/7154/events
https://github.com/huggingface/transformers/issues/7154
702,318,397
MDU6SXNzdWU3MDIzMTgzOTc=
7,154
Inconsistent parameter naming conventions in ModelConfigs
{ "login": "arkadyark", "id": 4860115, "node_id": "MDQ6VXNlcjQ4NjAxMTU=", "avatar_url": "https://avatars.githubusercontent.com/u/4860115?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arkadyark", "html_url": "https://github.com/arkadyark", "followers_url": "https://api.github.com/users/arkadyark/followers", "following_url": "https://api.github.com/users/arkadyark/following{/other_user}", "gists_url": "https://api.github.com/users/arkadyark/gists{/gist_id}", "starred_url": "https://api.github.com/users/arkadyark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arkadyark/subscriptions", "organizations_url": "https://api.github.com/users/arkadyark/orgs", "repos_url": "https://api.github.com/users/arkadyark/repos", "events_url": "https://api.github.com/users/arkadyark/events{/privacy}", "received_events_url": "https://api.github.com/users/arkadyark/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! As you've mentioned, this would break backwards compatibility which is not something we're willing to do for a name change. Furthermore, these values are named as such to be the same as the original implementation, therefore these will not be changed.\r\n\r\nThank you for opening an issue!" ]
1,600
1,600
1,600
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 (N/A) ### Who can help @LysandreJik ## Information This isn't exactly a bug, but more a suggestion that would be helpful. Across several different models, there are config classes used to initialize a model (e.g. GPT2Config and BertConfig). Some of these configs have parameters that mean the same thing, but use different naming conventions for the parameters, for instance n_layer in GPT2Config is the same, conceptually, as num_hidden_layers for BertConfig. For ease of development, I think it could be worthwhile to make these parameter names consistent across all of the transformer config classes. I'd be happy to help out on this, it seems like it should be a relatively straightforward change, though it may break some backwards compatibility.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7154/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7154/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7153
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7153/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7153/comments
https://api.github.com/repos/huggingface/transformers/issues/7153/events
https://github.com/huggingface/transformers/pull/7153
702,282,521
MDExOlB1bGxSZXF1ZXN0NDg3NTg2NzEy
7,153
[model cards] ported allenai Deep Encoder, Shallow Decoder models
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7153?src=pr&el=h1) Report\n> Merging [#7153](https://codecov.io/gh/huggingface/transformers/pull/7153?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0203ad43bcd0b29423dec6ca1a58ed58300f0d61?el=desc) will **decrease** coverage by `1.35%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7153/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7153?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7153 +/- ##\n==========================================\n- Coverage 80.86% 79.50% -1.36% \n==========================================\n Files 169 169 \n Lines 32293 32293 \n==========================================\n- Hits 26114 25675 -439 \n- Misses 6179 6618 +439 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7153?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <0.00%> (-71.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | 
:arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.68% <0.00%> (-0.65%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.26%)` | :arrow_down: |\n| ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/7153/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7153?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7153?src=pr&el=footer). Last update [0203ad4...b4e0ad4](https://codecov.io/gh/huggingface/transformers/pull/7153?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I wasn't sure about the format for multiple languages in the head of the card, I added:\r\n```\r\nlanguage: en, de\r\n```\r\nIs this correct?\r\n\r\nI couldn't find an example of the same situation. Perhaps this can be documented?\r\n\r\n@julien-c", "I think you want like this\r\n\r\nhttps://s3.amazonaws.com/models.huggingface.co/bert/Helsinki-NLP/opus-mt-en-roa/README.md\r\n", "> I think you want like this\r\n\r\nDo you mean:\r\n\r\n```\r\nlanguage: - en - de\r\n```\r\n?" ]
1,600
1,600
1,600
CONTRIBUTOR
null
Once @LysandreJik merges https://github.com/huggingface/transformers/pull/6940, please merge these cards. These are models ported from https://github.com/jungokasai/deep-shallow/ The models are already on s3 under `allenai` - thank you for moving those from my username, @sshleifer. And added 2 more models from the same author. Fixes #7049
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7153/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7153/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7153", "html_url": "https://github.com/huggingface/transformers/pull/7153", "diff_url": "https://github.com/huggingface/transformers/pull/7153.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7153.patch", "merged_at": 1600358329000 }
https://api.github.com/repos/huggingface/transformers/issues/7152
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7152/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7152/comments
https://api.github.com/repos/huggingface/transformers/issues/7152/events
https://github.com/huggingface/transformers/issues/7152
702,139,883
MDU6SXNzdWU3MDIxMzk4ODM=
7,152
Gradient checkpointing for GPT-2
{ "login": "cemilcengiz", "id": 32267027, "node_id": "MDQ6VXNlcjMyMjY3MDI3", "avatar_url": "https://avatars.githubusercontent.com/u/32267027?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cemilcengiz", "html_url": "https://github.com/cemilcengiz", "followers_url": "https://api.github.com/users/cemilcengiz/followers", "following_url": "https://api.github.com/users/cemilcengiz/following{/other_user}", "gists_url": "https://api.github.com/users/cemilcengiz/gists{/gist_id}", "starred_url": "https://api.github.com/users/cemilcengiz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cemilcengiz/subscriptions", "organizations_url": "https://api.github.com/users/cemilcengiz/orgs", "repos_url": "https://api.github.com/users/cemilcengiz/repos", "events_url": "https://api.github.com/users/cemilcengiz/events{/privacy}", "received_events_url": "https://api.github.com/users/cemilcengiz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Did you run into the issue of imbalanced GPU usage? I have been trying to fine-tune the gpt2-xl model myself on two Titan RTX GPUs (24 GB RAM each) but the imbalanced GPU usage seems to be the main culprit to me. Not sure if gpt2-xl would fit even if I wasn't facing this issue. Why don't they provide some details about the memory requirements for fine-tuning the models. ", "> Did you run into the issue of imbalanced GPU usage? I have been trying to fine-tune the gpt2-xl model myself on two Titan RTX GPUs (24 GB RAM each) but the imbalanced GPU usage seems to be the main culprit to me. Not sure if gpt2-xl would fit even if this I wasn't facing this issue. Why don't they provide some details about the memory requirements for fine-tuning the models.\r\n\r\nNo, my GPU usage was balanced among multiple GPUs.", "For anyone interested, I was able to train the \"gpt2-xl\" model by implementing the gradient checkpointing myself by looking at the suggestions in the previous issues.\r\nSpecifically, I added\r\n\r\n\r\n```\r\n gradient_checkpointing = kwargs.pop(\"gradient_checkpointing\", None)\r\n if gradient_checkpointing is not None:\r\n config_dict[\"gradient_checkpointing\"] = gradient_checkpointing\r\n```\r\n\r\nto https://github.com/huggingface/transformers/blob/eb074af75e2fc64c9ec2f5f80637885c455ee15b/src/transformers/configuration_utils.py#L358\r\n. Additionally, I replaced the Block output with \r\n`return tuple(outputs)`\r\nin https://github.com/huggingface/transformers/blob/eb074af75e2fc64c9ec2f5f80637885c455ee15b/src/transformers/modeling_gpt2.py#L322\r\n. 
Finally, I replaced the block() call with the \r\n ```\r\n if getattr(self.config, \"gradient_checkpointing\", False):\r\n\r\n def create_custom_forward(module):\r\n def custom_forward(*inputs):\r\n return module(*inputs,\r\n layer_past,\r\n attention_mask,\r\n head_mask[i],\r\n encoder_hidden_states,\r\n encoder_attention_mask,\r\n use_cache,\r\n output_attentions)\r\n\r\n return custom_forward\r\n\r\n outputs = torch.utils.checkpoint.checkpoint(\r\n create_custom_forward(block),\r\n hidden_states)\r\n\r\n else:\r\n outputs = block(\r\n hidden_states,\r\n layer_past=layer_past,\r\n attention_mask=attention_mask,\r\n head_mask=head_mask[i],\r\n encoder_hidden_states=encoder_hidden_states,\r\n encoder_attention_mask=encoder_attention_mask,\r\n use_cache=use_cache,\r\n output_attentions=output_attentions,\r\n )\r\n\r\n```\r\nin https://github.com/huggingface/transformers/blob/eb074af75e2fc64c9ec2f5f80637885c455ee15b/src/transformers/modeling_gpt2.py#L611\r\n. To activate the gradient checkpointing, I construct the model by supplying `gradient_checkpointing=True` to the constructor, e.g. \r\n`model = GPT2LMHeadModel.from_pretrained(model_checkpoint_directory, gradient_checkpointing=True)`. \r\nI made similar changes to `openai-gpt` just to test if checkpointing works since I can train it with and without gradient checkpointing due to its small size. I verified that both the normal and checkpointed training yielded the same losses in the same iterations. Moreover, I checked the memory utilization of the GPUs and verified that the checkpointed training needs considerably lower memory. Thus, I concluded that the gradient checkpointing is working. \r\n\r\nBy the way, I could not train the xlarge model even with the gradient checkpointing if I use AdamW as the optimizer, even when the batch size is 1! Therefore, I switched to the RMSprop since its memory requirement is much smaller. 
I went ahead and performed a crude memory benchmark for training \"gpt2-large\" and \"gpt2-xl\" with sequence length = 1024. You can see the results below.\r\n![gpt2-memory](https://user-images.githubusercontent.com/32267027/93687651-ed9ac300-fac7-11ea-8f31-175157a52966.png)\r\n", "This is great. Will be implementing this. So with gradient checkpointing, gpt2-xl only requires about 32.47 GB of RAM with RMSProp? ", "> This is great. Will be implementing this. So with gradient checkpointing, gpt2-xl only requires about 32.47 GB of RAM with RMSProp?\r\n\r\nThat is correct for batch size of 3 according to my manual measurements.", "When I implement this, I get the error : \r\n```\r\nFile \"hyde/src/transformers/modeling_utils.py\", line 912, in from_pretrained\r\nTypeError: __init__() got an unexpected keyword argument 'gradient_checkpointing'\r\n\r\n```\r\nDid you do anything else besides the steps you mentioned? \r\nIn the configuration_utils.py file, I have added the lines you mentioned after this piece of code:\r\n\r\n```\r\nif resolved_config_file is None:\r\n raise EnvironmentError\r\n config_dict = cls._dict_from_json_file(resolved_config_file)\r\n\t\t\tgradient_checkpointing = kwargs.pop(\"gradient_checkpointing\", None)\r\n if gradient_checkpointing is not None:\r\n config_dict[\"gradient_checkpointing\"] = gradient_checkpointing \r\n\r\n```\r\n\r\nThank you so much.\r\n", "Hey, could you check this?", "Hi @cemilcengiz, could you tell me which version of transformers is used here?", "> Hi @cemilcengiz, could you tell me which version of transformers is used here?\r\n\r\n3.1.0", "> When I implement this, I get the error :\r\n> \r\n> ```\r\n> File \"hyde/src/transformers/modeling_utils.py\", line 912, in from_pretrained\r\n> TypeError: __init__() got an unexpected keyword argument 'gradient_checkpointing'\r\n> ```\r\n> \r\n> Did you do anything else besides the steps you mentioned?\r\n> In the configuration_utils.py file, I have added the lines you mentioned 
after this piece of code:\r\n> \r\n> ```\r\n> if resolved_config_file is None:\r\n> raise EnvironmentError\r\n> config_dict = cls._dict_from_json_file(resolved_config_file)\r\n> \t\t\tgradient_checkpointing = kwargs.pop(\"gradient_checkpointing\", None)\r\n> if gradient_checkpointing is not None:\r\n> config_dict[\"gradient_checkpointing\"] = gradient_checkpointing \r\n> ```\r\n> \r\n> Thank you so much.\r\n\r\nNo, that was all I did.", "Can someone please make a PR and add this feature?\r\nI think with this added, finetuning would become a lot better with this library", "Well, apparently this feature was added to the library 2 weeks after I opened the issue. Therefore, I am closing it." ]
1,600
1,608
1,608
NONE
null
# 🚀 Feature request Gradient checkpointing for GPT-2 would be very useful, especially for the larger models in the GPT-2 family. I tried to train the largest release, i.e. "gpt2-xl", on AWS's p3dn.24xlarge instance (has 8 V100 GPUs with 32 GB VRAM on each) with no success. Even making the batch size 1 and using mixed precision training (through torch.cuda.amp) failed with OOM. By the way, I was able to train the second largest model ("gpt2-large") with that configuration (i.e. batch size of 1 and mixed precision enabled).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7152/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7152/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7151
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7151/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7151/comments
https://api.github.com/repos/huggingface/transformers/issues/7151/events
https://github.com/huggingface/transformers/pull/7151
702,127,556
MDExOlB1bGxSZXF1ZXN0NDg3NDU0MjAz
7,151
fix the warning message of overflowed sequence
{ "login": "xiye17", "id": 43059752, "node_id": "MDQ6VXNlcjQzMDU5NzUy", "avatar_url": "https://avatars.githubusercontent.com/u/43059752?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xiye17", "html_url": "https://github.com/xiye17", "followers_url": "https://api.github.com/users/xiye17/followers", "following_url": "https://api.github.com/users/xiye17/following{/other_user}", "gists_url": "https://api.github.com/users/xiye17/gists{/gist_id}", "starred_url": "https://api.github.com/users/xiye17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xiye17/subscriptions", "organizations_url": "https://api.github.com/users/xiye17/orgs", "repos_url": "https://api.github.com/users/xiye17/repos", "events_url": "https://api.github.com/users/xiye17/events{/privacy}", "received_events_url": "https://api.github.com/users/xiye17/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,600
1,600
1,600
CONTRIBUTOR
null
This PR fixes the warning message (of _**PreTrainedTokenizerBase.prepare_for_model**_) emitted when tokenizing a sequence that is longer than the model's specified maximum sequence length. The current warning message shows the length of the first input_ids, which can be incorrect when a pair is being encoded and the first input's length doesn't exceed the limit. The warning message should instead show the overall length of the encoded ids.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7151/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7151/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7151", "html_url": "https://github.com/huggingface/transformers/pull/7151", "diff_url": "https://github.com/huggingface/transformers/pull/7151.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7151.patch", "merged_at": 1600256457000 }
https://api.github.com/repos/huggingface/transformers/issues/7150
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7150/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7150/comments
https://api.github.com/repos/huggingface/transformers/issues/7150/events
https://github.com/huggingface/transformers/pull/7150
702,050,541
MDExOlB1bGxSZXF1ZXN0NDg3MzkzMTA3
7,150
Refactoring the TF activations functions
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7150?src=pr&el=h1) Report\n> Merging [#7150](https://codecov.io/gh/huggingface/transformers/pull/7150?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/52d250f6aa14844024806e5e4dd1c7882bbd8dd5?el=desc) will **increase** coverage by `0.38%`.\n> The diff coverage is `88.52%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7150/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7150?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7150 +/- ##\n==========================================\n+ Coverage 79.12% 79.50% +0.38% \n==========================================\n Files 168 169 +1 \n Lines 32303 32287 -16 \n==========================================\n+ Hits 25560 25671 +111 \n+ Misses 6743 6616 -127 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7150?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <50.00%> (-71.33%)` | :arrow_down: |\n| [src/transformers/activations\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `75.00% <75.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `90.92% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.90% <100.00%> (+0.16%)` | :arrow_up: |\n| 
[src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/7150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `99.28% <100.00%> (+65.24%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <100.00%> (+<0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `94.04% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <100.00%> (-23.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `94.55% <100.00%> (+0.43%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `97.06% <100.00%> (+0.15%)` | :arrow_up: |\n| ... and [30 more](https://codecov.io/gh/huggingface/transformers/pull/7150/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7150?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7150?src=pr&el=footer). Last update [52d250f...a639d90](https://codecov.io/gh/huggingface/transformers/pull/7150?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
This PR groups all the TF activation functions into a common file, like it is done in the PT part.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7150/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7150/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7150", "html_url": "https://github.com/huggingface/transformers/pull/7150", "diff_url": "https://github.com/huggingface/transformers/pull/7150.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7150.patch", "merged_at": 1600254227000 }
https://api.github.com/repos/huggingface/transformers/issues/7149
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7149/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7149/comments
https://api.github.com/repos/huggingface/transformers/issues/7149/events
https://github.com/huggingface/transformers/pull/7149
702,005,793
MDExOlB1bGxSZXF1ZXN0NDg3MzU3Nzgw
7,149
Ignore me!
{ "login": "davidefiocco", "id": 4547987, "node_id": "MDQ6VXNlcjQ1NDc5ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/4547987?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidefiocco", "html_url": "https://github.com/davidefiocco", "followers_url": "https://api.github.com/users/davidefiocco/followers", "following_url": "https://api.github.com/users/davidefiocco/following{/other_user}", "gists_url": "https://api.github.com/users/davidefiocco/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidefiocco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidefiocco/subscriptions", "organizations_url": "https://api.github.com/users/davidefiocco/orgs", "repos_url": "https://api.github.com/users/davidefiocco/repos", "events_url": "https://api.github.com/users/davidefiocco/events{/privacy}", "received_events_url": "https://api.github.com/users/davidefiocco/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Sorry for the mess-up, was working on a fork of mine (facepalm) and accidentally PR to the repo.", "No worries – but potentially open to a PR adding Azure logging if it makes sense on your end!", "Hi @julien-c , thanks for pointing that out!\r\n\r\nI am running `transformers` on AzureML (finetuning on classification tasks mainly), as I find it convenient because of the e.g. automatic compute provisioning and downsizing, experiment tracking, model versioning capabilities Azure offers.\r\n\r\nI don't know what's the best way to contribute with respect to this (i.e. sharing code running finetuning of BERT using AzureML), but here's what I know:\r\n\r\n- Implementing tensorboard-like capabilities in AzureML experiments can be done with trivial modifications to the current `trainer.py`, along the lines of https://github.com/huggingface/transformers/commit/939f6798acb178e3d8388e4597ef7e3dfcef0bf2#diff-cd3b4dd7d09c0b32ff40ccc1840a4da4\r\n\r\n- Those trivial changes allow to see in the AzureML frontend trends like\r\n\r\n![image](https://user-images.githubusercontent.com/4547987/93239437-0ed77880-f783-11ea-803e-f674a378f611.png)\r\n\r\nwhich can be handy when running experiments.\r\n\r\n- To launch one experiment, one would use Azure-specific code like that in https://github.com/microsoft/AzureML-BERT/blob/master/finetune/PyTorch/notebooks/Pretrained-BERT-GLUE.ipynb\r\n\r\nMicrosoft had started a pretty cool repo at https://github.com/microsoft/AzureML-BERT/ with solutions for this using https://github.com/huggingface/transformers/tree/v0.6.2 . What I think is happening though is that `transformers` has evolved quite a lot since then and they would need to refresh those examples.\r\n \r\nI honestly don't know if it'd make sense to PR https://github.com/microsoft/AzureML-BERT (and not clutter the `transformers` with platform-specific code). WDYT? Cheers!\r\n", "The idea above is resurrected here https://github.com/huggingface/transformers/pull/8062 " ]
1,600
1,603
1,600
CONTRIBUTOR
null
I did this PR by mistake, and found out late. Sorry Hugging Face!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7149/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7149/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7149", "html_url": "https://github.com/huggingface/transformers/pull/7149", "diff_url": "https://github.com/huggingface/transformers/pull/7149.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7149.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7148
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7148/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7148/comments
https://api.github.com/repos/huggingface/transformers/issues/7148/events
https://github.com/huggingface/transformers/pull/7148
701,998,762
MDExOlB1bGxSZXF1ZXN0NDg3MzUyNTE1
7,148
add new model prophetnet
{ "login": "qiweizhen", "id": 23720856, "node_id": "MDQ6VXNlcjIzNzIwODU2", "avatar_url": "https://avatars.githubusercontent.com/u/23720856?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qiweizhen", "html_url": "https://github.com/qiweizhen", "followers_url": "https://api.github.com/users/qiweizhen/followers", "following_url": "https://api.github.com/users/qiweizhen/following{/other_user}", "gists_url": "https://api.github.com/users/qiweizhen/gists{/gist_id}", "starred_url": "https://api.github.com/users/qiweizhen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qiweizhen/subscriptions", "organizations_url": "https://api.github.com/users/qiweizhen/orgs", "repos_url": "https://api.github.com/users/qiweizhen/repos", "events_url": "https://api.github.com/users/qiweizhen/events{/privacy}", "received_events_url": "https://api.github.com/users/qiweizhen/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Hi Patrick, I think @qiweizhen added the integration test. should we take from here?", "yes! That sounds great :-) Thanks a lot @qiweizhen :-) " ]
1,600
1,600
1,600
CONTRIBUTOR
null
Add ProphetNet; modify code as suggested (v1); add ProphetNet test files.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7148/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7148/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7148", "html_url": "https://github.com/huggingface/transformers/pull/7148", "diff_url": "https://github.com/huggingface/transformers/pull/7148.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7148.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7147
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7147/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7147/comments
https://api.github.com/repos/huggingface/transformers/issues/7147/events
https://github.com/huggingface/transformers/pull/7147
701,978,482
MDExOlB1bGxSZXF1ZXN0NDg3MzM2ODI0
7,147
Funnel model cards
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7147?src=pr&el=h1) Report\n> Merging [#7147](https://codecov.io/gh/huggingface/transformers/pull/7147?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/52d250f6aa14844024806e5e4dd1c7882bbd8dd5?el=desc) will **increase** coverage by `0.12%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7147/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7147?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7147 +/- ##\n==========================================\n+ Coverage 79.12% 79.24% +0.12% \n==========================================\n Files 168 168 \n Lines 32303 32303 \n==========================================\n+ Hits 25560 25600 +40 \n+ Misses 6743 6703 -40 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7147?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.05% <0.00%> (-63.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |\n| 
[src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `76.19% <0.00%> (-9.78%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.41% <0.00%> (-1.63%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.91% <0.00%> (-0.14%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.44% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.23% <0.00%> (+0.17%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.18% <0.00%> (+0.35%)` | :arrow_up: |\n| ... and [15 more](https://codecov.io/gh/huggingface/transformers/pull/7147/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7147?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7147?src=pr&el=footer). Last update [52d250f...c666fea](https://codecov.io/gh/huggingface/transformers/pull/7147?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
COLLABORATOR
null
Add the model cards for the 10 Funnel Transformer checkpoints
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7147/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7147/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7147", "html_url": "https://github.com/huggingface/transformers/pull/7147", "diff_url": "https://github.com/huggingface/transformers/pull/7147.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7147.patch", "merged_at": 1600180857000 }
https://api.github.com/repos/huggingface/transformers/issues/7146
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7146/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7146/comments
https://api.github.com/repos/huggingface/transformers/issues/7146/events
https://github.com/huggingface/transformers/issues/7146
701,939,120
MDU6SXNzdWU3MDE5MzkxMjA=
7,146
Fine tune with local model raised `torch.nn.modules.module.ModuleAttributeError: 'DataParallel' object has no attribute 'config'`
{ "login": "Abbyyan", "id": 12140508, "node_id": "MDQ6VXNlcjEyMTQwNTA4", "avatar_url": "https://avatars.githubusercontent.com/u/12140508?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Abbyyan", "html_url": "https://github.com/Abbyyan", "followers_url": "https://api.github.com/users/Abbyyan/followers", "following_url": "https://api.github.com/users/Abbyyan/following{/other_user}", "gists_url": "https://api.github.com/users/Abbyyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/Abbyyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Abbyyan/subscriptions", "organizations_url": "https://api.github.com/users/Abbyyan/orgs", "repos_url": "https://api.github.com/users/Abbyyan/repos", "events_url": "https://api.github.com/users/Abbyyan/events{/privacy}", "received_events_url": "https://api.github.com/users/Abbyyan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I haven't used the trainer but I think maybe you just need to change the line `self.total_flos = getattr(model.config, \"total_flos\", 0)` to \r\n\r\n`_model = model.module if hasattr(model, 'module') else model `\r\n`self.total_flos = getattr(_model.config, \"total_flos\", 0)`", "This is a breaking change that was not announced - it broke our production script.", "@sgugger might be interested in this.\r\n\r\n@marrrcin what is the breaking change? This seems to be a bug rather than an intentional change.", "I meat it's a bug that broke our usage of Trainer in production - we're using single-multigpu-machine to train our models and https://github.com/huggingface/transformers/blob/9e68d075a4100906509170498480823e7e61874a/src/transformers/trainer.py#L698 basically breaks it after upgrade to `3.2.0`.\r\nMaybe this is enough for the fix: \r\nhttps://github.com/huggingface/transformers/pull/7384\r\n", "I see! Thanks for opening a PR!", "My bad, I think this bug slipped through with one of the style changes at code review on one of my PRs. Thanks for raising the issue!", "Do you think that it will be possible to move this fix forward and release patch over the weekend? We don't want to have custom package installation in our CI.", "We're having a new release (v3.3.0) on Monday, is that soon enough? The fix will be in it.", "That would be great!", "I am facing the same issue. Whenever I continues my training from certain checkpoint instead of training from scratch. I got the exact same error as OP. 
Should I just wait for the new release on tomorrow?", "Hi,\r\nI tested this PR, this does not fully solve the issue, I am getting these error during evaluation of seq2seq_trainer.\r\n\r\n\r\n File \"finetune_t5_trainer.py\", line 233, in <module>\r\n main()\r\n File \"finetune_t5_trainer.py\", line 188, in main\r\n result = trainer.evaluate(eval_datasets, compute_metrics_fn)\r\n File \"/home/rabeeh/internship/seq2seq/t5_trainer.py\", line 175, in evaluate\r\n prediction_loss_only=True if self.compute_metrics is None else None, # self.compute_metrics[eval_task]\r\n File \"/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/trainer.py\", line 1452, in prediction_loop\r\n metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))\r\n File \"/home/rabeeh/internship/seq2seq/metrics/metrics.py\", line 112, in translation_metrics\r\n pred_str, label_str = decode_pred(pred)\r\n File \"/home/rabeeh/internship/seq2seq/metrics/metrics.py\", line 99, in decode_pred\r\n label_str = tokenizer.batch_decode(pred.label_ids, skip_special_tokens=True)\r\n File \"/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/tokenization_utils_base.py\", line 2966, in batch_decode\r\n for seq in sequences\r\n File \"/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/tokenization_utils_base.py\", line 2966, in <listcomp>\r\n for seq in sequences\r\n File \"/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/tokenization_utils_base.py\", line 3002, in decode\r\n **kwargs,\r\n File \"/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/tokenization_utils.py\", line 732, in _decode\r\n filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens)\r\n File \"/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/tokenization_utils.py\", line 716, in convert_ids_to_tokens\r\n tokens.append(self._convert_id_to_token(index))\r\n File 
\"/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/tokenization_t5.py\", line 245, in _convert_id_to_token\r\n token = self.sp_model.IdToPiece(index)\r\n File \"/opt/conda/envs/internship/lib/python3.7/site-packages/sentencepiece/__init__.py\", line 501, in _batched_func\r\n return _func(self, arg)\r\n File \"/opt/conda/envs/internship/lib/python3.7/site-packages/sentencepiece/__init__.py\", line 494, in _func\r\n raise IndexError('piece id is out of range.')\r\nIndexError: piece id is out of range.\r\n42it [01:31, 2.18s/it]\r\n\r\n", "@rabeehk this seems like a completely different issue. Please open a new issue." ]
1,600
1,606
1,601
NONE
null
# ❓ Questions & Help I've fine-tuned distilgpt2 with `run_language_modeling.py` under my local `output_dir` and want to fine-tune with the model in `output_dir` again, but it raised `torch.nn.modules.module.ModuleAttributeError: 'DataParallel' object has no attribute 'config'`. ## Details <!-- Description of your issue --> I've fine-tuned distilgpt2 with `run_language_modeling.py` as follows: ```shell python3 run_language_modeling.py --output_dir=/home/xxx/gpt_model/transformers/examples/language-modeling/output_dir --model_type=gpt2 --model_name_or_path=distilgpt2 --per_device_train_batch_size=10 --do_train --train_data_file=/home/xxx/gpt_model/data_info/data.txt --block_size=64 --save_steps=100 --overwrite_output_dir ``` It works fine and I can get the fine-tuned model after training with my data `data.txt`. Now I want to fine-tune in the `output_dir`, so I run `run_language_modeling.py` as follows on my `new_data.txt`: ```shell python3 run_language_modeling.py --output_dir=/home/xxx/gpt_model/transformers/examples/language-modeling/output_new_data --model_type=gpt2 --model_name_or_path=/home/xxx/gpt_model/transformers/examples/language-modeling/output_dir/ --per_device_train_batch_size=20 --do_train --train_data_file=/home/xxx/gpt_model/data_info/new_data.txt --block_size=64 --save_steps=1000 --overwrite_output_dir ``` But it raises an exception and exits. The stderr is as follows: ```shell /home/xxx/gpt_model/pytorch/pytorch/torch/optim/lr_scheduler.py:235: UserWarning: Please also save or load the state of the optimizer when saving or loading the scheduler.
warnings.warn(SAVE_STATE_WARNING, UserWarning) Traceback (most recent call last): File "run_language_modeling.py", line 320, in <module> main() File "run_language_modeling.py", line 284, in main trainer.train(model_path=model_path) File "/home/xxx/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/trainer.py", line 683, in train self.total_flos = getattr(model.config, "total_flos", 0) File "/home/xxx/gpt_model/pytorch/pytorch/torch/nn/modules/module.py", line 779, in __getattr__ type(self).__name__, name)) torch.nn.modules.module.ModuleAttributeError: 'DataParallel' object has no attribute 'config' ``` ##### (1) I'm training on multiple GPUs, but I didn't specify a specific GPU to run on. So at first I thought this might be caused by multiple GPUs, and added the following code under https://github.com/huggingface/transformers/blob/52d250f6aa14844024806e5e4dd1c7882bbd8dd5/src/transformers/trainer.py#L641 ```shell if isinstance(model,torch.nn.DataParallel): model = model.module ``` It raises no error but doesn't work. The output is as follows. After that, the process is killed and the return code `echo $?` is `0`. ```shell Epoch: 0it [00:00, ?it/s] /home/lenajin/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/trainer.py:1087: FutureWarning: This method is deprecated, use `Trainer.is_world_process_zero()` instead. warnings.warn("This method is deprecated, use `Trainer.is_world_process_zero()` instead.", FutureWarning) ``` ##### (2) Following https://github.com/huggingface/transformers/issues/1991, I tried to train with the following script, using the GPU whose id is `1`.
```shell CUDA_VISIBLE_DEVICES=1 python3 run_language_modeling.py --output_dir=/home/xxx/gpt_model/transformers/examples/language-modeling/output_dir --model_type=gpt2 --model_name_or_path=distilgpt2 --per_device_train_batch_size=20 --do_train --train_data_file=/home/xxx/gpt_model/data_info/data.txt --block_size=64 --save_steps=100 --overwrite_output_dir ``` It returns with return code `echo $?` = `0`. At first I didn't know what the problem was, but then I changed `--per_device_train_batch_size` from 20 to 10. It seems all is well, and now my `GPU-Util` is about `98%`. Maybe it returned without training due to my limited GPU capacity? It's weird, but at least it works now. Maybe there could be an error message explaining the reason for the return?
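The fix suggested in the comments — unwrapping the `DataParallel` container before reading `.config` — can be sketched without torch installed. The helper and the stand-in classes below are illustrative, not the actual `Trainer` code:

```python
# Minimal sketch of the DataParallel unwrap discussed above.
# nn.DataParallel-style wrappers expose the wrapped model as `.module`,
# so attributes such as `.config` must be read through that attribute.

def unwrap_model(model):
    """Return the underlying model if `model` is a wrapper, else `model` itself."""
    return model.module if hasattr(model, "module") else model

# Illustrative stand-ins for a real model and its DataParallel wrapper:
class FakeConfig:
    total_flos = 123

class FakeModel:
    config = FakeConfig()

class FakeDataParallel:
    def __init__(self, module):
        self.module = module

wrapped = FakeDataParallel(FakeModel())
# Equivalent to the failing Trainer line, but safe for wrapped models:
total_flos = getattr(unwrap_model(wrapped).config, "total_flos", 0)
print(total_flos)  # 123
```

The same `getattr(model, "module", model)` pattern works for `DistributedDataParallel`, which also exposes a `.module` attribute.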
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7146/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7146/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7145
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7145/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7145/comments
https://api.github.com/repos/huggingface/transformers/issues/7145/events
https://github.com/huggingface/transformers/issues/7145
701,920,604
MDU6SXNzdWU3MDE5MjA2MDQ=
7,145
Load Pre-Trained Model Using Docker
{ "login": "idhasson", "id": 55091783, "node_id": "MDQ6VXNlcjU1MDkxNzgz", "avatar_url": "https://avatars.githubusercontent.com/u/55091783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/idhasson", "html_url": "https://github.com/idhasson", "followers_url": "https://api.github.com/users/idhasson/followers", "following_url": "https://api.github.com/users/idhasson/following{/other_user}", "gists_url": "https://api.github.com/users/idhasson/gists{/gist_id}", "starred_url": "https://api.github.com/users/idhasson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/idhasson/subscriptions", "organizations_url": "https://api.github.com/users/idhasson/orgs", "repos_url": "https://api.github.com/users/idhasson/repos", "events_url": "https://api.github.com/users/idhasson/events{/privacy}", "received_events_url": "https://api.github.com/users/idhasson/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Are you sure you have internet access on your machine? Can you do `wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-medium-config.json`?", "I have the same issue. The machine on which I am deploying the container to does have internet access, it can download other models from nltk / spacy", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,607
1,607
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiasts can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> I'm trying to load a pre-trained model using the transformers lib (by Hugging Face): from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') On my local machine, it starts to download the model. But with Docker I get the following: OSError: Model name 'gpt2-medium' was not found in tokenizers model name list (gpt2, gpt2-medium, gpt2-large, gpt2-xl, distilgpt2). We assumed 'gpt2-medium' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url. Any idea why this happens?
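Since the comments point to network access as the likely cause, a common workaround is to bake the model into the image at build time so the container never needs to reach the model hub at run time. A rough sketch (base image, versions, and file paths are illustrative assumptions, not taken from this issue):

```dockerfile
FROM python:3.7-slim

RUN pip install transformers torch

# Download and cache the model at build time, while network access
# is available; the resulting image can then run fully offline.
RUN python -c "from transformers import GPT2Tokenizer, GPT2Model; \
    GPT2Tokenizer.from_pretrained('gpt2-medium'); \
    GPT2Model.from_pretrained('gpt2-medium')"

COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```

Alternatively, `from_pretrained` accepts a local directory, so a model saved with `save_pretrained` can be copied into the image and loaded by path.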
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7145/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7145/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7144
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7144/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7144/comments
https://api.github.com/repos/huggingface/transformers/issues/7144/events
https://github.com/huggingface/transformers/issues/7144
701,896,153
MDU6SXNzdWU3MDE4OTYxNTM=
7,144
unexpected keyword argument 'force_fusions' when running the onnx notebook
{ "login": "pierre-si", "id": 62605527, "node_id": "MDQ6VXNlcjYyNjA1NTI3", "avatar_url": "https://avatars.githubusercontent.com/u/62605527?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pierre-si", "html_url": "https://github.com/pierre-si", "followers_url": "https://api.github.com/users/pierre-si/followers", "following_url": "https://api.github.com/users/pierre-si/following{/other_user}", "gists_url": "https://api.github.com/users/pierre-si/gists{/gist_id}", "starred_url": "https://api.github.com/users/pierre-si/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pierre-si/subscriptions", "organizations_url": "https://api.github.com/users/pierre-si/orgs", "repos_url": "https://api.github.com/users/pierre-si/repos", "events_url": "https://api.github.com/users/pierre-si/events{/privacy}", "received_events_url": "https://api.github.com/users/pierre-si/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[ { "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false } ]
[ "The error might be something else, onnxruntime version 1.4.0 does include the `force_fusions` parameter: \r\n\r\nhttps://github.com/microsoft/onnxruntime/blob/v1.4.0/onnxruntime/python/tools/quantization/quantize.py#L1457", "Indeed,\r\nmy bad, somehow the notebook used the python 3.8 related onnxruntime package and not the one associated with the conda environment from which I was launching jupyter notebook.\r\nIt works perfectly, thanks :+1: " ]
1,600
1,600
1,600
NONE
null
## Environment info - `transformers` version: 3.1.0 - Platform: Linux-5.8.0-1-amd64-x86_64-with-debian-bullseye-sid - Python version: 3.7.8 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ## Information The problem arises when using: * [x] the official example scripts: the notebook "04-onnx-export" * [ ] my own modified scripts: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Run the notebook's cells. First cell installs onnxruntime in version 1.4.0 The "Benchmarking ONNX quantized model" cell calls transformers/convert_graph_to_onnx.py, which calls the "quantize" function with parameter force_fusions. `Error: quantize() got an unexpected keyword argument 'force_fusions'` The parameter was discarded in onnxruntime 1.4.0 Removing the parameter in convert_graph_to_onnx.py solves the issue. ## Expected behavior No error.
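Whether the mismatch came from the library version or (as the comments conclude) from the wrong Python environment being picked up, a defensive pattern for this class of breakage is to pass a keyword only if the installed function still accepts it. The sketch below uses only the standard library; the `quantize` function here is a stand-in, not the real onnxruntime API:

```python
import inspect

def call_with_supported_kwargs(func, *args, **kwargs):
    """Drop keyword arguments that the target function's signature no longer lists."""
    params = inspect.signature(func).parameters
    supported = {k: v for k, v in kwargs.items() if k in params}
    return func(*args, **supported)

# Stand-in for a newer `quantize` that dropped the `force_fusions` parameter:
def quantize(model, per_channel=False):
    return {"model": model, "per_channel": per_channel}

# `force_fusions` is silently filtered out instead of raising TypeError:
result = call_with_supported_kwargs(
    quantize, "model.onnx", per_channel=True, force_fusions=True
)
print(result)  # {'model': 'model.onnx', 'per_channel': True}
```

One caveat: if the target function accepts `**kwargs`, its named parameters won't cover everything it supports, so this filter would drop valid arguments in that case.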
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7144/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7144/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7143
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7143/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7143/comments
https://api.github.com/repos/huggingface/transformers/issues/7143/events
https://github.com/huggingface/transformers/pull/7143
701,865,355
MDExOlB1bGxSZXF1ZXN0NDg3MjQzNTgw
7,143
Tiny typo fix
{ "login": "SidJain1412", "id": 35868478, "node_id": "MDQ6VXNlcjM1ODY4NDc4", "avatar_url": "https://avatars.githubusercontent.com/u/35868478?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SidJain1412", "html_url": "https://github.com/SidJain1412", "followers_url": "https://api.github.com/users/SidJain1412/followers", "following_url": "https://api.github.com/users/SidJain1412/following{/other_user}", "gists_url": "https://api.github.com/users/SidJain1412/gists{/gist_id}", "starred_url": "https://api.github.com/users/SidJain1412/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SidJain1412/subscriptions", "organizations_url": "https://api.github.com/users/SidJain1412/orgs", "repos_url": "https://api.github.com/users/SidJain1412/repos", "events_url": "https://api.github.com/users/SidJain1412/events{/privacy}", "received_events_url": "https://api.github.com/users/SidJain1412/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7143?src=pr&el=h1) Report\n> Merging [#7143](https://codecov.io/gh/huggingface/transformers/pull/7143?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e29c3f1b1104694889afcce4f13f8c842d6e0d6b?el=desc) will **increase** coverage by `0.75%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7143/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7143?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7143 +/- ##\n==========================================\n+ Coverage 79.43% 80.19% +0.75% \n==========================================\n Files 168 168 \n Lines 32303 32303 \n==========================================\n+ Hits 25660 25905 +245 \n+ Misses 6643 6398 -245 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7143?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <ø> (+4.76%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.49% <0.00%> (-41.42%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/7143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.66% <0.00%> (-25.00%)` | :arrow_down: |\n| 
[src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: |\n| [src/transformers/tokenization\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `81.66% <0.00%> (-13.34%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `47.05% <0.00%> (-13.24%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `81.96% <0.00%> (-9.84%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/7143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/7143/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7143?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7143?src=pr&el=footer). Last update [e29c3f1...2e6aeda](https://codecov.io/gh/huggingface/transformers/pull/7143?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
Small typo in the `generate` function within `generation_tf_utils.py`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7143/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7143/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7143", "html_url": "https://github.com/huggingface/transformers/pull/7143", "diff_url": "https://github.com/huggingface/transformers/pull/7143.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7143.patch", "merged_at": 1600172323000 }
https://api.github.com/repos/huggingface/transformers/issues/7142
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7142/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7142/comments
https://api.github.com/repos/huggingface/transformers/issues/7142/events
https://github.com/huggingface/transformers/pull/7142
701,824,275
MDExOlB1bGxSZXF1ZXN0NDg3MjA5NTI5
7,142
Add quotes to paths in MeCab arguments
{ "login": "polm", "id": 286278, "node_id": "MDQ6VXNlcjI4NjI3OA==", "avatar_url": "https://avatars.githubusercontent.com/u/286278?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polm", "html_url": "https://github.com/polm", "followers_url": "https://api.github.com/users/polm/followers", "following_url": "https://api.github.com/users/polm/following{/other_user}", "gists_url": "https://api.github.com/users/polm/gists{/gist_id}", "starred_url": "https://api.github.com/users/polm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polm/subscriptions", "organizations_url": "https://api.github.com/users/polm/orgs", "repos_url": "https://api.github.com/users/polm/repos", "events_url": "https://api.github.com/users/polm/events{/privacy}", "received_events_url": "https://api.github.com/users/polm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,600
1,600
1,600
CONTRIBUTOR
null
Without quotes directories with spaces in them will fail to be processed correctly. This bug was first reported in the fugashi repo [here](https://github.com/polm/fugashi/issues/24). It's not a bug in fugashi - fugashi handles quoted paths correctly, and the relevant dictionary packages quote paths in `MECAB_ARGS` to handle this case. Looks like it was missed when the dictionary handling code was added, since it doesn't use `MECAB_ARGS` but instead builds args up from component parts.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7142/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7142/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7142", "html_url": "https://github.com/huggingface/transformers/pull/7142", "diff_url": "https://github.com/huggingface/transformers/pull/7142.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7142.patch", "merged_at": 1600167891000 }
https://api.github.com/repos/huggingface/transformers/issues/7141
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7141/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7141/comments
https://api.github.com/repos/huggingface/transformers/issues/7141/events
https://github.com/huggingface/transformers/pull/7141
701,811,008
MDExOlB1bGxSZXF1ZXN0NDg3MTk4NTg3
7,141
Adding Fast tokenizers for SentencePiece based tokenizers - Breaking: remove Transfo-XL fast tokenizer
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[ { "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false } ]
[ "Ready for review, the remaining failing tests should be ok after the next `tokenizers` RC release", "Great job! I'm not entirely up to date with everything in transformers, but this looks very nice and clean!", "Ok, yes I'll add documentation. We will probably wait to have a clean documentation in `tokenizers` as well so we can do proper cross-linking." ]
1,600
1,602
1,602
MEMBER
null
This pull request adds the "fast" Rust tokenizer for the SentencePiece-based tokenizers as well. Based on the unreleased v0.9.0 of `tokenizers`. Tokenizers: - [x] Albert - [x] Bart - [x] Bert - [x] Camembert - [x] DistilBert - [x] DPR - [x] Electra - [x] Funnel - [x] GPT2 - [x] LongFormer - [x] LXMert - [x] MBart - [x] MobileBert - [x] OpenAI GPT - [x] Pegasus - [x] Reformer - [x] RetriBert - [x] Roberta - [x] T5 - [x] XLM-Roberta - [x] XLNet Breaking: - Fast version of Transformer-XL (which gave different tokenization results) is removed. Remaining tokenizers without Fast implementations (no fast tokenizers expected in the short/mid-term): - BertJapanese (special python libs for multi-linguality) - CTRL (would require a specific BPE to handle missing merges) - XLM (uses special python libs for multi-linguality) - Flaubert (same as XLM) - Transformer-XL (same as XLM) Other fixes: - Also allows tokenizing Bert Japanese with the Mecab token splitter and fixes https://github.com/huggingface/datasets/issues/665 - deprecation warnings in tokenizer methods are limited to one occurrence per class instance
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7141/reactions", "total_count": 8, "+1": 0, "-1": 0, "laugh": 0, "hooray": 3, "confused": 0, "heart": 3, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7141/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7141", "html_url": "https://github.com/huggingface/transformers/pull/7141", "diff_url": "https://github.com/huggingface/transformers/pull/7141.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7141.patch", "merged_at": 1602149536000 }
https://api.github.com/repos/huggingface/transformers/issues/7140
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7140/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7140/comments
https://api.github.com/repos/huggingface/transformers/issues/7140/events
https://github.com/huggingface/transformers/issues/7140
701,782,360
MDU6SXNzdWU3MDE3ODIzNjA=
7,140
Onnx + TensorRT uses CPU not GPU
{ "login": "agemagician", "id": 6087313, "node_id": "MDQ6VXNlcjYwODczMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/agemagician", "html_url": "https://github.com/agemagician", "followers_url": "https://api.github.com/users/agemagician/followers", "following_url": "https://api.github.com/users/agemagician/following{/other_user}", "gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}", "starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/agemagician/subscriptions", "organizations_url": "https://api.github.com/users/agemagician/orgs", "repos_url": "https://api.github.com/users/agemagician/repos", "events_url": "https://api.github.com/users/agemagician/events{/privacy}", "received_events_url": "https://api.github.com/users/agemagician/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@mfuntowicz - sorry to ping you here :D Do you have a \"high level\" idea why this might not work?", "@agemagician IIUC TensorRT needs a GPU with Tensor cores to work correctly. You're using a P100 in the notebook which doesn't have those. So Onnx will fall back to using CUDAExecutionProvider. Now that itself may be slow because of [this](https://github.com/microsoft/onnxruntime/blob/master/docs/ONNX_Runtime_Perf_Tuning.md#why-is-my-model-running-slower-on-gpu-than-cpu). I'd suggest setting verbose logging level and checking. To enable set the following:\r\n```\r\noptions = SessionOptions()\r\noptions.log_verbosity_level=2\r\noptions.log_severity_level=0\r\n```\r\nThen in the console logs if you see something like\r\n`2020-10-16 20:25:31.386914373 [W:onnxruntime:Default, fallback_cpu_capability.h:140 GetCpuPreferedNodes] Force fallback to CPU execution for node: Gather_9`, or there's a huge list\r\n```\r\n2020-10-16 20:25:31.406073394 [V:onnxruntime:, inference_session.cc:869 TransformGraph] Provider: [CPUExecutionProvider]: [Gather (Gather_9), Unsqueeze (Unsqueeze_10), DynamicQuantizeLinear (237_QuantizeLinear), QAttention (Attention_qua...\r\n```\r\n that indicates that some parts of your model are not GPU compatible because Onnx doesn't have the supported operators.\r\n\r\nIn my case it was happening because I was using a quantized model with the CUDA provider, but the quantized model needs TensorRT to run I believe.", "Thanks @bdalal for your reply. I have tested it also on V100 but I had the same issue.\r\nAnyway, I think the benefits vs troubles, seems to be not worth it for now.\r\nI hope Nvidia, onnx and Pytorch provide a better integration on the future for tensor rt.\r\n" ]
1,600
1,602
1,602
CONTRIBUTOR
null
## Environment info - `transformers` version: 3.1.0 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help @mfuntowicz ## Information Model I am using Bert: The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: https://colab.research.google.com/drive/1mqn99U2NMm-fdw213h0xsqm_uRwRkNzp?usp=sharing ## Expected behavior I am trying to run onnx using tensorrt on the gpu, but it always uses the CPU. I tried installing different versions of onnx and cuda with no luck. Any idea why "TensorrtExecutionProvider" runs on the CPU rather than the GPU, and how to fix it?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7140/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7140/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7139
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7139/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7139/comments
https://api.github.com/repos/huggingface/transformers/issues/7139/events
https://github.com/huggingface/transformers/issues/7139
701,772,380
MDU6SXNzdWU3MDE3NzIzODA=
7,139
generate text function support part of the target inputs
{ "login": "lonelydancer", "id": 548443, "node_id": "MDQ6VXNlcjU0ODQ0Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/548443?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lonelydancer", "html_url": "https://github.com/lonelydancer", "followers_url": "https://api.github.com/users/lonelydancer/followers", "following_url": "https://api.github.com/users/lonelydancer/following{/other_user}", "gists_url": "https://api.github.com/users/lonelydancer/gists{/gist_id}", "starred_url": "https://api.github.com/users/lonelydancer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lonelydancer/subscriptions", "organizations_url": "https://api.github.com/users/lonelydancer/orgs", "repos_url": "https://api.github.com/users/lonelydancer/repos", "events_url": "https://api.github.com/users/lonelydancer/events{/privacy}", "received_events_url": "https://api.github.com/users/lonelydancer/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "hi, did you solve this problem?" ]
1,600
1,648
1,606
NONE
null
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html?highlight=transformer#torch.nn.Transformer In the PyTorch Transformer class, we can see the forward function can take a target sentence; so when I want to generate the target_sentence from the middle, I can pass part of the target_sentence to the model, then generate the rest of the sentence. ## Motivation The generate function is really powerful. Here is my situation: I have the full source sentence and a certain part of the target sentence, then I begin to generate the rest of the target sentence, using beam search etc. But I found the interface does not support part of the target sentence as input. <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7139/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7139/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7138
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7138/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7138/comments
https://api.github.com/repos/huggingface/transformers/issues/7138/events
https://github.com/huggingface/transformers/issues/7138
701,697,006
MDU6SXNzdWU3MDE2OTcwMDY=
7,138
OSError: Can't load weights for 'nlptown/bert-base-multilingual-uncased-sentiment'.
{ "login": "sansanai", "id": 25274898, "node_id": "MDQ6VXNlcjI1Mjc0ODk4", "avatar_url": "https://avatars.githubusercontent.com/u/25274898?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sansanai", "html_url": "https://github.com/sansanai", "followers_url": "https://api.github.com/users/sansanai/followers", "following_url": "https://api.github.com/users/sansanai/following{/other_user}", "gists_url": "https://api.github.com/users/sansanai/gists{/gist_id}", "starred_url": "https://api.github.com/users/sansanai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sansanai/subscriptions", "organizations_url": "https://api.github.com/users/sansanai/orgs", "repos_url": "https://api.github.com/users/sansanai/repos", "events_url": "https://api.github.com/users/sansanai/events{/privacy}", "received_events_url": "https://api.github.com/users/sansanai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello! This problem is because the owner of the `nlptown/bert-base-multilingual-uncased-sentiment` model has not uploaded a TensorFlow version. If you're using pipelines, you would only be able to use this model if you have PyTorch installed on your machine.\r\n\r\n~cc @mfuntowicz for pipelines' next version, it would be great to include the `from_tf` and `from_pt` options as well so that these models may be loaded directly.~ This would require having both PT and TF installed anyway so not great of a workaround.", "Thanks for the info @LysandreJik! Forgot to clear local cache of models after upgrading 3.0.2->3.1.0, thanks again!\r\n\r\n~~However it seems like the default model pull with the pytorch autoclasses are failing as well.~~\r\n```\r\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"nlptown/bert-base-multilingual-uncased-sentiment\")\r\n\t\t\t\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\"nlptown/bert-base-multilingual-uncased-sentiment\")\r\n```\r\n\r\n~~<details>\r\n~~ <summary>~~Full Error Log (I'm assuming the error is from the issues stated above)~~</summary>~~\r\n \r\n---------------------------------------------------------------------------\r\nJSONDecodeError Traceback (most recent call last)\r\n<ipython-input-25-cfac34095092> in <module>\r\n 1 from transformers import AutoTokenizer, AutoModelForSequenceClassification\r\n 2 \r\n----> 3 tokenizer = AutoTokenizer.from_pretrained(\"nlptown/bert-base-multilingual-uncased-sentiment\")\r\n 4 model = AutoModelForSequenceClassification.from_pretrained(\"nlptown/bert-base-multilingual-uncased-sentiment\")\r\n\r\n~/Documents/github/adaptnlp/venv-adaptnlp/lib/python3.6/site-packages/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)\r\n 218 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)\r\n 219 else:\r\n--> 220 
return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)\r\n 221 \r\n 222 raise ValueError(\r\n\r\n~/Documents/github/adaptnlp/venv-adaptnlp/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, *inputs, **kwargs)\r\n 1423 \r\n 1424 \"\"\"\r\n-> 1425 return cls._from_pretrained(*inputs, **kwargs)\r\n 1426 \r\n 1427 @classmethod\r\n\r\n~/Documents/github/adaptnlp/venv-adaptnlp/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)\r\n 1544 if tokenizer_config_file is not None:\r\n 1545 with open(tokenizer_config_file, encoding=\"utf-8\") as tokenizer_config_handle:\r\n-> 1546 init_kwargs = json.load(tokenizer_config_handle)\r\n 1547 saved_init_inputs = init_kwargs.pop(\"init_inputs\", ())\r\n 1548 if not init_inputs:\r\n\r\n/usr/lib/python3.6/json/__init__.py in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)\r\n 297 cls=cls, object_hook=object_hook,\r\n 298 parse_float=parse_float, parse_int=parse_int,\r\n--> 299 parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)\r\n 300 \r\n 301 \r\n\r\n/usr/lib/python3.6/json/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)\r\n 352 parse_int is None and parse_float is None and\r\n 353 parse_constant is None and object_pairs_hook is None and not kw):\r\n--> 354 return _default_decoder.decode(s)\r\n 355 if cls is None:\r\n 356 cls = JSONDecoder\r\n\r\n/usr/lib/python3.6/json/decoder.py in decode(self, s, _w)\r\n 337 \r\n 338 \"\"\"\r\n--> 339 obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n 340 end = _w(s, end).end()\r\n 341 if end != len(s):\r\n\r\n/usr/lib/python3.6/json/decoder.py in raw_decode(self, s, idx)\r\n 355 obj, end = self.scan_once(s, idx)\r\n 356 except StopIteration as err:\r\n--> 357 raise 
JSONDecodeError(\"Expecting value\", s, err.value) from None\r\n 358 return obj, end\r\n\r\n~~JSONDecodeError: Expecting value: line 1 column 1 (char 0)~~\r\n\r\n\r\n~~</details>~~\r\n\r\n\r\n~~Since there already are TF-specific auto classes, should the default auto classes should still be able pull pytorch models even if there is no TF implementation?~~\r\n~~I haven't checked other models in the repo that only have pytorch models, but would it be safe to assume this is the case for all models in the repo that do not have models uploaded for both deep learning frameworks?~~\r\n\r\n~~PS: This is with torch==1.6.0 with CUDA 10.2 installed~~" ]
1,600
1,600
1,600
NONE
null
## Environment info - `transformers` version: 3.1.0 - Platform: Linux-5.4.0-47-generic-x86_64-with-debian-buster-sid - Python version: 3.7.9 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in Question: After installing transformers, I followed the Quick tour document. I typed code as follows: from transformers import pipeline classifier = pipeline('sentiment-analysis', model="nlptown/bert-base-multilingual-uncased-sentiment") After running it, I got this error 2020-09-15 16:41:21.434156: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 Traceback (most recent call last): File "/home/a/.conda/envs/tf23hf/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 580, in from_pretrained raise EnvironmentError OSError During handling of the above exception, another exception occurred: Traceback (most recent call last): File "hf_pipetest.py", line 2, in <module> classifier = pipeline('sentiment-analysis', model='nlptown/bert-base-multilingual-uncased-sentiment') File "/home/a/.conda/envs/tf23hf/lib/python3.7/site-packages/transformers/pipelines.py", line 2629, in pipeline model = model_class.from_pretrained(model, config=config, **model_kwargs) File "/home/a/.conda/envs/tf23hf/lib/python3.7/site-packages/transformers/modeling_tf_auto.py", line 1515, in from_pretrained return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs) File "/home/a/.conda/envs/tf23hf/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 587, in from_pretrained raise EnvironmentError(msg) OSError: Can't load weights for 'nlptown/bert-base-multilingual-uncased-sentiment'. 
Make sure that: - 'nlptown/bert-base-multilingual-uncased-sentiment' is a correct model identifier listed on 'https://huggingface.co/models' - or 'nlptown/bert-base-multilingual-uncased-sentiment' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin. Please, let me know how to solve this problem.. Thanks in advance
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7138/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7138/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7137
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7137/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7137/comments
https://api.github.com/repos/huggingface/transformers/issues/7137/events
https://github.com/huggingface/transformers/issues/7137
701,695,707
MDU6SXNzdWU3MDE2OTU3MDc=
7,137
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
{ "login": "hadifar", "id": 7101287, "node_id": "MDQ6VXNlcjcxMDEyODc=", "avatar_url": "https://avatars.githubusercontent.com/u/7101287?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hadifar", "html_url": "https://github.com/hadifar", "followers_url": "https://api.github.com/users/hadifar/followers", "following_url": "https://api.github.com/users/hadifar/following{/other_user}", "gists_url": "https://api.github.com/users/hadifar/gists{/gist_id}", "starred_url": "https://api.github.com/users/hadifar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hadifar/subscriptions", "organizations_url": "https://api.github.com/users/hadifar/orgs", "repos_url": "https://api.github.com/users/hadifar/repos", "events_url": "https://api.github.com/users/hadifar/events{/privacy}", "received_events_url": "https://api.github.com/users/hadifar/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,606
1,606
NONE
null
## Environment info - `transformers` version: 3.1.0 - Platform: ubuntu 18.04 - Python version: 3.7 - PyTorch version (GPU?): 1.5.1 - Tensorflow version (GPU?): 1.14.0 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ## Information Model I am using: XLM The problem arises when using: When stack two transformes on top of each other and run them on GPUs. (Note that on CPU the following code works fine) ## To reproduce Steps to reproduce the behavior: ``` import torch from torch import nn from transformers import AutoConfig, AutoModel class Model(nn.Module): def __init__(self): super().__init__() config = AutoConfig.from_pretrained('xlm-mlm-tlm-xnli15-1024') self.transformer1 = AutoModel.from_config(config) self.transformer2 = AutoModel.from_config(config) def forward( self, input_ids=None, input_embeds=None, ): outputs = self.transformer1(input_ids) outputs = self.transformer2(inputs_embeds=outputs[0]) return outputs device = "cuda" if torch.cuda.is_available() else "cpu" model = Model().to(device) inps = torch.randint(1,256,[1,256],device=device) model(input_ids=inps) ``` The stack trace: ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-35-5ada78c6ff36> in <module> 34 inps = torch.randint(1, 256, [1, 256], device=device) 35 ---> 36 model(input_ids=inps) /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), <ipython-input-35-5ada78c6ff36> in forward(self, input_ids, input_embeds) 26 print(outputs[0].is_cuda) 27 outputs = outputs[0].to(input_ids.device) ---> 28 outputs = self.transformer2(inputs_embeds=outputs) 29 return outputs 30 /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, 
*input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), /opt/conda/lib/python3.8/site-packages/transformers/modeling_xlm.py in forward(self, input_ids, attention_mask, langs, token_type_ids, position_ids, lengths, cache, head_mask, inputs_embeds) 519 # if src_enc is not None: 520 # assert self.is_decoder --> 521 # assert src_enc.size(0) == bs 522 523 # generate masks RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7137/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7137/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7136
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7136/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7136/comments
https://api.github.com/repos/huggingface/transformers/issues/7136/events
https://github.com/huggingface/transformers/issues/7136
701,682,172
MDU6SXNzdWU3MDE2ODIxNzI=
7,136
BertForMaskedLM Loss function
{ "login": "lonelydancer", "id": 548443, "node_id": "MDQ6VXNlcjU0ODQ0Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/548443?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lonelydancer", "html_url": "https://github.com/lonelydancer", "followers_url": "https://api.github.com/users/lonelydancer/followers", "following_url": "https://api.github.com/users/lonelydancer/following{/other_user}", "gists_url": "https://api.github.com/users/lonelydancer/gists{/gist_id}", "starred_url": "https://api.github.com/users/lonelydancer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lonelydancer/subscriptions", "organizations_url": "https://api.github.com/users/lonelydancer/orgs", "repos_url": "https://api.github.com/users/lonelydancer/repos", "events_url": "https://api.github.com/users/lonelydancer/events{/privacy}", "received_events_url": "https://api.github.com/users/lonelydancer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "torch.nn.CrossEntropyLoss(weight: Optional[torch.Tensor] = None, size_average=None, ignore_index: int = -100, reduce=None, reduction: str = 'mean')" ]
1,600
1,600
1,600
NONE
null
# ❓ Questions & Help https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py 1) In BertForMaskedLM, why doesn't the loss function ignore the non-masked tokens? loss_fct = CrossEntropyLoss() # -100 index = padding token masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) 2) Do I have to set the non-masked tokens to -100? <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7136/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7136/timeline
completed
null
null
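The `-100` ignore-index behavior asked about in the record above can be checked with plain PyTorch, independent of any model. This is a minimal sketch with invented toy logits: positions whose label is set to `-100` contribute nothing to `CrossEntropyLoss` (whose `ignore_index` defaults to `-100`), which is why non-masked tokens must be set to `-100` when computing a masked-LM loss.

```python
import torch
from torch.nn import CrossEntropyLoss

# Toy setup: vocabulary of 5 tokens, 4 positions in the sequence.
logits = torch.tensor([[2.0, 0.1, 0.3, 0.0, 0.5],
                       [0.2, 1.5, 0.1, 0.4, 0.0],
                       [0.0, 0.3, 2.2, 0.1, 0.9],
                       [1.1, 0.2, 0.4, 0.3, 0.1]])
labels = torch.tensor([0, 1, 2, 3])

# Pretend only positions 0 and 2 were masked out for the MLM objective.
# Set every other label to -100 so the loss skips those positions.
masked_labels = labels.clone()
masked_labels[torch.tensor([1, 3])] = -100

loss_fct = CrossEntropyLoss()  # ignore_index defaults to -100
loss_masked = loss_fct(logits, masked_labels)

# Equivalent computation: average the loss over just the kept positions.
loss_manual = loss_fct(logits[[0, 2]], labels[[0, 2]])
print(torch.allclose(loss_masked, loss_manual))  # True
```

So the answer sketched here is yes: with this loss, labels for positions that should not count (non-masked tokens, padding) need to be `-100`.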
https://api.github.com/repos/huggingface/transformers/issues/7135
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7135/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7135/comments
https://api.github.com/repos/huggingface/transformers/issues/7135/events
https://github.com/huggingface/transformers/issues/7135
701,571,628
MDU6SXNzdWU3MDE1NzE2Mjg=
7,135
Loss mask for fine-tuning GPT2LMHeadModel model
{ "login": "zhujl1991", "id": 1834838, "node_id": "MDQ6VXNlcjE4MzQ4Mzg=", "avatar_url": "https://avatars.githubusercontent.com/u/1834838?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhujl1991", "html_url": "https://github.com/zhujl1991", "followers_url": "https://api.github.com/users/zhujl1991/followers", "following_url": "https://api.github.com/users/zhujl1991/following{/other_user}", "gists_url": "https://api.github.com/users/zhujl1991/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhujl1991/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhujl1991/subscriptions", "organizations_url": "https://api.github.com/users/zhujl1991/orgs", "repos_url": "https://api.github.com/users/zhujl1991/repos", "events_url": "https://api.github.com/users/zhujl1991/events{/privacy}", "received_events_url": "https://api.github.com/users/zhujl1991/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It has already been mentioned here https://github.com/huggingface/transformers/issues/2001 (see \"Bug: Padded tokens are not excluded from the loss\" session).\r\nAny plan to fix this?", "Hi GPT-2 has no pad token so you can either introduce new pad token or set the eos toke as pad token\r\n```tokenizer.pad_token_id = tokenizer.eos_token_id```\r\n\r\nand then set the pad tokens in `labels` to -100 which is the default ignore index for `CrossEntropyLoss`\r\n``` labels[labels == self.tokenizer.pad_token_id] = -100```", "> Hi GPT-2 has no pad token so you can either introduce new pad token or set the eos toke as pad token\r\n> `tokenizer.pad_token_id = tokenizer.eos_token_id`\r\n> \r\n> and then set the pad tokens in `labels` to -100 which is the default ignore index for `CrossEntropyLoss`\r\n> ` labels[labels == self.tokenizer.pad_token_id] = -100`\r\n\r\nThanks. Just get aware of the `ignore_index` parameter https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html ", "For fine-tuning the GPT2 model, it's necessary to manually prepend the bos_token and append eos_token to the input, as has been established here: #3311\r\n\r\nSetting pad_token = eos_token and running `labels[labels == pad_token_id] = -100` would therefore be a problem in my opinion, since we would not only ignore padding tokens, but also eos_tokens at the end of sentences for loss computation.\r\n\r\nI solved the problem by first converting the attention_mask to boolean values, and then inverting the boolean attention_mask. 
Then `labels[inv_bool_attention_mask] = -100`, such that padding tokens are ignored, but no eos_tokens.\r\n", "Just to save the hassle for some folk\r\n\r\n```python\r\nfrom transformers import GPT2Tokenizer, GPT2LMHeadModel\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\nmodel = GPT2LMHeadModel.from_pretrained(\"gpt2\")\r\n\r\ntokenizer.pad_token = tokenizer.eos_token\r\n\r\nprint(\"EOS\", tokenizer.convert_tokens_to_ids(tokenizer.eos_token))\r\nprint(\"PAD\", tokenizer.convert_tokens_to_ids(tokenizer.pad_token))\r\n\r\nstring = \"Hello World!\"\r\nstring += tokenizer.eos_token # manually append eos since this is not done by GPT2Tokenizer\r\n# string = tokenizer.bos_token + string # optionally prepend bos (which is actually the same as eos for GPT2Tokenizer)\r\n\r\ntokenized = tokenizer(string, padding=\"max_length\", max_length=10, return_tensors=\"pt\")\r\ninput_ids = tokenized[\"input_ids\"]\r\nattention_mask = tokenized[\"attention_mask\"]\r\n\r\nprint(\"INPUT_IDS BEFORE\")\r\nprint(input_ids)\r\nprint(\"ATTENTION_MASK\")\r\nprint(attention_mask)\r\n\r\ninput_ids[~attention_mask.bool()] = -100 # disable loss for padding tokens (i.e., eos tokens meant for padding)\r\n\r\nprint(\"INPUT_IDS AFTER\")\r\nprint(input_ids)\r\n```\r\n\r\nResult:\r\n\r\n```txt\r\nEOS 50256\r\nPAD 50256\r\nINPUT_IDS BEFORE\r\ntensor([[15496, 2159, 0, 50256, 50256, 50256, 50256, 50256, 50256, 50256]])\r\nATTENTION_MASK\r\ntensor([[1, 1, 1, 1, 0, 0, 0, 0, 0, 0]])\r\nINPUT_IDS AFTER\r\ntensor([[15496, 2159, 0, 50256, -100, -100, -100, -100, -100, -100]])\r\n```" ]
1,600
1,678
1,600
NONE
null
If we use padding for short-sentence fine-tune data, when fine-tuning GPT2LMHeadModel, should we change the code here https://github.com/huggingface/transformers/blob/48ff6d5109d691e3630169962a5052586aaaf659/src/transformers/modeling_gpt2.py#L744 to exclude the loss for padding tokens? @patrickvonplaten @thomwolf
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7135/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7135/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7134
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7134/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7134/comments
https://api.github.com/repos/huggingface/transformers/issues/7134/events
https://github.com/huggingface/transformers/issues/7134
701,518,866
MDU6SXNzdWU3MDE1MTg4NjY=
7,134
evaluate_during_training after each epoch
{ "login": "dianags", "id": 38453268, "node_id": "MDQ6VXNlcjM4NDUzMjY4", "avatar_url": "https://avatars.githubusercontent.com/u/38453268?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dianags", "html_url": "https://github.com/dianags", "followers_url": "https://api.github.com/users/dianags/followers", "following_url": "https://api.github.com/users/dianags/following{/other_user}", "gists_url": "https://api.github.com/users/dianags/gists{/gist_id}", "starred_url": "https://api.github.com/users/dianags/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dianags/subscriptions", "organizations_url": "https://api.github.com/users/dianags/orgs", "repos_url": "https://api.github.com/users/dianags/repos", "events_url": "https://api.github.com/users/dianags/events{/privacy}", "received_events_url": "https://api.github.com/users/dianags/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi there. When using `Trainer`, the evaluation loop is run every `args.eval_steps` (which is more consistent across datasets than an evaluation at the end of each epoch since with a small or a large dataset you would get evaluations that don't have the same meaning). If you want it run at the end of each epoch, you can indeed set it to `number_of_samples/batch_size`.\r\n\r\nI'm not quite sure how this is surprising when the documentation clearly says it runs every `eval_steps` though.", "> Hi there. When using `Trainer`, the evaluation loop is run every `args.eval_steps` (which is more consistent across datasets than an evaluation at the end of each epoch since with a small or a large dataset you would get evaluations that don't have the same meaning). \r\n\r\nHi @sgugger . I see! I've been focused on making the `run_multiple_choice.py` work on my dataset using the default parameters. When running the example to get results on SWAG with 3 epochs, I did see the output of the evaluation loop after each epoch. I didn't understand why the same didn't happen with my dataset. Didn't see the problem from your perspective (be more consistent across datasets) but now that you mention it, it makes sense. Thank you for the explanation!\r\n\r\n> If you want it run at the end of each epoch, you can indeed set it to `number_of_samples/batch_size`. I'm not quite sure how this is surprising when the documentation clearly says it runs every `eval_steps` though.\r\n\r\nIn the `--help` I do see that:\r\n--evaluate_during_training\r\n Run evaluation during training at each logging step.\r\n --eval_steps EVAL_STEPS\r\n Run an evaluation every X steps.\r\n\r\nGuess I got confused because I expected the `--evaluate_during_training` doc to refer to `eval_steps` instead of logging step. It wasn't that straightforward for me that what I need to do is to set `number_of_samples/batch_size`, that's what I meant it would be nice to find in the documentation. It is all clear now though! 
Thank you again :)\r\n\r\n", "Note that we can add support for an evaluation strategy that is at the end of each epoch instead of every n steps. That could make things even clearer :-)\r\n\r\nI'll look into it when I have some time.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,606
1,606
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> Hello! I just migrated from `pytorch_pretrained_bert` to `transformers` (3.1.0) and I am having problems understanding how to make the model evaluate at the end of each epoch. For this purpose, it is recommended to use `--evaluate_during_training` as mentioned in the issue [#4617](https://github.com/huggingface/transformers/issues/4617). Looking at `Trainer` source code the condition to run an evaluation is the following: ```python if self.args.evaluate_during_training and self.global_step % self.args.eval_steps == 0: self.evaluate() ``` I am not quite following the logic of why the evaluation depends on the number of steps and not on the number of epochs. I would appreciate if someone helps me on this. In the same issue mentioned above (#4617) someone said that you should set `--save_steps` to `number_of_samples/batch_size`. If this is true (currently testing if this works on my dataset), shouldn't it be mentioned in the documentation? Thank you! <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7134/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7134/timeline
completed
null
null
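The per-epoch evaluation workaround discussed in the record above (evaluate every `eval_steps`, with `eval_steps` set to the number of update steps in one epoch) can be sketched with simple arithmetic. The sample and batch counts below are hypothetical, and the `global_step % eval_steps == 0` trigger reflects the Trainer behavior quoted in the thread (transformers 3.1.0); later versions added an epoch-based evaluation strategy instead.

```python
import math

# Hypothetical training setup.
num_samples = 1000
batch_size = 64
steps_per_epoch = math.ceil(num_samples / batch_size)  # 16 update steps per epoch

# Evaluate once per epoch by making eval_steps equal to steps_per_epoch.
eval_steps = steps_per_epoch
num_epochs = 3

# Steps at which `global_step % eval_steps == 0` fires an evaluation.
eval_points = [step for step in range(1, steps_per_epoch * num_epochs + 1)
               if step % eval_steps == 0]
print(eval_points)  # [16, 32, 48] -> one evaluation at the end of each epoch
```

With a dataset that divides evenly into batches, this makes the step-based trigger coincide with epoch boundaries, which is exactly the `number_of_samples/batch_size` recipe mentioned in the thread.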
https://api.github.com/repos/huggingface/transformers/issues/7133
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7133/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7133/comments
https://api.github.com/repos/huggingface/transformers/issues/7133/events
https://github.com/huggingface/transformers/pull/7133
701,406,116
MDExOlB1bGxSZXF1ZXN0NDg2ODY2NTM1
7,133
Update README
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7133?src=pr&el=h1) Report\n> Merging [#7133](https://codecov.io/gh/huggingface/transformers/pull/7133?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/90cde2e938638e64a8696a12b79ee5f52364b162?el=desc) will **increase** coverage by `1.87%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7133/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7133?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7133 +/- ##\n==========================================\n+ Coverage 79.62% 81.49% +1.87% \n==========================================\n Files 168 168 \n Lines 32284 32284 \n==========================================\n+ Hits 25706 26310 +604 \n+ Misses 6578 5974 -604 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7133?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/7133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `46.03% <0.00%> (-49.21%)` | :arrow_down: |\n| [src/transformers/tokenization\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnVubmVsLnB5) | `62.79% <0.00%> (-34.89%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/7133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.55% <0.00%> (-34.28%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `70.19% <0.00%> (-23.08%)` | :arrow_down: |\n| 
[src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `81.96% <0.00%> (-9.84%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.46% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.00% <0.00%> (-0.67%)` | :arrow_down: |\n| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/7133/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7133?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7133?src=pr&el=footer). Last update [90cde2e...8faad23](https://codecov.io/gh/huggingface/transformers/pull/7133?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
COLLABORATOR
null
This PR reorganizes the README to reflect the current state of transformers a bit more. Scroll down a bit [here](https://github.com/huggingface/transformers/tree/update_readme) for a preview. Here is the plan I followed: **Introduction:** Expand a bit on the first two sentences to explain a bit more generally what this library is about. The idea is to have content aimed at both researchers/NLP experts and data scientists/engineers. Since it gets longer, we could move it after the top contributors badges. **Online demo:** I think it makes more sense to have this right after the introduction, to show what transformers can do. **Code sample:** Then we switch to how to do it, showcasing two short examples: pipeline and a pretrained model loading/example of use. Let’s not overwhelm a reader with more than that and link to our tutorials for more examples. **Feature list:** This becomes Why use/not-use transformers but the spirit remains the same. **Installation:** Keeping this simple and to the point, linking to the documentation for more in-depth content. **Model architectures:** I've kept the full list with papers but I'm not sure it makes sense here as we also have it in the index of our doc. An alternative would be to show this in terms of tasks and give examples for each one of models supporting them (so key names still appear) with links to the docs. **Learn more:** This is where all the content I removed is linked. **Citation**
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7133/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/7133/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7133", "html_url": "https://github.com/huggingface/transformers/pull/7133", "diff_url": "https://github.com/huggingface/transformers/pull/7133.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7133.patch", "merged_at": 1600272733000 }
https://api.github.com/repos/huggingface/transformers/issues/7132
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7132/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7132/comments
https://api.github.com/repos/huggingface/transformers/issues/7132/events
https://github.com/huggingface/transformers/issues/7132
701,393,639
MDU6SXNzdWU3MDEzOTM2Mzk=
7,132
Add tokenizer file save in convert_graph_to_onnx.py
{ "login": "ierezell", "id": 30974685, "node_id": "MDQ6VXNlcjMwOTc0Njg1", "avatar_url": "https://avatars.githubusercontent.com/u/30974685?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ierezell", "html_url": "https://github.com/ierezell", "followers_url": "https://api.github.com/users/ierezell/followers", "following_url": "https://api.github.com/users/ierezell/following{/other_user}", "gists_url": "https://api.github.com/users/ierezell/gists{/gist_id}", "starred_url": "https://api.github.com/users/ierezell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ierezell/subscriptions", "organizations_url": "https://api.github.com/users/ierezell/orgs", "repos_url": "https://api.github.com/users/ierezell/repos", "events_url": "https://api.github.com/users/ierezell/events{/privacy}", "received_events_url": "https://api.github.com/users/ierezell/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "cc. @mfuntowicz ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,606
1,606
CONTRIBUTOR
null
# 🚀 Feature request Allow saving the associated tokenizer files (to be loaded possibly with the tokenizer library) when running `convert_graph_to_onnx.py` ## Motivation Having the good vocab file next to the good onnx model to avoid confusion and ease the loading process by any other framework (because models and their tokenizers are deeply linked). ## Your contribution With [this comment from tokenizer repo](https://github.com/huggingface/tokenizers/issues/59#issuecomment-610645970) and the [convert_graph_to_onnx.py](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_graph_to_onnx.py) one can simply add L.358 ``` if tokenizer is None: download_vocab_files_for_tokenizer(nlp.tokenizer, model, output) ``` and before `if __name__ == "__main__"` ``` def download_vocab_files_for_tokenizer(tokenizer, model_type, output_path): vocab_files_map = tokenizer.pretrained_vocab_files_map vocab_files = {} for resource in vocab_files_map.keys(): download_location = vocab_files_map[resource][model_type] f_path = path.join(output_path.parent, path.basename(download_location)) urllib.request.urlretrieve(download_location, f_path) ``` I can do a PR for it if the interest is big enough. NB : This is a patch I added in my local lib but feel free to modify it or transform it as you want, it's just to give the idea
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7132/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7132/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7131
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7131/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7131/comments
https://api.github.com/repos/huggingface/transformers/issues/7131/events
https://github.com/huggingface/transformers/pull/7131
701,364,799
MDExOlB1bGxSZXF1ZXN0NDg2ODMzMzMy
7,131
[EncoderDecoderModel] fix indentation error
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You can ignore the failing tests" ]
1,600
1,600
1,600
MEMBER
null
A statement regarding the encoder config init in `EncoderDecoderModel` was incorrectly indented, which might lead to errors in very specific cases.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7131/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7131/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7131", "html_url": "https://github.com/huggingface/transformers/pull/7131", "diff_url": "https://github.com/huggingface/transformers/pull/7131.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7131.patch", "merged_at": 1600197008000 }
https://api.github.com/repos/huggingface/transformers/issues/7130
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7130/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7130/comments
https://api.github.com/repos/huggingface/transformers/issues/7130/events
https://github.com/huggingface/transformers/issues/7130
701,350,166
MDU6SXNzdWU3MDEzNTAxNjY=
7,130
AssertionError with multiple GPU
{ "login": "aclifton314", "id": 53267795, "node_id": "MDQ6VXNlcjUzMjY3Nzk1", "avatar_url": "https://avatars.githubusercontent.com/u/53267795?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aclifton314", "html_url": "https://github.com/aclifton314", "followers_url": "https://api.github.com/users/aclifton314/followers", "following_url": "https://api.github.com/users/aclifton314/following{/other_user}", "gists_url": "https://api.github.com/users/aclifton314/gists{/gist_id}", "starred_url": "https://api.github.com/users/aclifton314/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aclifton314/subscriptions", "organizations_url": "https://api.github.com/users/aclifton314/orgs", "repos_url": "https://api.github.com/users/aclifton314/repos", "events_url": "https://api.github.com/users/aclifton314/events{/privacy}", "received_events_url": "https://api.github.com/users/aclifton314/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Cryptic error, but Trainer master @sgugger might have a hunch", "Mmm it looks like the inputs are not on a GPU from the error. Are you running the base script or a modified version of it?", "@sgugger It isn't one of the run scripts provided by HF but it is a modified forward method from the `GPT2LMHeadModel`. Here is the majority of my script: https://discuss.huggingface.co/t/finetuning-gpt2-with-user-defined-loss/163", "I can't see how the loss is computed, so can't really help. Are you sure you are always setting the tensors on the right devices?", "@sgugger the loss is calculated in an Ngrams model. My guess is that the tensors are probably not set. Is there some thing I need to set on the pytorch tensors being passed to the Ngrams model?", "Any further thoughts on this? I could provide the whole code or whichever pieces are needed. \r\n@sgugger ", "I told you I could not help without seeing how you computed your loss. You did not elaborate on that and I'm not a magician, so I don't have any further thoughts on it. If you share the code you're using I could have more insight, yes.", "@sgugger My apologies. Below is the relevant code that shows the loss being calculated as well as the run script that controls the training:\r\n```python\r\nfrom transformers import GPT2Tokenizer, GPT2LMHeadModel, TrainingArguments, Trainer\r\nimport torch\r\nfrom torch.utils.data import Dataset\r\nimport sys\r\nimport numpy as np\r\n\r\nZERO = sys.float_info.min\r\n\r\nclass GPT2FinetunedWithNgrams(GPT2LMHeadModel):\r\n def __init__(self, config, model_tokenizer=None):\r\n super().__init__(config)\r\n self.tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right')\r\n self.tokenizer.pad_token = self.tokenizer.eos_token\r\n\r\n def eval_sentence(self, sent: str):\r\n vec = self.sentence_vec(\r\n sent) # remove punct, lower case, split on space, prepend \"<s>\", postpend \"</s>\" start and stop tokens. 
Returns list of strings.\r\n last_idx = min(self.max_ngram, len(vec))\r\n\r\n log_prob = 0\r\n for i in range(2, last_idx + 1):\r\n log_prob += np.log(max(ZERO, self.pkatz(vec[0:i]))) # conditional probability with katz backoff\r\n\r\n for i in range(1, len(vec) - last_idx + 1):\r\n j = i + last_idx\r\n log_prob += np.log(max(ZERO, self.pkatz(vec[i:j])))\r\n return log_prob, len(vec)\r\n\r\n def sentence_loss(self, sent: str):\r\n p, l = self.eval_sentence(sent)\r\n return -p\r\n\r\n def generate_text_while_finetuning(self,\r\n input_ids=None,\r\n past=None,\r\n attention_mask=None,\r\n token_type_ids=None,\r\n position_ids=None,\r\n head_mask=None,\r\n inputs_embeds=None,\r\n labels=None,\r\n use_cache=None,\r\n output_attentions=None,\r\n output_hidden_states=None, ):\r\n\r\n transformer_outputs = self.transformer(\r\n input_ids,\r\n past=past,\r\n attention_mask=attention_mask,\r\n token_type_ids=token_type_ids,\r\n position_ids=position_ids,\r\n head_mask=head_mask,\r\n inputs_embeds=inputs_embeds,\r\n use_cache=use_cache,\r\n output_attentions=output_attentions,\r\n output_hidden_states=output_hidden_states,\r\n )\r\n\r\n hidden_states = transformer_outputs[0]\r\n lm_logits = self.lm_head(hidden_states)\r\n outputs = (lm_logits,) + transformer_outputs[1:]\r\n return outputs # (loss), lm_logits, presents, (all hidden_states), (attentions)\r\n\r\n def forward(\r\n self,\r\n input_ids=None,\r\n past=None,\r\n attention_mask=None,\r\n token_type_ids=None,\r\n position_ids=None,\r\n head_mask=None,\r\n inputs_embeds=None,\r\n labels=None,\r\n use_cache=True,\r\n ):\r\n\r\n max_length = input_ids.shape[1] + 50\r\n full_generated_gpt2_ids = self.generate(input_ids=input_ids,\r\n max_length=max_length,\r\n is_finetuning_current_model=True,\r\n attention_mask=attention_mask,\r\n pad_token_id=50256,\r\n do_sample=True,\r\n top_k=50,\r\n top_p=0.95)\r\n\r\n decoded_gen_samples = self.tokenizer.batch_decode(full_generated_gpt2_ids, skip_special_tokens=True)\r\n tmp_losses 
= [self.sentence_loss(decoded_sample) for decoded_sample in decoded_gen_samples]\r\n losses = torch.tensor(tmp_losses, requires_grad=True)\r\n loss = losses.mean()\r\n return (loss,)\r\n\r\n\r\n##The code below is the run script using Trainer\r\nclass MyDataset(Dataset):\r\n def __init__(self, csv_file: str):\r\n self.df = pd.read_csv(csv_file, encoding='ISO-8859-1')\r\n\r\n def __len__(self):\r\n return len(self.df)\r\n\r\n def __getitem__(self, idx):\r\n if torch.is_tensor(idx):\r\n idx = idx.tolist()\r\n text = self.df.iloc[idx, 1]\r\n return text\r\n\r\ndef my_data_collator(dataset_samples_list):\r\n tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right')\r\n tokenizer.pad_token = tokenizer.eos_token\r\n\r\n encoded_results = tokenizer(dataset_samples_list, padding=True, truncation=True, return_tensors='pt', return_attention_mask=True)\r\n\r\n batch = {}\r\n batch['input_ids'] = torch.stack([result for result in encoded_results['input_ids']])\r\n batch['past'] = None\r\n batch['attention_mask'] = torch.stack([result for result in encoded_results['attention_mask']])\r\n batch['position_ids'] = None\r\n batch['head_mask'] = None\r\n batch['inputs_embeds'] = None\r\n batch['labels'] = None\r\n batch['use_cache'] = True\r\n return batch\r\n\r\ndataset_train = MyDataset('/path/to/train_dataset.csv')\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir='/path/to/out',\r\n do_train=True,\r\n per_device_train_batch_size=64,\r\n logging_dir='/path/to/dir',\r\n max_steps=300000\r\n)\r\n\r\nmodel = GPT2FinetunedWithNgrams.from_pretrained('gpt2')\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=my_data_collator,\r\n train_dataset=dataset_train\r\n)\r\ntrainer.train()\r\ntrainer.save_model('/path/to/model_save_dir')\r\n```\r\n\r\nThe `is_finetuning_current_model=True` flag in `self.generate` in the `forward` method I had to include to overcome a recursion error (#6105).", "You are using numpy to compute your loss. 
This can't work in PyTorch since it then won't be able to properly compute the gradients of your parameters.", "Ok, I think I see what you are saying. `eval_sentence` needs to rely on pytorch to make its calculations. I can't take the numpy output and convert it to a pytorch tensor in the `forward` method. Is this correct?", "No you can't. PyTorch needs to see operations applied to your tensors to be able to compute gradients properly.", "Ok. I'll rewrite my loss in pytorch and check to see if that resolves the issue with multiple gpu. Just curious, would the above code work if I were running on a machine with a single gpu? Because I've run it on a single gpu machine (with less training steps and a smaller batch size) and there were no issues. ", "There was no error because the tensors were set on the only GPU you had when back from numpy but the gradients were still wrong (basically everything that happened before the numpy part was wiped out).", "@sgugger That is very helpful and would possibly explain why my first training test didn't seem to learn.\r\n\r\nLooking at the loss that is posted (i.e. 
`sentence_loss()` and `eval_sentence()`) if I change `log_prob` to a pytorch tensor, is there a flag that needs to be set or anything in order for the gradients to be calculated correctly?", "I went ahead and tried the updated loss on a machine with 3 GPU:\r\n```python\r\nfrom transformers import GPT2Tokenizer, GPT2LMHeadModel, TrainingArguments, Trainer\r\nimport torch\r\nfrom torch.utils.data import Dataset\r\nimport sys\r\nimport pandas as pd\r\n#import numpy as np\r\n\r\nZERO = sys.float_info.min\r\nZERO_PT = torch.tensor(ZERO)\r\n\r\nclass GPT2FinetunedWithNgrams(GPT2LMHeadModel):\r\n def __init__(self, config, model_tokenizer=None):\r\n super().__init__(config)\r\n self.tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right')\r\n self.tokenizer.pad_token = self.tokenizer.eos_token\r\n\r\n def eval_sentence(self, sent: str):\r\n vec = self.sentence_vec(\r\n sent) # remove punct, lower case, split on space, prepend \"<s>\", postpend \"</s>\" start and stop tokens. Returns list of strings.\r\n last_idx = min(self.max_ngram, len(vec))\r\n\r\n log_prob = 0\r\n for i in range(2, last_idx + 1):\r\n #log_prob += np.log(max(ZERO, self.pkatz(vec[0:i]))) # conditional probability with katz backoff\r\n log_prob += torch.log(max(ZERO_PT, self.pkatz(vec[0:i])))\r\n\r\n for i in range(1, len(vec) - last_idx + 1):\r\n j = i + last_idx\r\n #log_prob += np.log(max(ZERO, self.pkatz(vec[i:j])))\r\n log_prob += torch.log(max(ZERO_PT, self.pkatz(vec[i:j])))\r\n return log_prob, len(vec)\r\n\r\n def sentence_loss(self, sent: str):\r\n p, l = self.eval_sentence(sent)\r\n return -p\r\n\r\n def generate_text_while_finetuning(self,\r\n input_ids=None,\r\n past=None,\r\n attention_mask=None,\r\n token_type_ids=None,\r\n position_ids=None,\r\n head_mask=None,\r\n inputs_embeds=None,\r\n labels=None,\r\n use_cache=None,\r\n output_attentions=None,\r\n output_hidden_states=None, ):\r\n transformer_outputs = self.transformer(\r\n input_ids,\r\n past=past,\r\n 
attention_mask=attention_mask,\r\n token_type_ids=token_type_ids,\r\n position_ids=position_ids,\r\n head_mask=head_mask,\r\n inputs_embeds=inputs_embeds,\r\n use_cache=use_cache,\r\n output_attentions=output_attentions,\r\n output_hidden_states=output_hidden_states,\r\n )\r\n hidden_states = transformer_outputs[0]\r\n lm_logits = self.lm_head(hidden_states)\r\n outputs = (lm_logits,) + transformer_outputs[1:]\r\n return outputs # (loss), lm_logits, presents, (all hidden_states), (attentions)\r\n\r\n\r\n def forward(\r\n self,\r\n input_ids=None,\r\n past=None,\r\n attention_mask=None,\r\n token_type_ids=None,\r\n position_ids=None,\r\n head_mask=None,\r\n inputs_embeds=None,\r\n labels=None,\r\n use_cache=True,\r\n ):\r\n\r\n max_length = input_ids.shape[1] + 50\r\n full_generated_gpt2_ids = self.generate(input_ids=input_ids,\r\n max_length=max_length,\r\n is_finetuning_current_model=True,\r\n attention_mask=attention_mask,\r\n pad_token_id=50256,\r\n do_sample=True,\r\n top_k=50,\r\n top_p=0.95)\r\n\r\n decoded_gen_samples = self.tokenizer.batch_decode(full_generated_gpt2_ids, skip_special_tokens=True)\r\n tmp_losses = [self.sentence_loss(decoded_sample) for decoded_sample in decoded_gen_samples]\r\n losses = torch.stack(tmp_losses)\r\n loss = losses.mean()\r\n return (loss,)\r\n\r\n\r\n##The code below is the run script.\r\nclass MyDataset(Dataset):\r\n def __init__(self, csv_file: str):\r\n self.df = pd.read_csv(csv_file, encoding='ISO-8859-1')\r\n\r\n def __len__(self):\r\n return len(self.df)\r\n\r\n def __getitem__(self, idx):\r\n if torch.is_tensor(idx):\r\n idx = idx.tolist()\r\n text = self.df.iloc[idx, 1]\r\n return text\r\n\r\ndef my_data_collator(dataset_samples_list):\r\n tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right')\r\n tokenizer.pad_token = tokenizer.eos_token\r\n\r\n encoded_results = tokenizer(dataset_samples_list, padding=True, truncation=True, return_tensors='pt', return_attention_mask=True)\r\n\r\n batch = {}\r\n 
batch['input_ids'] = torch.stack([result for result in encoded_results['input_ids']])\r\n batch['past'] = None\r\n batch['attention_mask'] = torch.stack([result for result in encoded_results['attention_mask']])\r\n batch['position_ids'] = None\r\n batch['head_mask'] = None\r\n batch['inputs_embeds'] = None\r\n batch['labels'] = None\r\n batch['use_cache'] = True\r\n return batch\r\n\r\n\r\ndataset_train = MyDataset('/path/to/train_dataset.csv')\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir='/path/to/out',\r\n do_train=True,\r\n per_device_train_batch_size=64,\r\n logging_dir='/path/to/dir',\r\n max_steps=300000\r\n)\r\n\r\nmodel = GPT2FinetunedWithNgrams.from_pretrained('gpt2')\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=my_data_collator,\r\n train_dataset=dataset_train\r\n)\r\ntrainer.train()\r\ntrainer.save_model('/path/to/model_save_dir')\r\n```\r\n\r\nHowever I am still getting this error:\r\n```python\r\nTraceback (most recent call last):\r\n File \"run_finetune_gpt2.py\", line 180, in <module>\r\n main()\r\n File \"run_finetune_gpt2.py\", line 165, in main\r\n trainer.train()\r\n File \"/path/to/venvs/my-venv/lib/python3.6/site-packages/transformers/trainer.py\", line 499, in train\r\n tr_loss += self._training_step(model, inputs, optimizer)\r\n File \"/path/to/venvs/my-venv/lib/python3.6/site-packages/transformers/trainer.py\", line 622, in _training_step\r\n outputs = model(**inputs)\r\n File \"/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py\", line 156, in forward\r\n return self.gather(outputs, self.output_device)\r\n File \"/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py\", line 168, in gather\r\n return gather(outputs, output_device, dim=self.dim)\r\n File 
\"/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py\", line 68, in gather\r\n res = gather_map(outputs)\r\n File \"/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py\", line 63, in gather_map\r\n return type(out)(map(gather_map, zip(*outputs)))\r\n File \"/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py\", line 55, in gather_map\r\n return Gather.apply(target_device, dim, *outputs)\r\n File \"/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/parallel/_functions.py\", line 54, in forward\r\n assert all(map(lambda i: i.is_cuda, inputs))\r\nAssertionError\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,607
1,607
NONE
null
## System Info Red Hat Server 7.7 Pytorch: 1.6.0 Transformers: 3.0.2 Python: 3.7.6 Number of GPU: 4 ## Question I am trying to finetune a GPT2 model using `Trainer` with multiple GPU installed on my machine. However, I get the following error: ```python Traceback (most recent call last): File "run_finetune_gpt2.py", line 158, in <module> main() File "run_finetune_gpt2.py", line 145, in main trainer.train() File "/path/to/venvs/my-venv/lib/python3.6/site-packages/transformers/trainer.py", line 499, in train tr_loss += self._training_step(model, inputs, optimizer) File "/path/to/venvs/my-venv/lib/python3.6/site-packages/transformers/trainer.py", line 622, in _training_step outputs = model(**inputs) File "/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 156, in forward return self.gather(outputs, self.output_device) File "/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 168, in gather return gather(outputs, output_device, dim=self.dim) File "/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather res = gather_map(outputs) File "/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map return type(out)(map(gather_map, zip(*outputs))) File "/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 55, in gather_map return Gather.apply(target_device, dim, *outputs) File "/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 54, in forward assert all(map(lambda i: i.is_cuda, inputs)) AssertionError wandb: Program failed with code 1. Press ctrl-c to abort syncing. 
wandb: You can sync this run to the cloud by running: wandb: wandb sync wandb/dryrun-20200914_134757-1sih3p0q ``` Any ideas about what might be going on? Thanks in advance!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7130/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7130/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7129
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7129/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7129/comments
https://api.github.com/repos/huggingface/transformers/issues/7129/events
https://github.com/huggingface/transformers/pull/7129
701,348,807
MDExOlB1bGxSZXF1ZXN0NDg2ODE5OTI5
7,129
[WIP RAG] Finalize RAG parallel
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "https://github.com/huggingface/transformers/blob/60c8defa01049b559933ad4e2ab3cc32c8b7ef27/src/transformers/tokenization_rag.py#L58\r\n\r\nHello, thank you for the great work! It seems like the first argument passed to the method, `self`, needs to be deleted.", "Closing this PR as it served its purpose. Continue PR together with @ola13 here: https://github.com/huggingface/transformers/pull/6813" ]
1,600
1,651
1,600
MEMBER
null
finalizing RAG in parallel to https://github.com/huggingface/transformers/pull/6813
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7129/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7129/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7129", "html_url": "https://github.com/huggingface/transformers/pull/7129", "diff_url": "https://github.com/huggingface/transformers/pull/7129.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7129.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7128
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7128/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7128/comments
https://api.github.com/repos/huggingface/transformers/issues/7128/events
https://github.com/huggingface/transformers/pull/7128
701,333,692
MDExOlB1bGxSZXF1ZXN0NDg2ODA3MDU3
7,128
Fix the HF logger
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think the issue stems from the fact that this example is not relying on the base `transformers` logging. The logger is defined by the `__name__` from which you initialize it, and when you're initializing the logger within `transformers` you're doing the following:\r\n\r\n```py\r\n# file_utils.py\r\nfile_utils_logger = logging.get_logger(__name__) # initializes the logger with `transformers.file_utils`\r\n```\r\n\r\n```py\r\n# modeling_utils.py\r\nmodeling_utils_logger = logging.get_logger(__name__) # initializes the logger with `transformers.modeling_utils`\r\n```\r\n\r\nTherefore when you're getting the base `transformers` logger and changing its level, you're also changing all these loggers' levels.\r\n\r\n```py\r\nfile_utils_logger.getEffectiveLevel() # 30\r\nmodeling_utils_logger.getEffectiveLevel() # 30\r\n\r\nlogger = logging.get_logger()\r\nlogger.setLevel(logging.INFO)\r\n\r\nfile_utils_logger.getEffectiveLevel() # 20\r\nmodeling_utils_logger.getEffectiveLevel() # 20\r\n```\r\n\r\nHowever, for the `run_tf_ner.py` script, the `__name__` is not dependent on `transformers.[module_name]`, but is instead `__main__`. When you're updating the root HF logger's level, it's therefore not updating that logger level. You can run the following script to see what happens:\r\n\r\n```py\r\nfrom transformers import logging\r\n\r\nfile_utils_logger = logging.get_logger('transformers.file_utils')\r\nmodeling_utils_logger = logging.get_logger('transformers.modeling_utils')\r\nner_script = logging.get_logger('__main__')\r\n\r\nprint(file_utils_logger.getEffectiveLevel(), modeling_utils_logger.getEffectiveLevel(), ner_script.getEffectiveLevel())\r\n# 30 30 30\r\n\r\nmain = logging.get_logger()\r\nmain.setLevel(logging.INFO)\r\n\r\nprint(file_utils_logger.getEffectiveLevel(), modeling_utils_logger.getEffectiveLevel(), ner_script.getEffectiveLevel())\r\n# 20 20 30\r\n```\r\n\r\nI think the same thing is happening with handlers. What do you think?", "You are totally right. 
I fully reverted the changes in `logging.py` and replaced all the missing usage of the HF wrapper in the lib.", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7128?src=pr&el=h1) Report\n> Merging [#7128](https://codecov.io/gh/huggingface/transformers/pull/7128?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/52d250f6aa14844024806e5e4dd1c7882bbd8dd5?el=desc) will **increase** coverage by `1.73%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7128/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7128?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7128 +/- ##\n==========================================\n+ Coverage 79.12% 80.86% +1.73% \n==========================================\n Files 168 168 \n Lines 32303 32303 \n==========================================\n+ Hits 25560 26121 +561 \n+ Misses 6743 6182 -561 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7128?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2x4bWVydC5weQ==) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `90.76% <100.00%> (+20.74%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `94.11% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | 
`18.53% <0.00%> (-75.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.77% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.64% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <0.00%> (-0.36%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.84% <0.00%> (-0.25%)` | :arrow_down: |\n| ... and [17 more](https://codecov.io/gh/huggingface/transformers/pull/7128/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7128?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7128?src=pr&el=footer). Last update [52d250f...977b295](https://codecov.io/gh/huggingface/transformers/pull/7128?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "It should be ok from my side. @LysandreJik you can review whenever you want." ]
1,600
1,601
1,601
CONTRIBUTOR
null
Hello. The current HF logger does not behave as expected. As an example let's take the `run_tf_ner.py` script. With the following header: ``` import logging import os from dataclasses import dataclass, field from importlib import import_module from typing import Dict, List, Optional, Tuple import numpy as np from seqeval.metrics import classification_report, f1_score, precision_score, recall_score from transformers import logging as hf_logging handler = logging.StreamHandler() formatter = logging.Formatter('[%(levelname)s|%(filename)s:%(lineno)s] %(asctime)s >> %(message)s') handler.setFormatter(formatter) logger = hf_logging.get_logger(__name__) logger.handlers.clear() logger.addHandler(handler) hf_logging.enable_propagation() hf_logging.set_verbosity_info() hf_logging.enable_default_handler() from transformers import ( AutoConfig, AutoTokenizer, EvalPrediction, HfArgumentParser, TFAutoModelForTokenClassification, TFTrainer, TFTrainingArguments, ) from utils_ner import Split, TFTokenClassificationDataset, TokenClassificationTask ``` The formatter is fully ignored and takes the default logging format of TensorFlow. 
This PR fixes the problem with the following new header: ``` import logging import os from dataclasses import dataclass, field from importlib import import_module from typing import Dict, List, Optional, Tuple import numpy as np from seqeval.metrics import classification_report, f1_score, precision_score, recall_score import tensorflow as tf from transformers import logging as hf_logging handler = logging.StreamHandler() formatter = logging.Formatter('[%(levelname)s|%(filename)s:%(lineno)s] %(asctime)s >> %(message)s') handler.setFormatter(formatter) logger = hf_logging.get_logger() logger.handlers.clear() logger.addHandler(handler) logger.setLevel(logging.INFO) from transformers import ( AutoConfig, AutoTokenizer, EvalPrediction, HfArgumentParser, TFAutoModelForTokenClassification, TFTrainer, TFTrainingArguments, ) from utils_ner import Split, TFTokenClassificationDataset, TokenClassificationTask ``` But now the logger is set at library level and not at the class level anymore. Then I wanted to know if this slight update is a problem or if it is ok.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7128/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7128/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7128", "html_url": "https://github.com/huggingface/transformers/pull/7128", "diff_url": "https://github.com/huggingface/transformers/pull/7128.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7128.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7127
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7127/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7127/comments
https://api.github.com/repos/huggingface/transformers/issues/7127/events
https://github.com/huggingface/transformers/issues/7127
701,331,592
MDU6SXNzdWU3MDEzMzE1OTI=
7,127
only fine tune the encoder part of BART
{ "login": "lytum", "id": 38668257, "node_id": "MDQ6VXNlcjM4NjY4MjU3", "avatar_url": "https://avatars.githubusercontent.com/u/38668257?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lytum", "html_url": "https://github.com/lytum", "followers_url": "https://api.github.com/users/lytum/followers", "following_url": "https://api.github.com/users/lytum/following{/other_user}", "gists_url": "https://api.github.com/users/lytum/gists{/gist_id}", "starred_url": "https://api.github.com/users/lytum/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lytum/subscriptions", "organizations_url": "https://api.github.com/users/lytum/orgs", "repos_url": "https://api.github.com/users/lytum/repos", "events_url": "https://api.github.com/users/lytum/events{/privacy}", "received_events_url": "https://api.github.com/users/lytum/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "What task are you working on? Or to rephrase, what do you do once you have ` encoder_output = model(**inputs).encoder_last_hidden_state`?", "Thanks for your feedback. In my case, I only want the bart encoder and combine it to my own decoder.  Thanks for further feedback", "How do you want to fine-tune the encoder? You'll either need to add a task specific head or pair it with your decoder and train both encoder and decoder. I think you should be able to Initialize only the encoder using following snippet \r\n\r\n```python\r\nfrom torch import nn\r\n\r\nfrom transformers.modeling_bart import BartEncoder, PretrainedBartModel, PretrainedBartModel\r\nfrom transformers import BartConfig\r\n\r\nclass Encoder(PretrainedBartModel):\r\n def __init__(self, config: BartConfig):\r\n super().__init__(config)\r\n\r\n padding_idx, vocab_size = config.pad_token_id, config.vocab_size\r\n self.shared = nn.Embedding(vocab_size, config.d_model, padding_idx)\r\n\r\n self.encoder = BartEncoder(config, self.shared)\r\n \r\n def forward(\r\n self, input_ids, attention_mask=None, output_attentions=False, output_hidden_states=False, return_dict=False\r\n ):\r\n\r\n output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions\r\n output_hidden_states = (\r\n output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states\r\n )\r\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\r\n\r\n encoder_outputs = self.encoder(\r\n input_ids=input_ids,\r\n attention_mask=attention_mask,\r\n output_attentions=output_attentions,\r\n output_hidden_states=output_hidden_states,\r\n return_dict=return_dict,\r\n )\r\n\r\n return encoder_outputs\r\n\r\nenc = Encoder.from_pretrained(\"facebook/bart-base\")\r\n```\r\n\r\n@sshleifer does this makes sense ?", "The text makes sense. Does that code run!?", "> How do you want to fine-tune the encoder? 
You'll either need to add a task specific head or pair it with your decoder and train both encoder and decoder. I think you should be able to Initialize only the encoder using following snippet\r\n> \r\n> ```python\r\n> from torch import nn\r\n> \r\n> from transformers.modeling_bart import BartEncoder, PretrainedBartModel, PretrainedBartModel\r\n> from transformers import BartConfig\r\n> \r\n> class Encoder(PretrainedBartModel):\r\n> def __init__(self, config: BartConfig):\r\n> super().__init__(config)\r\n> \r\n> padding_idx, vocab_size = config.pad_token_id, config.vocab_size\r\n> self.shared = nn.Embedding(vocab_size, config.d_model, padding_idx)\r\n> \r\n> self.encoder = BartEncoder(config, self.shared)\r\n> \r\n> def forward(\r\n> self, input_ids, attention_mask=None, output_attentions=False, output_hidden_states=False, return_dict=False\r\n> ):\r\n> \r\n> output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions\r\n> output_hidden_states = (\r\n> output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states\r\n> )\r\n> return_dict = return_dict if return_dict is not None else self.config.use_return_dict\r\n> \r\n> encoder_outputs = self.encoder(\r\n> input_ids=input_ids,\r\n> attention_mask=attention_mask,\r\n> output_attentions=output_attentions,\r\n> output_hidden_states=output_hidden_states,\r\n> return_dict=return_dict,\r\n> )\r\n> \r\n> return encoder_outputs\r\n> \r\n> enc = Encoder.from_pretrained(\"facebook/bart-base\")\r\n> ```\r\n> \r\n> @sshleifer does this makes sense ?\r\n\r\n Thanks for your code @patil-suraj . Really appreciate!!", "> The text makes sense. Does that code run!?\r\n\r\nIn my case, the code runs without problems, so I think it\"s correct! Thanks a lot!" ]
1,600
1,604
1,604
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiasts can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> May I ask how can I fine-tune only the BART-encoder? from transformers import BartTokenizer, BartModel tokenizer = BartTokenizer.from_pretrained('facebook/bart-base') model = BartModel.from_pretrained('facebook/bart-base', return_dict=True) inputs = tokenizer(["i am a girl", "i am from germany"], return_tensors="pt", padding=True) outputs = model(**inputs).encoder_last_hidden_state I load the pretrained BART, and get the encoder_last_hidden_state. However, during fine-tuning, how about the decoder part? How could I only load the part of encoder and fine-tune? Thanks a lot <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7127/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7127/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7126
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7126/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7126/comments
https://api.github.com/repos/huggingface/transformers/issues/7126/events
https://github.com/huggingface/transformers/pull/7126
701,294,668
MDExOlB1bGxSZXF1ZXN0NDg2Nzc1NzIz
7,126
Multi predictions trainer
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7126?src=pr&el=h1) Report\n> Merging [#7126](https://codecov.io/gh/huggingface/transformers/pull/7126?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/52d250f6aa14844024806e5e4dd1c7882bbd8dd5?el=desc) will **increase** coverage by `1.71%`.\n> The diff coverage is `76.92%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7126/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7126?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7126 +/- ##\n==========================================\n+ Coverage 79.12% 80.84% +1.71% \n==========================================\n Files 168 168 \n Lines 32303 32305 +2 \n==========================================\n+ Hits 25560 26117 +557 \n+ Misses 6743 6188 -555 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7126?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `55.04% <70.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `60.29% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| 
[src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-1.76%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.77% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <0.00%> (-0.36%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.77% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.84% <0.00%> (-0.25%)` | :arrow_down: |\n| ... and [17 more](https://codecov.io/gh/huggingface/transformers/pull/7126/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7126?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7126?src=pr&el=footer). Last update [52d250f...6eb9b5b](https://codecov.io/gh/huggingface/transformers/pull/7126?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Yes this doesn't work if the model has `output_attentions` or `output_hiddens` in the config set to True: it needs the outputs to be a tuple of tensor with optional loss at the beginning. Documenting this is next when I make a template/protocol for models that `Trainer` supports. I can add a clean error message if one output is detected to be a tuple of tensors and add asserts that `output_attention` and `output_all_hiddens` are False in the config (if those arguments can be found).", "@sgugger are you saying that the eval in Trainer-backed https://github.com/huggingface/transformers/blob/master/examples/multiple-choice/run_multiple_choice.py is not currently working? ", "I'm pretty sure he meant `XxxForQuestionAnswering`, not multiple choice", "> I'm pretty sure he meant `XxxForQuestionAnswering`, not multiple choice\r\n\r\nOh yes, makes sense now. Thanks @LysandreJik ;)", "Yes @LysandreJik reads my thoughts right, sorry about the typo ;-)" ]
1,600
1,600
1,600
COLLABORATOR
null
This allows the `Trainer` to properly return predictions when the model has several outputs (for instance, all models `XxxForMultipleChoice`). This should unlock progress in #7032 where the start and end logits are both required.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7126/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7126/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7126", "html_url": "https://github.com/huggingface/transformers/pull/7126", "diff_url": "https://github.com/huggingface/transformers/pull/7126.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7126.patch", "merged_at": 1600180045000 }
https://api.github.com/repos/huggingface/transformers/issues/7125
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7125/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7125/comments
https://api.github.com/repos/huggingface/transformers/issues/7125/events
https://github.com/huggingface/transformers/pull/7125
701,292,906
MDExOlB1bGxSZXF1ZXN0NDg2Nzc0MzQ2
7,125
fix ZeroDivisionError and epoch counting
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Looks great! Would you mind adding a few tests so we don't accidentally make changes that will resurface this bug?", "> \r\n> \r\n> Looks great! Would you mind adding a few tests so we don't accidentally make changes that will resurface this bug?\r\n\r\n@sgugger \r\nI did a few simple tests on my side. Could you guide me how to write a test that is usually done by HF members? BTW, I don't have powerful machine, so hope the tests could be lightweight.", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7125?src=pr&el=h1) Report\n> Merging [#7125](https://codecov.io/gh/huggingface/transformers/pull/7125?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4d3914841932065f19a61ad46e178f94dedeff5a?el=desc) will **increase** coverage by `1.60%`.\n> The diff coverage is `66.66%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7125/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7125?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7125 +/- ##\n==========================================\n+ Coverage 80.07% 81.68% +1.60% \n==========================================\n Files 168 168 \n Lines 32257 32259 +2 \n==========================================\n+ Hits 25831 26351 +520 \n+ Misses 6426 5908 -518 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7125?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `52.90% <66.66%> (+0.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/7125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.55% <0.00%> (-34.28%)` | :arrow_down: |\n| 
[src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `70.19% <0.00%> (-23.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `88.77% <0.00%> (-2.55%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.80% <0.00%> (-0.25%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <0.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.44% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.54% <0.00%> (+0.71%)` | :arrow_up: |\n| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/7125/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7125?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7125?src=pr&el=footer). Last update [4d39148...c8a542c](https://codecov.io/gh/huggingface/transformers/pull/7125?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Look at the `test_trainer.py` file in the tests directory. There are utility functions to build a small `Trainer` on a simple regression problem. You can just add a few tests mimicking the ones that count the number of steps during training.\r\n\r\nTo check they pass, you can run the command:\r\n```\r\npytest tests/test_trainer.py\r\n```\r\nwhich should go pretty quickly.", "Thanks. I will do it and report it when ready.", "I add a test `test_num_train_epochs_in_training` for the case `len(train_dataloader) < self.args.gradient_accumulation_steps`.\r\nFor the offset by 1 bug below due to the `+ 1` in \r\n\r\n num_train_epochs = (\r\n self.args.max_steps // (len(train_dataloader) // self.args.gradient_accumulation_steps) + 1\r\n )\r\n\r\n, I didn't figure a way to test, because this information is not returned in the output, and its actual usage is for this line\r\n\r\n logger.info(\" Num Epochs = %d\", num_train_epochs)\r\n\r\nbecause the training loop is actually controlled by `self.args.max_steps` if it is given.\r\n\r\n(I assume `adding a few tests` means to add in the `test_trainer.py` and push).", "Yes adding the test in `test_trainer.py` is exactly what I wanted, thanks!" ]
1,600
1,651
1,600
COLLABORATOR
null
@sgugger This PR fix 2 minor bugs in `trainer.py`. First, the two lines num_train_epochs = ( self.args.max_steps // (len(train_dataloader) // self.args.gradient_accumulation_steps) + 1 ) and epochs_trained = self.global_step // (len(train_dataloader) // self.args.gradient_accumulation_steps) gives `ZeroDivisionError` when `len(train_dataloader) < self.args.gradient_accumulation_steps`. The fix takes into account the code below the comment # last step in epoch but step is always smaller than gradient_accumulation_steps therefore when `len(train_dataloader) < self.args.gradient_accumulation_steps`, we still have 1 step in each epoch. The second bug is due to the `+ 1` in the num_train_epochs = ( self.args.max_steps // (len(train_dataloader) // self.args.gradient_accumulation_steps) + 1 ) In the example below, we have len(train_dataloader) == 40 self.args.gradient_accumulation_steps = 4 self.args.max_steps = 10 However we get `num_train_epochs == 2` from the original code, which should be `1`. python utils/download_glue_data.py --data_dir ./examples/text-classification/glue/ --tasks all python3 run_glue.py \ --task_name wnli \ --data_dir ./glue/WNLI \ --model_name_or_path distilbert-base-uncased \ --output_dir ./glue/WNLI/ \ --max_seq_length 16 \ --num_train_epochs 1 \ --per_device_train_batch_size 16 \ --gradient_accumulation_steps 4 \ --max_steps 10 \ --logging_steps 1 \ --save_steps 5 \ --seed 1 \ --do_train \ --do_eval \ --do_predict \ --overwrite_output_dir (Pdb) len(train_dataloader) 40 (Pdb) self.args.gradient_accumulation_steps 4 (Pdb) self.args.max_steps 10 (Pdb) p num_train_epochs 2
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7125/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7125/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7125", "html_url": "https://github.com/huggingface/transformers/pull/7125", "diff_url": "https://github.com/huggingface/transformers/pull/7125.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7125.patch", "merged_at": 1600185111000 }
https://api.github.com/repos/huggingface/transformers/issues/7124
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7124/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7124/comments
https://api.github.com/repos/huggingface/transformers/issues/7124/events
https://github.com/huggingface/transformers/pull/7124
701,262,148
MDExOlB1bGxSZXF1ZXN0NDg2NzQ4Nzk1
7,124
[s2s] distributed eval in one command
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7124?src=pr&el=h1) Report\n> Merging [#7124](https://codecov.io/gh/huggingface/transformers/pull/7124?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/206b78d4850d3c6fe85a015654293fc4b803ed7b?el=desc) will **increase** coverage by `0.02%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7124/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7124?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7124 +/- ##\n==========================================\n+ Coverage 80.84% 80.86% +0.02% \n==========================================\n Files 168 168 \n Lines 32284 32284 \n==========================================\n+ Hits 26099 26108 +9 \n+ Misses 6185 6176 -9 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7124?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7124/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.83% <0.00%> (-0.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7124/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.93% <0.00%> (+0.16%)` | :arrow_up: |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/7124/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `94.11% <0.00%> (+1.17%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7124/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.46% <0.00%> (+1.25%)` | :arrow_up: |\n| 
[src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7124/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `94.00% <0.00%> (+4.00%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7124?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7124?src=pr&el=footer). Last update [206b78d...be6badf](https://codecov.io/gh/huggingface/transformers/pull/7124?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,600
1,600
1,600
CONTRIBUTOR
null
### Issue One GPU command: ```bash python run_eval.py Helsinki-NLP/opus-mt-en-ro wmt_en_ro/test.source enro_test_translations.txt --reference_path wmt_en_ro/test.target --task translation --score_path mar_test_bleu.json --fp16 --bs 64 # {'bleu': 27.6865, 'n_obs': 1999, 'runtime': 85, 'seconds_per_sample': 0.0425} ``` Multi GPU: ``` python -m torch.distributed.launch --nproc_per_node=2 run_distributed_eval.py --model_name $opus --input_path wmt_en_ro --type_path test --fp16 --save_dir tmp_gen --fp16 --bs 64 python aggregate_distributed_results.py tmp_gen tmp_gen --calc_bleu cat tmp_gen/metrics.json # "bleu": 27.7772 ``` ### New command ``` python -m torch.distributed.launch --nproc_per_node=2 run_distributed_eval.py --model_name Helsinki-NLP/opus-mt-en-ro --data_dir wmt_en_ro --type_path test --save_dir tmp_gen4 --fp16 --bs 64 --task translation ``` ### Future PRs The actual generations differ slightly vs the single gpu implementation, likely because of sortish sampler/leading spaces.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7124/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7124/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7124", "html_url": "https://github.com/huggingface/transformers/pull/7124", "diff_url": "https://github.com/huggingface/transformers/pull/7124.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7124.patch", "merged_at": 1600113476000 }
https://api.github.com/repos/huggingface/transformers/issues/7123
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7123/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7123/comments
https://api.github.com/repos/huggingface/transformers/issues/7123/events
https://github.com/huggingface/transformers/issues/7123
701,217,198
MDU6SXNzdWU3MDEyMTcxOTg=
7,123
Seq2SeqDataset experiment: try to use arrow datasets
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1108649053, "node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted", "name": "Help wanted", "color": "008672", "default": false, "description": "Extra attention is needed, help appreciated" }, { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "@sshleifer could please assign me to this issue, would love to take a stab at this alongside `Seq2SeqTrainer` or once it's merged. ;)", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,600
1,606
1,606
CONTRIBUTOR
null
### High Level Goal See if [`datasets/arrow_dataset.py`](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L172) can improve `example/seq2seq/utils.py`'s [`Seq2SeqDataset`](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py#L165) This is more experimental/proof of concept. ### Spec + Generate a PR that adds `Seq2SeqArrowDataset` with a similar signature to `Seq2SeqDataset` but uses the arrow dataset as a backend. + This means that wherever `linecache` is used, arrow should be used. + You can flex the API as much as you'd like, but if you don't call `prepare_seq2seq_batch` you will have to implement lots of collate fns. (If you go this route, you can just implement 1) + Feel free to reduce the scope to either translation or summarization. PR description should answer the following question: - Can we generate similar batches with an ArrowDataset? - Is the dataset faster? - Does it consume as little RAM? - Does it simplify the code? Let me know if this is impossible or unclear!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7123/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7123/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7122
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7122/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7122/comments
https://api.github.com/repos/huggingface/transformers/issues/7122/events
https://github.com/huggingface/transformers/issues/7122
701,211,595
MDU6SXNzdWU3MDEyMTE1OTU=
7,122
backtranslation script
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "https://discuss.huggingface.co/t/marian-language-discovery-questions/739/4?u=sshleifer" ]
1,600
1,602
1,602
CONTRIBUTOR
null
+ Wait for #7106, then resume in https://github.com/huggingface/transformers/pull/7121.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7122/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7122/timeline
completed
null
null