url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/5820 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5820/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5820/comments | https://api.github.com/repos/huggingface/transformers/issues/5820/events | https://github.com/huggingface/transformers/issues/5820 | 658,424,836 | MDU6SXNzdWU2NTg0MjQ4MzY= | 5,820 | Migrate Marian models names to ISO-639-3 where possible | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 2039044877,
"node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/marian",
"name": "marian",
"color": "30cc95",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"> for NORTH_EU, why is the code \"multilingual\"\r\n\r\nCouldn't find a better qualifier and I think there were a lot of languages\r\n\r\n> how will people know if a model requires a target language code?\r\n\r\nHow do they know now? Maybe add it to the model card? or you could detect if `code.includes('_')`?\r\n\r\n> we will still have many cases where model name is not a language code.\r\n\r\nNot that many, right?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,600 | 1,600 | CONTRIBUTOR | null | Proposed renamings:
- ROMANCE -> roa
- CELTIC -> cel
- NORWAY -> no (check for conflicts with existing bilingual Norwegian models)
- remove ch_group

@julien-c: for NORTH_EU, why is the code ["multilingual"](https://github.com/huggingface/moon-landing/blob/d50efc4725a7546fb8159626470f3babacd44911/server/lib/ModelInfo.ts#L134-L170)?
Should it be the same for all groups that do not map cleanly to an ISO-639-3 group?
I will do this via mv, not cp, unless anyone objects.
cc @julien-c
Unresolved:
- How will people know if a model requires a target language code? (One possible heuristic is sketched after this record.)
- We will still have many cases where the model name is not a language code. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5820/timeline | completed | null | null |
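For the unresolved question in #5820 above (how users would know a model requires a target language code), here is a minimal sketch of one possible heuristic, assuming the `>>xxx<<` target-token convention used by multilingual Marian models. `GROUP_CODES`, the helper names, and the renamed model id `opus-mt-en-roa` are illustrative assumptions, not actual library code:

```python
# Hypothetical heuristic: a model whose target side is a language *group*
# (e.g. "roa", "cel") is multilingual and needs a ">>xxx<<" target token.
GROUP_CODES = {"roa", "cel", "mul"}  # illustrative, not the real mapping

def needs_target_code(model_name: str) -> bool:
    # e.g. "Helsinki-NLP/opus-mt-en-roa" -> target side "roa"
    return model_name.rsplit("-", 1)[-1] in GROUP_CODES

def prepare_source(model_name: str, text: str, tgt_lang: str) -> str:
    # prepend the target-language token only when the model needs one
    if needs_target_code(model_name):
        return f">>{tgt_lang}<< {text}"
    return text

print(prepare_source("Helsinki-NLP/opus-mt-en-roa", "Hello!", "fr"))
# >>fr<< Hello!
```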
https://api.github.com/repos/huggingface/transformers/issues/5819 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5819/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5819/comments | https://api.github.com/repos/huggingface/transformers/issues/5819/events | https://github.com/huggingface/transformers/pull/5819 | 658,394,236 | MDExOlB1bGxSZXF1ZXN0NDUwMzYyNDA4 | 5,819 | [seq2seq] pack_dataset.py rewrites dataset in max_tokens format | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5819?src=pr&el=h1) Report\n> Merging [#5819](https://codecov.io/gh/huggingface/transformers/pull/5819?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/63761614eb7a5f96e089b77d5469ea31a7177e16&el=desc) will **decrease** coverage by `1.62%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5819?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5819 +/- ##\n==========================================\n- Coverage 78.59% 76.96% -1.63% \n==========================================\n Files 146 146 \n Lines 26047 26047 \n==========================================\n- Hits 20471 20047 -424 \n- Misses 5576 6000 +424 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5819?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `19.81% <0.00%> (-79.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/5819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `34.69% <0.00%> (-57.15%)` | :arrow_down: |\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `50.74% <0.00%> (-35.83%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `60.00% <0.00%> (-25.72%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `16.51% <0.00%> (-21.34%)` | :arrow_down: |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `32.00% <0.00%> (-17.10%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.32% <0.00%> (-11.23%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `95.89% <0.00%> (-2.74%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5819/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `88.28% <0.00%> (-1.81%)` | :arrow_down: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/5819/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5819?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5819?src=pr&el=footer). Last update [6376161...144b667](https://codecov.io/gh/huggingface/transformers/pull/5819?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | Inspired by fairseq max_tokens_dataset.
This works on disk, by regenerating the data directory, so it can work with or without @Pradhy729's PR.
It is especially useful for avoiding padding computation in a multi-GPU setup, where SortishSampler doesn't work.
It will also play nicely with TPU by allowing us to always have the same `input_ids` shape, rather than trimming batches to avoid padding computation.
Empirically, it reduces epoch time for mBART fine-tuning on 8 GPUs from 2.5 hrs to 1.5 hrs (batch size can be increased from 4 to 8) by cutting the number of examples in half and spending a bit more time on each example.
It also doesn't lead to much more truncation than the previous approach, although I haven't quantified this.
### Example
```bash
wget https://s3.amazonaws.com/datasets.huggingface.co/translation/wmt_en_ro_128.tgz
tar -xzvf wmt_en_ro_128.tgz
```
Creates a directory called `wmt_en_ro_packed/` with fewer examples
### Usage
```bash
python pack_dataset.py --tok_name facebook/mbart-large-en-ro --max_seq_len 128 --data_dir wmt_en_ro --save_path wmt_en_ro_packed_128
```
Output:
```bash
100%|██████████████████████████████████████████| 1999/1999 [00:01<00:00, 1533.19it/s]
packed val split from 1999 examples -> 683.
100%|██████████████████████████████████████████| 1999/1999 [00:01<00:00, 1590.80it/s]
packed test split from 1999 examples -> 652.
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5819/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5819",
"html_url": "https://github.com/huggingface/transformers/pull/5819",
"diff_url": "https://github.com/huggingface/transformers/pull/5819.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5819.patch",
"merged_at": 1594922810000
} |
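To make the packing idea in #5819 above concrete, here is a minimal sketch of greedy packing: merge consecutive (source, target) pairs while the tokenized source stays under the token budget. This is not the actual `pack_dataset.py`; `pack_examples`, the tokenizer choice, and the 128-token budget are illustrative assumptions:

```python
from transformers import AutoTokenizer

def pack_examples(src_lines, tgt_lines, tokenizer, max_tokens=128):
    packed_src, packed_tgt = [], []
    cur_src, cur_tgt = "", ""
    for src, tgt in zip(src_lines, tgt_lines):
        cand_src = (cur_src + " " + src).strip()
        cand_tgt = (cur_tgt + " " + tgt).strip()
        if cur_src and len(tokenizer.tokenize(cand_src)) > max_tokens:
            # budget exceeded: flush the current pack and start a new one
            packed_src.append(cur_src)
            packed_tgt.append(cur_tgt)
            cur_src, cur_tgt = src, tgt
        else:
            cur_src, cur_tgt = cand_src, cand_tgt
    if cur_src:  # flush the final, partially filled pack
        packed_src.append(cur_src)
        packed_tgt.append(cur_tgt)
    return packed_src, packed_tgt

# Hypothetical usage, mirroring the PR's --tok_name argument:
# tok = AutoTokenizer.from_pretrained("facebook/mbart-large-en-ro")
# src, tgt = pack_examples(open("train.source").readlines(),
#                          open("train.target").readlines(), tok)
```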
https://api.github.com/repos/huggingface/transformers/issues/5818 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5818/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5818/comments | https://api.github.com/repos/huggingface/transformers/issues/5818/events | https://github.com/huggingface/transformers/pull/5818 | 658,364,099 | MDExOlB1bGxSZXF1ZXN0NDUwMzM2MjI2 | 5,818 | [seq2seq] Don't copy self.source in sortishsampler | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5818?src=pr&el=h1) Report\n> Merging [#5818](https://codecov.io/gh/huggingface/transformers/pull/5818?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/63761614eb7a5f96e089b77d5469ea31a7177e16&el=desc) will **decrease** coverage by `1.35%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5818?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5818 +/- ##\n==========================================\n- Coverage 78.59% 77.24% -1.36% \n==========================================\n Files 146 146 \n Lines 26047 26047 \n==========================================\n- Hits 20471 20119 -352 \n- Misses 5576 5928 +352 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5818?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5818/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5818/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5818/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <0.00%> (-0.30%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5818/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5818/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.32% <0.00%> (+31.77%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5818/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5818?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5818?src=pr&el=footer). Last update [6376161...f3a250e](https://codecov.io/gh/huggingface/transformers/pull/5818?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I don't think you touch this file but, cc @nateraw "
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5818/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5818",
"html_url": "https://github.com/huggingface/transformers/pull/5818",
"diff_url": "https://github.com/huggingface/transformers/pull/5818.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5818.patch",
"merged_at": 1594965205000
} |
|
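#5818 above has no description, so for context only: a "sortish" sampler yields indices ordered roughly by source length, so each batch needs little padding. Below is a minimal sketch that operates on precomputed lengths and indices alone, so nothing like a copy of `self.source` is ever materialized; the function name, chunking factor, and seeding are illustrative assumptions, not the PR's actual change:

```python
import numpy as np

def sortish_sampler_indices(src_lengths, batch_size, seed=0):
    # shuffle first, then sort length-descending within large chunks:
    # "sortish" rather than fully sorted, preserving some randomness
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(src_lengths))
    chunk = batch_size * 50  # illustrative chunk size
    chunks = [idx[i:i + chunk] for i in range(0, len(idx), chunk)]
    return np.concatenate(
        [c[np.argsort([-src_lengths[i] for i in c])] for c in chunks]
    ).tolist()
```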
https://api.github.com/repos/huggingface/transformers/issues/5817 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5817/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5817/comments | https://api.github.com/repos/huggingface/transformers/issues/5817/events | https://github.com/huggingface/transformers/issues/5817 | 658,353,297 | MDU6SXNzdWU2NTgzNTMyOTc= | 5,817 | error with transformers 2.9.1 but not with 2.3.0, same code, why? | {
"login": "wenfeixiang1991",
"id": 26091157,
"node_id": "MDQ6VXNlcjI2MDkxMTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/26091157?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wenfeixiang1991",
"html_url": "https://github.com/wenfeixiang1991",
"followers_url": "https://api.github.com/users/wenfeixiang1991/followers",
"following_url": "https://api.github.com/users/wenfeixiang1991/following{/other_user}",
"gists_url": "https://api.github.com/users/wenfeixiang1991/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wenfeixiang1991/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wenfeixiang1991/subscriptions",
"organizations_url": "https://api.github.com/users/wenfeixiang1991/orgs",
"repos_url": "https://api.github.com/users/wenfeixiang1991/repos",
"events_url": "https://api.github.com/users/wenfeixiang1991/events{/privacy}",
"received_events_url": "https://api.github.com/users/wenfeixiang1991/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Indeed, I can reproduce. This seems to be an edge-case we were not testing for in late v2+ versions. It does run correctly on v2.3.0, as you've shown, and on recent versions (v3+) as well.\r\n\r\nAfter looking a bit deeper into it, it seems to have happened because of the introduction of `BatchEncoding`, in version v2.9.0. It was later patched in v3.0.0 so the blacklisted versions for using a parallelization mechanism (here the dataloader) with `batch_encode_plus` would be the versions between `v2.9.0` and `v3.0.0`. That would be `v2.9.x`, `v2.10.x` and `v2.11.x`.\r\n\r\nHope this helps, and sorry for the inconvenience."
] | 1,594 | 1,595 | 1,595 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): Chinese
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
My problem: `return encoded` raises an error with transformers 2.9.1, but the same code was fine with transformers 2.3.0! Which way is right? Very confused!
My code is as follows:
```
# -*- coding: utf-8 -*-
import torch
from transformers import BertTokenizer, BertModel
from torch.utils.data import Dataset, DataLoader
from functools import partial
import logging
logging.basicConfig(level=logging.INFO)
bert_path = "/Users/kiwi/Desktop/chinese_wwm_ext"
model = BertModel.from_pretrained(bert_path)
tokenizer = BertTokenizer.from_pretrained(bert_path)
def tok_collate(batch_data):
    batch_sentence = [x[0] for x in batch_data]
    encoded = tokenizer.batch_encode_plus(
        batch_sentence,
        add_special_tokens=True,
        return_tensors='pt',
        pad_to_max_length=True)
#return encoded['input_ids'], encoded['token_type_ids'], encoded['attention_mask']
# (tensor([[ 101, 704, 1066, 704, 1925, 2600, 741, 6381, 510, 1744, 2157, 712,
# 2375, 510, 704, 1925, 1092, 1999, 712, 2375, 739, 6818, 2398, 8108,
# 3189, 683, 7305, 6626, 3959, 1266, 4689, 3636, 3727, 2356, 5440, 2175,
# 4554, 2658, 7344, 2971, 2339, 868, 102]]), tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
# 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]), tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
# 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]))
    return encoded
#
# File "/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 203, in __getattr__
# return self.data[item]
# KeyError: '__getstate__'
def data_loader(data):
    dl = DataLoader(data, batch_size=8, shuffle=False, collate_fn=partial(tok_collate),
                    num_workers=2)
    for batch_data in dl:
        print(batch_data)
data = [('中共中央总书记、国家主席、中央军委主席习近平10日专门赴湖北省武汉市考察疫情防控工作', 1)]
data_loader(data)
```
## Expected behavior
```
encoded = {'input_ids': tensor([[ 101, 704, 1066, 704, 1925, 2600, 741, 6381, 510, 1744, 2157, 712,
2375, 510, 704, 1925, 1092, 1999, 712, 2375, 739, 6818, 2398, 8108,
3189, 683, 7305, 6626, 3959, 1266, 4689, 3636, 3727, 2356, 5440, 2175,
4554, 2658, 7344, 2971, 2339, 868, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
```
## Environment info
- `transformers` version:2.9.1
- Platform:
- Python version:3.6+
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5817/timeline | completed | null | null |
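Given the resolution of #5817 above (pickling of `BatchEncoding` broke on v2.9.x–v2.11.x because its `__getattr__` raised `KeyError` for `__getstate__`), one possible workaround on those versions, sketched from the issue's own script and its commented-out alternative return, is to hand plain, picklable objects back from `collate_fn`:

```python
def tok_collate(batch_data):
    batch_sentence = [x[0] for x in batch_data]
    encoded = tokenizer.batch_encode_plus(  # tokenizer from the script above
        batch_sentence,
        add_special_tokens=True,
        return_tensors='pt',
        pad_to_max_length=True)
    # A plain dict of tensors pickles fine across DataLoader worker processes,
    # sidestepping BatchEncoding's __getattr__ during pickling.
    return {k: encoded[k] for k in ('input_ids', 'token_type_ids', 'attention_mask')}
```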
https://api.github.com/repos/huggingface/transformers/issues/5816 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5816/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5816/comments | https://api.github.com/repos/huggingface/transformers/issues/5816/events | https://github.com/huggingface/transformers/issues/5816 | 658,337,725 | MDU6SXNzdWU2NTgzMzc3MjU= | 5,816 | Additional layers to BERT | {
"login": "psureshmagadi17",
"id": 52900849,
"node_id": "MDQ6VXNlcjUyOTAwODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/52900849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/psureshmagadi17",
"html_url": "https://github.com/psureshmagadi17",
"followers_url": "https://api.github.com/users/psureshmagadi17/followers",
"following_url": "https://api.github.com/users/psureshmagadi17/following{/other_user}",
"gists_url": "https://api.github.com/users/psureshmagadi17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/psureshmagadi17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/psureshmagadi17/subscriptions",
"organizations_url": "https://api.github.com/users/psureshmagadi17/orgs",
"repos_url": "https://api.github.com/users/psureshmagadi17/repos",
"events_url": "https://api.github.com/users/psureshmagadi17/events{/privacy}",
"received_events_url": "https://api.github.com/users/psureshmagadi17/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @psureshmagadi17 , you can add additional layers easily, take a loot the source code for `BERTForSequenceClassification`, you can take that code as it is and add the additional layers before the final classifier. ",
"> Hi @psureshmagadi17 , you can add additional layers easily, take a loot the source code for `BERTForSequenceClassification`, you can take that code as it is and add the additional layers before the final classifier.\r\n\r\nHi @patil-suraj , thank you for your response. Did you mean that we can just alter the code in main class? If yes, do you have an example? ",
"Hi @psureshmagadi17, if your goal is to add layers to a pretrained model only for fine-tuning BERTForSequenceClassification I think the best option is to modify the BertForSequenceClassification Module.\r\n\r\nIf you want to add attention layers, make sure to use the sequence_output of the BertModel Module and not the pooled_output in the forward function, then use a BertPooler layer before the classifier.",
"> Hi @psureshmagadi17, if your goal is to add layers to a pretrained model only for fine-tuning BERTForSequenceClassification I think the best option is to modify the BertForSequenceClassification Module.\r\n> \r\n> If you want to add attention layers, make sure to use the sequence_output of the BertModel Module and not the pooled_output in the forward function, then use a BertPooler layer before the classifier.\r\n\r\nHi @nassim-yagoub - thank you for the response! I'm fairly new to this process i.e., modify the network structure. Do you have an example or discussion that I can follow to help me through this process? ",
"A small example:\r\n\r\n import torch.nn as nn\r\n from transformers import BertModel\r\n\r\n class CustomBERTModel(nn.Module):\r\n def __init__(self):\r\n super(CustomBERTModel, self).__init__()\r\n self.bert = BertModel.from_pretrained(\"bert-base-uncased\")\r\n # add your additional layers here, for example a dropout layer followed by a linear classification head\r\n self.dropout = nn.Dropout(0.3)\r\n self.out = nn.Linear(768, 2)\r\n \r\n def forward(self, ids, mask, token_type_ids):\r\n sequence_output, pooled_output = self.bert(\r\n ids, \r\n attention_mask=mask,\r\n token_type_ids=token_type_ids\r\n )\r\n\r\n # we apply dropout to the sequence output, tensor has shape (batch_size, sequence_length, 768)\r\n sequence_output = self.dropout(sequence_output)\r\n \r\n # next, we apply the linear layer. The linear layer (which applies a linear transformation)\r\n # takes as input the hidden states of all tokens (so seq_len times a vector of size 768, each corresponding to\r\n # a single token in the input sequence) and outputs 2 numbers (scores, or logits) for every token\r\n # so the logits are of shape (batch_size, sequence_length, 2)\r\n logits = self.out(sequence_output)\r\n\r\n return logits",
"> A small example:\r\n> \r\n> ```\r\n> import torch.nn as nn\r\n> from transformers import BertModel\r\n> \r\n> class CustomBERTModel(nn.Module):\r\n> def __init__(self):\r\n> super(CustomBERTModel, self).__init__()\r\n> self.bert = BertModel.from_pretrained(\"bert-base-uncased\")\r\n> # add your additional layers here, for example a dropout layer followed by a linear classification head\r\n> self.dropout = nn.Dropout(0.3)\r\n> self.out = nn.Linear(768, 2)\r\n> \r\n> def forward(self, ids, mask, token_type_ids):\r\n> sequence_output, pooled_output = self.bert(\r\n> ids, \r\n> attention_mask=mask,\r\n> token_type_ids=token_type_ids\r\n> )\r\n> \r\n> # we apply dropout to the sequence output, tensor has shape (batch_size, sequence_length, 768)\r\n> sequence_output = self.dropout(sequence_output)\r\n> \r\n> # next, we apply the linear layer. The linear layer (which applies a linear transformation)\r\n> # takes as input the hidden states of all tokens (so seq_len times a vector of size 768, each corresponding to\r\n> # a single token in the input sequence) and outputs 2 numbers (scores, or logits) for every token\r\n> # so the logits are of shape (batch_size, sequence_length, 2)\r\n> logits = self.out(sequence_output)\r\n> \r\n> return logits\r\n> ```\r\n\r\nThank you, @NielsRogge",
"For example if you want to add the same layers used in Bert, you may want to modify the Module this way (with new_layers_config being the same than the original config, except for the number of layers):\r\n\r\n```python\r\n\r\n\r\nclass BertForSequenceClassification(BertPreTrainedModel):\r\n def __init__(self, config, new_layers_config):\r\n super().__init__(config)\r\n self.num_labels = config.num_labels\r\n\r\n self.bert = BertModel(config)\r\n self.new_layers = BertEncoder(new_layers_config)\r\n self.pooler = BertPooler(config)\r\n self.dropout = nn.Dropout(config.hidden_dropout_prob)\r\n self.classifier = nn.Linear(config.hidden_size, config.num_labels)\r\n\r\n self.init_weights()\r\n\r\n def forward(\r\n self,\r\n input_ids=None,\r\n attention_mask=None,\r\n token_type_ids=None,\r\n position_ids=None,\r\n head_mask=None,\r\n inputs_embeds=None,\r\n labels=None,\r\n output_attentions=None,\r\n output_hidden_states=None,\r\n ):\r\n\r\n outputs = self.bert(\r\n input_ids,\r\n attention_mask=attention_mask,\r\n token_type_ids=token_type_ids,\r\n position_ids=position_ids,\r\n head_mask=head_mask,\r\n inputs_embeds=inputs_embeds,\r\n output_attentions=output_attentions,\r\n output_hidden_states=output_hidden_states,\r\n )\r\n\r\n sequence_output = outputs[0]\r\n \r\n new_layers_output = self.new_layers(sequence_output)[0]\r\n \r\n pooled_output = self.pooler(new_layers_output)\r\n\r\n pooled_output = self.dropout(pooled_output)\r\n logits = self.classifier(pooled_output)\r\n\r\n outputs = (logits,) + outputs[2:] # add hidden states and attention if they are here\r\n\r\n if labels is not None:\r\n if self.num_labels == 1:\r\n # We are doing regression\r\n loss_fct = MSELoss()\r\n loss = loss_fct(logits.view(-1), labels.view(-1))\r\n else:\r\n loss_fct = CrossEntropyLoss()\r\n loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))\r\n outputs = (loss,) + outputs\r\n\r\n return outputs # (loss), logits, (hidden_states), (attentions)\r\n\r\n```\r\n\r\nWe added a BertEncoder and a BertPooler to the base implementation.\r\nYou can also retreive the hidden_states and attention of the new layers if you want to, I did not do it here.",
"Thanks @nassim-yagoub !",
"@nassim-yagoub - I had another question : are the weights for BERTForSequenceClassification Model layers frozen by default?",
"The weights are not frozen by default when you load them, however you can manually freeze them with `.requires_grad = False`",
"> The weights are not frozen by default when you load them, however you can manually freeze them with `.requires_grad = False`\r\n\r\nThank you @nassim-yagoub!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> A small example:\r\n> \r\n> ```\r\n> import torch.nn as nn\r\n> from transformers import BertModel\r\n> \r\n> class CustomBERTModel(nn.Module):\r\n> def __init__(self):\r\n> super(CustomBERTModel, self).__init__()\r\n> self.bert = BertModel.from_pretrained(\"bert-base-uncased\")\r\n> # add your additional layers here, for example a dropout layer followed by a linear classification head\r\n> self.dropout = nn.Dropout(0.3)\r\n> self.out = nn.Linear(768, 2)\r\n> \r\n> def forward(self, ids, mask, token_type_ids):\r\n> sequence_output, pooled_output = self.bert(\r\n> ids, \r\n> attention_mask=mask,\r\n> token_type_ids=token_type_ids\r\n> )\r\n> \r\n> # we apply dropout to the sequence output, tensor has shape (batch_size, sequence_length, 768)\r\n> sequence_output = self.dropout(sequence_output)\r\n> \r\n> # next, we apply the linear layer. The linear layer (which applies a linear transformation)\r\n> # takes as input the hidden states of all tokens (so seq_len times a vector of size 768, each corresponding to\r\n> # a single token in the input sequence) and outputs 2 numbers (scores, or logits) for every token\r\n> # so the logits are of shape (batch_size, sequence_length, 2)\r\n> logits = self.out(sequence_output)\r\n> \r\n> return logits\r\n> ```\r\n\r\nI wonder is there a way to do this modification while still using the \"trainer\" just like in the example codes of run_ner.py?\r\n\r\nMy second question is about the return values of self.bert, so basically \"sequence_output\" is the context vector for the sentence, but what does pooled_output mean? ",
"> A small example:\r\n> \r\n> ```\r\n> import torch.nn as nn\r\n> from transformers import BertModel\r\n> \r\n> class CustomBERTModel(nn.Module):\r\n> def __init__(self):\r\n> super(CustomBERTModel, self).__init__()\r\n> self.bert = BertModel.from_pretrained(\"bert-base-uncased\")\r\n> # add your additional layers here, for example a dropout layer followed by a linear classification head\r\n> self.dropout = nn.Dropout(0.3)\r\n> self.out = nn.Linear(768, 2)\r\n> \r\n> def forward(self, ids, mask, token_type_ids):\r\n> sequence_output, pooled_output = self.bert(\r\n> ids, \r\n> attention_mask=mask,\r\n> token_type_ids=token_type_ids\r\n> )\r\n> \r\n> # we apply dropout to the sequence output, tensor has shape (batch_size, sequence_length, 768)\r\n> sequence_output = self.dropout(sequence_output)\r\n> \r\n> # next, we apply the linear layer. The linear layer (which applies a linear transformation)\r\n> # takes as input the hidden states of all tokens (so seq_len times a vector of size 768, each corresponding to\r\n> # a single token in the input sequence) and outputs 2 numbers (scores, or logits) for every token\r\n> # so the logits are of shape (batch_size, sequence_length, 2)\r\n> logits = self.out(sequence_output)\r\n> \r\n> return logits\r\n> ```\r\n\r\nIf possible it would be great if you can provide the training loop for the same"
] | 1,594 | 1,643 | 1,601 | NONE | null | # ❓ Questions & Help
## Details
I'm currently fine-tuning the BERTForSequenceClassification model for a classification task. Are there ways to add additional layers before the final classification layer?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5816/timeline | completed | null | null |
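The last comment in #5816 above asks for a training loop. Here is a minimal sketch for the `CustomBERTModel` posted in that thread, assuming a `train_dataloader` that yields dicts with `ids`, `mask`, `token_type_ids`, and `label`; the hyperparameters are illustrative:

```python
import torch
import torch.nn as nn
from torch.optim import AdamW

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = CustomBERTModel().to(device)  # the class from the comment above
optimizer = AdamW(model.parameters(), lr=2e-5)
loss_fct = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for batch in train_dataloader:  # assumed to yield dicts of tensors
        optimizer.zero_grad()
        logits = model(
            batch['ids'].to(device),
            batch['mask'].to(device),
            batch['token_type_ids'].to(device),
        )
        # the model returns per-token logits of shape (batch, seq_len, 2);
        # classify the sequence from the [CLS] (first-token) position
        loss = loss_fct(logits[:, 0, :], batch['label'].to(device))
        loss.backward()
        optimizer.step()
```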
https://api.github.com/repos/huggingface/transformers/issues/5815 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5815/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5815/comments | https://api.github.com/repos/huggingface/transformers/issues/5815/events | https://github.com/huggingface/transformers/issues/5815 | 658,335,782 | MDU6SXNzdWU2NTgzMzU3ODI= | 5,815 | cast_bool_to_primitive breaks TensorFlow graph support. | {
"login": "AndreasMadsen",
"id": 505333,
"node_id": "MDQ6VXNlcjUwNTMzMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/505333?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AndreasMadsen",
"html_url": "https://github.com/AndreasMadsen",
"followers_url": "https://api.github.com/users/AndreasMadsen/followers",
"following_url": "https://api.github.com/users/AndreasMadsen/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreasMadsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AndreasMadsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreasMadsen/subscriptions",
"organizations_url": "https://api.github.com/users/AndreasMadsen/orgs",
"repos_url": "https://api.github.com/users/AndreasMadsen/repos",
"events_url": "https://api.github.com/users/AndreasMadsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/AndreasMadsen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @AndreasMadsen, \r\n\r\nThanks a lot for the detailed error description! I added this function, so I will try to take a look as soon as possible. I will be off for the next two weeks though (but will not this issue down). If by change @jplu you find some time, feel free to take a look :-) ",
"Should be fixed in the PR https://github.com/huggingface/transformers/pull/5468",
"Hi @jplu, I'm sorry, but I doubt #5468 will fix the issue. Fundamentally speaking casting to primitives is not a good practice in TensorFlow, as it invalidates the use of `@tf.function` and is generally unnecessary as described above. Casting to primitives is, in my experience, just never the correct solution in TensorFlow.\r\n\r\nI do think #5468 mitigates the issue, which is maybe where the confusion is coming from. This is because, the models will now correctly default to the `config` object when `output_hidden_states=True` **is not** specified as an input. In those cases object property is never cast to a tensor to begin with, therefore the `@tf.function` graph will be statically compiled to always output the `hidden_states`, as intended.\r\n\r\nHowever, the behavior is different when `output_hidden_states=True` is specified as an input, as it will be cast to a Tensor when it becomes part of the `inputs` argument in `call()`. After that, it is not possible to convert back to a primitive, as that invalidates `@tf.function`.\r\n\r\nIf you insist on keeping it as a primitive, the best solution might be to specify it as an aux-input, similar to `training` and `mask` in a `keras.layers.Layer`, as they don't get converted the same way. I'm not familiar enough with the Keras internals to know the details here, and I think it might also be incompatible with `compute_output_shape` etc. \r\n\r\n_BTW, in the keras RNN layers, hidden_state is only specified in the constructor, properly because it can get a bit messy having to specify it in the `inputs`, but I don't see anything fundamentally wrong with specifying it in `inputs`._",
"The PR fix your issue at least for the piece of code you are providing, here the result I have:\r\n\r\n```\r\n>>> import transformers\r\n>>> import tensorflow as tf\r\n>>> bert = tf.function(transformers.TFBertForMaskedLM.from_pretrained('bert-base-uncased'))\r\n>>> for i in range(2):\r\n... (_, hidden_state) = bert(tf.constant([[10,11,12]]), output_hidden_states=True)\r\n... print(f'computed {i}')\r\n... \r\ncomputed 0\r\ncomputed 1\r\n>>> \r\n```",
"I believe the solution of @AndreasMadsen is correct -- `output_hidden_states` and also `output_attentions` should be passed as named arguments of the `call` method of `TFBertEncoder.call` -- that way they are not converted to Tensors and can be just constants.",
"@jplu Sorry for the misunderstanding. The test now works because `output_hidden_states` is now an auxiliary-input, thus is stays a primitive, thus the casting is no longer involved. However, casting to primitives is still not good practice in TensorFlow, so it doesn't take much to break it.\r\n\r\nI read your code more thoroughly, and have the following two failure cases for you.\r\n\r\n**Case 1:**\r\n\r\n```python\r\nbert = tf.function(transformers.TFBertForMaskedLM.from_pretrained('bert-base-uncased'))\r\noutputs = bert(tf.constant([[10,11,12]]), output_attentions=True)\r\nassert len(outputs) == 2\r\n```\r\n\r\nFails with an IndexError. Essentially because casting in a `tf.function` can not be done. Edit: I think it is an inconsistent casting, in some functions the `output_attentions` is a primitive and the \"casting\" works by being a no-op, it other functions the casting can't be done as it is in a `tf.function`.\r\n\r\n**Case 2:**\r\n\r\n```python\r\nbert = tf.function(transformers.TFBertForMaskedLM.from_pretrained('bert-base-uncased'))\r\noutputs = bert(tf.constant([[10,11,12]]), output_hidden_states=tf.constant(True))\r\nassert len(outputs) == 2\r\n```\r\n\r\noutputs only one output (two is exected), because `tf.constant` can not be casted in a `tf.function`.\r\n\r\n---\r\n\r\nHowever, I would like that, instead of working around these issues, you read the documentation on AutoGraph.\r\n\r\nI really think there is a misunderstanding here, about what TensorFlow AutoGraph can do for you and why casting to primitives is really not necessary at all. I would suggest you read https://www.tensorflow.org/guide/function#conditionals and also check out the hidden docs, such as https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/control_flow.md#effects-of-the-tracing-process which explains it in more details.\r\n\r\nWhat @foxik says is true, but I think depending on the auxiliary-inputs just avoids the misunderstanding. Truly, casting to primitives is just not a good idea.",
"Ok, thanks for the hints, I will review that part! I should be able to start working on it from tomorrow or Thursday.",
"I have started to work on this today. I basically just removed the casting to primitive function and updated the code accordingly. Nevertheless it doesn't work when booleans are tensors (your Case 2) because AutoGraph do not let you return an output of different size on each branch. A simple example to simply the use case can be:\r\n\r\n```\r\nimport tensorflow as tf\r\n\r\[email protected]\r\ndef output(output1, output2):\r\n first_conditional_output = ()\r\n second_conditional_output = ()\r\n\r\n for i in range(2):\r\n if output1:\r\n first_conditional_output = first_conditional_output + (i,)\r\n\r\n if output2:\r\n second_conditional_output = second_conditional_output + (i,)\r\n\r\n outputs = (0,)\r\n if output1:\r\n outputs = outputs + (first_conditional_output,)\r\n if output2:\r\n outputs = outputs + (second_conditional_output,)\r\n\r\n return outputs\r\n```\r\n\r\nThis piece of code works fine as long as the parameters are primitives, but when using `tf.constant(True)` for at least one of the two makes it fail with:\r\n\r\n```\r\n/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/tensorflow/python/ops/cond_v2.py:633 _make_indexed_slices_indices_types_match\r\n assert len(set(outs_per_branch)) == 1, outs_per_branch\r\n\r\nAssertionError: [1, 0]\r\n```\r\n\r\nI currently struggling on this, and try to find a workaround but if it is not possible to fix this, I think we will just skip this functionality.",
"@jplu TensorFlow guys internally use `tf.cond` to handle this case, for example here: https://github.com/tensorflow/tensorflow/blob/2b96f3662bd776e277f86997659e61046b56c315/tensorflow/python/framework/smart_cond.py#L27-L59 . Keras layers like Dropout have another special case for `Variables` here (the `smart_module.smart_cond` is the above method): https://github.com/tensorflow/tensorflow/blob/2b96f3662bd776e277f86997659e61046b56c315/tensorflow/python/keras/utils/tf_utils.py#L42-L65",
"I already came across this, but I don't see how it could solve this issue. Each if in this piece of code is automatically translated to a `tf.cond`. The problem is that there is no else and then the branches `true_fn` and `false_fn` will have a different output size from each other. This is not allowed and this is the problem :(\n\nUnless I'm missing something in what you said.\n\nI'm still trying to figure out how to find a workaround anyway.",
"@jplu I thought that `tf.cond` can return arbitrary results from the `true_fn` and `false_fn` (contrary to Autograph), but you are right, it is not allowed during graph construction -- my bad.\r\n\r\nPersonally I think it is enough for the `output_hidden_size` to be a Python constant (and fail otherwise). It is in fact not obvious what type should the output have in case computation graph is used and the `output_hidden_size` is undefined Tensor value.\r\n\r\nSo I think you were right with skipping the functionality ;-)",
"@jplu Good point, regarding the variable output-signature. I think it is perfectly acceptable to assert for a primitive input, at least for now.\r\n\r\nAlternatively, the solution would be to return an empty tensor when `tf.constant([False])` is used. Such an approach could look like this:\r\n\r\n```python\r\nimport tensorflow as tf\r\n\r\[email protected]\r\ndef output(output1, output2):\r\n first_conditional_output = tf.TensorArray(tf.int64, size=0, dynamic_size=True, clear_after_read=True)\r\n second_conditional_output = tf.TensorArray(tf.int64, size=0, dynamic_size=True, clear_after_read=True)\r\n\r\n for i in range(2):\r\n if output1:\r\n first_conditional_output = first_conditional_output.write(i, i)\r\n if output2:\r\n second_conditional_output = second_conditional_output.write(i, i)\r\n\r\n outputs = (0,)\r\n if isinstance(output1, tf.Tensor) or output1:\r\n outputs = outputs + (first_conditional_output.stack(),)\r\n\r\n if isinstance(output2, tf.Tensor) or output2:\r\n outputs = outputs + (second_conditional_output.stack(),)\r\n\r\n return outputs\r\n```\r\n\r\nedit: PS: there is more documentation here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/index.md",
"@AndreasMadsen Thanks!! I basically already came across the same solution (which is I think the only one :smile:), but the issue here is that we will always have three outputs, which is not really what is wanted. I have to talk to the whole team to see if they are ok with this or not, If yes, we will update the code base accordingly otherwise we will skip this.",
"I'm trying to figure out if using `tf.boolean_mask` could work.",
"Hi @jplu, I will leave it up to you to decide what is \"wanted\". But you should consider the usage pattern when unpacking the output:\r\n\r\n**with always three outputs**:\r\n\r\n```python\r\[email protected]\r\ndef usage(hasOutput1, hasOutput2):\r\n (one, output1, output2) = output(hasOutput1, hasOutput2)\r\n\r\n tf.print(one)\r\n if hasOutput1:\r\n tf.print(output1)\r\n if hasOutput2:\r\n tf.print(output2)\r\n```\r\n\r\n**with always variable outputs**:\r\n\r\n```python\r\[email protected]\r\ndef usage(hasOutput1, hasOutput2):\r\n\r\n output1 = tf.zeros((0,))\r\n output2 = tf.zeros((0,))\r\n if hasOutput1 and hasOutput2:\r\n (one, output1, output2) = output(hasOutput1, hasOutput2)\r\n elif hasOutput1:\r\n (one, output1) = output(hasOutput1, hasOutput2)\r\n elif hasOutput2:\r\n (one, output2) = output(hasOutput1, hasOutput2)\r\n else:\r\n (one, ) = output(hasOutput1, hasOutput2)\r\n\r\n tf.print(one)\r\n if hasOutput1:\r\n tf.print(output1)\r\n if hasOutput2:\r\n tf.print(output2)\r\n```\r\n",
"You are totally right @AndreasMadsen!! Now everything should work like expected in the PR at the condition to use primitive booleans. If we all decide to fix the output size to 3 I will open a new PR for this.\r\n\r\nNevertheless, I've just spotted another problem with the usage of `TensorArray`, instead to have a tuple that looks like : `(batch_size, num_tokens, hidden_states_size) * num_layers` we get a tensor that looks like `(num_layers, batch_size, num_tokens, hidden_states_size)` which cause several other issues for later.",
"> Nevertheless, I've just spotted another problem with the usage of TensorArray, instead to have a tuple that looks like : (batch_size, num_tokens, hidden_states_size) * num_layers we get a tensor that looks like (num_layers, batch_size, num_tokens, hidden_states_size) which cause several other issues for later.\r\n\r\nTo avoid a breaking change, you could do it as a tuple of empty tensors. For the next major version, I would suggest it become one big tensor. You can swap `transpose`/swap the first and last axis, to make it mostly compatible, with indexing, unpack, etc..",
"Exactly!! This should be a good way to go if the decision of having always 3 outputs is taken.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,602 | 1,602 | CONTRIBUTOR | null | # 🐛 Bug
## To reproduce
```python
import tensorflow as tf
import transformers

bert = tf.function(transformers.TFBertForMaskedLM.from_pretrained('bert-base-uncased'))
for i in range(2):
    (_, hidden_state) = bert(tf.constant([[10,11,12]]), output_hidden_states=True)
    print(f'computed {i}')
```
Errors with
```
ValueError: not enough values to unpack (expected 2, got 1)
```
## Expected behavior
```
computed 0
computed 1
```
Same result as if `tf.function` was not used.
## Environment info
Example environment : https://colab.research.google.com/gist/AndreasMadsen/593df94a3319dee58bba33a26efedeb3/untitled6.ipynb
- `transformers` version: 3.0.2
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1+cu101 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
-----
## Details
The bug happens due to `cast_bool_to_primitive`, which was introduced in https://github.com/huggingface/transformers/commit/6e603cb7892b49a2cbbc10ba859759f92c3fb7a6. Before that, it was possible to get the `hidden_states` from Bert in TensorFlow graph/function mode.
Generally speaking, casting TensorFlow tensors to primitives is not a good practice, as it only works in eager mode. It is also completely unnecessary in this case, as using `if bool_tensor_scalar:` works perfectly fine.
```python
def print_bool(x):
    if x:
        print('True')
    else:
        print('False')
print_bool_graph = tf.function(print_bool)
print('eager:')
print_bool(True) # Prints True
print_bool(False) # Prints False
print_bool(tf.constant(True)) # Prints True
print_bool(tf.constant(False)) # Prints False
print('')
print('graph:')
print_bool_graph(True) # Prints True
print_bool_graph(False) # Prints False
print_bool_graph(tf.constant(True)) # Prints True
print_bool_graph(tf.constant(False)) # Prints False
```
I can see there are some cases where defaults are used. The right way to handle that is to implement the default handling upstream in the first `call()` method. A lesser way would be to implement it as:
```python
def cast_bool_to_primitive(x, default_value=False):
    if x is None:
        return default_value
    return x
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5815/timeline | completed | null | null |
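A compact sketch of the `tf.function` semantics the thread in #5815 above converges on: a Python bool argument is a trace-time constant (each distinct value gets its own traced graph, so branches may return differently structured outputs), whereas a `tf.Tensor` bool is lowered to `tf.cond`, whose branches must match. The toy `encode` function is illustrative only:

```python
import tensorflow as tf

@tf.function
def encode(x, output_hidden_states=False):
    hidden = x * 2
    if output_hidden_states:        # resolved at trace time for a Python bool
        return hidden, (x, hidden)  # output structure may differ per trace
    return (hidden,)

print(len(encode(tf.constant([1.0]))))                             # 1
print(len(encode(tf.constant([1.0]), output_hidden_states=True)))  # 2
# Passing tf.constant(True) instead would force a tf.cond, which requires
# both branches to return the same structure: the thread's failure mode.
```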
https://api.github.com/repos/huggingface/transformers/issues/5814 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5814/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5814/comments | https://api.github.com/repos/huggingface/transformers/issues/5814/events | https://github.com/huggingface/transformers/issues/5814 | 658,272,065 | MDU6SXNzdWU2NTgyNzIwNjU= | 5,814 | How to download original weights of gpt2 | {
"login": "nuradilK",
"id": 26120466,
"node_id": "MDQ6VXNlcjI2MTIwNDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/26120466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nuradilK",
"html_url": "https://github.com/nuradilK",
"followers_url": "https://api.github.com/users/nuradilK/followers",
"following_url": "https://api.github.com/users/nuradilK/following{/other_user}",
"gists_url": "https://api.github.com/users/nuradilK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nuradilK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nuradilK/subscriptions",
"organizations_url": "https://api.github.com/users/nuradilK/orgs",
"repos_url": "https://api.github.com/users/nuradilK/repos",
"events_url": "https://api.github.com/users/nuradilK/events{/privacy}",
"received_events_url": "https://api.github.com/users/nuradilK/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hey @nuradilK,\r\n\r\n1) The GPT2 weight initialization is a known warning bug we should remove, but which does not affect model performance, \r\nsee https://github.com/huggingface/transformers/issues/5800 e.g.\r\n\r\n2) It's very hard for us to track down low performance of specific user code (which data to use, data processing, ...). Do you mind posting a question like \"How to reproduce GPT2 PPL results\" on https://discuss.huggingface.co/ since it is not really a bug but more a high level question. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,600 | 1,600 | NONE | null | # ❓ Questions & Help
Hi, I have been trying to reproduce the losses from [this paper](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf), but I realized that my losses are lower than the ones reported in the paper. Also, I noticed a warning: *Some weights of GPT2LMHeadModel were not initialized from the model checkpoint at gpt2-medium and are newly initialized: ...* Does it mean that the original weights of GPT-2 were changed? If so, how can I download the original weights?
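For reference, a minimal sketch of how perplexity is commonly computed with the `transformers` GPT-2 head (the placeholder text and the single-pass evaluation are simplifications; this is not the paper's exact PTB protocol):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium").eval()

text = "Replace this with the PTB test split."  # placeholder, not the real data
input_ids = tokenizer(text, return_tensors="pt")["input_ids"]
with torch.no_grad():
    loss = model(input_ids, labels=input_ids)[0]  # mean token-level cross-entropy
print(torch.exp(loss))  # perplexity = exp(mean negative log-likelihood)
```
Mismatched preprocessing (e.g., PTB's `<unk>` handling and detokenization) is a frequent reason reproduced perplexities differ from the reported ones.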
For instance, my PPL on PTB is 35.20, while the PPL in the paper is 65.85. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5814/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5814/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5813 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5813/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5813/comments | https://api.github.com/repos/huggingface/transformers/issues/5813/events | https://github.com/huggingface/transformers/pull/5813 | 658,232,398 | MDExOlB1bGxSZXF1ZXN0NDUwMjI0MjE0 | 5,813 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5813?src=pr&el=h1) Report\n> Merging [#5813](https://codecov.io/gh/huggingface/transformers/pull/5813?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/89a78be51f1c2afd263da66ec76d3297432f0c2a&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5813?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5813 +/- ##\n=======================================\n Coverage 78.19% 78.19% \n=======================================\n Files 146 146 \n Lines 26047 26047 \n=======================================\n Hits 20367 20367 \n Misses 5680 5680 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5813?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5813/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5813/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5813?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5813?src=pr&el=footer). Last update [89a78be...1965bb0](https://codecov.io/gh/huggingface/transformers/pull/5813?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5813/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5813",
"html_url": "https://github.com/huggingface/transformers/pull/5813",
"diff_url": "https://github.com/huggingface/transformers/pull/5813.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5813.patch",
"merged_at": 1595232403000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5812 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5812/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5812/comments | https://api.github.com/repos/huggingface/transformers/issues/5812/events | https://github.com/huggingface/transformers/pull/5812 | 658,226,320 | MDExOlB1bGxSZXF1ZXN0NDUwMjE5MDY1 | 5,812 | Update README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5812?src=pr&el=h1) Report\n> Merging [#5812](https://codecov.io/gh/huggingface/transformers/pull/5812?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/89a78be51f1c2afd263da66ec76d3297432f0c2a&el=desc) will **increase** coverage by `0.31%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5812?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5812 +/- ##\n==========================================\n+ Coverage 78.19% 78.50% +0.31% \n==========================================\n Files 146 146 \n Lines 26047 26047 \n==========================================\n+ Hits 20367 20448 +81 \n+ Misses 5680 5599 -81 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5812?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5812/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5812/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5812/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5812/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5812?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5812?src=pr&el=footer). Last update [89a78be...a5f39ab](https://codecov.io/gh/huggingface/transformers/pull/5812?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | Fix missing "-" in metadata | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5812/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5812",
"html_url": "https://github.com/huggingface/transformers/pull/5812",
"diff_url": "https://github.com/huggingface/transformers/pull/5812.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5812.patch",
"merged_at": 1594909551000
} |
https://api.github.com/repos/huggingface/transformers/issues/5811 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5811/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5811/comments | https://api.github.com/repos/huggingface/transformers/issues/5811/events | https://github.com/huggingface/transformers/pull/5811 | 658,218,445 | MDExOlB1bGxSZXF1ZXN0NDUwMjEyNDE2 | 5,811 | [Longformer] fix longformer slow-down | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5811?src=pr&el=h1) Report\n> Merging [#5811](https://codecov.io/gh/huggingface/transformers/pull/5811?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/89a78be51f1c2afd263da66ec76d3297432f0c2a&el=desc) will **decrease** coverage by `0.80%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5811?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5811 +/- ##\n==========================================\n- Coverage 78.19% 77.38% -0.81% \n==========================================\n Files 146 146 \n Lines 26047 26047 \n==========================================\n- Hits 20367 20157 -210 \n- Misses 5680 5890 +210 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5811?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `89.21% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (ø)` | |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5811?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5811?src=pr&el=footer). Last update [89a78be...25c6419](https://codecov.io/gh/huggingface/transformers/pull/5811?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great find. +1 to tests!"
] | 1,594 | 1,594 | 1,594 | MEMBER | null | A drastic slow-down of Longformer after PR https://github.com/huggingface/transformers/pull/5219 was detected by @HHousen (thanks a lot!) here: https://github.com/huggingface/transformers/issues/4406#issuecomment-658483050.
After digging a bit into the code, it can be seen that the line
`is_global_attn = any(is_index_global_attn.flatten())` is the reason for the drastic slow-down.
Running the following benchmark on master:
```
python examples/benchmarking/run_benchmark.py --models allenai/longformer-base-4096 --no_memory --sequence_length 512 1024
```
yields the following result:
```
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
allenai/longformer-base-4096 8 512 0.647
allenai/longformer-base-4096 8 1024 1.284
--------------------------------------------------------------------------------
==================== ENVIRONMENT INFORMATION ====================
- transformers_version: 3.0.2
- framework: PyTorch
- use_torchscript: False
- framework_version: 1.5.0
- python_version: 3.7.7
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-07-16
- time: 14:07:02.719531
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32089
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 0
- use_tpu: False
```
Now on this branch the results are as follows:
```
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
allenai/longformer-base-4096 8 512 0.141
allenai/longformer-base-4096 8 1024 0.271
--------------------------------------------------------------------------------
==================== ENVIRONMENT INFORMATION ====================
- transformers_version: 3.0.2
- framework: PyTorch
- use_torchscript: False
- framework_version: 1.5.0
- python_version: 3.7.7
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-07-16
- time: 14:07:57.623378
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32089
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 0
- use_tpu: False
```
So this single line is responsible for a slow-down by a factor of roughly 4.
Moral of the story: never use the Python builtin `any()` on a PyTorch tensor; always use `tensor.any()`. I wonder if this is actually a known problem of PyTorch/Python. It might be good to check our code for more statements like this.
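To make the difference concrete, here is a minimal timing sketch (not from the PR; the tensor size is arbitrary) contrasting the Python builtin with the tensor method:
```python
import time
import torch

mask = torch.zeros(8, 4096, dtype=torch.bool)  # arbitrary attention-mask-like tensor

t0 = time.perf_counter()
_ = any(mask.flatten())   # builtin any(): Python-level loop, one 0-dim tensor per element
print("builtin any():", time.perf_counter() - t0)

t0 = time.perf_counter()
_ = mask.flatten().any()  # Tensor.any(): a single vectorized kernel
print("Tensor.any(): ", time.perf_counter() - t0)
```
On a GPU the gap is even larger, since the builtin forces a device-to-host sync for every element it inspects.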
Another lesson for me is that one should always run the benchmarks before and after doing such a big refactoring as in https://github.com/huggingface/transformers/pull/5219.
It's very simple to run the benchmark script for a model, and it usually takes only a couple of seconds. Ideally we should have performance regression tests to automatically detect such slow-downs.
Pinging @thomwolf @mfuntowicz @sshleifer @LysandreJik @ibeltagy - think this is quite interesting. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5811/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5811",
"html_url": "https://github.com/huggingface/transformers/pull/5811",
"diff_url": "https://github.com/huggingface/transformers/pull/5811.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5811.patch",
"merged_at": 1594909178000
} |
https://api.github.com/repos/huggingface/transformers/issues/5810 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5810/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5810/comments | https://api.github.com/repos/huggingface/transformers/issues/5810/events | https://github.com/huggingface/transformers/pull/5810 | 658,215,534 | MDExOlB1bGxSZXF1ZXN0NDUwMjA5OTA5 | 5,810 | Add missing arguments for BertWordPieceTokenizer | {
"login": "monologg",
"id": 28896432,
"node_id": "MDQ6VXNlcjI4ODk2NDMy",
"avatar_url": "https://avatars.githubusercontent.com/u/28896432?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monologg",
"html_url": "https://github.com/monologg",
"followers_url": "https://api.github.com/users/monologg/followers",
"following_url": "https://api.github.com/users/monologg/following{/other_user}",
"gists_url": "https://api.github.com/users/monologg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monologg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monologg/subscriptions",
"organizations_url": "https://api.github.com/users/monologg/orgs",
"repos_url": "https://api.github.com/users/monologg/repos",
"events_url": "https://api.github.com/users/monologg/events{/privacy}",
"received_events_url": "https://api.github.com/users/monologg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5810?src=pr&el=h1) Report\n> Merging [#5810](https://codecov.io/gh/huggingface/transformers/pull/5810?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/89a78be51f1c2afd263da66ec76d3297432f0c2a&el=desc) will **decrease** coverage by `0.88%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5810?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5810 +/- ##\n==========================================\n- Coverage 78.19% 77.30% -0.89% \n==========================================\n Files 146 146 \n Lines 26047 26047 \n==========================================\n- Hits 20367 20136 -231 \n- Misses 5680 5911 +231 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5810?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5810/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.32% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5810/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5810/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5810/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5810/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5810/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5810/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5810/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5810?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5810?src=pr&el=footer). Last update [89a78be...d7b3627](https://codecov.io/gh/huggingface/transformers/pull/5810?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@LysandreJik \r\n\r\nCan you merge this PR? Thank you:)"
] | 1,594 | 1,599 | 1,599 | CONTRIBUTOR | null | Hi:)
It seems that in the `__init__` of `BertTokenizerFast`, the `pad_token` and `mask_token` arguments are not passed through to `BertWordPieceTokenizer`.
I've added them in this commit. If these arguments were left out intentionally, please let me know :)
Thank you!
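For reference, in the current `__init__` quoted below, `pad_token` and `mask_token` are forwarded to the base class via `super().__init__(...)`, but they are missing from the `BertWordPieceTokenizer(...)` call: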
```python
class BertTokenizerFast(PreTrainedTokenizerFast):
def __init__(
self,
vocab_file,
do_lower_case=True,
unk_token="[UNK]",
sep_token="[SEP]",
pad_token="[PAD]",
cls_token="[CLS]",
mask_token="[MASK]",
clean_text=True,
tokenize_chinese_chars=True,
strip_accents=None,
wordpieces_prefix="##",
**kwargs
):
super().__init__(
BertWordPieceTokenizer(
vocab_file=vocab_file,
unk_token=unk_token,
sep_token=sep_token,
cls_token=cls_token,
clean_text=clean_text,
handle_chinese_chars=tokenize_chinese_chars,
strip_accents=strip_accents,
lowercase=do_lower_case,
wordpieces_prefix=wordpieces_prefix,
),
unk_token=unk_token,
sep_token=sep_token,
pad_token=pad_token,
cls_token=cls_token,
mask_token=mask_token,
**kwargs,
)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5810/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5810",
"html_url": "https://github.com/huggingface/transformers/pull/5810",
"diff_url": "https://github.com/huggingface/transformers/pull/5810.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5810.patch",
"merged_at": 1599482141000
} |
https://api.github.com/repos/huggingface/transformers/issues/5809 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5809/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5809/comments | https://api.github.com/repos/huggingface/transformers/issues/5809/events | https://github.com/huggingface/transformers/issues/5809 | 658,182,976 | MDU6SXNzdWU2NTgxODI5NzY= | 5,809 | TypeError: forward() got an unexpected keyword argument 'head_mask' | {
"login": "gandharvsuri",
"id": 31670690,
"node_id": "MDQ6VXNlcjMxNjcwNjkw",
"avatar_url": "https://avatars.githubusercontent.com/u/31670690?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gandharvsuri",
"html_url": "https://github.com/gandharvsuri",
"followers_url": "https://api.github.com/users/gandharvsuri/followers",
"following_url": "https://api.github.com/users/gandharvsuri/following{/other_user}",
"gists_url": "https://api.github.com/users/gandharvsuri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gandharvsuri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gandharvsuri/subscriptions",
"organizations_url": "https://api.github.com/users/gandharvsuri/orgs",
"repos_url": "https://api.github.com/users/gandharvsuri/repos",
"events_url": "https://api.github.com/users/gandharvsuri/events{/privacy}",
"received_events_url": "https://api.github.com/users/gandharvsuri/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"The `EncoderDecoderModel` does not work with longformer yet. Closing this in favor of https://github.com/huggingface/transformers/issues/4225 ."
] | 1,594 | 1,594 | 1,594 | NONE | null | I'm getting the above error while training an EncoderDecoderModel for Longformer.
Following is the code snippet from the training loop, followed by the model definition.
```python
output = model(input_ids=b_input_ids, attention_mask=b_input_masks,
               decoder_input_ids=b_decoder_input_ids, decoder_attention_mask=b_decoder_input_masks)
```
```python
model = EncoderDecoderModel.from_encoder_decoder_pretrained('allenai/longformer-base-4096', 'allenai/longformer-base-4096')
```
The crazy thing is I've not even defined `head_mask` and left it to take its default value of `None`.
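(A keyword argument is still forwarded even when its value is `None`, so any `forward` that does not declare `head_mask` will raise.) A minimal sketch of the failing call, with arbitrary token ids for illustration:
```python
import torch
from transformers import EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    'allenai/longformer-base-4096', 'allenai/longformer-base-4096'
)
ids = torch.tensor([[0, 9064, 2]])  # arbitrary token ids, illustration only
# head_mask=None is still passed as a keyword, so this raises
# TypeError: forward() got an unexpected keyword argument 'head_mask'
model(input_ids=ids, decoder_input_ids=ids, head_mask=None)
```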
Following is the complete error:
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-18-68f3d2859ad0> in <module>()
     32 optimizer.zero_grad()
     33
---> 34 output = model(input_ids = b_input_ids, attention_mask = b_input_masks, head_mask = None,decoder_input_ids = b_decoder_input_ids, decoder_attention_mask = b_decoder_input_masks )
     35 loss = output[0]
     36

2 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

TypeError: forward() got an unexpected keyword argument 'head_mask'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5809/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5808 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5808/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5808/comments | https://api.github.com/repos/huggingface/transformers/issues/5808/events | https://github.com/huggingface/transformers/pull/5808 | 658,179,055 | MDExOlB1bGxSZXF1ZXN0NDUwMTc4ODc1 | 5,808 | [Benchmark] Fix models without `architectures` param in config | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,594 | 1,594 | 1,594 | MEMBER | null | This PR fixes a bug when running benchmarks for models that have `config.architectures = None`, *e.g.* `longformer-base-4096`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5808/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5808/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5808",
"html_url": "https://github.com/huggingface/transformers/pull/5808",
"diff_url": "https://github.com/huggingface/transformers/pull/5808.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5808.patch",
"merged_at": 1594905311000
} |
https://api.github.com/repos/huggingface/transformers/issues/5807 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5807/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5807/comments | https://api.github.com/repos/huggingface/transformers/issues/5807/events | https://github.com/huggingface/transformers/issues/5807 | 658,175,416 | MDU6SXNzdWU2NTgxNzU0MTY= | 5,807 | While running finetune.py in seq2seq examples on a custom dataset, I am getting the following error. | {
"login": "laibamehnaz",
"id": 36405283,
"node_id": "MDQ6VXNlcjM2NDA1Mjgz",
"avatar_url": "https://avatars.githubusercontent.com/u/36405283?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laibamehnaz",
"html_url": "https://github.com/laibamehnaz",
"followers_url": "https://api.github.com/users/laibamehnaz/followers",
"following_url": "https://api.github.com/users/laibamehnaz/following{/other_user}",
"gists_url": "https://api.github.com/users/laibamehnaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laibamehnaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laibamehnaz/subscriptions",
"organizations_url": "https://api.github.com/users/laibamehnaz/orgs",
"repos_url": "https://api.github.com/users/laibamehnaz/repos",
"events_url": "https://api.github.com/users/laibamehnaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/laibamehnaz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @laibamehnaz , what is your batch size ?\r\n\r\nHere's my wild guess. I think one of your batch has empty lines. The batches are trimmed using the below function to remove excessive pad tokens\r\n\r\n```python3\r\ndef trim_batch(\r\n input_ids, pad_token_id, attention_mask=None,\r\n):\r\n \"\"\"Remove columns that are populated exclusively by pad_token_id\"\"\"\r\n keep_column_mask = input_ids.ne(pad_token_id).any(dim=0)\r\n if attention_mask is None:\r\n return input_ids[:, keep_column_mask]\r\n else:\r\n return (input_ids[:, keep_column_mask], attention_mask[:, keep_column_mask])\r\n```\r\n\r\nif your batch has empty lines then after tokenizing it'll contain only pad tokens, and when its passed to trim batch it'll remove all those pad tokens and the batch will be empty i.e of shape [batch_size, 0]. This could the reason for the `RuntimeError`\r\n\r\nyou can verify this with \r\n```python3\r\ntok = T5Tokenizer.from_pretrained(\"t5-base\")\r\ntext_batch = [\"\", \"\"]\r\nenc = tok(text_batch, max_length=64, padding=\"max_length\", return_tensors=\"pt\")\r\nprint(enc['input_ids'].shape)\r\nids = trim_batch(enc['input_ids'], tok.pad_token_id)\r\ninput_shape = ids.size()\r\nids = ids.view(-1, input_shape[-1])\r\n```\r\n\r\nit'll give the same `RuntimeError`.\r\n\r\nSo I think you should check your dataset for empty lines, that should solve this issue ",
"Hi @patil-suraj ,\r\nThank you so much for your quick response. I have checked my dataset already for the same issue, and it didn't show any empty lines, and that's why I was so confused. But sure, I will check again. Also, my batch_size=1. ",
"If your bs is 1 then most probably blank line is the reason ",
"@laibamehnaz can you just iterate through the dataloader with bs 1 and see when the shape comes out to be [bs, 0] this should allow you to pinpoint the error",
"Hi @patil-suraj, I tried what you suggested, but I do not get the same error. My smallest sentence is 1 word long, and there are no empty lines. ",
"Interesting, can you post the word here, maybe tokenizer is skipping it then. One more thing you can try is do forward pass only with that one word example and see if it gives the error",
"Also pinging @sshleifer ",
"Also, there is [this line](https://github.com/huggingface/transformers/blob/ab0b3fa15c159034a55ee95ac39b7a6ba4527c4a/examples/seq2seq/finetune.py#L133) in finetune.py, which suggests that if your decoder_input_ids are length 0 and you're not adding special tokens (as was the case previously for t5), the labels would get truncated to shape (1, 0).",
"https://github.com/huggingface/transformers/blob/ab0b3fa15c159034a55ee95ac39b7a6ba4527c4a/examples/seq2seq/finetune.py#L133",
"Right, so one word example is encoded as single id and due to shifting and no special token, shape becomes (1,0). So one more reason to automatically add `</s>` with t5 tokenizer ",
"@laibamehnaz so immediate solution for this issue is to add `</s>` at the end of every target example text",
"Alright, thank you so much. Will get back after trying it out. ",
"Works fine now. Thank you @patil-suraj.",
"hi @laibamehnaz , there's another bug in finetune.py when using T5. See issue #5987 and PR #5994",
"Fix is easy, just check the change in the PR",
"Thanks a lot!",
"That PR is merged now, you can now just clone the master or pull if you've already cloned",
"I did notice that. Thanks a lot!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,601 | 1,601 | NONE | null | ### This is what I'm getting:
Can I get some help? Thank you
```
Epoch 1:  34% 3410/10000 [1:43:10<3:19:22, 1.82s/it, loss=3.052, v_num=6]
Traceback (most recent call last):
  File "finetune.py", line 344, in <module>
    main(args)
  File "finetune.py", line 322, in main
    logger=logger,
  File "/content/drive/My Drive/Colab Notebooks/transformers/examples/lightning_base.py", line 339, in generic_train
    trainer.fit(model)
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 918, in fit
    self.single_gpu_train(model)
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/distrib_parts.py", line 176, in single_gpu_train
    self.run_pretrain_routine(model)
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1093, in run_pretrain_routine
    self.train()
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py", line 375, in train
    self.run_training_epoch()
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py", line 458, in run_training_epoch
    _outputs = self.run_training_batch(batch, batch_idx)
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py", line 634, in run_training_batch
    loss, batch_output = optimizer_closure()
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py", line 598, in optimizer_closure
    output_dict = self.training_forward(split_batch, batch_idx, opt_idx, self.hiddens)
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py", line 773, in training_forward
    output = self.model.training_step(*args)
  File "finetune.py", line 131, in training_step
    loss_tensors = self._step(batch)
  File "finetune.py", line 126, in _step
    outputs = self(source_ids, attention_mask=source_mask, decoder_input_ids=y_ids, labels=lm_labels,)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "finetune.py", line 112, in forward
    return self.model(input_ids, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_t5.py", line 1175, in forward
    return_tuple=return_tuple,
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_t5.py", line 692, in forward
    input_ids = input_ids.view(-1, input_shape[-1])
RuntimeError: cannot reshape tensor of 0 elements into shape [-1, 0] because the unspecified dimension size -1 can be any value and is ambiguous
Epoch 1:  34%|███▍      | 3410/10000 [1:43:10<3:19:23, 1.82s/it, loss=3.052, v_num=6]
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5807/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5806 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5806/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5806/comments | https://api.github.com/repos/huggingface/transformers/issues/5806/events | https://github.com/huggingface/transformers/issues/5806 | 658,140,970 | MDU6SXNzdWU2NTgxNDA5NzA= | 5,806 | BERT-viewer is broken for russian? | {
"login": "Dronte",
"id": 3256572,
"node_id": "MDQ6VXNlcjMyNTY1NzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3256572?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dronte",
"html_url": "https://github.com/Dronte",
"followers_url": "https://api.github.com/users/Dronte/followers",
"following_url": "https://api.github.com/users/Dronte/following{/other_user}",
"gists_url": "https://api.github.com/users/Dronte/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dronte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dronte/subscriptions",
"organizations_url": "https://api.github.com/users/Dronte/orgs",
"repos_url": "https://api.github.com/users/Dronte/repos",
"events_url": "https://api.github.com/users/Dronte/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dronte/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I don't see anything obviously wrong on our side, so you should ask the model authors (@deeppavlov and other members of @deepmipt) – maybe they uploaded the model without a trained final LM prediction layer, for instance.\r\n\r\nOff topic, but e.g. by contrast this translation model works very well for russian: https://huggingface.co/Helsinki-NLP/opus-mt-ru-en?text=%D0%A3+%D0%BC%D0%B5%D0%BD%D1%8F+%D0%B1%D1%8B%D0%BB+%D1%81%D0%B0%D0%BB%D0%B0%D1%82+%D0%BD%D0%B0+%D0%BE%D0%B1%D0%B5%D0%B4",
"Hi!\r\nWe (DeepPavlov) might have a problem with Russian model that was uploaded to HugginFace and we need to check it.\r\nEverything should be okay with MLM head if you use checkpoints from here:\r\nhttp://docs.deeppavlov.ai/en/master/features/pretrained_vectors.html#downloads\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,600 | 1,600 | NONE | null | # 🐛 Bug
Hi, I'm looking at BERT demos for Russian, for example:
https://huggingface.co/DeepPavlov/rubert-base-cased?text=%D0%AF+%5BMASK%5D+%D1%81%D0%B0%D0%BB%D0%B0%D1%82+%D0%BD%D0%B0+%D0%BE%D0%B1%D0%B5%D0%B4.
English version for comparison:
https://huggingface.co/bert-base-cased?text=I+%5BMASK%5D+salad+for+lunch.
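One way to separate a viewer problem from a model problem is to run the fill-mask pipeline locally (a quick sketch; the masked sentence mirrors the first demo above):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="DeepPavlov/rubert-base-cased")
print(fill_mask("Я [MASK] салат на обед."))  # "I [MASK] salad for lunch."
```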
In the first link, all token scores are 0.0 and the predictions are rubbish. Is everything fine with the viewer or the model? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5806/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/5806/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5805 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5805/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5805/comments | https://api.github.com/repos/huggingface/transformers/issues/5805/events | https://github.com/huggingface/transformers/pull/5805 | 658,127,381 | MDExOlB1bGxSZXF1ZXN0NDUwMTM0NDE3 | 5,805 | [Fix] Checkpoint saving and I/O for XLNetQASimple | {
"login": "jerrykuo7727",
"id": 13695584,
"node_id": "MDQ6VXNlcjEzNjk1NTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/13695584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jerrykuo7727",
"html_url": "https://github.com/jerrykuo7727",
"followers_url": "https://api.github.com/users/jerrykuo7727/followers",
"following_url": "https://api.github.com/users/jerrykuo7727/following{/other_user}",
"gists_url": "https://api.github.com/users/jerrykuo7727/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jerrykuo7727/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jerrykuo7727/subscriptions",
"organizations_url": "https://api.github.com/users/jerrykuo7727/orgs",
"repos_url": "https://api.github.com/users/jerrykuo7727/repos",
"events_url": "https://api.github.com/users/jerrykuo7727/events{/privacy}",
"received_events_url": "https://api.github.com/users/jerrykuo7727/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,601 | 1,601 | NONE | null | **1. [Fix] Create directories for model checkpoints**
Saving model checkpoints without creating the directories first leads to an error
(i.e., the output directory will not exist with the current script).
The [older version](https://github.com/huggingface/transformers/blob/3f5ccb183e3cfa755dea2dd2afd9abbf1a0f93b8/examples/run_squad.py#L193) of this part of the script works fine; a minimal sketch of the missing guard is below.
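A minimal sketch of that guard (the model class and checkpoint path are illustrative, not taken from the script):
```python
import os
from transformers import XLNetForQuestionAnsweringSimple

model = XLNetForQuestionAnsweringSimple.from_pretrained("xlnet-base-cased")

output_dir = "output/checkpoint-1000"   # illustrative path
os.makedirs(output_dir, exist_ok=True)  # create the directory before saving
model.save_pretrained(output_dir)       # errors out if output_dir does not exist
```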
**2. [Fix] Fix wrong input/output structures for XLNet**
Running XLNet with the current script leads to the error below.
> TypeError: forward() got an unexpected keyword argument 'cls_index'
The reason is that [AutoModelForQuestionAnswering](https://github.com/huggingface/transformers/blob/ba2400189b2242620868096ae49babf93bd9ce00/examples/question-answering/run_squad.py#L735) initializes `XLNetForQuestionAnsweringSimple`, while the script's input/output handling is written for `XLNetForQuestionAnswering` and passes arguments such as `cls_index` that the Simple variant does not accept. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5805/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5805",
"html_url": "https://github.com/huggingface/transformers/pull/5805",
"diff_url": "https://github.com/huggingface/transformers/pull/5805.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5805.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5804 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5804/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5804/comments | https://api.github.com/repos/huggingface/transformers/issues/5804/events | https://github.com/huggingface/transformers/pull/5804 | 658,120,899 | MDExOlB1bGxSZXF1ZXN0NDUwMTI4NzUz | 5,804 | Add MPNet | {
"login": "StillKeepTry",
"id": 6577458,
"node_id": "MDQ6VXNlcjY1Nzc0NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6577458?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StillKeepTry",
"html_url": "https://github.com/StillKeepTry",
"followers_url": "https://api.github.com/users/StillKeepTry/followers",
"following_url": "https://api.github.com/users/StillKeepTry/following{/other_user}",
"gists_url": "https://api.github.com/users/StillKeepTry/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StillKeepTry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StillKeepTry/subscriptions",
"organizations_url": "https://api.github.com/users/StillKeepTry/orgs",
"repos_url": "https://api.github.com/users/StillKeepTry/repos",
"events_url": "https://api.github.com/users/StillKeepTry/events{/privacy}",
"received_events_url": "https://api.github.com/users/StillKeepTry/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@StillKeepTry Don't forget to ping me if you need another round of review!\r\n\r\nBy the way, I haven't seen any refactorization for code like this:\r\n```python\r\nfrom .modeling_bert import (\r\n BertEmbeddings, \r\n BertPreTrainedModel, \r\n BertIntermediate, \r\n BertOutput, \r\n BertSelfAttention,\r\n BertSelfOutput, \r\n BertAttention, \r\n BertModel\r\n)\r\n```\r\nI recommend you copy the code like `BertEmbeddings` and paste them into your own modeling file and rename it to `MPNetEmebedding`.",
"Would you please add tests for MPNet? You can do it following other models.",
"Also there are several CI fails, please fix it and we can move to the next step. Thanks for your great contribution!",
"Hey @StillKeepTry - there seems to have been a problem with a git rebase. We sadly cannot review 905 changed files. Can you open a new PR that only has the necessary changes? ",
"> Hey @StillKeepTry - there seems to have been a problem with a git rebase. We sadly cannot review 905 changed files. Can you open a new PR that only has the necessary changes?\r\n\r\nok",
"The new PR is submitted at [https://github.com/huggingface/transformers/pull/8804](https://github.com/huggingface/transformers/pull/8804)\r\n\r\n@patrickvonplaten "
] | 1,594 | 1,606 | 1,606 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5804/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5804",
"html_url": "https://github.com/huggingface/transformers/pull/5804",
"diff_url": "https://github.com/huggingface/transformers/pull/5804.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5804.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5803 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5803/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5803/comments | https://api.github.com/repos/huggingface/transformers/issues/5803/events | https://github.com/huggingface/transformers/issues/5803 | 658,100,264 | MDU6SXNzdWU2NTgxMDAyNjQ= | 5,803 | Hosted Inference API: Error loading tokenizer Can't load config | {
"login": "sampathkethineedi",
"id": 30796835,
"node_id": "MDQ6VXNlcjMwNzk2ODM1",
"avatar_url": "https://avatars.githubusercontent.com/u/30796835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sampathkethineedi",
"html_url": "https://github.com/sampathkethineedi",
"followers_url": "https://api.github.com/users/sampathkethineedi/followers",
"following_url": "https://api.github.com/users/sampathkethineedi/following{/other_user}",
"gists_url": "https://api.github.com/users/sampathkethineedi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sampathkethineedi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sampathkethineedi/subscriptions",
"organizations_url": "https://api.github.com/users/sampathkethineedi/orgs",
"repos_url": "https://api.github.com/users/sampathkethineedi/repos",
"events_url": "https://api.github.com/users/sampathkethineedi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sampathkethineedi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I can load the pipeline locally so I am not sure what's happening here, indeed. Will post here when we know more.",
"Hey @julien-c, thanks for the quick response!\r\n\r\nI'm assuming the inference API is using the `transformers-cli serve` ?\r\nWhen I do the same locally, I'm able to get results at **localhost:8888/forward** using `POST -d {\"inputs\": [\"< Text >\"]}` \r\nBut when I try it on the https://api-inference.huggingface.co/models/sampathkethineedi/industry-classification-api, it doesn't work.\r\n\r\nThe Hosted Inference API has the same structure as `transformers-cli serve` or is it any different?",
"[https://huggingface.co/sampathkethineedi/industry-classification-api](https://huggingface.co/sampathkethineedi/industry-classification-api)\n\nI just checked and the API is working. Would love to know what was causing the problem. Closing the issue.",
"@sampathkethineedi can i get dataset u trained on "
] | 1,594 | 1,601 | 1,595 | NONE | null | Hi,
[https://huggingface.co/sampathkethineedi/industry-classification-api](https://huggingface.co/sampathkethineedi/industry-classification-api)
I uploaded my classification model fine-tuned on BERT. There is no issue running the model from the hub and using the ‘sentiment-analysis’ pipeline, but there seems to be some problem with the Hosted Inference API.
```
Error loading tokenizer Can't load config for 'sampathkethineedi/industry-classification-api classification'. Make sure that: - 'sampathkethineedi/industry-classification-api' is a correct model identifier listed on 'https://huggingface.co/models' - or 'sampathkethineedi/industry-classification-api' is the correct path to a directory containing a config.json file OSError("Can't load config for 'sampathkethineedi/industry-classification-api'. Make sure that:\n\n- 'sampathkethineedi/industry-classification-api' is a correct model identifier listed on 'https://huggingface.co/models'\n\n- or 'sampathkethineedi/industry-classification-api' is the correct path to a directory containing a config.json file\n\n")
```
Can someone help me with this?
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5803/timeline | completed | null | null |
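As context for the record above, the local check the author describes — loading the uploaded model through the text-classification pipeline — can be sketched as below. This is a minimal sketch assuming a transformers v3.x environment; the model id comes from the thread, but the input sentence is an arbitrary example.

```python
from transformers import pipeline

# "sentiment-analysis" is an alias for the text-classification pipeline;
# the model id is the one uploaded in the issue above.
classifier = pipeline(
    "sentiment-analysis",
    model="sampathkethineedi/industry-classification-api",
    tokenizer="sampathkethineedi/industry-classification-api",
)

print(classifier("The company reported strong quarterly earnings in its cloud division."))
```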
https://api.github.com/repos/huggingface/transformers/issues/5802 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5802/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5802/comments | https://api.github.com/repos/huggingface/transformers/issues/5802/events | https://github.com/huggingface/transformers/pull/5802 | 658,069,672 | MDExOlB1bGxSZXF1ZXN0NDUwMDg0MjQ1 | 5,802 | [WIP - Benchmark] Add generate function | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,600 | 1,600 | MEMBER | null | This PR adds the `generate` function to Benchmarks. You can try it out by running:
```
python examples/benchmarking/run_benchmark.py --models gpt2 --batch_sizes 1 --sequence_lengths 20 --no_inference --generate
```
## TODO
- [ ] Add for TF as well | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5802/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5802",
"html_url": "https://github.com/huggingface/transformers/pull/5802",
"diff_url": "https://github.com/huggingface/transformers/pull/5802.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5802.patch",
"merged_at": null
} |
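The command in the PR description above exercises the benchmark through the CLI; the same utilities can also be driven from Python. A minimal sketch against the v3.x benchmark API follows — note it measures inference only, since the `generate` measurement is precisely what this WIP PR adds:

```python
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

# Mirrors the CLI invocation from the PR description.
args = PyTorchBenchmarkArguments(
    models=["gpt2"],
    batch_sizes=[1],
    sequence_lengths=[20],
)
benchmark = PyTorchBenchmark(args)
results = benchmark.run()  # prints speed/memory tables for each model
```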
https://api.github.com/repos/huggingface/transformers/issues/5801 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5801/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5801/comments | https://api.github.com/repos/huggingface/transformers/issues/5801/events | https://github.com/huggingface/transformers/pull/5801 | 658,048,854 | MDExOlB1bGxSZXF1ZXN0NDUwMDY2MzU4 | 5,801 | [Benchmark] fix benchmark non standard model | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5801?src=pr&el=h1) Report\n> Merging [#5801](https://codecov.io/gh/huggingface/transformers/pull/5801?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8ce610bc96ace8be6b514c0f2c3fb0f76eb7ee15&el=desc) will **increase** coverage by `0.08%`.\n> The diff coverage is `75.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5801?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5801 +/- ##\n==========================================\n+ Coverage 77.33% 77.42% +0.08% \n==========================================\n Files 146 146 \n Lines 26047 26047 \n==========================================\n+ Hits 20143 20166 +23 \n+ Misses 5904 5881 -23 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5801?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/benchmark/benchmark\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5801/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3RmLnB5) | `65.03% <50.00%> (+3.49%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/5801/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `81.10% <100.00%> (+7.08%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5801/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5801/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5801/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.34% <0.00%> (-6.55%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5801/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.70% <0.00%> (-2.76%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5801/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5801/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.95% <0.00%> (-0.81%)` | :arrow_down: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/5801/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5801?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5801?src=pr&el=footer). Last update [8ce610b...dbf09d1](https://codecov.io/gh/huggingface/transformers/pull/5801?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Pinging @LysandreJik for notification.",
"Nice!"
] | 1,594 | 1,595 | 1,594 | MEMBER | null | This PR fixes a typo in Benchmark that allows loading specific architectures and not just the base model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5801/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5801",
"html_url": "https://github.com/huggingface/transformers/pull/5801",
"diff_url": "https://github.com/huggingface/transformers/pull/5801.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5801.patch",
"merged_at": 1594894391000
} |
https://api.github.com/repos/huggingface/transformers/issues/5800 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5800/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5800/comments | https://api.github.com/repos/huggingface/transformers/issues/5800/events | https://github.com/huggingface/transformers/issues/5800 | 658,023,693 | MDU6SXNzdWU2NTgwMjM2OTM= | 5,800 | GPT2 weights don't initialize from checkpoint | {
"login": "alma-lindborg",
"id": 8694790,
"node_id": "MDQ6VXNlcjg2OTQ3OTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8694790?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alma-lindborg",
"html_url": "https://github.com/alma-lindborg",
"followers_url": "https://api.github.com/users/alma-lindborg/followers",
"following_url": "https://api.github.com/users/alma-lindborg/following{/other_user}",
"gists_url": "https://api.github.com/users/alma-lindborg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alma-lindborg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alma-lindborg/subscriptions",
"organizations_url": "https://api.github.com/users/alma-lindborg/orgs",
"repos_url": "https://api.github.com/users/alma-lindborg/repos",
"events_url": "https://api.github.com/users/alma-lindborg/events{/privacy}",
"received_events_url": "https://api.github.com/users/alma-lindborg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @almaLindborg,\r\n\r\nThis is a known warning. We should probably disable this...a couple of other models have it as well. But the model works as intended, so no worries about the message.\r\n\r\nAlso pinging @sshleifer for notification."
] | 1,594 | 1,594 | 1,594 | NONE | null | OS: OSX 10.15.5 (Catalina)
Transformers version 3.0.2
I'm running into this warning when I'm trying to initialize a pre-trained GPT2 model.
This looks a bit worrying to me because it looks like it ignores all the pre-trained attention heads, or am I missing something here?
<img width="1087" alt="Screenshot 2020-07-16 at 11 19 10" src="https://user-images.githubusercontent.com/8694790/87654081-b837c900-c756-11ea-925e-106314ad9942.png">
Any idea of what's gone wrong? I've been running the same code on another computer earlier without encountering this problem, but on my current setup I haven't been able to get around it. I also tried deleting the downloaded files from the cache and re-loading the models with no luck.
Any help is very appreciated! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5800/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5800/timeline | completed | null | null |
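One way to confirm that the warning in the issue above is benign, as the maintainers state, is to compare the pretrained checkpoint against a randomly initialized model of the same architecture. A minimal sketch, assuming the stock `gpt2` checkpoint and PyTorch; the input sentence is arbitrary:

```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
pretrained = GPT2LMHeadModel.from_pretrained("gpt2")  # emits the warning above
random_init = GPT2LMHeadModel(GPT2Config())           # no pretrained weights at all

input_ids = tokenizer.encode("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
with torch.no_grad():
    loss_pretrained = pretrained(input_ids, labels=input_ids)[0]
    loss_random = random_init(input_ids, labels=input_ids)[0]

# The pretrained loss should be dramatically lower, confirming that the
# attention weights were in fact loaded despite the warning.
print(loss_pretrained.item(), loss_random.item())
```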
https://api.github.com/repos/huggingface/transformers/issues/5799 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5799/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5799/comments | https://api.github.com/repos/huggingface/transformers/issues/5799/events | https://github.com/huggingface/transformers/issues/5799 | 657,942,620 | MDU6SXNzdWU2NTc5NDI2MjA= | 5,799 | Issue when load pretrained weights | {
"login": "mt324010",
"id": 35977320,
"node_id": "MDQ6VXNlcjM1OTc3MzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/35977320?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mt324010",
"html_url": "https://github.com/mt324010",
"followers_url": "https://api.github.com/users/mt324010/followers",
"following_url": "https://api.github.com/users/mt324010/following{/other_user}",
"gists_url": "https://api.github.com/users/mt324010/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mt324010/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mt324010/subscriptions",
"organizations_url": "https://api.github.com/users/mt324010/orgs",
"repos_url": "https://api.github.com/users/mt324010/repos",
"events_url": "https://api.github.com/users/mt324010/events{/privacy}",
"received_events_url": "https://api.github.com/users/mt324010/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Cannot reproduce. The command:\r\n\r\n```python \r\nfrom transformers import AutoModelWithLMHead\r\nmodel = AutoModelWithLMHead.from_pretrained(\"bert-base-chinese\")\r\n```\r\n\r\nworks fine on master. Can you update to v3.0.2 `pip install --upgrade transformers` and check again? :-) ",
 Cannot reproduce.">
"> Cannot reproduce. The command:\r\n> \r\n> ```python\r\n> from transformers import AutoModelWithLMHead\r\n> model = AutoModelWithLMHead.from_pretrained(\"bert-base-chinese\")\r\n> ```\r\n> \r\n> works fine on master. Can you update to v3.0.2 `pip install --upgrade transformers` and check again? :-)\r\n\r\nIt still doesn't work for me. I tried to download the weights directly, but another error occurred...\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte",
"Can you please post your environment info here `python src/transformers/commands/env.py`?",
"I'm seeing the same error when trying to load a GTP2 checkpoint model (using both `GPT2Model` and `AutoModel`):\r\n\r\n```\r\nmodel = GPT2Model.from_pretrained('./test_01/test_01.index', from_tf=True) # throws UnicodeDecodeError\r\nmodel = GPT2Model.from_pretrained('./test_01/test_01.index') # throws UnicodeDecodeError\r\nmodel = AutoModel.from_pretrained('./test_01/test_01.index', from_tf=True) # throws UnicodeDecodeError\r\nmodel = AutoModel.from_pretrained('./test_01/test_01.index') # throws UnicodeDecodeError\r\n```\r\n\r\nI could probably try every possible variation of loading that model and hit the same error.\r\n\r\nI've also used checkpoint models that, in theory, should work.\r\n\r\nIf I use `GPT2LMHeadModel.from_pretrained('gpt2-medium')` (or any thing that allows me to load a model by name) it works fine.\r\n\r\nMy env:\r\n\r\n```\r\n- `transformers` version: 3.0.2\r\n- Platform: macOS-10.15.5-x86_64-i386-64bit\r\n- Python version: 3.8.3\r\n- PyTorch version (GPU?): 1.6.0 (False)\r\n- Tensorflow version (GPU?): 2.3.0 (False)\r\n- Using GPU in script?: No\r\n- Using distributed or parallel set-up in script?: Not sure\r\n```\r\n",
"@elbowdonkey, \r\n\r\ncan you try just running:\r\n```python \r\nmodel = GPT2Model.from_pretrained('./test_01/\", from_tf=True)\r\n```\r\n\r\nwhere the relevant files can be found in `test_01`?",
"I get a different error:\r\n\r\n```python\r\nmodel = GPT2Model.from_pretrained(\"./test_02/\", from_tf=True)\r\n2020-08-10 12:32:10.047629: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA\r\nTo enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n2020-08-10 12:32:10.064526: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f8eb76b8f80 initialized for platform Host (this does not guarantee that XLA will be used). Devices:\r\n2020-08-10 12:32:10.064543: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version\r\n2020-08-10 12:32:10.071086: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.\r\n2020-08-10 12:32:27.584426: W tensorflow/core/util/tensor_slice_reader.cc:95] Could not open ./test_02/pytorch_model.bin: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/--/.pyenv/versions/3.8.3/lib/python3.8/site-packages/transformers/modeling_utils.py\", line 808, in from_pretrained\r\n model = load_tf2_checkpoint_in_pytorch_model(model, resolved_archive_file, allow_missing_keys=True)\r\n File \"/Users/--/.pyenv/versions/3.8.3/lib/python3.8/site-packages/transformers/modeling_tf_pytorch_utils.py\", line 261, in load_tf2_checkpoint_in_pytorch_model\r\n tf_model.load_weights(tf_checkpoint_path, by_name=True)\r\n File \"/Users/--/.pyenv/versions/3.8.3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py\", line 2204, in load_weights\r\n with h5py.File(filepath, 'r') as f:\r\n File \"/Users/--/.pyenv/versions/3.8.3/lib/python3.8/site-packages/h5py/_hl/files.py\", line 406, in __init__\r\n fid = make_fid(name, mode, userblock_size,\r\n File \"/Users/--/.pyenv/versions/3.8.3/lib/python3.8/site-packages/h5py/_hl/files.py\", line 173, in make_fid\r\n fid = h5f.open(name, flags, fapl=fapl)\r\n File \"h5py/_objects.pyx\", line 54, in h5py._objects.with_phil.wrapper\r\n File \"h5py/_objects.pyx\", line 55, in h5py._objects.with_phil.wrapper\r\n File \"h5py/h5f.pyx\", line 88, in h5py.h5f.open\r\nOSError: Unable to open file (file signature not found)\r\n```\r\n\r\nThe model I'm trying to use is a model that was converted from a checkpoint to a pytorch model. I have no idea what kind of checkpoint model it was (it has several files: `checkpoint` and `vocab.bpe`, `hparams.json`, and `test_02.data-00000-of-00001`, among others.)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi,\r\n\r\nI would like to request that this ticket be opened back up. I'm having the same issue but with the default pretrained model:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nOSError Traceback (most recent call last)\r\n~\\Anaconda3\\lib\\site-packages\\transformers\\modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 654 if resolved_archive_file is None:\r\n--> 655 raise EnvironmentError\r\n 656 except EnvironmentError:\r\n\r\nOSError: \r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOSError Traceback (most recent call last)\r\n<ipython-input-81-7e9fe0224671> in <module>\r\n 3 #Load AutoModel from huggingface model repository\r\n 4 tokenizer = AutoTokenizer.from_pretrained(\"sentence-transformers/bert-base-nli-mean-tokens\")\r\n----> 5 model = AutoModel.from_pretrained(\"sentence-transformers/bert-base-nli-mean-tokens\", from_tf=True)\r\n\r\n~\\Anaconda3\\lib\\site-packages\\transformers\\modeling_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 500 for config_class, model_class in MODEL_MAPPING.items():\r\n 501 if isinstance(config, config_class):\r\n--> 502 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)\r\n 503 raise ValueError(\r\n 504 \"Unrecognized configuration class {} for this kind of AutoModel: {}.\\n\"\r\n\r\n~\\Anaconda3\\lib\\site-packages\\transformers\\modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 660 f\"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a file named one of {WEIGHTS_NAME}, {TF2_WEIGHTS_NAME}, {TF_WEIGHTS_NAME}.\\n\\n\"\r\n 661 )\r\n--> 662 raise EnvironmentError(msg)\r\n 663 \r\n 664 if resolved_archive_file == archive_file:\r\n\r\nOSError: Can't load weights for 'sentence-transformers/bert-base-nli-mean-tokens'. Make sure that:\r\n\r\n- 'sentence-transformers/bert-base-nli-mean-tokens' is a correct model identifier listed on 'https://huggingface.co/models'\r\n\r\n- or 'sentence-transformers/bert-base-nli-mean-tokens' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.\r\n```\r\n\r\n**To reproduce:**\r\n```\r\nfrom transformers import AutoTokenizer, AutoModel\r\n\r\n#Load AutoModel from huggingface model repository\r\ntokenizer = AutoTokenizer.from_pretrained(\"sentence-transformers/bert-base-nli-mean-tokens\")\r\nmodel = AutoModel.from_pretrained(\"sentence-transformers/bert-base-nli-mean-tokens\", from_tf=True)\r\n```\r\n\r\nMy env:\r\n\r\n- `transformers` version: 3.0.2\r\n- Platform: Window 10 Enterprise, version 1909, 16GB RAM, 64 Bit OS, x64-based processor\r\n- Python version: 3.8.3\r\n- Torch version: 1.6.0+cpu",
"Hey @Ecanlilar,\r\n\r\nThis model exists only in PT so either you do:\r\n\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModel\r\n\r\n#Load AutoModel from huggingface model repository\r\ntokenizer = AutoTokenizer.from_pretrained(\"sentence-transformers/bert-base-nli-mean-tokens\")\r\nmodel = AutoModel.from_pretrained(\"sentence-transformers/bert-base-nli-mean-tokens\")\r\n```\r\n\r\nor \r\n\r\n```python\r\nfrom transformers import AutoTokenizer, TFAutoModel\r\n\r\n#Load AutoModel from huggingface model repository\r\ntokenizer = AutoTokenizer.from_pretrained(\"sentence-transformers/bert-base-nli-mean-tokens\")\r\nmodel = TFAutoModel.from_pretrained(\"sentence-transformers/bert-base-nli-mean-tokens\", from_pt=True)\r\n```",
"> Hey @Ecanlilar,\r\n> \r\n> This model exists only in PT so either you do:\r\n> \r\n> ```python\r\n> from transformers import AutoTokenizer, AutoModel\r\n> \r\n> #Load AutoModel from huggingface model repository\r\n> tokenizer = AutoTokenizer.from_pretrained(\"sentence-transformers/bert-base-nli-mean-tokens\")\r\n> model = AutoModel.from_pretrained(\"sentence-transformers/bert-base-nli-mean-tokens\")\r\n> ```\r\n> \r\n> or\r\n> \r\n> ```python\r\n> from transformers import AutoTokenizer, TFAutoModel\r\n> \r\n> #Load AutoModel from huggingface model repository\r\n> tokenizer = AutoTokenizer.from_pretrained(\"sentence-transformers/bert-base-nli-mean-tokens\")\r\n> model = TFAutoModel.from_pretrained(\"sentence-transformers/bert-base-nli-mean-tokens\", from_pt=True)\r\n> ```\r\n\r\nThis isn't working for me. I'm using the latest version of the transformers library (4.10.2). I'm getting the same error as Ecanlilar."
] | 1,594 | 1,631 | 1,603 | NONE | null | I got the following error when running:
AutoModelWithLMHead.from_pretrained("bert-base-chinese")
OSError: Can't load weights for 'bert-base-chinese'. Make sure that:
- 'bert-base-chinese' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'bert-base-chinese' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5799/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5798 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5798/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5798/comments | https://api.github.com/repos/huggingface/transformers/issues/5798/events | https://github.com/huggingface/transformers/pull/5798 | 657,877,381 | MDExOlB1bGxSZXF1ZXN0NDQ5OTIxMTc4 | 5,798 | Lightning Updates for v0.8.5 | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"@sshleifer have any guidance on these two errors?\r\n\r\n\r\n## T5\r\n\r\n```python\r\n________________ test_finetune[patrickvonplaten/t5-tiny-random] ________________\r\n[gw3] linux -- Python 3.6.11 /usr/local/bin/python\r\n\r\nmodel = 'patrickvonplaten/t5-tiny-random'\r\n\r\n @pytest.mark.parametrize(\r\n [\"model\"], [pytest.param(T5_TINY), pytest.param(BART_TINY), pytest.param(MBART_TINY), pytest.param(MARIAN_TINY)]\r\n )\r\n def test_finetune(model):\r\n args_d: dict = CHEAP_ARGS.copy()\r\n task = \"translation\" if model in [MBART_TINY, MARIAN_TINY] else \"summarization\"\r\n tmp_dir = make_test_data_dir()\r\n output_dir = tempfile.mkdtemp(prefix=\"output_\")\r\n args_d.update(\r\n data_dir=tmp_dir,\r\n model_name_or_path=model,\r\n tokenizer_name=None,\r\n train_batch_size=2,\r\n eval_batch_size=2,\r\n output_dir=output_dir,\r\n do_predict=True,\r\n task=task,\r\n src_lang=\"en_XX\",\r\n tgt_lang=\"ro_RO\",\r\n freeze_encoder=True,\r\n freeze_embeds=True,\r\n )\r\n assert \"n_train\" in args_d\r\n args = argparse.Namespace(**args_d)\r\n> module = main(args)\r\n\r\nexamples/seq2seq/test_seq2seq_examples.py:233: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\nexamples/seq2seq/finetune.py:298: in main\r\n model: SummarizationModule = SummarizationModule(args)\r\nexamples/seq2seq/finetune.py:95: in __init__\r\n freeze_params(self.model.model.encoder) # TODO: this will break for t5\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = T5ForConditionalGeneration(\r\n (shared): Embedding(32128, 64)\r\n (encoder): T5Stack(\r\n (embed_tokens): Embedding(32128...\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (lm_head): Linear(in_features=64, out_features=32128, bias=False)\r\n)\r\nname = 'model'\r\n\r\n def __getattr__(self, name):\r\n if '_parameters' in self.__dict__:\r\n _parameters = self.__dict__['_parameters']\r\n if name in _parameters:\r\n return _parameters[name]\r\n if '_buffers' in self.__dict__:\r\n _buffers = self.__dict__['_buffers']\r\n if name in _buffers:\r\n return _buffers[name]\r\n if '_modules' in self.__dict__:\r\n modules = self.__dict__['_modules']\r\n if name in modules:\r\n return modules[name]\r\n raise AttributeError(\"'{}' object has no attribute '{}'\".format(\r\n> type(self).__name__, name))\r\nE AttributeError: 'T5ForConditionalGeneration' object has no attribute 'model'\r\n\r\n/usr/local/lib/python3.6/site-packages/torch/nn/modules/module.py:594: AttributeError\r\n```\r\n\r\n## MBart\r\n```python\r\n_____________________ test_finetune[sshleifer/tiny-mbart] ______________________\r\n[gw3] linux -- Python 3.6.11 /usr/local/bin/python\r\n\r\nmodel = 'sshleifer/tiny-mbart'\r\n\r\n @pytest.mark.parametrize(\r\n [\"model\"], [pytest.param(T5_TINY), pytest.param(BART_TINY), pytest.param(MBART_TINY), pytest.param(MARIAN_TINY)]\r\n )\r\n def test_finetune(model):\r\n args_d: dict = CHEAP_ARGS.copy()\r\n task = \"translation\" if model in [MBART_TINY, MARIAN_TINY] else \"summarization\"\r\n tmp_dir = make_test_data_dir()\r\n output_dir = tempfile.mkdtemp(prefix=\"output_\")\r\n args_d.update(\r\n data_dir=tmp_dir,\r\n model_name_or_path=model,\r\n tokenizer_name=None,\r\n train_batch_size=2,\r\n eval_batch_size=2,\r\n output_dir=output_dir,\r\n do_predict=True,\r\n task=task,\r\n src_lang=\"en_XX\",\r\n tgt_lang=\"ro_RO\",\r\n freeze_encoder=True,\r\n freeze_embeds=True,\r\n )\r\n assert \"n_train\" in args_d\r\n args = argparse.Namespace(**args_d)\r\n> module = 
main(args)\r\n\r\nexamples/seq2seq/test_seq2seq_examples.py:233: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\nexamples/seq2seq/finetune.py:324: in main\r\n logger=logger,\r\nexamples/lightning_base.py:312: in generic_train\r\n trainer.fit(model)\r\n/usr/local/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py:1038: in fit\r\n model.setup('fit')\r\nexamples/lightning_base.py:125: in setup\r\n dataloader = self.get_dataloader(\"train\", train_batch_size)\r\nexamples/seq2seq/finetune.py:193: in get_dataloader\r\n dataset = self.get_dataset(type_path)\r\nexamples/seq2seq/finetune.py:188: in get_dataset\r\n **self.dataset_kwargs,\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = <seq2seq.utils.SummarizationDataset object at 0x7ff21a4592e8>\r\ntokenizer = <transformers.tokenization_bart.MBartTokenizer object at 0x7ff21f7c0b00>\r\ndata_dir = PosixPath('/tmp/tmpmc70afs6'), type_path = 'train'\r\nmax_source_length = 12, max_target_length = 12, n_obs = None\r\noverwrite_cache = False, prefix = '', src_lang = None, tgt_lang = None\r\n\r\n def __init__(\r\n self,\r\n tokenizer,\r\n data_dir,\r\n type_path=\"train\",\r\n max_source_length=1024,\r\n max_target_length=56,\r\n n_obs=None,\r\n overwrite_cache=False,\r\n prefix=\"\",\r\n src_lang=None,\r\n tgt_lang=None,\r\n ):\r\n super().__init__()\r\n # FIXME: the rstrip logic strips all the chars, it seems.\r\n tok_name = tokenizer.__class__.__name__.lower().rstrip(\"tokenizer\")\r\n if hasattr(tokenizer, \"set_lang\") and src_lang is not None:\r\n tokenizer.set_lang(src_lang) # HACK: only applies to mbart\r\n self.source = encode_file(\r\n tokenizer,\r\n os.path.join(data_dir, type_path + \".source\"),\r\n max_source_length,\r\n overwrite_cache=overwrite_cache,\r\n prefix=prefix,\r\n tok_name=tok_name,\r\n )\r\n tgt_path = os.path.join(data_dir, type_path + \".target\")\r\n if hasattr(tokenizer, \"set_lang\"):\r\n> assert tgt_lang is not None, \"--tgt_lang must be passed to build a translation\"\r\nE AssertionError: --tgt_lang must be passed to build a translation\r\n\r\nexamples/seq2seq/utils.py:112: AssertionError\r\n```",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5798?src=pr&el=h1) Report\n> Merging [#5798](https://codecov.io/gh/huggingface/transformers/pull/5798?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/615be03f9d961c0c9722fe10e7830e011066772e&el=desc) will **decrease** coverage by `0.18%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5798?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5798 +/- ##\n==========================================\n- Coverage 78.66% 78.48% -0.19% \n==========================================\n Files 146 146 \n Lines 26200 26200 \n==========================================\n- Hits 20611 20563 -48 \n- Misses 5589 5637 +48 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5798?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5798/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5798/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5798/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5798?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5798?src=pr&el=footer). Last update [615be03...ee864a0](https://codecov.io/gh/huggingface/transformers/pull/5798?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Merging this now.\r\ncc @moscow25 this bumps us to `pytorch_lightning==0.8.5`, let us know if any issues.\r\ncc @clmnt , @patil-suraj, @williamFalcon \r\n\r\nThanks for the big PR @nateraw and @williamFalcon !",
"Thanks @sshleifer -- `0.8.5` has been good for us this week. Much appreciated. "
] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | Fixing #5361 ...battling with unittests. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5798/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5798",
"html_url": "https://github.com/huggingface/transformers/pull/5798",
"diff_url": "https://github.com/huggingface/transformers/pull/5798.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5798.patch",
"merged_at": 1595040187000
} |
https://api.github.com/repos/huggingface/transformers/issues/5797 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5797/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5797/comments | https://api.github.com/repos/huggingface/transformers/issues/5797/events | https://github.com/huggingface/transformers/issues/5797 | 657,876,278 | MDU6SXNzdWU2NTc4NzYyNzg= | 5,797 | Can I use the pretrained BERT-Base model directly to predict the isNextSentence task? | {
"login": "nomadlx",
"id": 10513140,
"node_id": "MDQ6VXNlcjEwNTEzMTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/10513140?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nomadlx",
"html_url": "https://github.com/nomadlx",
"followers_url": "https://api.github.com/users/nomadlx/followers",
"following_url": "https://api.github.com/users/nomadlx/following{/other_user}",
"gists_url": "https://api.github.com/users/nomadlx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nomadlx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nomadlx/subscriptions",
"organizations_url": "https://api.github.com/users/nomadlx/orgs",
"repos_url": "https://api.github.com/users/nomadlx/repos",
"events_url": "https://api.github.com/users/nomadlx/events{/privacy}",
"received_events_url": "https://api.github.com/users/nomadlx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I have find anwser in [code](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L1138)."
] | 1,594 | 1,594 | 1,594 | NONE | null | # ❓ Can I use the pretrained BERT-Base model directly to predict the isNextSentence task?
## Details
I have a document-level corpus, but it doesn't have document boundaries.
I want to confirm whether I can predict the isNextSentence task with the pretrained BERT-Base model. The model is not fine-tuned on any data, and no tokens are masked when I use it.
Is the prediction reliable this way? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5797/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5797/timeline | completed | null | null |
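The code linked in the comment above is `BertForNextSentencePrediction`, which carries the NSP head trained during BERT pretraining, so it can score sentence pairs without any fine-tuning. A minimal sketch — the `bert-base-uncased` checkpoint and the example sentences are illustrative assumptions, not from the thread:

```python
import torch
from transformers import BertForNextSentencePrediction, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")
model.eval()

sentence_a = "The storm knocked out power across the city."
sentence_b = "Crews worked overnight to restore electricity."
inputs = tokenizer(sentence_a, sentence_b, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs)[0]  # shape (1, 2)

# In BERT's convention, index 0 is the "sentence B follows sentence A" class.
prob_is_next = torch.softmax(logits, dim=1)[0, 0].item()
print(prob_is_next)
```

Note that the NSP head was pretrained with 50% random negatives, so its scores are a heuristic for sentence continuity rather than a calibrated boundary detector.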
https://api.github.com/repos/huggingface/transformers/issues/5796 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5796/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5796/comments | https://api.github.com/repos/huggingface/transformers/issues/5796/events | https://github.com/huggingface/transformers/pull/5796 | 657,861,294 | MDExOlB1bGxSZXF1ZXN0NDQ5OTA3ODI4 | 5,796 | Moving transformers package import statements to relative imports in some files | {
"login": "afcruzs",
"id": 4340932,
"node_id": "MDQ6VXNlcjQzNDA5MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4340932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/afcruzs",
"html_url": "https://github.com/afcruzs",
"followers_url": "https://api.github.com/users/afcruzs/followers",
"following_url": "https://api.github.com/users/afcruzs/following{/other_user}",
"gists_url": "https://api.github.com/users/afcruzs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/afcruzs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/afcruzs/subscriptions",
"organizations_url": "https://api.github.com/users/afcruzs/orgs",
"repos_url": "https://api.github.com/users/afcruzs/repos",
"events_url": "https://api.github.com/users/afcruzs/events{/privacy}",
"received_events_url": "https://api.github.com/users/afcruzs/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5796?src=pr&el=h1) Report\n> Merging [#5796](https://codecov.io/gh/huggingface/transformers/pull/5796?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7214954db42ec96603ea596c5f68b16f574fba89&el=desc) will **increase** coverage by `0.42%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5796?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5796 +/- ##\n==========================================\n+ Coverage 78.38% 78.80% +0.42% \n==========================================\n Files 146 146 \n Lines 26318 26318 \n==========================================\n+ Hits 20629 20741 +112 \n+ Misses 5689 5577 -112 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5796?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/5796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/5796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.20% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `90.90% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.45% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <100.00%> (ø)` | |\n| [src/transformers/testing\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `72.72% <100.00%> (ø)` | |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (ø)` | |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |\n| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/5796/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5796?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5796?src=pr&el=footer). Last update [7214954...39e85f8](https://codecov.io/gh/huggingface/transformers/pull/5796?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | When using the transformers library as a local submodule (e.g., a git submodule) instead of a Python package, it's important to use relative imports instead of `from transformers` directly, which would resolve to the installed version of the Python package. Regardless, the codebase seems to favor relative imports in general, but a few cases were not written that way.
This pull request converts such occurrences in some files under the `src` folder to relative imports, except for comments and the `convert_*` files, which are likely importing the Python package intentionally. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5796/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5796",
"html_url": "https://github.com/huggingface/transformers/pull/5796",
"diff_url": "https://github.com/huggingface/transformers/pull/5796.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5796.patch",
"merged_at": 1595924357000
} |
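A small illustration of the kind of change this PR makes; `configuration_utils` does exist under `src/transformers`, but the surrounding file name is an illustrative assumption:

```python
# Inside src/transformers/some_module.py (illustrative file name):

# Before — absolute import, which resolves against the *installed*
# transformers package rather than the local checkout:
# from transformers.configuration_utils import PretrainedConfig

# After — relative import, which resolves within the local source tree,
# e.g. when the repo is vendored as a git submodule:
from .configuration_utils import PretrainedConfig
```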
https://api.github.com/repos/huggingface/transformers/issues/5795 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5795/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5795/comments | https://api.github.com/repos/huggingface/transformers/issues/5795/events | https://github.com/huggingface/transformers/issues/5795 | 657,828,556 | MDU6SXNzdWU2NTc4Mjg1NTY= | 5,795 | LongFormerAttention For AutoRegressive Models | {
"login": "santhoshkolloju",
"id": 4193817,
"node_id": "MDQ6VXNlcjQxOTM4MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4193817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/santhoshkolloju",
"html_url": "https://github.com/santhoshkolloju",
"followers_url": "https://api.github.com/users/santhoshkolloju/followers",
"following_url": "https://api.github.com/users/santhoshkolloju/following{/other_user}",
"gists_url": "https://api.github.com/users/santhoshkolloju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/santhoshkolloju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/santhoshkolloju/subscriptions",
"organizations_url": "https://api.github.com/users/santhoshkolloju/orgs",
"repos_url": "https://api.github.com/users/santhoshkolloju/repos",
"events_url": "https://api.github.com/users/santhoshkolloju/events{/privacy}",
"received_events_url": "https://api.github.com/users/santhoshkolloju/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes! This will be added when starting the `Longformer` Encoder framework :-) Closing this in favor of https://github.com/huggingface/transformers/issues/5170 and https://github.com/huggingface/transformers/issues/4225"
] | 1,594 | 1,594 | 1,594 | NONE | null | Longformer currently supports only bidirectional attention. It would be a great feature to fine-tune current language models like GPT-2 on longer sequences. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5795/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5794 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5794/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5794/comments | https://api.github.com/repos/huggingface/transformers/issues/5794/events | https://github.com/huggingface/transformers/issues/5794 | 657,796,341 | MDU6SXNzdWU2NTc3OTYzNDE= | 5,794 | Print all next tokens of a sentence over a certain probability threshold. | {
"login": "BigSalmon2",
"id": 61605789,
"node_id": "MDQ6VXNlcjYxNjA1Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/61605789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BigSalmon2",
"html_url": "https://github.com/BigSalmon2",
"followers_url": "https://api.github.com/users/BigSalmon2/followers",
"following_url": "https://api.github.com/users/BigSalmon2/following{/other_user}",
"gists_url": "https://api.github.com/users/BigSalmon2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BigSalmon2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BigSalmon2/subscriptions",
"organizations_url": "https://api.github.com/users/BigSalmon2/orgs",
"repos_url": "https://api.github.com/users/BigSalmon2/repos",
"events_url": "https://api.github.com/users/BigSalmon2/events{/privacy}",
"received_events_url": "https://api.github.com/users/BigSalmon2/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @zanderbush, we are trying to move special feature requests / research questions to https://discuss.huggingface.co/ - would you mind posting it there again? "
] | 1,594 | 1,594 | 1,594 | NONE | null | How would I do this using GPT-2? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5794/timeline | completed | null | null |
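Although the thread above was redirected to the forum, the question has a compact answer: take the logits for the last position, softmax them, and filter. A minimal sketch with GPT-2 — the prompt and the 0.01 threshold are arbitrary assumptions for illustration:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The meeting was postponed because"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(input_ids)[0]  # (1, seq_len, vocab_size)

# Probability distribution over the token that would come next.
probs = torch.softmax(logits[0, -1], dim=-1)

threshold = 0.01
over = (probs > threshold).nonzero(as_tuple=True)[0]
for token_id in over[probs[over].argsort(descending=True)]:
    print(repr(tokenizer.decode([token_id.item()])), round(probs[token_id].item(), 4))
```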
https://api.github.com/repos/huggingface/transformers/issues/5793 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5793/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5793/comments | https://api.github.com/repos/huggingface/transformers/issues/5793/events | https://github.com/huggingface/transformers/pull/5793 | 657,782,227 | MDExOlB1bGxSZXF1ZXN0NDQ5ODQyNzU5 | 5,793 | Adding the LXMERT pretraining model (MultiModal languageXvision) to HuggingFace's suite of models | {
"login": "eltoto1219",
"id": 14030663,
"node_id": "MDQ6VXNlcjE0MDMwNjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/14030663?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eltoto1219",
"html_url": "https://github.com/eltoto1219",
"followers_url": "https://api.github.com/users/eltoto1219/followers",
"following_url": "https://api.github.com/users/eltoto1219/following{/other_user}",
"gists_url": "https://api.github.com/users/eltoto1219/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eltoto1219/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eltoto1219/subscriptions",
"organizations_url": "https://api.github.com/users/eltoto1219/orgs",
"repos_url": "https://api.github.com/users/eltoto1219/repos",
"events_url": "https://api.github.com/users/eltoto1219/events{/privacy}",
"received_events_url": "https://api.github.com/users/eltoto1219/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thank you so much!!! I really appreciate you offering to help too!\r\n\r\nTo cover what you mentioned quickly, I can upload the model weights tomorrow! For updating the documentation, that should be no problem either. I think for the model outputs regarding this model, there probably is quite a bit of information to return (pooled output, hidden states, and attentions for the language, vision, and cross-modality encoders). I can see about adding these tomorrow and get your thoughts on that. And lastly, I am glad I can help with adding more model heads! \r\n\r\nI have a bit more testing to do especially with the tensorflow model, but i will see what I can get done and let you know if I run into any blockers or questions. Look forward to getting back to you!",
"Hi, sorry for the slight delay. I added a new dataclass for Lxmert outputs, added the model card, finished all tests for the torch model, among a couple of other things. \r\n\r\nFor the tests, I have forgone the ModelMixin parent as I have found that some of the tests are hard to apply to lxmert. I have also temporarily forgone adding example commands for lxmert. Is this a neccesity, or would it be alright to leave these out?\r\n\r\nI think I updated the documentation to the new standard, but if there are still some errors, any help would be appreciated!\r\n\r\nI am running a donwstream task with the pretrained weights right now to ensure that the results are still the same! I will get back to you with these in about a day!",
"Hi thank you so much for a really quick response and review! In my next commit, Ill implement the following changes and suggestions. If it wouldn't be to much to ask, I think I could actually do with much help adding the mixin tester. Given that there are quite a few tests that seem lxmert seems to be incompatable with (for example I think one of the tests required that the LxmertConfig had the 'num_hidden_layers' attribute, which wouldn't be applicable since I suppose instead, we let the user specify the number of hidden layers for each of the visual, language, and cross-modality encoders. You're judgement on deciding what makes sense with regards to Lxmert test compatibility is probably greater than mine. \r\n\r\nAlso just one last implementation detail that I should probably bring up is that for the output of the cross-modality encoder attentions, I only output the attentions when the language-hidden states are used as the input to the cross attention. Since this encoder is used for bidirectional attention, I do not store the attentions when the visual-hidden states are used as the input. Also For every layer of the cross attention encoder, each of the visual and language states are further processed by a separate self-attention layer, and I do not keep track of the attention outputs for these either. The only reason I keep track of the language attentions when used as input to the cross-modality attention layer is because it is those hidden states that are used for downstream pooling. \r\n\r\nI have yet to change the output for the TFLxmertModeling classes, which I will probably add in the commit following the one that addresses your review, but it probably would save me some time if you were able to cover that and would be greatly appreciated aswell!",
"Commited the above changes, I need to do some testing with the conversion script again, but besides that. I will wait to change the tensorflow code and the modeling tests for your response. If I do make a change in my next commit to modeling_tf_lxmert, it will just be to the documentation",
"I also just added some functionality to edit the number of question answering labels, simialr to the \"resize_output_embeddings\" utility in the LxmertPretrainedModel class. However, what I added seems a lot less complex than the process for resizing embeddings, so if you could take a look at these new functions, that would be awesome! I added some tests, and they do seem to work.",
"Hi! Thanks for the thorough explanation. I'm adding the common tests to the tests, and will report here. May I push directly on your fork?",
"Actually, since I'm bound to make changes to some of the files, I'd like you to review the changes I'm making so that we may discuss. I'll open a PR on your branch when it's in a good state.",
"Okay sounds good! the output hidden states argument should be added in\nabout 30 minutes, and then I can see about adding the TF changes in the\nnext hour.\n\nOn Mon, Aug 10, 2020 at 8:12 AM Lysandre Debut <[email protected]>\nwrote:\n\n> Actually, since I'm bound to make changes to some of the files, I'd like\n> you to review the changes I'm making so that we may discuss. I'll open a PR\n> on your branch when it's in a good state.\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/5793#issuecomment-671318378>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ADLBORYS3OXI4RBT4LHTJLLR77P3FANCNFSM4O3KPDOA>\n> .\n>\n",
"Great, I'll take a look at doing the Mixin tomorrow once your changes are up!",
"the lxmert model in pytorch should be ready to go!",
"Just pushed the same changes for TF as the PyTorch ones in https://github.com/eltoto1219/transformers/pull/1, alongside docs changes and a few patches to the PyTorch version. \r\n\r\nI'll take care of the merge commit once everything is done, it's due to `isort==5` and `black==20.8b` being released.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5793?src=pr&el=h1) Report\n> Merging [#5793](https://codecov.io/gh/huggingface/transformers/pull/5793?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/930153e7d2d658267b7630a047a4bfc85b86042d?el=desc) will **decrease** coverage by `1.74%`.\n> The diff coverage is `79.63%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5793?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5793 +/- ##\n==========================================\n- Coverage 79.36% 77.62% -1.75% \n==========================================\n Files 157 161 +4 \n Lines 28569 29816 +1247 \n==========================================\n+ Hits 22675 23144 +469 \n- Misses 5894 6672 +778 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5793?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/commands/convert.py](https://codecov.io/gh/huggingface/transformers/pull/5793/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9jb252ZXJ0LnB5) | `27.58% <0.00%> (-1.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/5793/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <70.01%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/5793/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `88.31% <88.31%> (ø)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5793/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.30% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5793/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.33% <100.00%> (+0.15%)` | :arrow_up: |\n| [src/transformers/configuration\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/5793/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2x4bWVydC5weQ==) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5793/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.85% <100.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5793/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.82% <100.00%> (+2.27%)` | :arrow_up: |\n| [src/transformers/tokenization\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/5793/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbHhtZXJ0LnB5) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5793/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-79.30%)` | :arrow_down: |\n| ... and [23 more](https://codecov.io/gh/huggingface/transformers/pull/5793/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5793?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5793?src=pr&el=footer). Last update [930153e...4ed21b4](https://codecov.io/gh/huggingface/transformers/pull/5793?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,599 | 1,599 | CONTRIBUTOR | null | LXMERT is considered a dual-stream language-vision model: "dual-stream" meaning that it uses a transformer encoder to perform self-attention within each modality of data, and then a cross-modality transformer encoder for fine-grained cross-attention between the modalities. It has achieved tremendous success (SOTA results) across a wide variety of downstream tasks (GQA, VQA2, NLVR2).
Here is the original link to the paper: https://arxiv.org/pdf/1908.07490.pdf
Here is the link to the original implementation: https://github.com/airsplay/lxmert
and here is the link to the original model weights: https://nlp1.cs.unc.edu/data/model_LXRT.pth
Please let me know if there is anything I missed, and I would be very grateful for any help if I end up running into any blockers, but I will do my best to follow the detailed instructions in the templates.
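To make the dual-stream idea concrete, here is a minimal sketch of a single cross-modality layer (the hidden size, head count, and `(seq_len, batch, hidden)` tensor layout are illustrative assumptions for this sketch, not the model's actual configuration):
```python
import torch
import torch.nn as nn


class CrossModalityLayer(nn.Module):
    def __init__(self, hidden_size=768, num_heads=12):
        super().__init__()
        # One cross-attention module shared by both directions, plus a
        # per-stream self-attention module, following the dual-stream pattern.
        self.cross_att = nn.MultiheadAttention(hidden_size, num_heads)
        self.lang_self_att = nn.MultiheadAttention(hidden_size, num_heads)
        self.visn_self_att = nn.MultiheadAttention(hidden_size, num_heads)

    def forward(self, lang_feats, visn_feats):
        # Bidirectional cross-attention: each stream queries the other one.
        lang_cross, _ = self.cross_att(lang_feats, visn_feats, visn_feats)
        visn_cross, _ = self.cross_att(visn_feats, lang_feats, lang_feats)
        # Each stream is then refined by its own self-attention.
        lang_out, _ = self.lang_self_att(lang_cross, lang_cross, lang_cross)
        visn_out, _ = self.visn_self_att(visn_cross, visn_cross, visn_cross)
        return lang_out, visn_out


layer = CrossModalityLayer()
lang = torch.randn(20, 2, 768)  # 20 language tokens, batch of 2
visn = torch.randn(36, 2, 768)  # 36 visual region features, batch of 2
lang_out, visn_out = layer(lang, visn)
```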
This is also a work in progress! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5793/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5793",
"html_url": "https://github.com/huggingface/transformers/pull/5793",
"diff_url": "https://github.com/huggingface/transformers/pull/5793.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5793.patch",
"merged_at": 1599120146000
} |
https://api.github.com/repos/huggingface/transformers/issues/5792 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5792/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5792/comments | https://api.github.com/repos/huggingface/transformers/issues/5792/events | https://github.com/huggingface/transformers/pull/5792 | 657,768,278 | MDExOlB1bGxSZXF1ZXN0NDQ5ODMxNzI5 | 5,792 | Seq2SeqDataset uses linecache to save memory by @Pradhy729 (#5792) | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"The isort check doesn't fail when I run it on my end. Other than that - I would say this is complete.\r\n",
"Awesome, I'm testing it now on WMT english-romanian translation.",
"OK, I have it running with many modifications and it works much better. Very little cpu ram wasted!\r\n\r\nIs it OK if I make a new PR or do you prefer to add me as a contributor to your fork and so I can push to this branch? either way you will get all the credit in the release notes/pr summary :) \r\n\r\n",
"Thanks - I'll add you as a contributor in mine. :)",
"> Thanks - I'll add you as a contributor in mine. :)\r\n\r\nYou can just click on \"Allow edits from maintainers\" on your PR. (in case you didn't know this feature)",
"Any updates here? Is it good to go?\r\n",
"I'm still working on cleaning up my code. Sorry for the delay. ",
"Biggest change:\r\n- For MBart Tokenizer, we can't use the `encode_line` approach because there are special tokens all over the place, so I made a separate dataset. \r\n\r\nStylistic:\r\n- Address the `linecache` off by 1 error inside of `__getitem__` instead other places.\r\n- `_get_examples` -> `get_char_lens`.\r\n- `MbartTokenizer` cleanup.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5792?src=pr&el=h1) Report\n> Merging [#5792](https://codecov.io/gh/huggingface/transformers/pull/5792?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eae6d8d14f1d25d62c3fe9e7e410607bbaf69787&el=desc) will **increase** coverage by `0.91%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5792?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5792 +/- ##\n==========================================\n+ Coverage 77.54% 78.46% +0.91% \n==========================================\n Files 146 146 \n Lines 26200 26200 \n==========================================\n+ Hits 20318 20559 +241 \n+ Misses 5882 5641 -241 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5792?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.45% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5792?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5792?src=pr&el=footer). Last update [eae6d8d...79d73ee](https://codecov.io/gh/huggingface/transformers/pull/5792?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5792/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5792/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5792",
"html_url": "https://github.com/huggingface/transformers/pull/5792",
"diff_url": "https://github.com/huggingface/transformers/pull/5792.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5792.patch",
"merged_at": 1595095053000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5791 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5791/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5791/comments | https://api.github.com/repos/huggingface/transformers/issues/5791/events | https://github.com/huggingface/transformers/pull/5791 | 657,718,587 | MDExOlB1bGxSZXF1ZXN0NDQ5Nzg5NzQ2 | 5,791 | Add script to convert tf2.x checkpoint to PyTorch | {
"login": "mar-muel",
"id": 19345805,
"node_id": "MDQ6VXNlcjE5MzQ1ODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/19345805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mar-muel",
"html_url": "https://github.com/mar-muel",
"followers_url": "https://api.github.com/users/mar-muel/followers",
"following_url": "https://api.github.com/users/mar-muel/following{/other_user}",
"gists_url": "https://api.github.com/users/mar-muel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mar-muel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mar-muel/subscriptions",
"organizations_url": "https://api.github.com/users/mar-muel/orgs",
"repos_url": "https://api.github.com/users/mar-muel/repos",
"events_url": "https://api.github.com/users/mar-muel/events{/privacy}",
"received_events_url": "https://api.github.com/users/mar-muel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5791?src=pr&el=h1) Report\n> Merging [#5791](https://codecov.io/gh/huggingface/transformers/pull/5791?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3b924fabeef717be8399f1888280c29c69e9ab00&el=desc) will **decrease** coverage by `0.08%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5791?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5791 +/- ##\n==========================================\n- Coverage 78.13% 78.05% -0.09% \n==========================================\n Files 146 146 \n Lines 26047 26047 \n==========================================\n- Hits 20352 20330 -22 \n- Misses 5695 5717 +22 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5791?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5791?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5791?src=pr&el=footer). Last update [3b924fa...378b034](https://codecov.io/gh/huggingface/transformers/pull/5791?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Other than that, LGTM! Thanks for your work!",
"Great, just renamed it - let me know if anything else should be changed!"
] | 1,594 | 1,596 | 1,596 | CONTRIBUTOR | null | The script converts the newer TF2.x checkpoints (as published on the [official GitHub](https://github.com/tensorflow/models/tree/master/official/nlp/bert)) to PyTorch. The [existing script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py) only works with checkpoints from the [original BERT repository](https://github.com/google-research/bert), which uses TF 1.x.
The script currently only converts the encoder part (but no MLM/NSP heads). The official checkpoints published by the TensorFlow team unfortunately also don't contain these heads. I have written a script which takes care of these, but it does add a fair bit of complexity.
I have tested on my side by comparing all model weights with the official Hugging Face version:
```python
from transformers import BertModel
import torch


def validate_model(bert_original, bert_converted):
    assert bert_original.num_parameters() == bert_converted.num_parameters()
    assert len(bert_original.state_dict()) == len(bert_converted.state_dict())
    for (layer_original, value_original), (layer_converted, value_converted) in zip(
        bert_original.state_dict().items(), bert_converted.state_dict().items()
    ):
        assert layer_original == layer_converted
        if not torch.eq(value_original, value_converted).all():
            raise ValueError(f'Incorrect weights for {layer_original}')
    print('Success! Both models are identical!')


if __name__ == "__main__":
    validate_against = 'bert-base-uncased'
    path_to_converted_model = './converted_bert_base_uncased'
    bert_converted = BertModel.from_pretrained(path_to_converted_model)
    bert_original = BertModel.from_pretrained(validate_against)
    validate_model(bert_original, bert_converted)
```
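As an additional sanity check beyond weight equality, the forward passes can also be compared on a dummy input (a sketch of my own, reusing the converted directory from the script above):
```python
import torch
from transformers import BertModel

bert_converted = BertModel.from_pretrained('./converted_bert_base_uncased')
bert_original = BertModel.from_pretrained('bert-base-uncased')
bert_converted.eval()
bert_original.eval()

input_ids = torch.randint(0, 1000, (1, 16))  # dummy batch: one sequence of 16 ids
with torch.no_grad():
    hidden_converted = bert_converted(input_ids)[0]  # last hidden states
    hidden_original = bert_original(input_ids)[0]
assert torch.allclose(hidden_converted, hidden_original, atol=1e-5)
print('Forward passes match!')
```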
I'm happy to write some tests for this if needed (and if possible), and I'd welcome any other input. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5791/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5791",
"html_url": "https://github.com/huggingface/transformers/pull/5791",
"diff_url": "https://github.com/huggingface/transformers/pull/5791.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5791.patch",
"merged_at": 1596441219000
} |
https://api.github.com/repos/huggingface/transformers/issues/5790 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5790/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5790/comments | https://api.github.com/repos/huggingface/transformers/issues/5790/events | https://github.com/huggingface/transformers/pull/5790 | 657,707,204 | MDExOlB1bGxSZXF1ZXN0NDQ5NzgwMTEw | 5,790 | github issue template suggests who to tag | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5790?src=pr&el=h1) Report\n> Merging [#5790](https://codecov.io/gh/huggingface/transformers/pull/5790?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d088d744adb4e5aa45262a34acab3ae9e81de169&el=desc) will **decrease** coverage by `0.84%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5790?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5790 +/- ##\n==========================================\n- Coverage 78.10% 77.26% -0.85% \n==========================================\n Files 146 146 \n Lines 26047 26047 \n==========================================\n- Hits 20344 20125 -219 \n- Misses 5703 5922 +219 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5790?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5790/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5790/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-2.01%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5790/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5790/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5790?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5790?src=pr&el=footer). Last update [d088d74...46f7d0f](https://codecov.io/gh/huggingface/transformers/pull/5790?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I like it! Maybe we can also put in **big** that people should add their env information and not insert screenshots - this happens still quite often from what I see",
"@sshleifer I think I've committed instead of suggesting an edit as I wasn't a reviewer, sorry, lemme know if it broke anything!",
"ok to merge @julien-c ?",
"Like this too! Added myself for issues linked to the documentation, feel free to add more my way.",
"@sshleifer could you tag me for `examples/token-classification` 🤔"
] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | Fewer issues get lost if people tag the relevant developer. It also saves @LysandreJik time.
While he is out, I figured we could experiment with trying to nudge issue raisers to tag.
Here is the comment at the beginning of the "Bug Report" template:
Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way; otherwise, here is a rough guide of who to tag.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @julien-c
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
blenderbot: @mariamabarham
Bart: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
tensorflow: @jplu
I wrote it in 3 minutes in case people hate this idea, so I am sure it is missing people!
Suggestions very much appreciated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5790/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5790/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5790",
"html_url": "https://github.com/huggingface/transformers/pull/5790",
"diff_url": "https://github.com/huggingface/transformers/pull/5790.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5790.patch",
"merged_at": 1595940088000
} |
https://api.github.com/repos/huggingface/transformers/issues/5789 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5789/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5789/comments | https://api.github.com/repos/huggingface/transformers/issues/5789/events | https://github.com/huggingface/transformers/pull/5789 | 657,700,109 | MDExOlB1bGxSZXF1ZXN0NDQ5Nzc0MTMx | 5,789 | Update README.md | {
"login": "mar-muel",
"id": 19345805,
"node_id": "MDQ6VXNlcjE5MzQ1ODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/19345805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mar-muel",
"html_url": "https://github.com/mar-muel",
"followers_url": "https://api.github.com/users/mar-muel/followers",
"following_url": "https://api.github.com/users/mar-muel/following{/other_user}",
"gists_url": "https://api.github.com/users/mar-muel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mar-muel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mar-muel/subscriptions",
"organizations_url": "https://api.github.com/users/mar-muel/orgs",
"repos_url": "https://api.github.com/users/mar-muel/repos",
"events_url": "https://api.github.com/users/mar-muel/events{/privacy}",
"received_events_url": "https://api.github.com/users/mar-muel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5789?src=pr&el=h1) Report\n> Merging [#5789](https://codecov.io/gh/huggingface/transformers/pull/5789?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3b924fabeef717be8399f1888280c29c69e9ab00&el=desc) will **increase** coverage by `0.11%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5789?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5789 +/- ##\n==========================================\n+ Coverage 78.13% 78.25% +0.11% \n==========================================\n Files 146 146 \n Lines 26047 26047 \n==========================================\n+ Hits 20352 20383 +31 \n+ Misses 5695 5664 -31 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5789?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5789/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5789/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.01%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5789/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5789/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5789?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5789?src=pr&el=footer). Last update [3b924fa...f284d8c](https://codecov.io/gh/huggingface/transformers/pull/5789?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | Created PyTorch version of model. Minor update on README. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5789/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5789",
"html_url": "https://github.com/huggingface/transformers/pull/5789",
"diff_url": "https://github.com/huggingface/transformers/pull/5789.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5789.patch",
"merged_at": 1594891577000
} |
https://api.github.com/repos/huggingface/transformers/issues/5788 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5788/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5788/comments | https://api.github.com/repos/huggingface/transformers/issues/5788/events | https://github.com/huggingface/transformers/issues/5788 | 657,674,990 | MDU6SXNzdWU2NTc2NzQ5OTA= | 5,788 | add attention_dropout, relu_dropout command line args to lightning_base.py | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2206883508,
"node_id": "MDU6TGFiZWwyMjA2ODgzNTA4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/lightning",
"name": "lightning",
"color": "a707bc",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"This is a duplicate."
] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | then pass them to config in `__init__`.
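A minimal sketch of the wiring (the flag names follow the issue title; the exact placement inside `lightning_base.py` is an assumption):
```python
# In the argument parser (sketch; flags mirror the proposal above):
parser.add_argument("--attention_dropout", type=float, default=None,
                    help="Override config.attention_dropout if set.")
parser.add_argument("--relu_dropout", type=float, default=None,
                    help="Override config.relu_dropout if set.")

# In __init__, after the config is loaded but before the model is built:
for attr in ("attention_dropout", "relu_dropout"):
    value = getattr(hparams, attr, None)
    if value is not None:
        assert hasattr(self.config, attr), f"config has no attribute {attr}"
        setattr(self.config, attr, value)
```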
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5788/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5788/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5787 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5787/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5787/comments | https://api.github.com/repos/huggingface/transformers/issues/5787/events | https://github.com/huggingface/transformers/issues/5787 | 657,627,823 | MDU6SXNzdWU2NTc2Mjc4MjM= | 5,787 | Can't load weights for GPT2 error | {
"login": "aclifton314",
"id": 53267795,
"node_id": "MDQ6VXNlcjUzMjY3Nzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/53267795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aclifton314",
"html_url": "https://github.com/aclifton314",
"followers_url": "https://api.github.com/users/aclifton314/followers",
"following_url": "https://api.github.com/users/aclifton314/following{/other_user}",
"gists_url": "https://api.github.com/users/aclifton314/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aclifton314/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aclifton314/subscriptions",
"organizations_url": "https://api.github.com/users/aclifton314/orgs",
"repos_url": "https://api.github.com/users/aclifton314/repos",
"events_url": "https://api.github.com/users/aclifton314/events{/privacy}",
"received_events_url": "https://api.github.com/users/aclifton314/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! This is probably because the error reponse for a request is silenced . You can place a breakpont [here](https://github.com/huggingface/transformers/blob/0533cf470659b97c6279bd04f65536a1ec88404a/src/transformers/file_utils.py#L681) and check. Mine was SSL error, so I set `REQUESTS_CA_BUNDLE` env var to `/etc/ssl/certs/ca-certificates.crt`",
"@festeh Which variable should I be looking at in the breakpoint? If it's `response`, which attribute?",
"You can just type `requests.head(url, allow_redirects=True, proxies=proxies, timeout=etag_timeout)` in debugger console and check exception traceback.",
"I gave that a shot and got the following:\r\n```python\r\nIn[7]: requests.head(url, allow_redirects=True, proxies=proxies, timeout=etag_timeout)\r\nOut[7]: <Response [200]>\r\n```",
"Interesting, probably you have an error around this request then, when you're actually downloading weights\r\nhttps://github.com/huggingface/transformers/blob/7fad617dc1fc681a7f5da5e0172c8b83f4bf0024/src/transformers/file_utils.py#L678\r\n\r\nif you place a breakpoint here, would the program hit it? and if not could you send this request manually?",
"I think the program hits it:\r\n```\r\nrequests.get(url, stream=True, proxies=proxies, headers=headers)\r\nOut[2]: <Response [200]>\r\n```",
"Well, if you get the message from the first post it means that some line of code has raised an `EnvironmentError` or `TimeoutError`. I think you need to advance over all lines in `get_from_cache` and find out which line is responsible for that. After you find this line, you can re-run it in console and see the actual exception.",
"Ok here is what I found out. If I place a breakpoint at `etag = None` in `get_from_cache()` in `file_utils.py` and try to run things, it stops by that breakpoint twice. The first time, I step through to `response = requests.head(url, allow_redirects=True, proxies=proxies, timeout=etag_timeout)` and get the following response (with the inputs shown from the debugger):\r\n```python\r\ncache_dir = {str} '/path/to/.cache/torch/transformers'\r\netag = {NoneType} None\r\netag_timeout = {int} 10\r\nforce_download = {bool} False\r\nlocal_files_only = {bool} False\r\nproxies = {dict: 3} {'http': 'http://myproxy.com:port', 'https': 'https://myproxy.com:port', 'no': ',127.0.0.1,127.0.0.111,127.0.0.2'}\r\nresponse = {Response} <Response [200]>\r\nresume_download = {bool} False\r\nurl = {str} 'https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-config.json'\r\nuser_agent = {NoneType} None\r\n\r\nrequests.head(url, allow_redirects=True, proxies=proxies, timeout=etag_timeout)\r\nOut[1]: <Response [200]>\r\n```\r\n\r\non the second time, I get:\r\n```python\r\ncache_dir = {str} '/path/to/.cache/torch/transformers'\r\netag = {NoneType} None\r\netag_timeout = {int} 10\r\nforce_download = {bool} False\r\nlocal_files_only = {bool} False\r\nproxies = {dict: 3} {'http': 'http://myproxy.com:port', 'https': 'https://myproxy.com:port', 'no': ',127.0.0.1,127.0.0.111,127.0.0.2'}\r\nresume_download = {bool} False\r\nurl = {str} 'https://cdn.huggingface.co/gpt2-pytorch_model.bin'\r\nuser_agent = {NoneType} None\r\n\r\nrequests.head(url, allow_redirects=True, proxies=proxies, timeout=etag_timeout)\r\nTraceback (most recent call last):\r\n File \"/path/to/anaconda3/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py\", line 485, in wrap_socket\r\n cnx.do_handshake()\r\n File \"/path/to/anaconda3/lib/python3.7/site-packages/OpenSSL/SSL.py\", line 1934, in do_handshake\r\n self._raise_ssl_error(self._ssl, result)\r\n File \"/path/to/anaconda3/lib/python3.7/site-packages/OpenSSL/SSL.py\", line 1671, in _raise_ssl_error\r\n _raise_current_error()\r\n File \"/path/to/anaconda3/lib/python3.7/site-packages/OpenSSL/_util.py\", line 54, in exception_from_error_queue\r\n raise exception_type(errors)\r\nOpenSSL.SSL.Error: [('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')]\r\nDuring handling of the above exception, another exception occurred:\r\nTraceback (most recent call last):\r\n File \"/path/to/anaconda3/lib/python3.7/site-packages/urllib3/connectionpool.py\", line 662, in urlopen\r\n self._prepare_proxy(conn)\r\n File \"/path/to/anaconda3/lib/python3.7/site-packages/urllib3/connectionpool.py\", line 948, in _prepare_proxy\r\n conn.connect()\r\n File \"/path/to/anaconda3/lib/python3.7/site-packages/urllib3/connection.py\", line 360, in connect\r\n ssl_context=context,\r\n File \"/path/to/anaconda3/lib/python3.7/site-packages/urllib3/util/ssl_.py\", line 370, in ssl_wrap_socket\r\n return context.wrap_socket(sock, server_hostname=server_hostname)\r\n File \"/path/to/anaconda3/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py\", line 491, in wrap_socket\r\n raise ssl.SSLError(\"bad handshake: %r\" % e)\r\nssl.SSLError: (\"bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])\",)\r\nDuring handling of the above exception, another exception occurred:\r\nTraceback (most recent call last):\r\n File \"/path/to/anaconda3/lib/python3.7/site-packages/requests/adapters.py\", line 449, in send\r\n timeout=timeout\r\n File 
\"/path/to/anaconda3/lib/python3.7/site-packages/urllib3/connectionpool.py\", line 720, in urlopen\r\n method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]\r\n File \"/path/to/anaconda3/lib/python3.7/site-packages/urllib3/util/retry.py\", line 436, in increment\r\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\r\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='cdn.huggingface.co', port=443): Max retries exceeded with url: /gpt2-pytorch_model.bin (Caused by SSLError(SSLError(\"bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])\")))\r\nDuring handling of the above exception, another exception occurred:\r\nTraceback (most recent call last):\r\n File \"/path/to/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py\", line 3331, in run_code\r\n exec(code_obj, self.user_global_ns, self.user_ns)\r\n File \"<ipython-input-3-5b27aae00c67>\", line 1, in <module>\r\n requests.head(url, allow_redirects=True, proxies=proxies, timeout=etag_timeout)\r\n File \"/path/to/anaconda3/lib/python3.7/site-packages/requests/api.py\", line 101, in head\r\n return request('head', url, **kwargs)\r\n File \"/path/to/anaconda3/lib/python3.7/site-packages/requests/api.py\", line 60, in request\r\n return session.request(method=method, url=url, **kwargs)\r\n File \"/path/to/anaconda3/lib/python3.7/site-packages/requests/sessions.py\", line 533, in request\r\n resp = self.send(prep, **send_kwargs)\r\n File \"/path/to/anaconda3/lib/python3.7/site-packages/requests/sessions.py\", line 646, in send\r\n r = adapter.send(request, **kwargs)\r\n File \"/path/to/anaconda3/lib/python3.7/site-packages/requests/adapters.py\", line 514, in send\r\n raise SSLError(e, request=request)\r\nrequests.exceptions.SSLError: HTTPSConnectionPool(host='cdn.huggingface.co', port=443): Max retries exceeded with url: /gpt2-pytorch_model.bin (Caused by SSLError(SSLError(\"bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])\")))\r\n```\r\nAlso on the second time, it skips right over the `if response.status_code == 200:` check and goes straight to `except (EnvironmentError, requests.exceptions.Timeout):` when I advance a step from `response = requests.head(url, allow_redirects=True, proxies=proxies, timeout=etag_timeout)`\r\n```python\r\netag = None\r\n if not local_files_only:\r\n try:\r\n response = requests.head(url, allow_redirects=True, proxies=proxies, timeout=etag_timeout)\r\n if response.status_code == 200:\r\n etag = response.headers.get(\"ETag\")\r\n except (EnvironmentError, requests.exceptions.Timeout):\r\n # etag is already None\r\n pass\r\n```",
"Any further thoughts on this?",
"I'm suffering same problem. Cannot use behind the proxy servers (private network).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I have the same problem. `AutoModelForSeq2SeqLM.from_pretrained(\"google/pegasus-xsum\")` just idles forever.\r\nLast version of transformers and torch. \r\nIm not using proxy, but if I use vpn 'from_pretrained' is actually downloading the model",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,594 | 1,614 | 1,614 | NONE | null | ## System Info
Pop!_OS 20.04
PyTorch: 1.5.1
Transformers: 2.11.0
Python: 3.7.6
## Details
I am working behind a proxy. If I run the following:
```python
from transformers import GPT2Tokenizer
proxies = {'http':'http://my.proxy.com:port', 'https':'https://my.proxy.com:port'}
tokenizer = GPT2Tokenizer.from_pretrained("gpt2", proxies=proxies)
```
The tokenizer gets downloaded. However, if I run:
```python
from transformers import GPT2LMHeadModel
proxies = {'http':'http://my.proxy.com:port', 'https':'https://my.proxy.com:port'}
model = GPT2LMHeadModel.from_pretrained("gpt2", proxies=proxies)
```
I get the following error:
```python
Traceback (most recent call last):
  File "/path/to/anaconda3/lib/python3.7/site-packages/transformers/modeling_utils.py", line 638, in from_pretrained
    raise EnvironmentError
OSError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/path/to/text_gen_w_transformers/finetune_test.py", line 28, in <module>
    model = GPT2LMHeadModel.from_pretrained("gpt2", proxies=proxies)
  File "/path/to/anaconda3/lib/python3.7/site-packages/transformers/modeling_utils.py", line 645, in from_pretrained
    raise EnvironmentError(msg)
OSError: Can't load weights for 'gpt2'. Make sure that:
- 'gpt2' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'gpt2' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.
```
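To isolate where this fails, the two underlying downloads can be reproduced directly (these are the URLs `from_pretrained` requests for `gpt2` in this version; the CA-bundle note is a guess based on typical corporate-proxy setups):
```python
import requests

proxies = {'http': 'http://my.proxy.com:port', 'https': 'https://my.proxy.com:port'}
# transformers fetches the config from S3 and the weights from the CDN.
# If the CDN request raises an SSL error while the S3 one succeeds, the
# proxy's certificate verification is the likely cause; pointing the
# REQUESTS_CA_BUNDLE env var at the corporate CA bundle may help.
for url in [
    "https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-config.json",
    "https://cdn.huggingface.co/gpt2-pytorch_model.bin",
]:
    response = requests.head(url, allow_redirects=True, proxies=proxies, timeout=10)
    print(url, response.status_code)
```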
Any thoughts about what might be the issue? Thanks in advance for your help! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5787/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5786 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5786/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5786/comments | https://api.github.com/repos/huggingface/transformers/issues/5786/events | https://github.com/huggingface/transformers/issues/5786 | 657,583,981 | MDU6SXNzdWU2NTc1ODM5ODE= | 5,786 | Faster mBART finetuning | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1845609017,
"node_id": "MDU6TGFiZWwxODQ1NjA5MDE3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq",
"name": "seq2seq",
"color": "fef2c0",
"default": false,
"description": ""
},
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @sshleifer , can embed pruning affect accuracy(bleu) ? TPU option seems good as it won't result in less accuracy ",
"https://github.com/pytorch/fairseq/issues/2120#issuecomment-647915216 suggests it costs 2 BLEU points.\r\n\r\nOne of our goals with examples is to make train loops that people can run on their hardware+data. If we can get 36 on a 16GB gpu that is super useful, just like `--freeze_encoder` and `--freeze_embeds` are useful, even if they probably hurt final performance a bit.\r\n",
"Broke into smaller issues, but leaving this open in case people have other ideas!",
"note: fairseq wmt_en_de batch wps=14721.8 on V100 with https://github.com/pytorch/fairseq/issues/2506#issuecomment-678630596\r\n\r\ncheck tpb/seconds for seq2seq/finetune.py\r\n\r\n",
"I am trying to fine-tune on wmt-en-ro (facebook/mbart-large-cc25) using Colab and Kaggle 16GB gpu. On Colab, I get cuda oom and on Kaggle I get out of disk apace (20 GB limit). Is there a way I can skip checkpoints to fit the process in 20GB disk space? \r\nI even tried 1 bs, 8 max len, fp16 and freeze_encoder.",
"High level, I would use an opus-mt model instead of mbart for most tasks. They are smaller and tend to be nearly as good at translation, if not better. I have run ~50~ experiments finetuning mbart on wmt-en-ro and it was not a particularly pleasant experience.\r\n\r\nDisk space: You can see what happens if you remove `checkpoint_callback` from \r\nhttps://github.com/huggingface/transformers/blob/9336086ab5d232cccd9512333518cf4299528882/examples/seq2seq/finetune.py#L362\r\n\r\nand just call `model.model.save_pretrained` `model.model.half().save_pretrained` after.\r\n\r\n\r\n\r\n\r\n",
"Thanks for the advice @sshleifer - just wanted to try it out and test other language pairs. \r\nOn Kaggle, as I just found out, the trick is to create the output_dir outside the working directory (where total disk space is just 5GB). Kaggle won't save it with kernel commit though.\r\n",
"Cool! LMK if you have good results!",
"I got it working using a small subset of data, then went on training English -> Arabic translator (just a proof of concept). Model uploaded to [HF](https://huggingface.co/akhooli/mbart-large-cc25-en-ar).",
"Hi, I had written the code to do vocab pruning and because of the number of people wanting help with it I converted the code into a standalone library. I hope its okay if I link it here for other people to find. \r\n\r\n[Link to repo.](https://github.com/IamAdiSri/hf-trim)\r\n\r\nReferencing issues #5896 #6132"
] | 1,594 | 1,658 | 1,600 | CONTRIBUTOR | null | Goal: Get BLEU 20 in 1 epoch on wmt-en-ro.
Can't even run bs=1 without `--freeze_embeds`.
1 epoch takes 6 hours on a 16GB GPU with fp16, `--freeze_embeds`, and `--freeze_encoder`. Max bs=4.
Ideas:
- [ ] Dataset that fits as many sentences as possible into an example, to increase GPU utilization.
- [ ] Only store embeddings once
- [ ] prune embeddings: https://github.com/pytorch/fairseq/issues/2120
- [ ] `label_smoothing=0.1` (see the sketch below)
- [ ] TPU?
Fairseq finetune command:
https://github.com/pytorch/fairseq/issues/2179
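For the label smoothing idea above, a minimal sketch (the `ignore_index` value is an assumption and should match the pad token id used when building the labels):
```python
import torch.nn.functional as F


def label_smoothed_nll_loss(lm_logits, target, epsilon=0.1, ignore_index=-100):
    # lm_logits: (batch, seq, vocab); target: (batch, seq) token ids
    lprobs = F.log_softmax(lm_logits, dim=-1)
    pad_mask = target.eq(ignore_index)
    safe_target = target.clamp(min=0).unsqueeze(-1)
    nll_loss = -lprobs.gather(dim=-1, index=safe_target).squeeze(-1)
    smooth_loss = -lprobs.mean(dim=-1)  # smoothing toward a uniform distribution
    nll_loss = nll_loss.masked_fill(pad_mask, 0.0)
    smooth_loss = smooth_loss.masked_fill(pad_mask, 0.0)
    return ((1.0 - epsilon) * nll_loss + epsilon * smooth_loss).sum()
```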
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5786/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5785 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5785/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5785/comments | https://api.github.com/repos/huggingface/transformers/issues/5785/events | https://github.com/huggingface/transformers/issues/5785 | 657,565,974 | MDU6SXNzdWU2NTc1NjU5NzQ= | 5,785 | Sentence-transformers model outputs different than when loaded in HuggingFace | {
"login": "petulla",
"id": 3466817,
"node_id": "MDQ6VXNlcjM0NjY4MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3466817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/petulla",
"html_url": "https://github.com/petulla",
"followers_url": "https://api.github.com/users/petulla/followers",
"following_url": "https://api.github.com/users/petulla/following{/other_user}",
"gists_url": "https://api.github.com/users/petulla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/petulla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/petulla/subscriptions",
"organizations_url": "https://api.github.com/users/petulla/orgs",
"repos_url": "https://api.github.com/users/petulla/repos",
"events_url": "https://api.github.com/users/petulla/events{/privacy}",
"received_events_url": "https://api.github.com/users/petulla/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is just one of those days.. There's a[ mean pooling function](https://huggingface.co/sentence-transformers/bert-base-nli-mean-tokens) here that can be adapted."
] | 1,594 | 1,594 | 1,594 | NONE | null | # 🐛 Bug
## Information
The sentence embeddings I generate by following the model-zoo steps for sentence-transformers models deviate from those produced by the SentenceTransformers package.
I'm assuming this is due to the lack of pooling. Is there a way to convert a ST model to HF with pooling?
Language I am using the model on: English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Load a pre-trained model from SentenceTransformers
2. Generate sentence embeddings from tokenized inputs
3. Print sentence embeddings
HuggingFace:
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Sentences we want sentence embeddings for
sentences = ['This framework generates embeddings for each input sentence',
             'Sentences are passed as a list of string.',
             'The quick brown fox jumps over the lazy dog.']

tokenizer = AutoTokenizer.from_pretrained("roberta-large-nli-stsb-mean-tokens/0_RoBERTa/")
model = AutoModel.from_pretrained("roberta-large-nli-stsb-mean-tokens/0_RoBERTa/")
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt')
with torch.no_grad():
    model_output = model(**encoded_input)
    sentence_embeddings = model_output[0][:, 0]  # first-token pooling only

print("Sentence embeddings:")
print(sentence_embeddings)
# tensor([[ 0.0057, -0.7690,  0.0702,  ...,  0.0734, -1.4343,  0.3418],
#         [ 0.2066, -0.8213,  0.1272,  ...,  0.2649, -1.2799, -0.1636],
#         [-0.4860, -0.5176, -0.5924,  ..., -0.4880, -0.1880, -0.0554]])
```
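As the comment above notes, these checkpoints were trained with mean pooling, so first-token pooling will not match. A minimal sketch that should line the two outputs up, assuming `model_output` and `encoded_input` from the snippet above (adapted from the pooling recipe linked in the comments):
```python
import torch

def mean_pooling(model_output, attention_mask):
    # Masked average of token embeddings, ignoring padding positions.
    token_embeddings = model_output[0]  # (batch, seq_len, hidden)
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * mask, dim=1) / torch.clamp(mask.sum(dim=1), min=1e-9)

sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
```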
SentenceTransformers
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('roberta-large-nli-stsb-mean-tokens')
sentence_embeddings = model.encode(sentences)
print(sentence_embeddings)
# [array([ 0.6306487 , -0.2879937 ,  0.05334993, ...,  0.26865923,
#        -2.2382815 ,  0.22505784], dtype=float32),
#  array([ 0.22068763, -0.8045991 ,  0.18439776, ...,  0.6993382 ,
#        -1.7670776 ,  0.11258417], dtype=float32),
#  array([-0.17819108,  0.08762542, -0.7614953 , ..., -0.6983883 ,
#        -0.13175072, -0.11123852], dtype=float32)]
```
## Expected behavior
SentenceTransformer and HF outputs should be the same
## Environment info
```
- `transformers` version: 3.0.2
- Platform: Darwin-18.7.0-x86_64-i386-64bit
- Python version: 3.7.5
- PyTorch version (GPU?): 1.5.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5785/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5784 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5784/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5784/comments | https://api.github.com/repos/huggingface/transformers/issues/5784/events | https://github.com/huggingface/transformers/pull/5784 | 657,550,464 | MDExOlB1bGxSZXF1ZXN0NDQ5NjQ3NjQx | 5,784 | [fix] Style. Trying again | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5784/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5784/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5784",
"html_url": "https://github.com/huggingface/transformers/pull/5784",
"diff_url": "https://github.com/huggingface/transformers/pull/5784.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5784.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5783 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5783/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5783/comments | https://api.github.com/repos/huggingface/transformers/issues/5783/events | https://github.com/huggingface/transformers/issues/5783 | 657,549,975 | MDU6SXNzdWU2NTc1NDk5NzU= | 5,783 | Marian Conversion Script | {
"login": "Ahmedkoptan",
"id": 25284719,
"node_id": "MDQ6VXNlcjI1Mjg0NzE5",
"avatar_url": "https://avatars.githubusercontent.com/u/25284719?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ahmedkoptan",
"html_url": "https://github.com/Ahmedkoptan",
"followers_url": "https://api.github.com/users/Ahmedkoptan/followers",
"following_url": "https://api.github.com/users/Ahmedkoptan/following{/other_user}",
"gists_url": "https://api.github.com/users/Ahmedkoptan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ahmedkoptan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ahmedkoptan/subscriptions",
"organizations_url": "https://api.github.com/users/Ahmedkoptan/orgs",
"repos_url": "https://api.github.com/users/Ahmedkoptan/repos",
"events_url": "https://api.github.com/users/Ahmedkoptan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ahmedkoptan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2039044877,
"node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/marian",
"name": "marian",
"color": "30cc95",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"The script you want is at `src/transformers/convert_marian_to_pytorch.py` \r\nIt requires you to download the marian model you wish to convert and also to clone\r\n```bash\r\ngit clone [email protected]:Helsinki-NLP/Opus-MT-train.git\r\n```\r\nyou may have to adjust some paths in the script (like `repo_path`) based on where things are.\r\nhttps://github.com/huggingface/transformers/blob/448c467256332e4be8c122a159b482c1ef039b98/src/transformers/convert_marian_to_pytorch.py#L189\r\n",
"Also note that we have ported 1000+ of them, and some were renamed. Which one are you looking for?",
"I have trained a Transformer model to translate from Italian to Dutch with Marian based on https://github.com/marian-nmt/marian-examples/tree/master/transformer and data from OPUS. The model is using BPE for tokenization. I have the itnl.bpe, vocab file, model.npz file, etc. on my computer. \r\n\r\nThe example with the conver_marian_to_pytorch.py is using the model files from the Helsinki-NLP/Opus-MT-train repo instead of local model files. So how can I use the local model files (that I trained with marian) to convert them to a pytorch model that can be used with huggingface?\r\n\r\n@sshleifer ",
"I don't think BPE tokenizer will work. To answer the local model question, you need to (roughly) (a) run the converter on a model from the repo and see what files get downloaded, (b) make your filesystem look like that (c) update the code to not download things and not make model cards.",
"I am probably missing something here. I am trying to buid a grammar corrector using Marian NMT generated model with Huggingface's transformer. Source language is the text with erros and target language the text without. I have trained the model following this example (https://github.com/marian-nmt/marian-examples/tree/master/transformer). As it does not generate source and target spm files, I created both of them them using \"build/spm_train\" provided with Marian implementation and, of course, using for each one their respective training files, the same used for training the model.\r\n\r\nThe commands to generate spm files are:\r\n ../../build/spm_train --input data/src_sentences_dev.txt --model_prefix=source --vocab_size=16000 --character_coverage=1.0\r\n../../build/spm_train --input data/ref_sentences_dev.txt --model_prefix=target --vocab_size=16000 --character_coverage=1.0\r\n\r\nAfter that I proceeded with the convertion to pythorch using https://github.com/huggingface/transformers/blob/master/src/transformers/models/marian/convert_marian_to_pytorch.py. This convertion went fine. The problem is that when I use this model in Huggingface's transformer.MarianMTModel and MarianTokenizer (from_pretrained) I get weired results.\r\n\r\nSo to mitigate that I tried to perform another convertion, this time using en-de files downloaded from https://github.com/Helsinki-NLP/OPUS-MT-train/tree/master/models/en-de. Convertion went fine, unfortunately with the same weired results, example at the bottom, as with the model I created for grammar correction.\r\n\r\nI further downloaded the required files from https://huggingface.co/Helsinki-NLP/opus-mt-en-de/tree/main and substituted the ones genereated by convert_marian_to_pytorch.py above with these and, as expected, the translation went fine.\r\n\r\nThe converter, to my knowledge, requires 4 files: source.spm, target.spm, vocab.yml and model.npz. As these weired results raises after I use the converter, both with my model and the en-de downloaded from https://github.com/Helsinki-NLP/OPUS-MT-train/tree/master/models/en-de, I guess I am missing a piece of info that I cannot identify.\r\n\r\nIt is worth noting that my grammar corrector model works correctly with marian_decoder.\r\n\r\nAny help will be very much appreciated! 
@sshleifer ?\r\n\r\nCheers\r\n\r\nExample of English -> German translation:\r\nSource: \"Today is Sunday\"\r\nTranslation: \r\n▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Nachricht▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Nachricht▁Rumänien▁Rumänien▁Nachricht▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁Nachricht▁Rumänien▁Rumänien▁Rumänien▁Rumänien▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdäc
htige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige▁verdächtige\r\n\r\n\r\n",
"I hope it's not bad practice to necro an issue but since I can't seem to link it in a new one, I'd like to add that I'm encountering th same behaviour using the script. \r\n\r\nThe conversion went fine, but the generation output is, for me : \r\n\r\n??????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????\r\n\r\nor, sometimes \r\n\r\nlinkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage linkage (ad lib).\r\n\r\nI should note that I have used OPUS-CAT to fine tune an OPUS-MT model, and that it performs relatively well inside the app. Also conversion with the script went on without issues. \r\n\r\nLike the OP, if I replace the model files with the original ones (in my case,Helsinki-NLP/opus-mt-zh-en), everything is fixed, so I don't think this is a pure script/params issue.\r\n\r\nThanks in advance !"
] | 1,594 | 1,688 | 1,595 | NONE | null | I want to utilize and train the machine translation models posted on Github: https://github.com/Helsinki-NLP/OPUS-MT-train/tree/master/models on my own corpus. These models are C++ based.
Are these models exactly the same as the ones posted on Hugging Face's website: https://huggingface.co/Helsinki-NLP ?
If they are, what is the transformers conversion script that loads these Github models and transforms them to models that are loadable via transformers (for example utilizing: transformers.AutoTokenizer.from_pretrained('path'), transformers.AutoModelWithLMHead.from_pretrained('path'))?
@sshleifer @jackalhan
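For reference, a hedged sketch of running the converter and loading the result. It assumes the `convert(source_dir, dest_dir)` entry point in `convert_marian_to_pytorch.py` (the module path depends on the transformers version) and placeholder directories:
```python
from pathlib import Path
from transformers import MarianMTModel, MarianTokenizer
from transformers.convert_marian_to_pytorch import convert

# The source dir holds the downloaded Opus-MT files:
# source.spm, target.spm, the vocab .yml and the decoder .npz weights.
convert(Path("marian_models/en-de"), "converted/opus-mt-en-de")

tokenizer = MarianTokenizer.from_pretrained("converted/opus-mt-en-de")
model = MarianMTModel.from_pretrained("converted/opus-mt-en-de")
```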
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5783/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5783/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5782 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5782/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5782/comments | https://api.github.com/repos/huggingface/transformers/issues/5782/events | https://github.com/huggingface/transformers/pull/5782 | 657,540,660 | MDExOlB1bGxSZXF1ZXN0NDQ5NjM5NTA5 | 5,782 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5782/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5782",
"html_url": "https://github.com/huggingface/transformers/pull/5782",
"diff_url": "https://github.com/huggingface/transformers/pull/5782.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5782.patch",
"merged_at": 1594844240000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5781 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5781/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5781/comments | https://api.github.com/repos/huggingface/transformers/issues/5781/events | https://github.com/huggingface/transformers/pull/5781 | 657,535,944 | MDExOlB1bGxSZXF1ZXN0NDQ5NjM1NjE4 | 5,781 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5781/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5781/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5781",
"html_url": "https://github.com/huggingface/transformers/pull/5781",
"diff_url": "https://github.com/huggingface/transformers/pull/5781.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5781.patch",
"merged_at": 1594844246000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5780 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5780/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5780/comments | https://api.github.com/repos/huggingface/transformers/issues/5780/events | https://github.com/huggingface/transformers/issues/5780 | 657,521,947 | MDU6SXNzdWU2NTc1MjE5NDc= | 5,780 | Error in conversion to tensorflow | {
"login": "AIshutin",
"id": 30271345,
"node_id": "MDQ6VXNlcjMwMjcxMzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/30271345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AIshutin",
"html_url": "https://github.com/AIshutin",
"followers_url": "https://api.github.com/users/AIshutin/followers",
"following_url": "https://api.github.com/users/AIshutin/following{/other_user}",
"gists_url": "https://api.github.com/users/AIshutin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AIshutin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AIshutin/subscriptions",
"organizations_url": "https://api.github.com/users/AIshutin/orgs",
"repos_url": "https://api.github.com/users/AIshutin/repos",
"events_url": "https://api.github.com/users/AIshutin/events{/privacy}",
"received_events_url": "https://api.github.com/users/AIshutin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @Alshutin, \r\n\r\nI am not able to reproduce the error. It might be because PyTorch uses a GPU and Tensorflow does not. \r\nCould you try to run your code when disabling GPU (`export CUDA_VISIBLE_DEVICES=\"\"`) and see whether the \r\nerror persists?",
"Hi! I just tried it with another version of TensorFlow. With 2.2.0 it just works.",
"With 2.0.0-beta1 and CUDA_VISIBLE_DEVICES=\"\" the error persists.",
"Interesting - thanks for checking! \r\n\r\n\r\nDoes it crash as well for `bert-base-uncased` and TF 2.0.0?\r\n\r\nCould you run these lines to verify?\r\n\r\n```python\r\nfrom transformers import TFAutoModel, AutoTokenizer, AutoModel\r\nimport os\r\n\r\nmodel = AutoModel.from_pretrained('bert-base-uncased')\r\nos.system('mkdir bert')\r\nmodel.save_pretrained('bert')\r\nmodel = TFAutoModel.from_pretrained('bert', from_pt=True) # crashes\r\n```\r\n\r\n@thomwolf @jplu - are we gonna force TF 2.2 in `transformers` ? ",
"Can you try with the 2.0.0 release and not beta? The beta was know to have a lot of issue and a lot of fixes have been applied since.\n\n@patrickvonplaten I proposed indeed to fix the TensorFlow version to 2.2, because of some welcomed features from it. But nothing has been decided yet.",
"It works with 2.0.0 stable TensorFlow release. "
] | 1,594 | 1,594 | 1,594 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): DistilBERT
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```
from transformers import TFAutoModel, AutoTokenizer, AutoModel
import os
model = AutoModel.from_pretrained('distilbert-base-uncased')
os.system('mkdir distilbert')
model.save_pretrained('distilbert')
model = TFAutoModel.from_pretrained('distilbert', from_pt=True) # crashes
```
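A hedged follow-up once the conversion succeeds (the comments above trace the crash to TF 2.0.0-beta1; stable releases work): persist the converted weights so later loads no longer need `from_pt`:
```python
# After a successful conversion on a stable TensorFlow release:
model.save_pretrained('distilbert')                # now also writes tf_model.h5
model = TFAutoModel.from_pretrained('distilbert')  # loads the TF weights directly
```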
## Expected behavior
The model is converted from PyTorch to TensorFlow.
## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-5.3.0-62-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): 2.0.0-beta1 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
## Actual behavior
```
Traceback (most recent call last):
File "pt2tf.py", line 8, in <module>
model = TFAutoModel.from_pretrained('distilbert', from_pt=True)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_auto.py", line 423, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py", line 482, in from_pretrained
return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file, allow_missing_keys=True)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_pytorch_utils.py", line 93, in load_pytorch_checkpoint_in_tf2_model
tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_pytorch_utils.py", line 125, in load_pytorch_weights_in_tf2_model
tf_model(tf_inputs, training=False) # Make sure model is built
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 712, in __call__
outputs = self.call(inputs, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_distilbert.py", line 603, in call
outputs = self.distilbert(inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 712, in __call__
outputs = self.call(inputs, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_distilbert.py", line 493, in call
embedding_output = self.embeddings(input_ids, inputs_embeds=inputs_embeds) # (bs, seq_length, dim)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 709, in __call__
self._maybe_build(inputs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 1966, in _maybe_build
self.build(input_shapes)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_distilbert.py", line 112, in build
"weight", shape=[self.vocab_size, self.dim], initializer=get_initializer(self.initializer_range)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 389, in add_weight
aggregation=aggregation)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/base.py", line 713, in _add_variable_with_custom_getter
**kwargs_for_getter)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 154, in make_variable
shape=variable_shape if variable_shape else None)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py", line 260, in __call__
return cls._variable_v1_call(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py", line 221, in _variable_v1_call
shape=shape)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py", line 199, in <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variable_scope.py", line 2502, in default_variable_creator
shape=shape)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py", line 264, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 464, in __init__
shape=shape)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 608, in _init_from_args
initial_value() if init_from_fn else initial_value,
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 134, in <lambda>
init_val = lambda: initializer(shape, dtype=dtype)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/init_ops_v2.py", line 341, in __call__
dtype = _assert_float_dtype(dtype)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/init_ops_v2.py", line 769, in _assert_float_dtype
raise ValueError("Expected floating point type, got %s." % dtype)
ValueError: Expected floating point type, got <dtype: 'int32'>.
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5780/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5779 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5779/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5779/comments | https://api.github.com/repos/huggingface/transformers/issues/5779/events | https://github.com/huggingface/transformers/issues/5779 | 657,484,930 | MDU6SXNzdWU2NTc0ODQ5MzA= | 5,779 | [bart] decoder.last_hidden_state shape changes when passing labels | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"NVM, it's a me problem.",
"FFR, You have to manually pass `use_cache=False` to bart forward now if you're not generating"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | ```
config = BartConfig(
vocab_size=99,
d_model=24,
encoder_layers=2,
decoder_layers=2,
encoder_attention_heads=2,
decoder_attention_heads=2,
encoder_ffn_dim=32,
decoder_ffn_dim=32,
max_position_embeddings=48,
add_final_layer_norm=True,
)
lm_model = BartForConditionalGeneration(config).to(torch_device)
context = torch.Tensor([[71, 82, 18, 33, 46, 91, 2], [68, 34, 26, 58, 30, 2, 1]]).long().to(torch_device)
summary = torch.Tensor([[82, 71, 82, 18, 2], [58, 68, 2, 1, 1]]).long().to(torch_device)
loss, logits, enc_features = lm_model(input_ids=context, decoder_input_ids=summary, labels=summary)
expected_shape = (*summary.shape, config.vocab_size)
self.assertEqual(logits.shape, expected_shape)
outputs2 = lm_model(input_ids=context, decoder_input_ids=summary)
self.assertEqual(outputs2.logits.shape, expected_shape)
# Fails torch.Size([2, 1, 99]) != (2, 5, 99)
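# Hedged addendum, per the follow-up comments above: forward now defaults to
# caching when no labels are passed, so the decoder keeps only the last step.
# Passing use_cache=False should restore the full-length logits:
outputs3 = lm_model(input_ids=context, decoder_input_ids=summary, use_cache=False)
self.assertEqual(outputs3.logits.shape, expected_shape)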
```
Is this expected @sgugger ? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5779/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5778 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5778/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5778/comments | https://api.github.com/repos/huggingface/transformers/issues/5778/events | https://github.com/huggingface/transformers/issues/5778 | 657,455,474 | MDU6SXNzdWU2NTc0NTU0NzQ= | 5,778 | Error using DataParallel with reformer model: There were no tensor arguments to this function | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Update: This seems relevant https://github.com/pytorch/pytorch/issues/36035",
"I face the same error when using multi GPUs on Reformer model:\r\n```Traceback (most recent call last):\r\n File \"src/run_language_modeling.py\", line 305, in <module>\r\n main()\r\n File \"src/run_language_modeling.py\", line 269, in main\r\n trainer.train(model_path=model_path)\r\n File \"/project/6006286/qiwu/from_git/transformers/src/transformers/trainer.py\", line 499, in train\r\n tr_loss += self._training_step(model, inputs, optimizer)\r\n File \"/project/6006286/qiwu/from_git/transformers/src/transformers/trainer.py\", line 632, in _training_step\r\n outputs = model(**inputs)\r\n File \"/home/qiwu/protein-reformer-env/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/qiwu/protein-reformer-env/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py\", line 155, in forward\r\n outputs = self.parallel_apply(replicas, inputs, kwargs)\r\n File \"/home/qiwu/protein-reformer-env/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py\", line 165, in parallel_apply\r\n return parallel_apply(replicas,wandb: Waiting for W&B process to finish, PID 20542\r\n inputs, kwargs, self.device_ids[:len(replicas)])\r\n File \"/home/qiwu/protein-reformer-env/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py\", line 85, in parallel_apply\r\n output.reraise()\r\n File \"/home/qiwu/protein-reformer-env/lib/python3.7/site-packages/torch/_utils.py\", line 395, in reraise\r\n raise self.exc_type(msg)\r\nRuntimeError: Caught RuntimeError in replica 0 on device 0.\r\nOriginal Traceback (most recent call last):\r\n File \"/home/qiwu/protein-reformer-env/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py\", line 60, in _worker\r\n output = module(*input, **kwargs)\r\n File \"/home/qiwu/protein-reformer-env/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/project/6006286/qiwu/from_git/transformers/src/transformers/modeling_reformer.py\", line 1746, in forward\r\n return_tuple=return_tuple,\r\n File \"/home/qiwu/protein-reformer-env/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/project/6006286/qiwu/from_git/transformers/src/transformers/modeling_reformer.py\", line 1610, in forward\r\n embedding_output = self.embeddings(input_ids=input_ids, position_ids=position_ids, inputs_embeds=inputs_embeds)\r\n File \"/home/qiwu/protein-reformer-env/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/project/6006286/qiwu/from_git/transformers/src/transformers/modeling_reformer.py\", line 236, in forward\r\n position_embeddings = self.position_embeddings(position_ids)\r\n File \"/home/qiwu/protein-reformer-env/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/project/6006286/qiwu/from_git/transformers/src/transformers/modeling_reformer.py\", line 143, in forward\r\n weights = torch.cat(broadcasted_weights, dim=-1)\r\nRuntimeError: There were no tensor arguments to this function (e.g., wandb: Program failed with code 1. Press ctrl-c to abort syncing.\r\nyou passed an empty list of Tensors), but no fallback function is registered for schema aten::_cat. This usually means that this function requires a non-empty list of Tensors. 
Available functions are [CPUTensorId, CUDATensorId, QuantizedCPUTensorId, VariableTensorId]\r\n```",
"Out of curiosity, do you have the same error on PyTorch 1.4?",
"> Out of curiosity, do you have the same error on PyTorch 1.4?\r\n\r\nI stopped my GC instance - now there are none available. Maybe someone else can check?",
"In my case there's no error using torch-1.4.0, but got a warning:\r\n```07/16/2020 11:58:13 - INFO - transformers.trainer - ***** Running training *****\r\n07/16/2020 11:58:13 - INFO - transformers.trainer - Num examples = 5444\r\n07/16/2020 11:58:13 - INFO - transformers.trainer - Num Epochs = 12\r\n07/16/2020 11:58:13 - INFO - transformers.trainer - Instantaneous batch size per device = 32\r\n07/16/2020 11:58:13 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 64\r\n07/16/2020 11:58:13 - INFO - transformers.trainer - Gradient Accumulation steps = 1\r\n07/16/2020 11:58:13 - INFO - transformers.trainer - Total optimization steps = 1000\r\nEpoch: 0%| | 0/12 [00:00<?, ?it/s\r\nhome/qiwu/torch-1.4/lib/python3.7/site-packages/torch/nn/parallel/_functions.py:61: \r\nUserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.\r\n```\r\n\r\nFound a relevant issue : https://github.com/huggingface/transformers/issues/852\r\nhttps://discuss.pytorch.org/t/how-to-fix-gathering-dim-0-warning-in-multi-gpu-dataparallel-setting/41733/2",
"To be honest, I didn't check Reformer on multi-GPU yet - will note this issue down\r\n",
"@qwu01, what version of transformers are you using, and do you also have tokenizers installed? I get a vague segmentation fault error when I attempt to run about the same training script as above using `torch==1.4.0`, `transformers==2.9.0`, and `tokenizers==0.7.0`.\r\n\r\n",
"@jstremme I have these installed:\r\n\r\nPackage Version\r\n--------------- ---------\r\nargh 0.26.2\r\ncertifi 2020.6.20\r\nchardet 3.0.4\r\nclick 7.1.2\r\nconfigparser 5.0.0\r\ndocker-pycreds 0.4.0\r\nfilelock 3.0.12\r\ngitdb 4.0.5\r\nGitPython 3.1.7\r\ngql 0.2.0\r\ngraphql-core 1.1\r\nidna 2.10\r\njoblib 0.16.0\r\nnumpy 1.18.4\r\nnvidia-ml-py3 7.352.0\r\npackaging 20.4\r\npathtools 0.1.2\r\npip 19.1.1\r\npromise 2.3\r\npsutil 5.7.0\r\npyparsing 2.4.7\r\npython-dateutil 2.8.1\r\nPyYAML 5.3.1\r\nregex 2019.11.1\r\nrequests 2.24.0\r\nsacremoses 0.0.43\r\nsentencepiece 0.1.90\r\nsentry-sdk 0.16.1\r\nsetuptools 41.0.1\r\nshortuuid 1.0.1\r\nsix 1.15.0\r\nsmmap 3.0.4\r\nsubprocess32 3.5.3\r\n**tokenizers 0.8.1rc1**\r\n**torch 1.4.0**\r\ntqdm 4.47.0\r\n**transformers 3.0.2**\r\nurllib3 1.25.9\r\nwandb 0.9.3\r\nwatchdog 0.9.0\r\nwheel 0.33.4\r\n",
"Thanks very much @qwu01. Just to confirm, downgrading torch to `1.4.0` allowed you to train Reformer with multiple GPUs? Did this impact anything else? \r\n\r\nThe environment I'm using does not allow me to install `tokenizers 0.8.1rc1` and `transformers 3.0.2` currently, but I will test your environment config as soon as I'm able :) \r\n\r\nIf you are actively working on training a large Reformer model, I would be interested in discussing your parameters. I'm dealing with sequences of max length around 300k SentencePiece tokens and am limited to batch size = 1. Multi-GPU should get me to batch size = 4.",
"@jstremme Yes, I'm sure that torch 1.4.0 with multiple GPUs worked for Reformer training. AFAICT it's not impacting anything else.\r\n",
"@qwu01, @anthonyfuller7, downgrading to `torch==1.4.0` worked for me as well :D",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Was this issue ever solved? I have managed to use multiple GPUs in Reformer training by downgrading to PyTorch 1.4.0 and transformers 3.0.2. However, I would like to not be constrained to this version setup, because it is leading to some inefficiencies (functions for which arguments have changed in the new version, etc.) and also because I'd like the version to be up to date.",
"@anthonyfuller7, perhaps you could reopen this? I'm in a similar position to @JellePiepenbrock where having to use `torch==1.4.0` is a suboptimal workaround.",
"Sorry guys that this was never really solved -> could you try to post the problem on the forum: https://discuss.huggingface.co/ instead? It's usually more active for problems with multi-gpu training",
"Sure, @patrickvonplaten. I created a [post here](https://discuss.huggingface.co/t/reformer-for-multi-gpu-not-possible-for-torch-1-4-0/9422?u=jstremme). Is there someone from Hugging Face who would be able to help resolve this? As mentioned in my post, I'd be happy to help, but I don't think I understand the code well enough to spearhead the fix. Thanks for your reply to my comment!",
"> \r\n\r\nHello @jstremme \r\n\r\nI tried to downgrade to Pytorch 1.4.0 to remove the warning (and possibly increase training speed, as mentioned [here](https://github.com/huggingface/transformers/issues/852)) but I got this error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"run_mlm_arrow_dataset.py\", line 552, in <module>\r\n main()\r\n File \"run_mlm_arrow_dataset.py\", line 501, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/home/claudio/python-environments/test_venv/lib/python3.6/site-packages/transformers/trainer.py\", line 1214, in train\r\n self.create_optimizer_and_scheduler(num_training_steps=max_steps)\r\n File \"/home/claudio/python-environments/test_venv/lib/python3.6/site-packages/transformers/trainer.py\", line 803, in create_optimizer_and_scheduler\r\n self.create_optimizer()\r\n File \"/home/claudio/python-environments/test_venv/lib/python3.6/site-packages/transformers/trainer.py\", line 836, in create_optimizer\r\n self.optimizer = optimizer_cls(optimizer_grouped_parameters, **optimizer_kwargs)\r\n File \"/home/claudio/python-environments/test_venv/lib/python3.6/site-packages/transformers/optimization.py\", line 311, in __init__\r\n require_version(\"torch>=1.5.0\") # add_ with alpha\r\n File \"/home/claudio/python-environments/test_venv/lib/python3.6/site-packages/transformers/utils/versions.py\", line 114, in require_version\r\n _compare_versions(op, got_ver, want_ver, requirement, pkg, hint)\r\n File \"/home/claudio/python-environments/test_venv/lib/python3.6/site-packages/transformers/utils/versions.py\", line 50, in _compare_versions\r\n f\"{requirement} is required for a normal functioning of this module, but found {pkg}=={got_ver}.{hint}\"\r\nImportError: torch>=1.5.0 is required for a normal functioning of this module, but found torch==1.4.0.\r\n```\r\n\r\nDid you find some solution to the issue?"
] | 1,594 | 1,646 | 1,602 | NONE | null | # 🐛 Bug
## Information
I'm having some issues using DataParallel with the Reformer model on 4 GPUs. I am trying to feed the ReformerModel input embeddings and output the last hidden state. I am using apex amp; however, I get the same error when I don't use amp. I also get the same error when I use input IDs rather than embeddings. I've tested the same script with other HuggingFace models (BERT and RoBERTa) with no issues.
## To reproduce
Simple code:
```
import torch
from apex import amp
import transformers
from transformers import ReformerModel
from torch.utils.data import TensorDataset, DataLoader
import torch.nn as nn
print(transformers.__version__)
print(torch.__version__)
device = torch.device("cuda:0")
batch_size = 4
model_rf = ReformerModel.from_pretrained('google/reformer-crime-and-punishment')
model_rf.to(device)
opt_rf = torch.optim.AdamW(model_rf.parameters(), lr=0.0002)
model_rf, opt_rf = amp.initialize(model_rf, opt_rf)
model_rf = nn.DataParallel(model_rf)
embeds = torch.randn(80, 64, 256)
training_set = TensorDataset(embeds, embeds)
training_generator = DataLoader(training_set, batch_size=batch_size, shuffle=True)
for i, batch in enumerate(training_generator):
embeds, _ = batch
h_final = model_rf(inputs_embeds=embeds.to(device))
```
And the error:
```
Traceback (most recent call last):
File "rf_4.py", line 35, in <module>
h_final = model_rf(inputs_embeds=embeds)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py", line 155, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py", line 165, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply
output.reraise()
File "/usr/local/lib/python3.6/dist-packages/torch/_utils.py", line 395, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_reformer.py", line 1621, in forward
embedding_output = self.embeddings(input_ids=input_ids, position_ids=position_ids, inputs_embeds=inputs_embeds)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_reformer.py", line 234, in forward
position_embeddings = self.position_embeddings(position_ids)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_reformer.py", line 170, in forward
[weight[:, :required_pos_encodings_columns] for weight in broadcasted_weights], dim=-1
File "/usr/local/lib/python3.6/dist-packages/apex/amp/wrap.py", line 81, in wrapper
return orig_fn(seq, *args, **kwargs)
RuntimeError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema aten::_cat. This usually means that this function requires a non-empty list of Tensors. Available functions are [CPUTensorId, CUDATensorId, QuantizedCPUTensorId, VariableTensorId]
```
## Expected behavior
The forward pass should complete on all 4 GPUs; instead, this code raises the error above at the `h_final` line
## Environment info
- `transformers` version: 3.0.2
- Platform: Ubuntu 18.04
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1
- Tensorflow version (GPU?): no
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: Yes, 4 GPUs
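For anyone hitting the same replica error: a workaround that avoids `nn.DataParallel` entirely is to run one process per GPU with `DistributedDataParallel`. The sketch below is illustrative only and untested against this exact bug; the model name and tensor shapes come from the repro above, everything else is a standard PyTorch DDP skeleton:
```python
# Launch with: python -m torch.distributed.launch --nproc_per_node=4 ddp_reformer.py
import argparse
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from transformers import ReformerModel

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)  # filled in by the launcher
args = parser.parse_args()

dist.init_process_group(backend="nccl")
torch.cuda.set_device(args.local_rank)

model = ReformerModel.from_pretrained("google/reformer-crime-and-punishment").cuda(args.local_rank)
# One full replica per process: no scatter/gather of kwargs as in DataParallel
model = DDP(model, device_ids=[args.local_rank], output_device=args.local_rank)

embeds = torch.randn(1, 64, 256).cuda(args.local_rank)  # this process's share of the batch
h_final = model(inputs_embeds=embeds)[0]  # last hidden state
```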
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5778/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5778/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5777 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5777/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5777/comments | https://api.github.com/repos/huggingface/transformers/issues/5777/events | https://github.com/huggingface/transformers/issues/5777 | 657,453,753 | MDU6SXNzdWU2NTc0NTM3NTM= | 5,777 | Bug in MiniLM-L12-H384-uncased modelhub model files | {
"login": "kolk",
"id": 9049591,
"node_id": "MDQ6VXNlcjkwNDk1OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9049591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kolk",
"html_url": "https://github.com/kolk",
"followers_url": "https://api.github.com/users/kolk/followers",
"following_url": "https://api.github.com/users/kolk/following{/other_user}",
"gists_url": "https://api.github.com/users/kolk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kolk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kolk/subscriptions",
"organizations_url": "https://api.github.com/users/kolk/orgs",
"repos_url": "https://api.github.com/users/kolk/repos",
"events_url": "https://api.github.com/users/kolk/events{/privacy}",
"received_events_url": "https://api.github.com/users/kolk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I will be off for the next two weeks - maybe @sshleifer @sgugger @julien-c can take a look? ",
"@JetRunner Do you know who from @microsoft uploaded the MiniLM model?",
"@patrickvonplaten did it if I remember it right. He's on vocation so I'll take a look.\n",
"Here's the diff\r\n\r\n@patrickvonplaten when u are back to work, pls check why this happened.\r\nI'll re-upload `vocab.txt` to resolve the problem for now.",
"@julien-c I've re-uploaded it. However, CDN seems to have cached the incorrect version (https://cdn.huggingface.co/microsoft/MiniLM-L12-H384-uncased/vocab.txt).",
"Yes, the CDN caches files for up to 24 hours on each POP. However AFAIK the library doesn't load tokenizer files from the CDN anyways.",
"The model is working now"
] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): microsoft/MiniLM-L12-H384-uncased
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on are:
* [x] an official GLUE/SQUaD task: SQuAD ver.2
* [ ] my own task or dataset: (give details below)
Problem: The vocab for microsoft/MiniLM-L12-H384-uncased is missing a token => wrong tokenization => bad performance for SQuAD finetuning
Potential fix: Upload the original vocab that was published in the original Microsoft Repository (https://github.com/microsoft/unilm/tree/master/minilm)
## To reproduce
Steps to reproduce the behavior:
1. Tokenize a sample English sentence with the MiniLM model downloaded from the model hub
2. Compare the model hub tokenizer's vocab size vs. the model hub model's vocab size
```
from transformers import AutoTokenizer, AutoModel
tokenizer_modelhub = AutoTokenizer.from_pretrained("microsoft/MiniLM-L12-H384-uncased")
model_modelhub = AutoModel.from_pretrained("microsoft/MiniLM-L12-H384-uncased")
assert tokenizer_modelhub.vocab_size == model_modelhub.embeddings.word_embeddings.num_embeddings, \
    "tokenizer vocab_size {} doesn't match embedding vocab size {}".format(
        tokenizer_modelhub.vocab_size, model_modelhub.embeddings.word_embeddings.num_embeddings)
```
Output
```
AssertionError: tokenizer vocab_size 30521 doesn't match embedding vocab size 30522
```
3. Download "original" MiniLM model from Microsoft's MiniLM GitHub Repo (https://1drv.ms/u/s!AjHn0yEmKG8qixAYyu2Fvq5ulnU7?e=DFApTA)
4. Comparing the modelhub MiniLM tokenizer and "original" MiniLM tokenizer token ids
```
import torch
input_ids_modelhub = torch.tensor([tokenizer_modelhub.encode("Let's see all hidden-states and attentions on this text")])
config_github = AutoConfig.from_pretrained("<github_minilm_model_directory>")
tokenizer_github = AutoTokenizer.from_pretrained(
    "<github_minilm_model_directory>", config=config_github)
model_github = AutoModelForQuestionAnswering.from_pretrained(
    "<github_minilm_model_directory>", config=config_github)
# base_model reaches the encoder embeddings beneath the QA head
assert tokenizer_github.vocab_size == model_github.base_model.embeddings.word_embeddings.num_embeddings, \
    "tokenizer vocab_size {} doesn't match embedding vocab size {}".format(
        tokenizer_github.vocab_size, model_github.base_model.embeddings.word_embeddings.num_embeddings)
input_ids_github = torch.tensor([tokenizer_github.encode("Let's see all hidden-states and attentions on this text")])
```
```
print(input_ids_github)
tensor([[ 101, 2292, 1005, 1055, 2156, 2035, 5023, 1011, 2163, 1998, 3086, 2015,
2006, 2023, 3793, 102]])
```
```
print(input_ids_modelhub)
tensor([[ 100, 2291, 1004, 1054, 2155, 2034, 5022, 1010, 2162, 1997, 3085, 2014,
2005, 2022, 3792, 101]])
```
5. Fine-tune modelhub MiniLM model for SQuAD ver 2
```
python examples/question-answering/run_squad.py --model_type bert \
--model_name_or_path microsoft/Multilingual-MiniLM-L12-H384 \
--output_dir finetuned_modelhub_minilm \
--data_dir data/squad20 \
--train_file train-v2.0.json \
--predict_file dev-v2.0.json \
--learning_rate 4e-5 \
--num_train_epochs 4 \
--max_seq_length 384 \
--doc_stride 128 \
--per_gpu_train_batch_size 12 \
--per_gpu_eval_batch_size 12 \
--gradient_accumulation_steps 4 \
--version_2_with_negative \
--do_lower_case \
--verbose_logging \
--do_train \
--do_eval \
--seed 42 \
--save_steps 5000 \
--overwrite_output_dir \
--overwrite_cache
```
Results:
```
{'exact': 59.681630590415224, 'f1': 63.78250778488946, 'total': 11873, 'HasAns_exact': 49.73009446693657, 'HasAns_f1': 57.94360913123985, 'HasAns_total': 5928, 'NoAns_exact': 69.60470984020185, 'NoAns_f1': 69.60470984020185, 'NoAns_total': 5945, 'best_exact': 59.690053061568264, 'best_exact_thresh': 0.0, 'best_f1': 63.79093025604285, 'best_f1_thresh': 0.0}
```
6. Fine-tune original MiniLM model for SQuAD ver 2
```
python examples/question-answering/run_squad.py --model_type bert \
--model_name_or_path <saved_githubModel_local_path> \
--output_dir finetuned_github_minilm \
--data_dir data/squad20 \
--train_file train-v2.0.json \
--predict_file dev-v2.0.json \
--learning_rate 4e-5 \
--num_train_epochs 4 \
--max_seq_length 384 \
--doc_stride 128 \
--per_gpu_train_batch_size 12 \
--per_gpu_eval_batch_size 12 \
--gradient_accumulation_steps 4 \
--version_2_with_negative \
--do_lower_case \
--verbose_logging \
--do_train \
--do_eval \
--seed 42 \
--save_steps 5000 \
--overwrite_output_dir \
--overwrite_cache
```
Results:
```
{'exact': 76.23178640613156, 'f1': 79.57013365427773, 'total': 11873, 'HasAns_exact': 78.50877192982456, 'HasAns_f1': 85.1950399590485, 'HasAns_total': 5928, 'NoAns_exact': 73.96131202691338, 'NoAns_f1': 73.96131202691338, 'NoAns_total': 5945, 'best_exact': 76.23178640613156, 'best_exact_thresh': 0.0, 'best_f1': 79.57013365427775, 'best_f1_thresh': 0.0}
```
## Expected behavior
1. Assertions should pass.
2. `input_ids_modelhub` and `input_ids_github` should produce the same results
3. Reproduce the Downstream results on MiniLM modelhub files as mentioned in MiniLM model card
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Ubuntu 18.04.4 LTS
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1 (Yes)
- Tensorflow version (GPU?): Not using
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
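Follow-up note: since the fix ended up being a re-upload of `vocab.txt` (see the comments above) and stale copies can linger in local caches, a quick way to verify the repair is to force a fresh download and re-run the size check. A minimal sketch:
```python
from transformers import AutoTokenizer, AutoModel

# force_download=True bypasses any locally cached (stale) vocab.txt
tokenizer = AutoTokenizer.from_pretrained("microsoft/MiniLM-L12-H384-uncased", force_download=True)
model = AutoModel.from_pretrained("microsoft/MiniLM-L12-H384-uncased")

# With the repaired vocab, both sides should report 30522
assert tokenizer.vocab_size == model.embeddings.word_embeddings.num_embeddings
```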
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5777/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5777/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5776 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5776/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5776/comments | https://api.github.com/repos/huggingface/transformers/issues/5776/events | https://github.com/huggingface/transformers/pull/5776 | 657,453,077 | MDExOlB1bGxSZXF1ZXN0NDQ5NTY2Mzc5 | 5,776 | Update README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | Add cherry picked example for the widget | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5776/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5776",
"html_url": "https://github.com/huggingface/transformers/pull/5776",
"diff_url": "https://github.com/huggingface/transformers/pull/5776.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5776.patch",
"merged_at": 1594844362000
} |
https://api.github.com/repos/huggingface/transformers/issues/5775 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5775/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5775/comments | https://api.github.com/repos/huggingface/transformers/issues/5775/events | https://github.com/huggingface/transformers/pull/5775 | 657,431,896 | MDExOlB1bGxSZXF1ZXN0NDQ5NTQ4Nzg5 | 5,775 | [squad] make examples and dataset accessible from SquadDataset object | {
"login": "lazovich",
"id": 678679,
"node_id": "MDQ6VXNlcjY3ODY3OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/678679?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lazovich",
"html_url": "https://github.com/lazovich",
"followers_url": "https://api.github.com/users/lazovich/followers",
"following_url": "https://api.github.com/users/lazovich/following{/other_user}",
"gists_url": "https://api.github.com/users/lazovich/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lazovich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lazovich/subscriptions",
"organizations_url": "https://api.github.com/users/lazovich/orgs",
"repos_url": "https://api.github.com/users/lazovich/repos",
"events_url": "https://api.github.com/users/lazovich/events{/privacy}",
"received_events_url": "https://api.github.com/users/lazovich/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"There seem to be some issues with CircleCI right now causing all the integration tests to fail. Please let me know if there is an issue on my end."
] | 1,594 | 1,598 | 1,598 | CONTRIBUTOR | null | In order to do evaluation on the SQuAD dataset using `squad_evaluate`, the user needs access to both the examples loaded in the dataset and the `TensorDataset` that contains values like `unique_id` and the like that are used in constructing the list of `SquadResult` objects. This PR surfaces the examples and dataset to the user so that they can access it directly.
For example of why access to those is needed, see how evaluation is currently done in `examples/run_squad.py`. The `SquadDataset` object attempts to wrap up some of this functionality, but without access to examples and dataset the evaluation is not possible.
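For context, the evaluation flow this PR enables looks roughly like the sketch below. `squad_evaluate` is the existing helper from `transformers`; `dataset.examples` is the attribute this PR surfaces (name as proposed here, not yet part of the released API), and `predictions` is assumed to be built from the model outputs as in `examples/run_squad.py`:
```python
from transformers.data.metrics.squad_metrics import squad_evaluate

# predictions: dict mapping each example's qas_id -> predicted answer text,
# e.g. produced by compute_predictions_logits(...) as in examples/run_squad.py
results = squad_evaluate(dataset.examples, predictions)
print(results["exact"], results["f1"])
```
| {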
"url": "https://api.github.com/repos/huggingface/transformers/issues/5775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5775/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5775",
"html_url": "https://github.com/huggingface/transformers/pull/5775",
"diff_url": "https://github.com/huggingface/transformers/pull/5775.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5775.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5774 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5774/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5774/comments | https://api.github.com/repos/huggingface/transformers/issues/5774/events | https://github.com/huggingface/transformers/pull/5774 | 657,427,295 | MDExOlB1bGxSZXF1ZXN0NDQ5NTQ0OTMx | 5,774 | [fix] check_code_quality | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5774/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5774",
"html_url": "https://github.com/huggingface/transformers/pull/5774",
"diff_url": "https://github.com/huggingface/transformers/pull/5774.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5774.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5773 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5773/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5773/comments | https://api.github.com/repos/huggingface/transformers/issues/5773/events | https://github.com/huggingface/transformers/pull/5773 | 657,410,400 | MDExOlB1bGxSZXF1ZXN0NDQ5NTMwOTg4 | 5,773 | Ensure OpenAI GPT position_ids is correctly initialized and registered at init. | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Fixes for the failing tests are not detected by Github while being present on the branch ... Lets see if it comes back to life in a while ...",
"CI failure has been fixed on master.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5773?src=pr&el=h1) Report\n> Merging [#5773](https://codecov.io/gh/huggingface/transformers/pull/5773?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/223bad242d0d64e20a39b956b73ab300231a9c70&el=desc) will **decrease** coverage by `0.04%`.\n> The diff coverage is `90.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5773?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5773 +/- ##\n==========================================\n- Coverage 78.67% 78.63% -0.05% \n==========================================\n Files 146 146 \n Lines 26210 26206 -4 \n==========================================\n- Hits 20621 20606 -15 \n- Misses 5589 5600 +11 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5773?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5773/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.44% <66.66%> (-0.02%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5773/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.37% <100.00%> (-0.02%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5773/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <100.00%> (-0.06%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5773/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `90.00% <100.00%> (-0.03%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5773/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-2.51%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5773/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <0.00%> (-0.30%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5773?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5773?src=pr&el=footer). Last update [b01a884...722824f](https://codecov.io/gh/huggingface/transformers/pull/5773?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@sshleifer I've updated all the models with `position_ids` created in the forward pass as it seems to be the right design to allow people to export the bunch of models impacted. \r\n\r\nAlso, it avoids allocating tensors at every forward call, so might reduce the pressure on PyTorch memory manager. ",
"Do we get the \"Uninitialized parameters\" warning because the position_ids are not in the `pytorch_model.bin` on S3?\r\n\r\n",
"I'll check the above, didn't catch at first 👌 ",
"Could this be merged? Would love to test these changes after merging :)",
"@vdantu Now merged, sorry for the delay, many people are off these days, it slows down a little bit the merging process 😃. \r\n\r\nLet us know if you have any follow up issue(s) / question(s).",
"@mfuntowicz : Thanks for the push. Would we have to wait for a new release of `transformers` package to get these changes? ",
"@vdantu: I'll let @LysandreJik handle this one "
] | 1,594 | 1,597 | 1,595 | MEMBER | null | This will make it compatible with TorchScript export and avoid hardcoded `position_ids` tensor's device in the generated graph.
Solve the following issue: #5664
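For readers, the pattern is the one BERT already uses: build `position_ids` once in `__init__` and register it as a buffer, so it moves with the module and no device gets hardcoded into a traced graph. A simplified sketch of the idea (not the literal diff):
```python
import torch
import torch.nn as nn

class GPTEmbeddings(nn.Module):  # hypothetical module, for illustration only
    def __init__(self, config):
        super().__init__()
        self.tokens_embed = nn.Embedding(config.vocab_size, config.n_embd)
        self.positions_embed = nn.Embedding(config.n_positions, config.n_embd)
        # Created and registered at init: no per-forward allocation,
        # and TorchScript export no longer bakes in a fixed device.
        self.register_buffer("position_ids", torch.arange(config.n_positions).expand((1, -1)))

    def forward(self, input_ids):
        seq_length = input_ids.size(-1)
        position_ids = self.position_ids[:, :seq_length]  # already on the module's device
        return self.tokens_embed(input_ids) + self.positions_embed(position_ids)
```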
Signed-off-by: Morgan Funtowicz <[email protected]> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5773/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5773",
"html_url": "https://github.com/huggingface/transformers/pull/5773",
"diff_url": "https://github.com/huggingface/transformers/pull/5773.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5773.patch",
"merged_at": 1595597872000
} |
https://api.github.com/repos/huggingface/transformers/issues/5772 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5772/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5772/comments | https://api.github.com/repos/huggingface/transformers/issues/5772/events | https://github.com/huggingface/transformers/pull/5772 | 657,403,771 | MDExOlB1bGxSZXF1ZXN0NDQ5NTI1NDg5 | 5,772 | [fix] check code quality | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Merging without circleci."
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5772/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5772",
"html_url": "https://github.com/huggingface/transformers/pull/5772",
"diff_url": "https://github.com/huggingface/transformers/pull/5772.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5772.patch",
"merged_at": 1594839578000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5771 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5771/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5771/comments | https://api.github.com/repos/huggingface/transformers/issues/5771/events | https://github.com/huggingface/transformers/issues/5771 | 657,403,345 | MDU6SXNzdWU2NTc0MDMzNDU= | 5,771 | Fine-tune BERT for regression problem | {
"login": "rxlian",
"id": 35382484,
"node_id": "MDQ6VXNlcjM1MzgyNDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/35382484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rxlian",
"html_url": "https://github.com/rxlian",
"followers_url": "https://api.github.com/users/rxlian/followers",
"following_url": "https://api.github.com/users/rxlian/following{/other_user}",
"gists_url": "https://api.github.com/users/rxlian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rxlian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rxlian/subscriptions",
"organizations_url": "https://api.github.com/users/rxlian/orgs",
"repos_url": "https://api.github.com/users/rxlian/repos",
"events_url": "https://api.github.com/users/rxlian/events{/privacy}",
"received_events_url": "https://api.github.com/users/rxlian/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hey @rxlian, \r\n\r\nWe are trying to move more general and \"researchy\" questions to our discussion forum here: https://discuss.huggingface.co/ and use github issues mainly for bugs. \r\n\r\nWould you mind posting your questions at the forum - other people might be interested as well :-) ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,600 | 1,600 | NONE | null | # ❓ Questions & Help
## Details
I was wondering: in order to fine-tune BERT for a regression problem, do I just need to set num_labels = 1 in the BertForSequenceClassification function? Does anything else need to be modified? I'm a newbie to transformers and still confused about basic things. Thanks in advance.
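For reference, that is indeed the usual pattern: in this library, when `num_labels == 1` the sequence classification head switches to a mean-squared-error loss and treats the labels as float targets (this is how STS-B is handled). A minimal sketch:
```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)

inputs = tokenizer("This movie was great", return_tensors="pt")
labels = torch.tensor([4.5])  # a float target, not a class index
loss, logits = model(**inputs, labels=labels)[:2]  # num_labels == 1 -> MSELoss internally
loss.backward()
```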
**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5771/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5771/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5770 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5770/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5770/comments | https://api.github.com/repos/huggingface/transformers/issues/5770/events | https://github.com/huggingface/transformers/pull/5770 | 657,372,863 | MDExOlB1bGxSZXF1ZXN0NDQ5NDk5Nzg1 | 5,770 | XLNet `use_cache` refactor | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5770?src=pr&el=h1) Report\n> Merging [#5770](https://codecov.io/gh/huggingface/transformers/pull/5770?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3653d01f2af0389207f2239875a8ceae41bf0598&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5770?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5770 +/- ##\n==========================================\n- Coverage 77.26% 77.25% -0.02% \n==========================================\n Files 146 146 \n Lines 25948 25958 +10 \n==========================================\n+ Hits 20048 20053 +5 \n- Misses 5900 5905 +5 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5770?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5770/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.52% <ø> (ø)` | |\n| [src/transformers/configuration\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5770/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3hsbmV0LnB5) | `94.33% <ø> (ø)` | |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5770/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `81.75% <100.00%> (+0.26%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5770/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.26%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5770?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5770?src=pr&el=footer). Last update [3653d01...cfd7c66](https://codecov.io/gh/huggingface/transformers/pull/5770?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Not sure what is going on with Cicle CI here - might be related to the problems with github yesterday...",
"Yeah it doesn't want to let me re-run, I think I'll just add a small change to refresh it",
"After our discussion yesterday @TevenLeScao, I think I am more or less fine with this PR. A couple of things we should maybe think about:\r\n\r\n1) ***Consistency with `use_cache` of other models***\r\nAs it is implemented at the moment (correct me if I'm wrong), if `use_cache=False`, `mem_len` has no effect as `mems` will always be `None`. This is not ideal IMO because this means that one has to enable `use_cache=True` to correctly train the model => this is not really consistent with the way `use_cache` is used in the library in general. In all other models GPT2, CTRL, T5, BART, Reformer `use_cache` should only be used for inference and disabled for training. So as a solution I think, there are two options:\r\n A.) All `use_cache` statements should be updated with `use_cache or self.training` => this way I think it would be consistent with the logic / design of other models (Let me know if this is not clear)\r\n B.) Add an explicit warning that `use_cache` should be `True` for training to enable the `mems` => I don't like this very much because this is confusing for people that are used to GPT2 where `use_cache` is only for inference. \r\n\r\n2) ***Equality `use_cache=True/False`*** \r\nAfter our discussion yesterday, it seems that since `XLNet` uses a mixture of CLM and bi-directional self-attention (defined in the `perm_mask` input), it seems that it is **not** possible to have a (mathematically) identical output between `use_cache=True` and `use_cache=False`. I guess that's just how it is in `XLNet` and we can't really do anything against it. \r\n\r\n - A.) I guess we should change (...could be done in a 2nd PR though), is to add the parameter `offset` to `XLNet` config and add it to all `forward(...)` functions and maybe give it a better name... As I understood it yesterday, `offset` defines the trade-off between how many tokens should be used from `mems` (and are therefore not in the *query projection*) and how many are in the query projection and have a bi-directional mask on them. At the moment `offset` is hard-coded to `2` in the `prepare_generate` function which does not allow for a lot of flexibility and is somewhat arbitrary IMO. I think, this parameter should be handled in a similar way as `num_hashes` is handled in `Reformer`\r\n\r\n- B.) This is one is more important to me. We should add a test to `XLNet` that verifies that `use_cache` does give the **same** outputs if the `perm_mask` is set to a causal mask. We have tests for this in `GPT2, T5, Bart` I think. Here is the one for GPT2 e.g.: https://github.com/huggingface/transformers/blob/1f75f9317e381425ee56f7108e5ec8d3f3d6b6ad/tests/test_modeling_gpt2.py#L165 . The test here can be very similar - only that we need to define the `perm_mask` correctly. This test would be extremely useful a) to make sure we fully understand what's going on in `XLNet` and b) make sure that there is no bug.\r\n\r\n3) ***Benchmark performance improvement***\r\n\r\nSoon we will have benchmarking for the `generate` function, see PR here: https://github.com/huggingface/transformers/pull/5802 . So I took this case here as a good trial to test the benchmark scripts for generation. I started two benchmarks 1 on CPU 1 on GPU and will post the results here as soon as the benchmarking is done. It be great to add benchmark results in general to PRs like this.\r\n\r\n\r\nI think this is quite an important Model and an important PR, so I'd love to loop in @sshleifer and @thomwolf here as well to hear their opinions. 
\r\n\r\n@TevenLeScao - sorry for the long message! It's mostly because I want to better understand `XLNet` in detail. Let me know if some of my points are not clear or not 100% correct.",
"I used the following code for benchmarking:\r\n\r\n```python\r\n\r\n#!/usr/bin/env python3\r\nfrom transformers import XLNetConfig, PyTorchBenchmark, PyTorchBenchmarkArguments\r\n\r\nconfig_with_cache = XLNetConfig.from_pretrained(\"xlnet-base-cased\", use_cache=True, mem_len=1024)\r\nconfig_without_cache = XLNetConfig.from_pretrained(\"xlnet-base-cased\", use_cache=False, mem_len=1024)\r\nconfig_without_cache_no_mems = XLNetConfig.from_pretrained(\"xlnet-base-cased\", use_cache=False)\r\n\r\nassert config_without_cache_no_mems.mem_len is None, \"Configs are wrong\"\r\nassert config_with_cache.mem_len == config_without_cache.mem_len == 1024, \"Configs are wrong\"\r\nassert config_with_cache.use_cache is (not config_without_cache.use_cache) is True and (not config_without_cache_no_mems.use_cache) is True, \"Configs are wrong\"\r\n\r\nargs = PyTorchBenchmarkArguments(models=[\"xlnet-cache\", \"xlnet-no-cache\", \"xlnet-no-mems\"], sequence_lengths=[300], batch_sizes=[1], generate=True, no_inference=True)\r\n\r\nbenchmark = PyTorchBenchmark(args=args, configs=[config_with_cache, config_without_cache, config_without_cache_no_mems])\r\nbenchmark.run()\r\n\r\n\r\n```\r\nand the benchmark code from #5802 \r\n\r\n\r\n=> I could not see a real speed-up on GPU for the following case:\r\nStart with 1 token and generate up to `Seq Length` tokens:\r\n\r\n INFERENCE - SPEED - RESULT\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\n xlnet-cache 1 500 11.8 \r\n xlnet-no-cache 1 500 11.925 \r\n xlnet-no-mems 1 500 11.911 \r\n--------------------------------------------------------------------------------\r\n\r\n INFERENCE - MEMORY - RESULT \r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Memory in MB \r\n--------------------------------------------------------------------------------\r\n xlnet-cache 1 500 1351 \r\n xlnet-no-cache 1 500 1397 \r\n xlnet-no-mems 1 500 1397 \r\n--------------------------------------------------------------------------------\r\n\r\n ENVIRONMENT INFORMATION \r\n- transformers_version: 3.0.2\r\n- framework: PyTorch\r\n- use_torchscript: False\r\n- framework_version: 1.5.0\r\n- python_version: 3.7.7\r\n- system: Linux\r\n- cpu: x86_64\r\n- architecture: 64bit\r\n- date: 2020-07-16\r\n- time: 11:28:43.644495\r\n- fp16: False\r\n- use_multiprocessing: True\r\n- only_pretrain_model: False\r\n- cpu_ram_mb: 32089\r\n- use_gpu: True\r\n- num_gpus: 1\r\n- gpu: TITAN RTX\r\n- gpu_ram_mb: 24217\r\n- gpu_power_watts: 280.0\r\n- gpu_performance_state: 0\r\n- use_tpu: False\r\n\r\n=> For CPU, there is a small speed up and the speed up is probably more significant for longer sequence length (so I run it again for longer sequences):\r\n\r\nINFERENCE - SPEED - RESULT\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\n xlnet-cache 1 50 5.032 \r\n xlnet-no-cache 1 50 6.149 \r\n xlnet-no-mems 1 50 6.175 \r\n--------------------------------------------------------------------------------\r\n\r\nINFERENCE - MEMORY - RESULT \r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Memory in MB 
\r\n--------------------------------------------------------------------------------\r\n xlnet-cache 1 50 691 \r\n xlnet-no-cache 1 50 695 \r\n xlnet-no-mems 1 50 696 \r\n--------------------------------------------------------------------------------\r\n\r\nENVIRONMENT INFORMATION\r\n- transformers_version: 3.0.2\r\n- framework: PyTorch\r\n- use_torchscript: False\r\n- framework_version: 1.5.0\r\n- python_version: 3.7.7\r\n- system: Linux\r\n- cpu: x86_64\r\n- architecture: 64bit\r\n- date: 2020-07-16\r\n- time: 11:28:43.644495\r\n- fp16: False\r\n- use_multiprocessing: True\r\n- only_pretrain_model: False\r\n- cpu_ram_mb: 32089\r\n- use_gpu: False\r\n- use_tpu: False\r\n\r\n\r\nBut overall, the speed-up is not really significant... @TevenLeScao can you post the code you used to benchmark the CPU speed-up here? Maybe I am doing something wrong.\r\n\r\nI think if we manage to cache the key, value projections instead of recalculating them for every token we would see a bigger speed-up, but this has to checked.",
"- on GPU I also haven't observed a difference, and doing inference/text generation on 1 sentence isn't going to be very affected by the caching trick since the GPU is going to parallelize everything anyway\r\n- on CPU I've run from the text generation pipeline:\r\n\r\n```\r\n#!/usr/bin/env python3\r\nfrom transformers import pipeline, XLNetLMHeadModel\r\nimport time\r\n\r\n# xlnet = XLNetLMHeadModel.from_pretrained(\"xlnet-base-cased\")\r\nxlnet = XLNetLMHeadModel.from_pretrained(\"xlnet-base-cased\", use_cache=False)\r\ngenerator = pipeline(\"text-generation\", model=xlnet, tokenizer=\"xlnet-base-cased\")\r\nstart = time.time()\r\nnumber_gen = 100\r\nfor i in range(number_gen):\r\n output_to_check = generator(\"Today is a beautiful day and I, \")\r\n print(output_to_check)\r\n print((time.time() - start) / (i+1))\r\n print()\r\n```\r\n\r\nwhich makes I think a big (33.2 vs 8.6 seconds on my CPU) difference since XLNet has the 170-long padding text at the start! Which we should probably also take a look at since it heavily influences the generated text...",
"I can confirm a much bigger speed-up on CPU for longer sequences. Here the results:\r\n\r\n==================== INFERENCE - SPEED - RESULT ==================== \r\n-------------------------------------------------------------------------------- \r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\n xlnet-cache 1 300 37.733 \r\n xlnet-no-cache 1 300 98.142 \r\n xlnet-no-mems 1 300 94.338 \r\n-------------------------------------------------------------------------------- \r\n \r\n==================== INFERENCE - MEMORY - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Memory in MB \r\n-------------------------------------------------------------------------------- \r\n xlnet-cache 1 300 728 \r\n xlnet-no-cache 1 300 752 \r\n xlnet-no-mems 1 300 750 \r\n-------------------------------------------------------------------------------- \r\n \r\n==================== ENVIRONMENT INFORMATION ==================== \r\n- transformers_version: 3.0.2 \r\n- framework: PyTorch \r\n- use_torchscript: False \r\n- framework_version: 1.5.0 \r\n- python_version: 3.7.7\r\n- system: Linux\r\n- cpu: x86_64\r\n- architecture: 64bit \r\n- date: 2020-07-16 \r\n- time: 13:49:43.409581 \r\n- fp16: False \r\n- use_multiprocessing: True \r\n- only_pretrain_model: False \r\n- cpu_ram_mb: 32089 \r\n- use_gpu: False \r\n- use_tpu: False\r\n",
"@patrickvonplaten I added the `self.training` check and the test for `use_cache` in autoregressive mode.",
"So weird, it passed the tests earlier but the CI just told me the push had been refused, I'll revert it and look at it again tomorrow"
] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | As discussed with @joeddav and @sgugger, this PR lightly refactors the `use_cache` argument in XLNet.
- In line with #5438, in the model methods `use_cache` defaults to None, which redirects to the model config value if no value is passed.
- `use_cache` is independent of `mem_len`: if `use_cache` is True and `mem_len` is 0 or None (which is the case in the base model config), the model behaves like GPT-2 and returns `mems` to be used as `past` in generation.
- This changes functionality and will enable the default model to use caching; for instance, it should speed up the [inference widget](https://huggingface.co/xlnet-base-cased?text=My+name+is+Clara+and+I+am+) significantly (x3 speed-up on my CPU)
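A minimal usage sketch of the new behavior (values are illustrative; with `use_cache=True` and `mem_len` left at its default, `mems` act like GPT-2's `past` during generation):
```python
from transformers import XLNetLMHeadModel, XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased", use_cache=True)

input_ids = tokenizer.encode("My name is Clara and I am", return_tensors="pt")
output = model.generate(input_ids, max_length=50, do_sample=True)
print(tokenizer.decode(output[0]))
```
| {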
"url": "https://api.github.com/repos/huggingface/transformers/issues/5770/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5770/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5770",
"html_url": "https://github.com/huggingface/transformers/pull/5770",
"diff_url": "https://github.com/huggingface/transformers/pull/5770.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5770.patch",
"merged_at": 1595010257000
} |
https://api.github.com/repos/huggingface/transformers/issues/5769 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5769/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5769/comments | https://api.github.com/repos/huggingface/transformers/issues/5769/events | https://github.com/huggingface/transformers/pull/5769 | 657,317,032 | MDExOlB1bGxSZXF1ZXN0NDQ5NDUzNzM2 | 5,769 | [fix] T5 ONNX test: model.to(torch_device) | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Code Quality is failing due to files that were not modified in this PR, not addressing to avoid rebasing somewhere else.",
"ill fix code quality."
] | 1,594 | 1,594 | 1,594 | MEMBER | null | Should fix #5724
Signed-off-by: Morgan Funtowicz <[email protected]> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5769/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5769",
"html_url": "https://github.com/huggingface/transformers/pull/5769",
"diff_url": "https://github.com/huggingface/transformers/pull/5769.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5769.patch",
"merged_at": 1594822282000
} |
https://api.github.com/repos/huggingface/transformers/issues/5768 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5768/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5768/comments | https://api.github.com/repos/huggingface/transformers/issues/5768/events | https://github.com/huggingface/transformers/issues/5768 | 657,315,849 | MDU6SXNzdWU2NTczMTU4NDk= | 5,768 | Confidence score prediction of pretrained models in extractive QA - similar to pipeline | {
"login": "govindraj513",
"id": 8316702,
"node_id": "MDQ6VXNlcjgzMTY3MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8316702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/govindraj513",
"html_url": "https://github.com/govindraj513",
"followers_url": "https://api.github.com/users/govindraj513/followers",
"following_url": "https://api.github.com/users/govindraj513/following{/other_user}",
"gists_url": "https://api.github.com/users/govindraj513/gists{/gist_id}",
"starred_url": "https://api.github.com/users/govindraj513/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/govindraj513/subscriptions",
"organizations_url": "https://api.github.com/users/govindraj513/orgs",
"repos_url": "https://api.github.com/users/govindraj513/repos",
"events_url": "https://api.github.com/users/govindraj513/events{/privacy}",
"received_events_url": "https://api.github.com/users/govindraj513/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Have you got your answer? I got the same question.."
] | 1,594 | 1,606 | 1,600 | NONE | null | How can we calculate the confidence of a single answer predicted by extractive question answering using AutoTokenizer? We get a score when using the pipeline method for a sentence, but what we get from extractive QA are answer_start_scores and answer_end_scores. How can we get one single score?
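One common convention (an illustrative sketch, not an official API guarantee) is to softmax the start and end logits separately and take the product of the two probabilities of the chosen span boundaries; this is roughly the `score` the pipeline reports:
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-cased-distilled-squad")

inputs = tokenizer("Who wrote Hamlet?", "Hamlet was written by William Shakespeare.", return_tensors="pt")
start_logits, end_logits = model(**inputs)[:2]

start_probs = torch.softmax(start_logits, dim=-1)
end_probs = torch.softmax(end_logits, dim=-1)
start_idx = start_probs.argmax()
end_idx = end_probs.argmax()
# Single confidence for the predicted span: product of the boundary probabilities
score = (start_probs[0, start_idx] * end_probs[0, end_idx]).item()
```
| {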
"url": "https://api.github.com/repos/huggingface/transformers/issues/5768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5768/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5767 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5767/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5767/comments | https://api.github.com/repos/huggingface/transformers/issues/5767/events | https://github.com/huggingface/transformers/issues/5767 | 657,284,096 | MDU6SXNzdWU2NTcyODQwOTY= | 5,767 | How to get parameters from a Query. | {
"login": "thiagomoeng",
"id": 64150563,
"node_id": "MDQ6VXNlcjY0MTUwNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/64150563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thiagomoeng",
"html_url": "https://github.com/thiagomoeng",
"followers_url": "https://api.github.com/users/thiagomoeng/followers",
"following_url": "https://api.github.com/users/thiagomoeng/following{/other_user}",
"gists_url": "https://api.github.com/users/thiagomoeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thiagomoeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thiagomoeng/subscriptions",
"organizations_url": "https://api.github.com/users/thiagomoeng/orgs",
"repos_url": "https://api.github.com/users/thiagomoeng/repos",
"events_url": "https://api.github.com/users/thiagomoeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/thiagomoeng/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"It seems you're trying to identify entities in your query? You could use an NER model to do so, but I'm not sure the entities the NER models on the hub are trained on would work in your use-case. You can still check them out, and check the [NER script](https://github.com/huggingface/transformers/tree/master/examples/token-classification) if you want to train a model with your data.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,601 | 1,601 | NONE | null | # ❓ Questions & Help
Hi, maybe someone can help me with a problem: I need to get parameters from a query.
I already use the question answering pipeline to identify a report, and now I want to extract some parameters to use as filters. Example:
Query: How many electric cars sold in 2019?
Output: (Item: electric cars / Year: 2019)
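As the comment above suggests, an off-the-shelf NER pipeline is the closest ready-made tool for this kind of slot extraction; whether it covers items like "electric cars" or years depends entirely on the entity set the model was trained on. An illustrative sketch:
```python
from transformers import pipeline

# The default model is trained on CoNLL-03 entities (PER/ORG/LOC/MISC), so a
# custom-trained token-classification model is likely needed for item/year slots.
ner = pipeline("ner", grouped_entities=True)
print(ner("How many electric cars sold in 2019?"))
```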
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5767/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5766 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5766/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5766/comments | https://api.github.com/repos/huggingface/transformers/issues/5766/events | https://github.com/huggingface/transformers/issues/5766 | 657,206,339 | MDU6SXNzdWU2NTcyMDYzMzk= | 5,766 | Hello,I have this problem in running 'run_glue.py'! | {
"login": "BCWang93",
"id": 31853251,
"node_id": "MDQ6VXNlcjMxODUzMjUx",
"avatar_url": "https://avatars.githubusercontent.com/u/31853251?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BCWang93",
"html_url": "https://github.com/BCWang93",
"followers_url": "https://api.github.com/users/BCWang93/followers",
"following_url": "https://api.github.com/users/BCWang93/following{/other_user}",
"gists_url": "https://api.github.com/users/BCWang93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BCWang93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BCWang93/subscriptions",
"organizations_url": "https://api.github.com/users/BCWang93/orgs",
"repos_url": "https://api.github.com/users/BCWang93/repos",
"events_url": "https://api.github.com/users/BCWang93/events{/privacy}",
"received_events_url": "https://api.github.com/users/BCWang93/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @BCWang93, I had the same issue, you need to have scikit-learn installed to run 'run_glue.py'.\r\n\r\n`pip install scikit-learn` solved the issue for me.",
"Indeed, this issue happens when `scikit-learn` is not installed. Thanks @nassim-yagoub!"
] | 1,594 | 1,595 | 1,595 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO), where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
Hello, I have this problem when running 'run_glue.py'. Can anyone help me? Thanks!
from transformers import glue_compute_metrics
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name 'glue_compute_metrics' from 'transformers' (/home/wangbingchen/wangbingchen/anaconda3/lib/python3.7/site-packages/transformers/__init__.py)
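As noted in the answers below, `glue_compute_metrics` is only exported when scikit-learn is installed, so the fix is simply:

```python
# pip install scikit-learn
# glue_compute_metrics is only exported when scikit-learn is available,
# so after installing it the import resolves:
from transformers import glue_compute_metrics
```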
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5766/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5765 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5765/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5765/comments | https://api.github.com/repos/huggingface/transformers/issues/5765/events | https://github.com/huggingface/transformers/issues/5765 | 657,172,707 | MDU6SXNzdWU2NTcxNzI3MDc= | 5,765 | When using "transformers.WarmUp" with tensorflow 2.0.0, warming up restart in each epoch! | {
"login": "Wchenguang",
"id": 22413087,
"node_id": "MDQ6VXNlcjIyNDEzMDg3",
"avatar_url": "https://avatars.githubusercontent.com/u/22413087?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wchenguang",
"html_url": "https://github.com/Wchenguang",
"followers_url": "https://api.github.com/users/Wchenguang/followers",
"following_url": "https://api.github.com/users/Wchenguang/following{/other_user}",
"gists_url": "https://api.github.com/users/Wchenguang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wchenguang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wchenguang/subscriptions",
"organizations_url": "https://api.github.com/users/Wchenguang/orgs",
"repos_url": "https://api.github.com/users/Wchenguang/repos",
"events_url": "https://api.github.com/users/Wchenguang/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wchenguang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello!\r\n\r\nCan you give a code that better explain your issue and a way to reproduce it please? Thanks :) ",
"Sorry! I check again and it works well!\r\nClose it!\r\n"
] | 1,594 | 1,596 | 1,596 | NONE | null | When using "transformers.WarmUp" with TensorFlow 2.0.0, the warm-up restarts in each epoch!
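The author confirms below that the schedule in fact works as intended. For context, here is a minimal sketch of the usual wiring (all hyperparameters are assumed): `WarmUp` is a `LearningRateSchedule`, so the optimizer drives it with its global step counter rather than the per-epoch batch index.

```python
import tensorflow as tf
from transformers import WarmUp

# Minimal sketch with assumed hyperparameters: the optimizer calls the
# schedule with its global `iterations` counter, which keeps increasing
# across epochs, so the warm-up does not reset at epoch boundaries.
decay_fn = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=5e-5, decay_steps=10000, end_learning_rate=0.0
)
lr_schedule = WarmUp(
    initial_learning_rate=5e-5, decay_schedule_fn=decay_fn, warmup_steps=1000
)
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
```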
For the record, the suspicion was that, in Keras, "Callbacks.on_batch_begin(self, batch, logs)" receives a batch index that restarts from zero in each epoch when using the "fit" method — which learning-rate schedules do not actually depend on. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5765/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5764 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5764/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5764/comments | https://api.github.com/repos/huggingface/transformers/issues/5764/events | https://github.com/huggingface/transformers/pull/5764 | 657,153,899 | MDExOlB1bGxSZXF1ZXN0NDQ5MzE5OTg5 | 5,764 | TF Longformer | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"In `RUN_SLOW=1` the new tests: `test_saved_model_with_attentions_outputs` and `test_saved_model_with_hidden_states_output` fail @jplu from https://github.com/huggingface/transformers/pull/5468. The problem is that I have to use `tf.cond(...)` and it seems like this forces me to also use `cast_bool_...`. Not sure if you have any ideas on how to fix this @jplu .",
"Yes, It is still an issue with the AutoGraph thing :cry: I suggest to comment them for now.",
"I'm currently thinking on how to properly rework all the booleans handling in TF. As this is the main issue.",
"Ok...will leave it for now since the test are only in `RUN_SLOW` mode, so they won't show up in Circle CI",
"Confirmed that this PR will not slow down the Longformer PyTorch version.\r\n\r\nRunning the following benchmark on master:\r\n```\r\npython examples/benchmarking/run_benchmark.py --models allenai/longformer-base-4096 --no_memory --sequence_length 512 1024\r\n```\r\ngives same performance is in as in https://github.com/huggingface/transformers/pull/5811.",
"Benchmarking the model in TF leads to a slow-down vs. PyTorch of ca. 1.5, which is reasonable IMO:\r\n\r\nRunning: \r\n```\r\npython examples/benchmarking/run_benchmark_tf.py --models allenai/longformer-base-4096 --no_memory --sequence_length 512 1024\r\n```\r\n\r\ngives:\r\n\r\n```\r\n \r\n==================== INFERENCE - SPEED - RESULT ==================== \r\n-------------------------------------------------------------------------------- \r\n Model Name Batch Size Seq Length Time in s \r\n-------------------------------------------------------------------------------- \r\n allenai/longformer-base-4096 8 512 0.226 \r\n allenai/longformer-base-4096 8 1024 0.446 \r\n-------------------------------------------------------------------------------- \r\n\r\n==================== ENVIRONMENT INFORMATION ====================\r\n- transformers_version: 3.0.2\r\n- framework: TensorFlow\r\n- eager_mode: False\r\n- use_xla: False\r\n- framework_version: 2.3.0\r\n- python_version: 3.6.10\r\n- system: Linux\r\n- cpu: x86_64\r\n- architecture: 64bit\r\n- date: 2020-08-06\r\n- time: 15:55:55.754100\r\n- fp16: False\r\n- use_multiprocessing: True\r\n- only_pretrain_model: False\r\n- cpu_ram_mb: 32088\r\n- use_gpu: True\r\n- num_gpus: 1\r\n- gpu: TITAN RTX\r\n- gpu_ram_mb: 24217\r\n- gpu_power_watts: 280.0\r\n- gpu_performance_state: 0\r\n- use_tpu: False\r\n```\r\n\r\nAt the moment running the model in XLA on GPU fails...=> should take a closer look in a next PR.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5764?src=pr&el=h1) Report\n> Merging [#5764](https://codecov.io/gh/huggingface/transformers/pull/5764?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1b8a7ffcfdfe37f5440ac0eafb58089ff5aef00a&el=desc) will **increase** coverage by `0.05%`.\n> The diff coverage is `24.52%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5764?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5764 +/- ##\n==========================================\n+ Coverage 79.33% 79.38% +0.05% \n==========================================\n Files 148 149 +1 \n Lines 27196 27670 +474 \n==========================================\n+ Hits 21577 21967 +390 \n- Misses 5619 5703 +84 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5764?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `15.76% <15.76%> (ø)` | |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `89.76% <92.98%> (+0.54%)` | :arrow_up: |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.25% <100.00%> (+<0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `66.86% <100.00%> (+0.20%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/5764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.20%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `92.41% <0.00%> (+1.33%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.60% <0.00%> (+1.59%)` | :arrow_up: |\n| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/5764/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5764?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5764?src=pr&el=footer). Last update [1b8a7ff...41cb64f](https://codecov.io/gh/huggingface/transformers/pull/5764?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Where do I send the pizza & beer to get this merged?",
"I'm curious if you think the longformer should be added to the language_generation_model? ",
"@sgugger - I added the `return_dict` functionality and adapted the doc strings -> the new docstring functions are awesome!",
"> I'm curious if you think the longformer should be added to the language_generation_model?\r\n\r\nWe will add it to the EncoderDecoderModel framework, where it can be used with `generate()`"
] | 1,594 | 1,597 | 1,597 | MEMBER | null | This PR adds Longformer
As a first step, we make sure that the code is clean and that all tests pass. To do:
### ToDo List:
- [x] same output for local attention only
- [x] same output for local + global attention only
- [x] same output for aggressive test
- [x] add all other tests
- [x] add longformer QA
- [x] refactor code and run benchmark
- [x] add weights to all QA models and check performance via notebook
### ToDo after PR is merged:
- [ ] Add Longformer for SeqClass, MC, ... ("good first issue")
- [ ] Speed up performance and make GPU XLA work -> use Benchmark tools and possible TF Profiler
### For Review
This PR adds `TFLongformer` and the two most important head classes, `TFLongformerForMaskedLM` and `TFLongformerForQuestionAnswering`. Many tests are added to verify that TFLongformer gives results identical to PT Longformer, and a colab notebook (see below) is attached to show performance on a real task.
Below you can find a benchmark showing that TFLongformer is about 1.5x slower than PT on GPU. For now this is acceptable IMO, but in a future PR I want to take a deeper look at how the TF code can be optimized and also solve a problem that currently exists with TF XLA.
I spent a lot of time trying to solve this issue: https://github.com/huggingface/transformers/issues/5815 for TFLongformer and didn't manage to find a good solution. The corresponding tests are in `SLOW` mode, so they won't fail on this PR. Since we are currently thinking about a better solution than using `cast_bool_to_primitive` to solve the known TF graph boolean error, I think I will leave this small bug in TFLongformer for now (it's quite an edge case IMO anyway).
Docs are added and checked, comments are added, performance on TriviaQA is verified in TF colab: https://colab.research.google.com/drive/1UmU3T1nPmJ2LgXQtPcaEVtXUnoBJ1onF?usp=sharing and TF weights were added to all longformer models here: https://huggingface.co/models?search=longformer.
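For reviewers, a quick usage sketch of the new TF head (a minimal smoke test, not taken from the notebook; the checkpoint name is one of the hub models linked above):

```python
from transformers import LongformerTokenizer, TFLongformerForQuestionAnswering

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa")
model = TFLongformerForQuestionAnswering.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa")

# Encode a question/context pair and read out the span logits.
inputs = tokenizer("Who wrote Hamlet?", "Hamlet was written by William Shakespeare.", return_tensors="tf")
start_logits, end_logits = model(inputs)[:2]
```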
Would be happy about a review @jplu @ibeltagy @LysandreJik @sshleifer @sgugger @julien-c @thomwolf | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5764/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5764",
"html_url": "https://github.com/huggingface/transformers/pull/5764",
"diff_url": "https://github.com/huggingface/transformers/pull/5764.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5764.patch",
"merged_at": 1597094706000
} |
https://api.github.com/repos/huggingface/transformers/issues/5763 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5763/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5763/comments | https://api.github.com/repos/huggingface/transformers/issues/5763/events | https://github.com/huggingface/transformers/pull/5763 | 657,122,525 | MDExOlB1bGxSZXF1ZXN0NDQ5Mjk0OTkz | 5,763 | ADD ERNIE model | {
"login": "nghuyong",
"id": 16462374,
"node_id": "MDQ6VXNlcjE2NDYyMzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/16462374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nghuyong",
"html_url": "https://github.com/nghuyong",
"followers_url": "https://api.github.com/users/nghuyong/followers",
"following_url": "https://api.github.com/users/nghuyong/following{/other_user}",
"gists_url": "https://api.github.com/users/nghuyong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nghuyong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nghuyong/subscriptions",
"organizations_url": "https://api.github.com/users/nghuyong/orgs",
"repos_url": "https://api.github.com/users/nghuyong/repos",
"events_url": "https://api.github.com/users/nghuyong/events{/privacy}",
"received_events_url": "https://api.github.com/users/nghuyong/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"We shouldn't have to add those because the library should work out of the box with the models at https://huggingface.co/nghuyong\r\n\r\nThe discoverability of those models on huggingface.co is a different thing and we're open to suggestions to improve it\r\n\r\nFor instance, you should upload model cards describing the models including metadata etc. See https://huggingface.co/docs for instance\r\n\r\nPinging @JetRunner and @sshleifer for feedback",
"Thanks for your contribution!\n\nYep, I think this PR is not very necessary here since ERNIE serves as weights for BERT. We have shifted into a \"model hub\" model so I can't think of a reason to keep hard-coding urls here (and we think we should even remove the existing ones at some point) @julien-c ",
"Also, what do you mean by \"especially on Chinese\"? Aren't these checkpoints in English I think?",
"We should add some docs and tweet!",
"You could copy https://github.com/nghuyong/ERNIE-Pytorch/blob/master/Readme.md to each model card.\r\n",
"@JetRunner \r\nERNIE1.0 is for Chinese, more performance detail: https://arxiv.org/abs/1904.09223\r\nERNIE2.0 and ERNIE-tiny are for English.",
"> @JetRunner \n> \n> ERNIE1.0 is for Chinese, more performance detail: https://arxiv.org/abs/1904.09223\n> \n> ERNIE2.0 and ERNIE-tiny are for English.\n\nYeah, this raises some confusion here. Thanks for your clarification!",
"@julien-c @JetRunner @sshleifer \r\nI have withdrew my previous submission, and add model_card in this new commit.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5763?src=pr&el=h1) Report\n> Merging [#5763](https://codecov.io/gh/huggingface/transformers/pull/5763?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8ab565a4be5a7fd96b19ef88d474037ef31f27e5&el=desc) will **decrease** coverage by `0.08%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5763?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5763 +/- ##\n==========================================\n- Coverage 77.32% 77.24% -0.09% \n==========================================\n Files 146 146 \n Lines 26047 26047 \n==========================================\n- Hits 20141 20120 -21 \n- Misses 5906 5927 +21 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5763?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5763/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5763/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5763/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5763?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5763?src=pr&el=footer). Last update [8ab565a...0c8a23f](https://codecov.io/gh/huggingface/transformers/pull/5763?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks @nghuyong! The model cards look awesome.\nWe'll take it from here and add some metadata and then merge the model cards. ",
"@JetRunner Thanks",
"Hi @nghuyong ,\r\nI just tried [your script](https://github.com/nghuyong/ERNIE-Pytorch#reproduce-ernie-papers-case), and found that\r\n```\r\nSome weights of BertForMaskedLM were not initialized from the model checkpoint at nghuyong/ernie-1.0 and are newly initialized: ['cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.bias', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.dense.weight']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n```\r\n\r\nThis error is very common if you upload a model that was loaded by `BertModel`. Could you re-upload these checkpoints by loading them in `BertForMaskedLM` first? That would be something like\r\n\r\n```python\r\ntokenizer = BertTokenizer.from_pretrained('./convert')\r\nmodel = BertForMaskedLM.from_pretrained('./convert')\r\n# instead of `model = BertModel.from_pretrained('./convert')`\r\n\r\nmodel.save_pretrained('./saved')\r\n```\r\nThis will allow users to directly use the checkpoints to do mask filling without fine-tuning them. Also, could you help convert the checkpoints to TensorFlow as well? That would be super easy. We have a tutorial here: https://huggingface.co/transformers/model_sharing.html\r\n\r\nThank you!",
"@JetRunner OK,I will update soon",
"@JetRunner have updated now~"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | ERNIE is a series of models released by Baidu, including:
ERNIE1.0: [Ernie: Enhanced representation through knowledge integration](https://arxiv.org/abs/1904.09223)
ERNIE2.0: [ERNIE 2.0: A Continual Pre-training Framework for Language Understanding](https://arxiv.org/abs/1907.12412)
ERNIE-tiny: distilled from ERNIE 2.0
These models achieve state-of-the-art performance on NLU tasks, especially Chinese tasks.
So, ERNIE is a very important model in the transformer model family.
Actually, ERNIE has the same structure as BERT (this holds for all three ERNIE versions above), so we don't need to add a new model class and just need to convert the weights.
> Note: ERNIE 2.0 introduces a task embedding, but the officially released version doesn't include this embedding weight, and all released weights have the same structure as BERT.
I have successfully converted the ERNIE models to PyTorch and ran a series of experiments to check that the conversion matches the original PaddlePaddle implementation.
More details: https://github.com/nghuyong/ERNIE-Pytorch
In this PR, I directly add the ERNIE models to the BERT-related files.
This PR is linked to issues: [#issue5117](https://github.com/huggingface/transformers/issues/5117), [#issue928](https://github.com/huggingface/transformers/issues/928) and [#issue514](https://github.com/huggingface/transformers/issues/514)
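A minimal usage sketch with the converted weights (model id from the repo above; since the architecture is identical, the checkpoint loads with the standard BERT classes):

```python
from transformers import BertTokenizer, BertModel

# ERNIE shares BERT's architecture, so the converted checkpoint loads
# directly with the standard BERT classes.
tokenizer = BertTokenizer.from_pretrained("nghuyong/ernie-1.0")
model = BertModel.from_pretrained("nghuyong/ernie-1.0")
```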
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5763/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5763/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5763",
"html_url": "https://github.com/huggingface/transformers/pull/5763",
"diff_url": "https://github.com/huggingface/transformers/pull/5763.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5763.patch",
"merged_at": 1594868585000
} |
https://api.github.com/repos/huggingface/transformers/issues/5762 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5762/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5762/comments | https://api.github.com/repos/huggingface/transformers/issues/5762/events | https://github.com/huggingface/transformers/issues/5762 | 657,062,034 | MDU6SXNzdWU2NTcwNjIwMzQ= | 5,762 | Feature request of Sparselty Gated Mixture-of-Experts and PowerNorm | {
"login": "AranKomat",
"id": 29173653,
"node_id": "MDQ6VXNlcjI5MTczNjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/29173653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AranKomat",
"html_url": "https://github.com/AranKomat",
"followers_url": "https://api.github.com/users/AranKomat/followers",
"following_url": "https://api.github.com/users/AranKomat/following{/other_user}",
"gists_url": "https://api.github.com/users/AranKomat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AranKomat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AranKomat/subscriptions",
"organizations_url": "https://api.github.com/users/AranKomat/orgs",
"repos_url": "https://api.github.com/users/AranKomat/repos",
"events_url": "https://api.github.com/users/AranKomat/events{/privacy}",
"received_events_url": "https://api.github.com/users/AranKomat/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Is there any update on this? Thanks.",
"@lucidrains made this [repo](https://github.com/lucidrains/mixture-of-experts) with me.",
"Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,605 | 1,605 | NONE | null | # 🚀 Feature request
- The newer variant of MoE for Transformer as in [GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding](https://arxiv.org/abs/2006.16668) ([relevant code](https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/models/research/moe.py))
- PowerNorm from [PowerNorm: Rethinking Batch Normalization in Transformers](https://arxiv.org/abs/2003.07845) ([relevant code](https://github.com/sIncerass/powernorm/blob/master/fairseq/modules/norms/mask_powernorm.py))
## Motivation
MoE as in GShard would be a crucial addition to this library, since the performance gain is significant, as demonstrated in the paper. For example, Transformer+MoE achieved 44.3 avg BLEU on various NMT tasks with 10x less compute than the baseline Transformer, which gets 36.9 avg BLEU. The code for GShard is not available yet, and if it is published, it will most likely be in TensorFlow (or JAX) rather than PyTorch due to their use of TPUs. However, the MoE code does not seem complicated, so it should be easy to implement.
PowerNorm improved Wikitext-103 perplexity from 20.9 to 17.9 without additional compute, simply by replacing LayerNorm with PowerNorm. Given that PyTorch code is already available and easy to transplant into this library, I think it's reasonable to request this feature.
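For reference, a stripped-down sketch of the power-normalization idea — normalize by a running quadratic mean instead of per-sample statistics. This is only an illustration: it omits the paper's warmup and relaxed backward pass (see the linked repo for the real masked implementation).

```python
import torch
import torch.nn as nn

class PowerNormSketch(nn.Module):
    """Rough sketch: scale activations by a running quadratic mean."""

    def __init__(self, dim, alpha=0.9, eps=1e-5):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.bias = nn.Parameter(torch.zeros(dim))
        self.register_buffer("running_phi", torch.ones(dim))
        self.alpha, self.eps = alpha, eps

    def forward(self, x):  # x: (batch, seq, dim)
        if self.training:
            phi = x.pow(2).mean(dim=(0, 1))  # batch quadratic mean per feature
            self.running_phi.mul_(self.alpha).add_(phi.detach(), alpha=1 - self.alpha)
        else:
            phi = self.running_phi
        return self.weight * x / torch.sqrt(phi + self.eps) + self.bias
```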
## Your contribution
As for MoE, I'm currently writing my implementation of non-hierarchical MoE in PyTorch (no model parallelism as in GShard across many GPUs), and I'm going to compare its performance against the baseline; it should serve as a reference. My collaborator (@lucidrains) may write the same thing with more flexibility later, but no guarantee. You may want hierarchical MoE and the GShard components as well; if so, my code may not be as useful.
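As a rough illustration of the gating step such an implementation revolves around (a sketch only — names and shapes are made up, and the auxiliary load-balancing loss is omitted):

```python
import torch
import torch.nn.functional as F

def top2_gating(x, w_gate):
    """Sketch of top-2 token-to-expert gating as in the MoE papers."""
    logits = x @ w_gate                                # (num_tokens, num_experts)
    probs = F.softmax(logits, dim=-1)
    gates, expert_idx = probs.topk(2, dim=-1)          # pick two experts per token
    gates = gates / gates.sum(dim=-1, keepdim=True)    # renormalize the pair
    return gates, expert_idx
```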
As for PowerNorm, I don't think you need much help from me, as all you need is to replace LayerNorm with the above implementation. I'll try to verify its performance gain, since replication of the results hasn't been done yet. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5762/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5761 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5761/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5761/comments | https://api.github.com/repos/huggingface/transformers/issues/5761/events | https://github.com/huggingface/transformers/pull/5761 | 657,015,861 | MDExOlB1bGxSZXF1ZXN0NDQ5MjA4MjE4 | 5,761 | [cleanup] T5 test, warnings | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2139563322,
"node_id": "MDU6TGFiZWwyMTM5NTYzMzIy",
"url": "https://api.github.com/repos/huggingface/transformers/labels/cleanup",
"name": "cleanup",
"color": "e7fc49",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | - simplify shape checking, task specific params usage
- add task specific params logger statement to examples
- Let model.config.max_length determine document length for evaluation | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5761/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5761",
"html_url": "https://github.com/huggingface/transformers/pull/5761",
"diff_url": "https://github.com/huggingface/transformers/pull/5761.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5761.patch",
"merged_at": 1594815803000
} |
https://api.github.com/repos/huggingface/transformers/issues/5760 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5760/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5760/comments | https://api.github.com/repos/huggingface/transformers/issues/5760/events | https://github.com/huggingface/transformers/pull/5760 | 656,963,482 | MDExOlB1bGxSZXF1ZXN0NDQ5MTY3MTM3 | 5,760 | Zero shot classification pipeline | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> The UI for inputting a list of potential labels + possibly a hypothesis template is non trivial\r\n\r\nI think for purposes of the early inference API, customizing the hypothesis template is not all that important. I added support for providing labels as a list of comma-delimited strings instead of a list.\r\n\r\n> one model is currently linked to one (and one only) Pipeline type in the inference API.\r\n\r\nWould it be possible to have one model, e.g. `bart-large-mnli`, linked to the zero shot pipeline and have the others remain linked to text classification?",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5760?src=pr&el=h1) Report\n> Merging [#5760](https://codecov.io/gh/huggingface/transformers/pull/5760?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8ab565a4be5a7fd96b19ef88d474037ef31f27e5&el=desc) will **increase** coverage by `0.72%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5760?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5760 +/- ##\n==========================================\n+ Coverage 77.32% 78.05% +0.72% \n==========================================\n Files 146 146 \n Lines 26047 26089 +42 \n==========================================\n+ Hits 20141 20363 +222 \n+ Misses 5906 5726 -180 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5760?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `78.47% <100.00%> (+1.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.44% <0.00%> (-1.17%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5760?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5760?src=pr&el=footer). Last update [8ab565a...afcf86a](https://codecov.io/gh/huggingface/transformers/pull/5760?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"> Would it be possible to have one model, e.g. `bart-large-mnli`, linked to the zero shot pipeline and have the others remain linked to text classification?\r\n\r\nYes we can override manually",
"So I think I found a possible bug in the calculation of the probabilities. Consider this example:\r\n```python\r\nsequences = [\"Who are you voting for in 2020?\"]\r\ncandidate_labels = [\"politics\", \"public health\", \"economics\", \"elections\"]\r\n\r\nclassifier(sequences[0], candidate_labels)\r\n```\r\nwhich gives the output:\r\n```\r\n{'labels': ['politics', 'elections', 'economics', 'public health'],\r\n 'scores': [0.5225354433059692,\r\n 0.4626988470554352,\r\n 0.007836099714040756,\r\n 0.006929598283022642],\r\n 'sequence': 'Who are you voting for in 2020?'}\r\n```\r\n\r\nI was able to replicate the probability calculations by doing the following:\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/bart-large-mnli\")\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\"facebook/bart-large-mnli\")\r\n\r\nsequences = [\"Who are you voting for in 2020?\"]\r\ncandidate_labels = [\"politics\", \"public health\", \"economics\", \"elections\"]\r\ntemplate = \"This example is {}.\"\r\nx = tokenizer(sequences*len(candidate_labels), [template.format(label) for label in candidate_labels], return_tensors=\"pt\", padding=True)\r\nwith torch.no_grad():\r\n y = model(**x)[0]\r\n probs = F.softmax(y[:,-1], dim=0) # Think this is wrong\r\n\r\nprint(probs)\r\n# tensor([0.5225, 0.0069, 0.0078, 0.4627])\r\n```\r\n\r\nSo I think the probability calculation is wrong. `y` is of shape `(4, 3)` and the softmax should be over the 3 not the the 4 dimensions, since it is over contradiction, neutral, entailment. So what I am suggesting is that the probability calculation should be the following instead:\r\n```python\r\nprobs2 = F.softmax(y, dim=-1)\r\nprobs2 = probs2[:,-1] / sum(probs2[:,-1])\r\nprint(probs2)\r\n# tensor([0.4977, 0.0016, 0.0049, 0.4958])\r\n```\r\n\r\nIt's especially important to do it this way since there is no guarantee that the exponential of `y` will sum to one since they are logits.\r\n",
"Hey @sachinruk, thanks for the comment. This isn't a bug though, it's just the way we've chosen to use the outputs. When `multi_class=False`, we ignore the contradiction and neutral logits and just do a softmax over the entailment logits. This does guarantee you will sum to 1. Your snippet is an alternative way of interpreting the model outputs to get probabilities that sum to 1, but I don't see a reason to think it is more correct.",
"I have included this in my local Jupyter notebook:\r\n\r\n!pip install git+https://github.com/huggingface/transformers.git --user\r\n\r\nnlp = pipeline(\"zero-shot-classification\") gives the following error.\r\nKeyError: \"Unknown task zero-shot-classification, available tasks are ['feature-extraction', 'sentiment-analysis', 'ner', 'question-answering', 'fill-mask', 'summarization', 'translation_en_to_fr', 'translation_en_to_de', 'translation_en_to_ro', 'text-generation']\"\r\n\r\nWhat could be the missing steps? Thanks.",
"@SophiaLeigh you probably just had another version of transformers installed already so the pip install didn’t do anything. Try just passing`--upgrade` with the pip install.",
"EDIT: \r\n\r\n**pip install transformers** delivers a version of pipelines.py that is not the one found in the current master\r\n\r\n**pip install pip install git+https://github.com/huggingface/transformers** delivers the correct version obviously. \r\n\r\nI dont know anything about the inner workings of pip but I have the current version _pip 20.2.2_ installed.\r\n\r\n\r\nSame issue as @SophiaLeigh with \r\n\r\n_transformers 3.0.2_ \r\n\r\n---------------------------------------------------------------------------\r\n```\r\nKeyError Traceback (most recent call last)\r\n<ipython-input-6-1f0825594ce1> in <module>\r\n----> 1 classifier = pipeline(\"zero-shot-classification\")\r\n\r\n~/.conda/envs/zero-shot/lib/python3.7/site-packages/transformers/pipelines.py in pipeline(task, model, config, tokenizer, framework, **kwargs)\r\n 1819 # Retrieve the task\r\n 1820 if task not in SUPPORTED_TASKS:\r\n-> 1821 raise KeyError(\"Unknown task {}, available tasks are {}\".format(task, list(SUPPORTED_TASKS.keys())))\r\n 1822 \r\n 1823 framework = framework or get_framework(model)\r\n\r\nKeyError: \"Unknown task zero-shot-classification, available tasks are ['feature-extraction', 'sentiment-analysis', 'ner', 'question-answering', 'fill-mask', 'summarization', 'translation_en_to_fr', 'translation_en_to_de', 'translation_en_to_ro', 'text-generation']\"\r\n \r\n```\r\n\r\n\r\n\r\n\r\nAlso huggingface shows either outdated or wrong info here: \r\n\r\nhttps://huggingface.co/transformers/main_classes/pipelines.html?highlight=pipelines\r\n\r\nand here:\r\n\r\nhttps://huggingface.co/transformers/_modules/transformers/pipelines.html#pipeline\r\n\r\n`Args:\r\n task (:obj:`str`):\r\n The task defining which pipeline will be returned. Currently accepted tasks are:\r\n\r\n - \"feature-extraction\": will return a :class:`~transformers.FeatureExtractionPipeline`\r\n - \"sentiment-analysis\": will return a :class:`~transformers.TextClassificationPipeline`\r\n - \"ner\": will return a :class:`~transformers.TokenClassificationPipeline`\r\n - \"question-answering\": will return a :class:`~transformers.QuestionAnsweringPipeline`\r\n - \"fill-mask\": will return a :class:`~transformers.FillMaskPipeline`\r\n - \"summarization\": will return a :class:`~transformers.SummarizationPipeline`\r\n - \"translation_xx_to_yy\": will return a :class:`~transformers.TranslationPipeline`\r\n - \"text-generation\": will return a :class:`~transformers.TextGenerationPipeline``\r\n\r\nWhile I can clearly see it`s here :)\r\n\r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/pipelines.py\r\n\r\nI installed via pip install transformers and also via pip install git+https/thisrepository and ~~both versions~~ have the correct pipelines.py file that has all the implementations for zero-shot- Now I am really confused. ",
"the error persists though I have included --upgrade \r\n\r\n`pip install git+https://github.com/huggingface/transformers --upgrade\r\n!pip install git+https://github.com/huggingface/transformers --upgrade\r\nCollecting git+https://github.com/huggingface/transformers\r\n Cloning https://github.com/huggingface/transformers to c:\\users\\sophia\\appdata\\local\\temp\\pip-req-build-6_7vqcz3\r\nRequirement already satisfied, skipping upgrade: numpy in c:\\users\\sophia\\anaconda3\\lib\\site-packages (from transformers==3.0.2) (1.19.1)\r\nRequirement already satisfied, skipping upgrade: tokenizers==0.8.1.rc2 in c:\\users\\sophia\\anaconda3\\lib\\site-packages (from transformers==3.0.2) (0.8.1rc2)\r\nRequirement already satisfied, skipping upgrade: packaging in c:\\users\\sophia\\anaconda3\\lib\\site-packages (from transformers==3.0.2) (20.4)\r\nRequirement already satisfied, skipping upgrade: filelock in c:\\users\\sophia\\anaconda3\\lib\\site-packages (from transformers==3.0.2) (3.0.12)\r\nRequirement already satisfied, skipping upgrade: requests in c:\\users\\sophia\\anaconda3\\lib\\site-packages (from transformers==3.0.2) (2.24.0)\r\nRequirement already satisfied, skipping upgrade: tqdm>=4.27 in c:\\users\\sophia\\anaconda3\\lib\\site-packages (from transformers==3.0.2) (4.48.2)\r\nRequirement already satisfied, skipping upgrade: regex!=2019.12.17 in c:\\users\\sophia\\anaconda3\\lib\\site-packages (from transformers==3.0.2) (2020.7.14)\r\nRequirement already satisfied, skipping upgrade: sentencepiece!=0.1.92 in c:\\users\\sophia\\anaconda3\\lib\\site-packages (from transformers==3.0.2) (0.1.91)\r\nRequirement already satisfied, skipping upgrade: sacremoses in c:\\users\\sophia\\anaconda3\\lib\\site-packages (from transformers==3.0.2) (0.0.43)\r\nRequirement already satisfied, skipping upgrade: six in c:\\users\\sophia\\anaconda3\\lib\\site-packages (from packaging->transformers==3.0.2) (1.15.0)\r\nRequirement already satisfied, skipping upgrade: pyparsing>=2.0.2 in c:\\users\\sophia\\anaconda3\\lib\\site-packages (from packaging->transformers==3.0.2) (2.4.7)\r\nRequirement already satisfied, skipping upgrade: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in c:\\users\\sophia\\anaconda3\\lib\\site-packages (from requests->transformers==3.0.2) (1.25.9)\r\nRequirement already satisfied, skipping upgrade: certifi>=2017.4.17 in c:\\users\\sophia\\anaconda3\\lib\\site-packages (from requests->transformers==3.0.2) (2020.6.20)\r\nRequirement already satisfied, skipping upgrade: chardet<4,>=3.0.2 in c:\\users\\sophia\\anaconda3\\lib\\site-packages (from requests->transformers==3.0.2) (3.0.4)\r\nRequirement already satisfied, skipping upgrade: idna<3,>=2.5 in c:\\users\\sophia\\anaconda3\\lib\\site-packages (from requests->transformers==3.0.2) (2.10)\r\nRequirement already satisfied, skipping upgrade: joblib in c:\\users\\sophia\\anaconda3\\lib\\site-packages (from sacremoses->transformers==3.0.2) (0.16.0)\r\nRequirement already satisfied, skipping upgrade: click in c:\\users\\sophia\\anaconda3\\lib\\site-packages (from sacremoses->transformers==3.0.2) (7.1.2)\r\nBuilding wheels for collected packages: transformers\r\n Building wheel for transformers (setup.py): started\r\n Building wheel for transformers (setup.py): finished with status 'done'\r\n Created wheel for transformers: filename=transformers-3.0.2-py3-none-any.whl size=873793 sha256=c363c4ea6e4ec438b88c6fafb162915d24a2c149dd4210111713e78496940af2\r\n Stored in directory: 
C:\\Users\\sophia\\AppData\\Local\\Temp\\pip-ephem-wheel-cache-ck6yr140\\wheels\\35\\2e\\a7\\d819e3310040329f0f47e57c9e3e7a7338aa5e74c49acfe522\r\nSuccessfully built transformers\r\nInstalling collected packages: transformers\r\n Attempting uninstall: transformers\r\n Found existing installation: transformers 3.0.2\r\n Uninstalling transformers-3.0.2:\r\n Successfully uninstalled transformers-3.0.2\r\nSuccessfully installed transformers-3.0.2`",
"@SophiaLei did you restart your kernel after upgrading?\r\n@Tabernakel In the top left corner of the docs, click on `v3.0.2` and switch to `master`.\r\n\r\nFeel free to open a topic on https://discuss.huggingface.co with any other questions.",
"It works now. Thank you.",
"So the online demo has two different models MNLI and MLNI + Yahoo Answers. I know the second one is Bart with a classification head trained on MNLI and then further fine-tuned on Yahoo Answers topic classification. Is there a specific scenario where MNLI + yahoo answers outperform just MNLI in the zero-shot classification task? ",
"@avinregmi For practical usage the base MNLI model is typically going to do better. The Yahoo Answers model will do better on Yahoo Answers, which seems unhelpful unless you recognize that was only fine-tuned on 5 labels out of the 10 in the corpus. So if you have a big labeled dataset but only covering a subset of the labels you want to be able classify into, fine-tuning an MNLI model as I did with Yahoo Answers will likely boost your performance. Otherwise, stick with the base MNLI model.",
"Is there a way to persist Zero shot classification pipeline and use it for deploying in production?\r\n\r\nThanks!",
"> Is there a way to persist Zero shot classification pipeline and use it for deploying in production?\r\n\r\n@mariyamiteva You have two options:\r\n\r\n1. Use the pipeline directly via our inference API and let us (Hugging Face engineers) take care of all the production serving for you. Check out [the documentation for our inference API](https://api-inference.huggingface.co/docs/python/html/index.html) and reach out to [email protected] to discuss further if you're interested. cc @jeffboudier \r\n\r\n2. Use [this distillation script](https://github.com/huggingface/transformers/tree/master/examples/research_projects/zero-shot-distillation) I wrote with some unlabeled data to distill the zero-shot pipeline to a more efficient sequence classification model, which can then be shared and deployed like any model.",
"Thanks @joeddav - and @mariyamiteva you can do both: 2) Distill the model to efficient sentence classification and then 1) upload it as a private model on Hugging Face to serve it via our hosted inference API.",
"Thanks @joeddav and @jeffboudier for the prompt feedback!\r\n\r\nI am currently trying to run `distill_classifier.py`. `roberta-large-mnli` has been used as a teacher:\r\n\r\n python distill_classifier.py\r\n\r\n --data_file ..\\model\\output\\unlabeled.txt \r\n --class_names_file ..\\model\\output\\class_names.txt\r\n --teacher_name_or_path roberta-large-mnli\r\n --multi_label 1\r\n --output_dir ..\\model\\output\\distilled\r\n\r\nMy `unlabeled.txt` has a single line text, e.g. \r\n\r\n\r\n\r\n\r\nMy `class_names.txt` takes the following form:\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nHere is the snippet of the error I get:\r\n\r\n File \"distill_classifier.py\", line 338, in <module>\r\n main()\r\n File \"distill_classifier.py\", line 328, in main\r\n trainer.train()\r\n File \"C:\\Users\\Maria\\anaconda3\\lib\\site-packages\\transformers\\trainer.py\", line 1222, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"C:\\Users\\Maria\\anaconda3\\lib\\site-packages\\transformers\\trainer.py\", line 1617, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"distill_classifier.py\", line 119, in compute_loss\r\n target_p = inputs[\"labels\"]\r\n File \"C:\\Users\\Maria\\anaconda3\\lib\\site-packages\\transformers\\tokenization_utils_base.py\", line 230, in __getitem__\r\n return self.data[item]\r\n KeyError: 'labels'\r\n\r\nUnfortunately, I was not able to deal with the error above. Could you please help? Thanks!",
"@mariyamiteva I'm not positive this is the issue, but you might need to end your `unlabeled.txt` with a newline. Also, you don't need the `'` single quotes around your text or class names.",
"Yes, it was not the issue. I performed the proposed changes and upgraded the version to 4.7.0.dev0. \r\n\r\nThe error occurred is not the same, but similar:\r\n\r\n Traceback (most recent call last):\r\n File \"distill_classifier.py\", line 338, in <module>\r\n main()\r\n File \"distill_classifier.py\", line 328, in main\r\n trainer.train()\r\n File \"C:\\Users\\Maria\\anaconda3\\lib\\site-packages\\transformers\\trainer.py\", line 1261, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"C:\\Users\\Maria\\anaconda3\\lib\\site-packages\\transformers\\trainer.py\", line 1734, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"distill_classifier.py\", line 119, in compute_loss\r\n target_p = inputs[\"labels\"]\r\n KeyError: 'labels'",
"@mariyamiteva I'm not sure this is this issue, but do you have tokenizers installed? Try `pip install tokenizers` and let me know what that gives you.",
"@joeddav Could the following transformer model used for zero-shot classification be optimized in terms of model inference through ONNX - ‘joeddav/xlm-roberta-large-xnli’?\r\n\r\nThanks!\r\nM.",
"@joeddav What if anyone wants to fine-tune the zero-shot classifier for a specific domain dataset. Is there any code or GitHub repo that may help us to train the zero-shot classifier?"
] | 1,594 | 1,656 | 1,595 | CONTRIBUTOR | null | This PR adds a pipeline for zero-shot classification using pre-trained NLI models as demonstrated in our [zero-shot topic classification demo](https://huggingface.co/zero-shot/) and [blog post](https://joeddav.github.io/blog/2020/05/29/ZSL.html).
Addresses #5756, where @clmnt requested zero-shot classification in the inference API. However, it should be noted that this model has a max sequence size of `1024`, so long documents would be truncated to this length when classifying.
The pipeline takes in a collection of sequences and labels. Each label is converted into a hypothesis, e.g. `vulgar` ➡ `this example is vulgar.` Each sequence and each candidate label must be paired and passed through the model, so the total number of forward passes is `num_labels * num_sequences`.
#### Usage
The pipeline can be initialized using the `pipeline` factory:
```python
from transformers import pipeline
nlp = pipeline("zero-shot-classification")
```
Then any combination of sequences and candidate labels can be passed.
```python
sequence_to_classify = "Who are you voting for in 2020?"
candidate_labels = ["Europe", "public health", "politics"]
nlp(sequence_to_classify, candidate_labels)
>>> {'sequence': 'Who are you voting for in 2020?',
'labels': ['politics', 'Europe', 'public health'],
'scores': [0.9676316380500793, 0.019536184147000313, 0.012832209467887878]}
```
When more than one label is passed, we assume that there is only one true label and that the others are false so that the output probabilities add up to 1. This can be changed by passing `multi_class=True`:
```python
sequence_to_classify = "Who are you voting for in 2020?"
candidate_labels = ["Europe", "public health", "politics", "elections"]
nlp(sequence_to_classify, candidate_labels, multi_class=True)
>>> {'sequence': 'Who are you voting for in 2020?',
'labels': ['politics', 'elections', 'Europe', 'public health'],
'scores': [0.9720695614814758,
0.967610776424408,
0.060417089611291885,
0.03248738870024681]}
```
The single-label case is likely to be more reliable, however, since the guarantee of only one true label provides a strong signal to the model, which is very useful in the zero-shot setting.
By default, labels are turned into NLI-format hypotheses with the template `This example is {label}.`. You can change this with the `hypothesis_template` argument, but the default template seems to work well in most settings I've experimented with.
A couple more examples:
```python
reviews = [
"I didn't care for this film, but the ending was o.k.",
"There were some weak moments, but the movie was pretty good overall"
]
nlp(reviews, ["positive", "negative"])
>>> [{'sequence': "I didn't care for this film, but the ending was o.k.",
'labels': ['negative', 'positive'],
'scores': [0.9887893199920654, 0.011210653930902481]},
{'sequence': 'There were some weak moments, but the movie was pretty good overall',
'labels': ['positive', 'negative'],
'scores': [0.6071907877922058, 0.3928091824054718]}]
```
```python
reviews = [
"I didn't care for this film, but the ending was o.k.",
"There were some weak moments, but the movie was pretty good overall"
]
hypothesis_template = 'The sentiment of this review is {}.'
nlp(reviews, ["positive", "negative"], hypothesis_template=hypothesis_template)
>>> [{'sequence': "I didn't care for this film, but the ending was o.k.",
'labels': ['negative', 'positive'],
'scores': [0.9774571061134338, 0.022542938590049744]},
{'sequence': 'There were some weak moments, but the movie was pretty good overall',
'labels': ['positive', 'negative'],
'scores': [0.9787198305130005, 0.021280216053128242]}]
```
```python
nlp("I am a bit discouraged by my grades.", ["sad", "happy", "angry"])
>>> {'sequence': 'I am a bit discouraged by my grades.',
'labels': ['sad', 'angry', 'happy'],
'scores': [0.9630885124206543, 0.0311590563505888, 0.005752446595579386]}
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5760/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5760/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5760",
"html_url": "https://github.com/huggingface/transformers/pull/5760",
"diff_url": "https://github.com/huggingface/transformers/pull/5760.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5760.patch",
"merged_at": 1595857379000
} |
https://api.github.com/repos/huggingface/transformers/issues/5759 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5759/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5759/comments | https://api.github.com/repos/huggingface/transformers/issues/5759/events | https://github.com/huggingface/transformers/pull/5759 | 656,945,256 | MDExOlB1bGxSZXF1ZXN0NDQ5MTUyMzQw | 5,759 | T5 Model Cards | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"The file paths aren't correct. (we have a \"legacy\" format for non-namespaced models – model pages link to the correct paths)\r\n\r\nIn fact if you look at the model pages, e.g. https://huggingface.co/t5-base – they already have (non-textual) model cards. Can you update those? Thanks=)",
"Up, @sshleifer ",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5759?src=pr&el=h1) Report\n> Merging [#5759](https://codecov.io/gh/huggingface/transformers/pull/5759?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ae67b2439fb15954bfd8f0fdf521cf1a650bafb9&el=desc) will **increase** coverage by `0.17%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5759?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5759 +/- ##\n==========================================\n+ Coverage 78.51% 78.69% +0.17% \n==========================================\n Files 146 146 \n Lines 26214 26214 \n==========================================\n+ Hits 20581 20628 +47 \n+ Misses 5633 5586 -47 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5759?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/5759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kcHIucHk=) | `97.83% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `85.88% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `82.04% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `81.55% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `89.21% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.45% <0.00%> (ø)` | |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <0.00%> (ø)` | |\n| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/5759/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5759?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5759?src=pr&el=footer). 
Last update [ae67b24...583295f](https://codecov.io/gh/huggingface/transformers/pull/5759?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"CircleCI jobs say they succeeded if you click through. Unclear why they are yellow on this page."
] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | All are identical.
Happy to update with more info!
I didn't have task tags because they seem to already work. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5759/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5759",
"html_url": "https://github.com/huggingface/transformers/pull/5759",
"diff_url": "https://github.com/huggingface/transformers/pull/5759.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5759.patch",
"merged_at": 1595432317000
} |
https://api.github.com/repos/huggingface/transformers/issues/5758 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5758/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5758/comments | https://api.github.com/repos/huggingface/transformers/issues/5758/events | https://github.com/huggingface/transformers/pull/5758 | 656,929,467 | MDExOlB1bGxSZXF1ZXN0NDQ5MTM5MjUx | 5,758 | metadata | {
"login": "piegu",
"id": 20000948,
"node_id": "MDQ6VXNlcjIwMDAwOTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/20000948?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/piegu",
"html_url": "https://github.com/piegu",
"followers_url": "https://api.github.com/users/piegu/followers",
"following_url": "https://api.github.com/users/piegu/following{/other_user}",
"gists_url": "https://api.github.com/users/piegu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/piegu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/piegu/subscriptions",
"organizations_url": "https://api.github.com/users/piegu/orgs",
"repos_url": "https://api.github.com/users/piegu/repos",
"events_url": "https://api.github.com/users/piegu/events{/privacy}",
"received_events_url": "https://api.github.com/users/piegu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5758?src=pr&el=h1) Report\n> Merging [#5758](https://codecov.io/gh/huggingface/transformers/pull/5758?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8ab565a4be5a7fd96b19ef88d474037ef31f27e5&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5758?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5758 +/- ##\n=======================================\n Coverage 77.32% 77.32% \n=======================================\n Files 146 146 \n Lines 26047 26047 \n=======================================\n Hits 20141 20141 \n Misses 5906 5906 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5758?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5758?src=pr&el=footer). Last update [8ab565a...a67f412](https://codecov.io/gh/huggingface/transformers/pull/5758?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5758/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5758",
"html_url": "https://github.com/huggingface/transformers/pull/5758",
"diff_url": "https://github.com/huggingface/transformers/pull/5758.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5758.patch",
"merged_at": 1594844009000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5757 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5757/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5757/comments | https://api.github.com/repos/huggingface/transformers/issues/5757/events | https://github.com/huggingface/transformers/issues/5757 | 656,923,974 | MDU6SXNzdWU2NTY5MjM5NzQ= | 5,757 | BART/T5 eli5 in model hub | {
"login": "clmnt",
"id": 821155,
"node_id": "MDQ6VXNlcjgyMTE1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/821155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clmnt",
"html_url": "https://github.com/clmnt",
"followers_url": "https://api.github.com/users/clmnt/followers",
"following_url": "https://api.github.com/users/clmnt/following{/other_user}",
"gists_url": "https://api.github.com/users/clmnt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clmnt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clmnt/subscriptions",
"organizations_url": "https://api.github.com/users/clmnt/orgs",
"repos_url": "https://api.github.com/users/clmnt/repos",
"events_url": "https://api.github.com/users/clmnt/events{/privacy}",
"received_events_url": "https://api.github.com/users/clmnt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,600 | 1,600 | MEMBER | null | # 🚀 Feature request
Would love to have https://huggingface.co/qa in the model hub, mostly for the inference widget/API (the demo is down quite regularly).
## Motivation
A lot of companies might want to test it/use it!
## Your contribution
🔥 emojis on slack | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5757/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5756 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5756/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5756/comments | https://api.github.com/repos/huggingface/transformers/issues/5756/events | https://github.com/huggingface/transformers/issues/5756 | 656,902,341 | MDU6SXNzdWU2NTY5MDIzNDE= | 5,756 | BART MNLI + yahoo answer in the model hub for inference API | {
"login": "clmnt",
"id": 821155,
"node_id": "MDQ6VXNlcjgyMTE1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/821155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clmnt",
"html_url": "https://github.com/clmnt",
"followers_url": "https://api.github.com/users/clmnt/followers",
"following_url": "https://api.github.com/users/clmnt/following{/other_user}",
"gists_url": "https://api.github.com/users/clmnt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clmnt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clmnt/subscriptions",
"organizations_url": "https://api.github.com/users/clmnt/orgs",
"repos_url": "https://api.github.com/users/clmnt/repos",
"events_url": "https://api.github.com/users/clmnt/events{/privacy}",
"received_events_url": "https://api.github.com/users/clmnt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,600 | 1,600 | MEMBER | null | # 🚀 Feature request
Would love to be able to use https://huggingface.co/zero-shot/ with the inference API.
## Motivation
It would help many companies run zero-shot classification of long documents.
## Your contribution
I can provide 🔥 emojis on slack
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5756/reactions",
"total_count": 4,
"+1": 2,
"-1": 0,
"laugh": 2,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5756/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5755 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5755/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5755/comments | https://api.github.com/repos/huggingface/transformers/issues/5755/events | https://github.com/huggingface/transformers/issues/5755 | 656,891,791 | MDU6SXNzdWU2NTY4OTE3OTE= | 5,755 | Problems with generating text using mbart-large-cc25 | {
"login": "Mehrad0711",
"id": 28717374,
"node_id": "MDQ6VXNlcjI4NzE3Mzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/28717374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mehrad0711",
"html_url": "https://github.com/Mehrad0711",
"followers_url": "https://api.github.com/users/Mehrad0711/followers",
"following_url": "https://api.github.com/users/Mehrad0711/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehrad0711/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mehrad0711/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehrad0711/subscriptions",
"organizations_url": "https://api.github.com/users/Mehrad0711/orgs",
"repos_url": "https://api.github.com/users/Mehrad0711/repos",
"events_url": "https://api.github.com/users/Mehrad0711/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mehrad0711/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks for flagging. We are still trying to get to the bottom of which special tokens to use for mbart-large-cc25. See https://github.com/pytorch/fairseq/issues/2258 .",
"This might not be the best solution but after experimenting with the tokenizer special tokens a bit, it seems like the model is insensitive to the first input_id and lang_code used on the encoder side.\r\nSo after these modifications:\r\n\r\n```\r\n def set_src_lang_special_tokens(self, src_lang) -> None:\r\n \"\"\"Reset the special tokens to the source lang setting. No prefix and suffix=[eos, cur_lang_code].\"\"\"\r\n self.cur_lang_code = self.lang_code_to_id[src_lang]\r\n self.prefix_tokens = [self.bos_token_id]\r\n self.suffix_tokens = [self.eos_token_id]\r\n\r\n def set_tgt_lang_special_tokens(self, lang: str) -> None:\r\n \"\"\"Reset the special tokens to the target language setting. Prefix [tgt_lang_code], suffix =[eos].\"\"\"\r\n self.cur_lang_code = self.lang_code_to_id[lang]\r\n self.prefix_tokens = [self.cur_lang_code]\r\n self.suffix_tokens = [self.eos_token_id]\r\n```\r\n\r\nin tokenization_bart.py, the model seems to be doing the right thing and generating correct English output:\r\n```\r\nsrc_sent: UN Chief Says There Is No Military Solution in Syria\r\nsrc_ids: {'input_ids': tensor([[ 0, 8274, 127873, 25916, 7, 8622, 2071, 438, 67485,\r\n 53, 187895, 23, 51712, 2]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}\r\noutput_ids: tensor([[250004, 0, 8274, 127873, 25916, 7, 8622, 2071, 438,\r\n 67485, 53, 187895, 23, 51712]])\r\noutput: UN Chief Says There Is No Military Solution in Syria\r\n```\r\n\r\nAlthough this is not how things are described in MBART paper so the core issue remains. \r\n\r\nAlso, this code segment in modeling_bart.py :\r\n\r\n```\r\n def adjust_logits_during_generation(self, logits, cur_len, max_length, **kwargs):\r\n if cur_len == 1:\r\n self._force_token_ids_generation(logits, self.config.bos_token_id)\r\n```\r\nis the culprit for generating 0 (corresponding to bos_token) at every first decoding step.\r\nIt might need to be changed for MBART model.\r\n\r\n",
"I find that the input does need to start with `<bos>`, and the decoder should be seeded with `<lang_code> <bos>`. With this setup, I am able to recover the input sequence during decoding. Like @Mehrad0711 I find that the input `lang_code` does not make a significant difference. ",
"@tomhosking I replicated what you wrote. Definitely needs `<s>`(bos) at the beginning of the input string to fix the off by one error. You can see the tests in this PR #6524 .\r\n\r\nI'm still having trouble squaring this/trying to find a unified fix to accomodate the behavior that mbart-large-en-ro seems to want, as shown in #6156 .\r\n\r\nMaybe the simplest change is just to add `<s>` to the start of the encoder side string?",
"Interestingly, `prepend_bos` is set to false by default in the fairseq mbart finetuning docs.\r\nI set a breakpoint during finetuning and there is no BOS to be found: here is [how batches look](https://gist.github.com/sshleifer/cba08bc2109361a74ac3760a7e30e4f4)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi,\r\nReviving this issue as it still persists in the recent version of transformers. The solution I proposed in Aug 2 comment seems to be working after a modification on prefix source token borrowed from `tokenization_mbart50.py`.\r\nProposed solution:\r\n```\r\n def set_src_lang_special_tokens(self, src_lang) -> None:\r\n \"\"\"Reset the special tokens to the source lang setting. Prefix=[src_lang_code] and suffix=[eos].\"\"\"\r\n self.cur_lang_code = self.lang_code_to_id[src_lang]\r\n self.prefix_tokens = [self.cur_lang_code]\r\n self.suffix_tokens = [self.eos_token_id]\r\n\r\n def set_tgt_lang_special_tokens(self, lang: str) -> None:\r\n \"\"\"Reset the special tokens to the target language setting. Prefix=[tgt_lang_code] and suffix=[eos].\"\"\"\r\n self.cur_lang_code = self.lang_code_to_id[lang]\r\n self.prefix_tokens = [self.cur_lang_code]\r\n self.suffix_tokens = [self.eos_token_id]\r\n```\r\nThis change will fix `mbart-large-cc25`'s output text while leaving `mbart-large-en-ro`'s untouched.\r\n\r\nCode to reproduce:\r\n```\r\nfrom transformers import MBartTokenizer, MBartForConditionalGeneration\r\n\r\ntokenizer = MBartTokenizer.from_pretrained('facebook/mbart-large-cc25')\r\nmodel = MBartForConditionalGeneration.from_pretrained('facebook/mbart-large-cc25')\r\n\r\nsrc_sent = \"UN Chief Says There Is No Military Solution in Syria\"\r\n\r\nbatch = tokenizer.prepare_seq2seq_batch(src_texts=[src_sent], src_lang=\"en_XX\", return_tensors=\"pt\")\r\n\r\noutput_ids = model.generate(**batch, decoder_start_token_id=tokenizer.lang_code_to_id[\"en_XX\"])\r\n\r\noutput = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]\r\n\r\nprint('src_sent: ', src_sent)\r\nprint('src_ids: ', batch[\"input_ids\"])\r\nprint('output_ids: ', output_ids)\r\nprint('output: ', output)\r\n\r\n```\r\n\r\nstdout (before change):\r\n```\r\nsrc_sent: UN Chief Says There Is No Military Solution in Syria\r\nsrc_ids: tensor([[ 8274, 127873, 25916, 7, 8622, 2071, 438, 67485, 53,\r\n 187895, 23, 51712, 2, 250004]])\r\noutput_ids: tensor([[250004, 0, 127873, 25916, 7, 8622, 2071, 438, 67485,\r\n 53, 187895, 23, 51712, 2]])\r\noutput: Chief Says There Is No Military Solution in Syria\r\n```\r\n\r\nstdout (after change):\r\n```\r\nsrc_sent: UN Chief Says There Is No Military Solution in Syria\r\nsrc_ids: tensor([[250004, 8274, 127873, 25916, 7, 8622, 2071, 438, 67485,\r\n 53, 187895, 23, 51712, 2]])\r\noutput_ids: tensor([[250004, 0, 8274, 127873, 25916, 7, 8622, 2071, 438,\r\n 67485, 53, 187895, 23, 51712, 2]])\r\noutput: UN Chief Says There Is No Military Solution in Syria\r\n\r\n```\r\n\r\nPotential reviewers: @patrickvonplaten, @patil-suraj, @sgugger \r\n",
"Hi @sgugger, @patrickvonplaten, @patil-suraj,\r\nit would be great if you could provide your feedback on this. I would be happy to provide more context if needed.",
"Hi @Mehrad0711 \r\n\r\nThank you for reporting this. I'll go through the original model code in fairseq to see how they are handling the prefix tokens and get back here.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hi @patil-suraj,\r\nWas wondering if you had the chance to take a look at this. Thanks.",
"Hi @Mehrad0711 \r\n\r\nYou are right, all mBART models actually use the language code as the prefix token and `<eos>` as the suffix token.\r\nBut unfortunately, we can't really change it now, because this will be backward incompatible with the other models trained using the existing format.\r\n\r\nAlso, this doesn't really make that much difference if you want to fine-tune the model. As long as a consistent format is used for fine-tuning and then for inference then it should work. However, it would change the output for the pre-trained models (as you reported). But as `mbart-large-cc25` is just a pre-trained model and should be fine-tuned to use it for the downstream tasks, this doesn't seem like a big issue.\r\n",
"Hi @patil-suraj!\r\nThanks for your reply. I understand the concern regarding backward incompatibility of this change.\r\nI was using `mbart-large-cc25` without fine-tuning for text denoising; that's how the problem popped up. Given newer mbart models are now available on huggingface, I'll switch to them."
] | 1,594 | 1,619 | 1,619 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): MBART
Language I am using the model on (English, Chinese ...): English, Romanian
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I'm examining the 'facebook/mbart-large-en-ro' and 'facebook/mbart-large-cc25' checkpoints of MBART.
Here is my first script translating an English sentence to Romanian:
```
from transformers import MBartTokenizer, BartForConditionalGeneration
tokenizer = MBartTokenizer.from_pretrained('facebook/mbart-large-en-ro')
model = BartForConditionalGeneration.from_pretrained('facebook/mbart-large-en-ro')
src_sent = "UN Chief Says There Is No Military Solution in Syria"
src_ids = tokenizer.prepare_translation_batch([src_sent])
output_ids = model.generate(src_ids["input_ids"], decoder_start_token_id=tokenizer.lang_code_to_id["ro_RO"])
output = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
print('src_sent: ', src_sent)
print('src_ids: ', src_ids)
print('output_ids: ', output_ids)
print('output: ', output)
```
stdout:
```
src_sent: UN Chief Says There Is No Military Solution in Syria
src_ids: {'input_ids': tensor([[ 8274, 127873, 25916, 7, 8622, 2071, 438, 67485, 53,
187895, 23, 51712, 2, 250004]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
output_ids: tensor([[250020, 0, 47711, 7844, 127666, 8, 18347, 18147, 1362,
315, 42071, 36, 31563, 8454, 33796, 451, 346, 125577]])
output: Şeful ONU declară că nu există o soluţie militară în Siria
```
As seen in `output_ids`, the model always generates 0 (corresponding to `bos_token`) at the first decoding step. However, this does not seem to be a problem with this checkpoint, as the output is still the correct translation.
Now I run the same script, but using the pretrained "facebook/mbart-large-cc25" and trying to denoise an English input. Since the input does not contain mask tokens, the output should be identical to the input, given the pretraining objective of MBART.
However, the output always misses the first token of the input. I have observed this with different examples (even when there are masked tokens in the input).
```
from transformers import MBartTokenizer, BartForConditionalGeneration
tokenizer = MBartTokenizer.from_pretrained('facebook/mbart-large-cc25')
model = BartForConditionalGeneration.from_pretrained('facebook/mbart-large-cc25')
src_sent = "UN Chief Says There Is No Military Solution in Syria"
src_ids = tokenizer.prepare_translation_batch([src_sent])
output_ids = model.generate(src_ids["input_ids"], decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"])
output = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
print('src_sent: ', src_sent)
print('src_ids: ', src_ids)
print('output_ids: ', output_ids)
print('output: ', output)
```
stdout:
```
src_sent: UN Chief Says There Is No Military Solution in Syria
src_ids: {'input_ids': tensor([[ 8274, 127873, 25916, 7, 8622, 2071, 438, 67485, 53,
187895, 23, 51712, 2, 250004]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
output_ids: tensor([[250004, 0, 127873, 25916, 7, 8622, 2071, 438, 67485,
53, 187895, 23, 51712]])
output: Chief Says There Is No Military Solution in Syria
```
I have tried various approaches but haven't found a clear solution. I'd appreciate any help with this.
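For reference, a minimal workaround sketch (an assumption on my part, not a confirmed fix): if the off-by-one stems from a missing leading `<s>`, prepending the BOS id to the encoder input may recover the first token. This reuses `tokenizer`, `model`, and `src_ids` from the script above:
```
import torch

# Hypothetical workaround: manually prepend <s> (bos) to the encoder input ids
bos = torch.tensor([[tokenizer.bos_token_id]])
padded_ids = torch.cat([bos, src_ids["input_ids"]], dim=1)
output_ids = model.generate(padded_ids, decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"])
```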
## Environment info
- `transformers` version: 3.0.2
- Python version: 3.7.4
- PyTorch version (GPU?): 1.4.0 (False)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5755/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5754 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5754/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5754/comments | https://api.github.com/repos/huggingface/transformers/issues/5754/events | https://github.com/huggingface/transformers/issues/5754 | 656,859,987 | MDU6SXNzdWU2NTY4NTk5ODc= | 5,754 | T5 fine-tuned model doesn't appear in the model hub | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I can see it now: https://huggingface.co/mrm8488/t5-base-finetuned-wikiSQL\r\n\r\nLet us know if something looks wrong (otherwise, feel free to close)",
"Everything right. Maybe I searched it too soon and was not indexed yet."
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | Hi guys,
I have fine-tuned T5-base on the wikiSQL dataset and uploaded it to the HF model hub. The problem is that the model doesn't appear there. AFAIK it is because the config file is not right. Should I change anything?
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5754/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5753 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5753/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5753/comments | https://api.github.com/repos/huggingface/transformers/issues/5753/events | https://github.com/huggingface/transformers/issues/5753 | 656,838,895 | MDU6SXNzdWU2NTY4Mzg4OTU= | 5,753 | Can't load `facebook/mbart-large-cc25` tokenizer | {
"login": "SamuelLarkin",
"id": 7314973,
"node_id": "MDQ6VXNlcjczMTQ5NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7314973?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SamuelLarkin",
"html_url": "https://github.com/SamuelLarkin",
"followers_url": "https://api.github.com/users/SamuelLarkin/followers",
"following_url": "https://api.github.com/users/SamuelLarkin/following{/other_user}",
"gists_url": "https://api.github.com/users/SamuelLarkin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SamuelLarkin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SamuelLarkin/subscriptions",
"organizations_url": "https://api.github.com/users/SamuelLarkin/orgs",
"repos_url": "https://api.github.com/users/SamuelLarkin/repos",
"events_url": "https://api.github.com/users/SamuelLarkin/events{/privacy}",
"received_events_url": "https://api.github.com/users/SamuelLarkin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Should we be using `sshleifer/mbart-large-cc25` has it has a name very similar to `facebook/mbart-large-cc25`? @sshleifer is the name of a HuggingFace contributor and there could be a rename mismatch.\r\n\r\nAfter downloading both `sshleifer/mbart-large-cc25` and `facebook/mbart-large-cc25`\r\n\r\n```\r\nfrom transformers import AutoTokenizer\r\nmodel_a = AutoModelWithLMHead.from_pretrained(\"sshleifer/mbart-large-cc25\")\r\nmodel_b = AutoModelWithLMHead.from_pretrained(\"facebook/mbart-large-cc25\")\r\n```\r\n\r\nIf we poke at the cache we find that both models have the same sha1sum\r\n\r\n```\r\nsha1sum 2593e721f6f9000d1c1a19f144236237dbad3b77d2380982baf030667eaf6f7c.2ed9a0461de33053de7171857e42bb9dd55d8722bb567aeaebff33a2a974fb14{,.json} 31f54e8b3a7628593ed122d67a426dbf6ba0687b3e406d753b61fc3e2d9e5014.2ed9a0461de33053de7171857e42bb9dd55d8722bb567aeaebff33a2a974fb14{,.json}\r\n040e8d684abb1ca97e9aabd8f5a61e1a42c5653b 2593e721f6f9000d1c1a19f144236237dbad3b77d2380982baf030667eaf6f7c.2ed9a0461de33053de7171857e42bb9dd55d8722bb567aeaebff33a2a974fb14\r\n1aded004ab07f675042c03fe556744e47811831b 2593e721f6f9000d1c1a19f144236237dbad3b77d2380982baf030667eaf6f7c.2ed9a0461de33053de7171857e42bb9dd55d8722bb567aeaebff33a2a974fb14.json\r\n040e8d684abb1ca97e9aabd8f5a61e1a42c5653b 31f54e8b3a7628593ed122d67a426dbf6ba0687b3e406d753b61fc3e2d9e5014.2ed9a0461de33053de7171857e42bb9dd55d8722bb567aeaebff33a2a974fb14\r\nb3590a41726b003e3d15d997d52b1929f7608e02 31f54e8b3a7628593ed122d67a426dbf6ba0687b3e406d753b61fc3e2d9e5014.2ed9a0461de33053de7171857e42bb9dd55d8722bb567aeaebff33a2a974fb14.json\r\n```\r\n\r\nWe can see that the model files are identical but not the json as the json contains to different names where one is `facebook` and the other is `sshleifer`.\r\n\r\n```\r\nhead \\\r\n\t2593e721f6f9000d1c1a19f144236237dbad3b77d2380982baf030667eaf6f7c.2ed9a0461de33053de7171857e42bb9dd55d8722bb567aeaebff33a2a974fb14.json \\\r\n\t31f54e8b3a7628593ed122d67a426dbf6ba0687b3e406d753b61fc3e2d9e5014.2ed9a0461de33053de7171857e42bb9dd55d8722bb567aeaebff33a2a974fb14.json\r\n\r\n==> 2593e721f6f9000d1c1a19f144236237dbad3b77d2380982baf030667eaf6f7c.2ed9a0461de33053de7171857e42bb9dd55d8722bb567aeaebff33a2a974fb14.json <==\r\n{\"url\": \"https://cdn.huggingface.co/facebook/mbart-large-cc25/pytorch_model.bin\", \"etag\": \"\\\"77a6a0d3b317fe29dc30de34840c519c-292\\\"\"}\r\n\r\n==> 31f54e8b3a7628593ed122d67a426dbf6ba0687b3e406d753b61fc3e2d9e5014.2ed9a0461de33053de7171857e42bb9dd55d8722bb567aeaebff33a2a974fb14.json <==\r\n{\"url\": \"https://cdn.huggingface.co/sshleifer/mbart-large-cc25/pytorch_model.bin\", \"etag\": \"\\\"77a6a0d3b317fe29dc30de34840c519c-292\\\"\"}\r\n```",
"Yes they're identical. Use facebook/ . It seems you fixed your issue?",
"I see the same error: Model name 'facebook/mbart-large-cc25' was not found in tokenizers model name list (facebook/mbart-large-en-ro, sshleifer/mbart-large-cc25) \r\n`tokenizer = MBartTokenizer.from_pretrained(\"facebook/mbart-large-cc25\")` \r\nIt works with mbart-large-en-ro",
"``python\r\ntokenizer = MBartTokenizer.from_pretrained(\"facebook/mbart-large-cc25\")\r\n```\r\nworks for me on master",
"> \r\n> \r\n> ``python\r\n> tokenizer = MBartTokenizer.from_pretrained(\"facebook/mbart-large-cc25\")\r\n> \r\n> ```\r\n> works for me on master\r\n> ```\r\n\r\nThanks. So, code is not released yet. Was using transformer 3.0.2 but installing from source works."
] | 1,594 | 1,598 | 1,594 | CONTRIBUTOR | null | # 🐛 Bug
Following the [example](https://huggingface.co/facebook/mbart-large-cc25) fails to load the tokenizer:
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
```
## Information
Model I am using: `facebook/mbart-large-cc25`
The problem arises when using:
* [x] the official example scripts: (give details below)
[Example](https://huggingface.co/facebook/mbart-large-cc25) fails to load the tokenizer
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
```
## To reproduce
Steps to reproduce the behavior:
`AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")`
We get the following error, indicating a missing model:
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-48-29412cfd6509> in <module>
----> 1 tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
/project/WMT20/opt/miniconda3/envs/HuggingFace-3.0.2_cu101/lib/python3.7/site-packages/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
215 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
216 else:
--> 217 return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
218
219 raise ValueError(
/project/WMT20/opt/miniconda3/envs/HuggingFace-3.0.2_cu101/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, *inputs, **kwargs)
1138
1139 """
-> 1140 return cls._from_pretrained(*inputs, **kwargs)
1141
1142 @classmethod
/project/WMT20/opt/miniconda3/envs/HuggingFace-3.0.2_cu101/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1244 ", ".join(s3_models),
1245 pretrained_model_name_or_path,
-> 1246 list(cls.vocab_files_names.values()),
1247 )
1248 )
OSError: Model name 'facebook/mbart-large-cc25' was not found in tokenizers model name list (facebook/mbart-large-en-ro, sshleifer/mbart-large-cc25). We assumed 'facebook/mbart-large-cc25' was a path, a model identifier, or url to a directory containing vocabulary files named ['sentencepiece.bpe.model'] but couldn't find such vocabulary files at this path or url.
```
## Expected behavior
Loading a tokenizer for `facebook/mbart-large-cc25` without failure.
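As a stopgap (my assumption based on the identical checksums discussed in the comments above, not an official recommendation), the same weights appear to load via the mirrored namespace with the concrete tokenizer class, or by installing `transformers` from source:
```
from transformers import MBartTokenizer

# Hypothetical workaround: this checkpoint appears byte-identical to
# facebook/mbart-large-cc25 (see the sha1sum comparison in the comments)
tokenizer = MBartTokenizer.from_pretrained("sshleifer/mbart-large-cc25")
```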
## Environment info
```
transformers-cli env
WARNING:tensorflow:From /project/WMT20/opt/miniconda3/envs/HuggingFace-3.0.2_cu101/lib/python3.7/site-packages/transformers/commands/env.py:36: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2020-07-14 15:14:13.475823: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
2020-07-14 15:14:13.495994: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2100000000 Hz
2020-07-14 15:14:13.507620: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55555a4d87f0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-07-14 15:14:13.507676: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 3.0.2
- Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-centos-7.8.2003-Core
- Python version: 3.7.6
- PyTorch version (GPU?): 1.5.1 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5753/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5753/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5752 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5752/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5752/comments | https://api.github.com/repos/huggingface/transformers/issues/5752/events | https://github.com/huggingface/transformers/pull/5752 | 656,831,699 | MDExOlB1bGxSZXF1ZXN0NDQ5MDU4MzA3 | 5,752 | Update README.md | {
"login": "bashartalafha",
"id": 26685171,
"node_id": "MDQ6VXNlcjI2Njg1MTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/26685171?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bashartalafha",
"html_url": "https://github.com/bashartalafha",
"followers_url": "https://api.github.com/users/bashartalafha/followers",
"following_url": "https://api.github.com/users/bashartalafha/following{/other_user}",
"gists_url": "https://api.github.com/users/bashartalafha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bashartalafha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bashartalafha/subscriptions",
"organizations_url": "https://api.github.com/users/bashartalafha/orgs",
"repos_url": "https://api.github.com/users/bashartalafha/repos",
"events_url": "https://api.github.com/users/bashartalafha/events{/privacy}",
"received_events_url": "https://api.github.com/users/bashartalafha/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5752?src=pr&el=h1) Report\n> Merging [#5752](https://codecov.io/gh/huggingface/transformers/pull/5752?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5d178954c909141363df4513eb5f0cc80e5e829c&el=desc) will **increase** coverage by `0.75%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5752?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5752 +/- ##\n==========================================\n+ Coverage 77.24% 78.00% +0.75% \n==========================================\n Files 146 146 \n Lines 26047 26047 \n==========================================\n+ Hits 20121 20318 +197 \n+ Misses 5926 5729 -197 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5752?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5752/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5752/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `63.55% <0.00%> (-31.78%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5752/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5752/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `79.94% <0.00%> (-6.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5752/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `83.98% <0.00%> (-4.91%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5752/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5752/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5752/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <0.00%> (-0.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5752/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5752/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5752?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5752?src=pr&el=footer). Last update [baf93b0...9e4df6b](https://codecov.io/gh/huggingface/transformers/pull/5752?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5752/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5752",
"html_url": "https://github.com/huggingface/transformers/pull/5752",
"diff_url": "https://github.com/huggingface/transformers/pull/5752.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5752.patch",
"merged_at": 1594756140000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5751 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5751/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5751/comments | https://api.github.com/repos/huggingface/transformers/issues/5751/events | https://github.com/huggingface/transformers/pull/5751 | 656,746,816 | MDExOlB1bGxSZXF1ZXN0NDQ4OTg5Mjg2 | 5,751 | tiny ppl typo fix | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,594 | 1,598 | 1,594 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5751/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5751",
"html_url": "https://github.com/huggingface/transformers/pull/5751",
"diff_url": "https://github.com/huggingface/transformers/pull/5751.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5751.patch",
"merged_at": 1594744785000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5750 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5750/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5750/comments | https://api.github.com/repos/huggingface/transformers/issues/5750/events | https://github.com/huggingface/transformers/issues/5750 | 656,721,399 | MDU6SXNzdWU2NTY3MjEzOTk= | 5,750 | fail to run trainer.train() with huggingface transformer | {
"login": "wjzhan",
"id": 1878453,
"node_id": "MDQ6VXNlcjE4Nzg0NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1878453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wjzhan",
"html_url": "https://github.com/wjzhan",
"followers_url": "https://api.github.com/users/wjzhan/followers",
"following_url": "https://api.github.com/users/wjzhan/following{/other_user}",
"gists_url": "https://api.github.com/users/wjzhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wjzhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wjzhan/subscriptions",
"organizations_url": "https://api.github.com/users/wjzhan/orgs",
"repos_url": "https://api.github.com/users/wjzhan/repos",
"events_url": "https://api.github.com/users/wjzhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/wjzhan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,600 | 1,600 | NONE | null | # ❓ Questions & Help
## Details
I am trying to set up a TensorFlow fine-tuning framework for a question-answering project, using huggingface/transformers as the prototype, but I cannot get the trainer to run.
The experiment is conducted on Databricks, the pre-trained model loaded is bert-base, and the train and dev sets are the SQuAD 2.0 files from the Hugging Face question-answering examples: https://github.com/huggingface/transformers/tree/master/examples/question-answering
The error log complains about the unexpected keyword argument 'is_impossible', which is a SQuAD 2.0 data format feature.
Here is the link to my question on Stack Overflow:
**https://stackoverflow.com/questions/62879960/fail-to-run-trainer-train-with-huggingface-transformer** | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5750/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5750/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5749 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5749/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5749/comments | https://api.github.com/repos/huggingface/transformers/issues/5749/events | https://github.com/huggingface/transformers/pull/5749 | 656,652,903 | MDExOlB1bGxSZXF1ZXN0NDQ4OTEzNDk1 | 5,749 | Reintroduce clean_text on BertTokenizer call which was removed by mistake in #4723 | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5749?src=pr&el=h1) Report\n> Merging [#5749](https://codecov.io/gh/huggingface/transformers/pull/5749?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5668fdb09e1bcd888930c1ff242bf200649da39c?el=desc) will **increase** coverage by `2.02%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5749?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5749 +/- ##\n==========================================\n+ Coverage 75.33% 77.36% +2.02% \n==========================================\n Files 195 146 -49 \n Lines 39826 26048 -13778 \n==========================================\n- Hits 30003 20151 -9852 \n+ Misses 9823 5897 -3926 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5749?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5749/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `95.00% <100.00%> (+2.96%)` | :arrow_up: |\n| [src/transformers/commands/env.py](https://codecov.io/gh/huggingface/transformers/pull/5749/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9lbnYucHk=) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |\n| [src/transformers/commands/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5749/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9fX2luaXRfXy5weQ==) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |\n| [src/transformers/commands/transformers\\_cli.py](https://codecov.io/gh/huggingface/transformers/pull/5749/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy90cmFuc2Zvcm1lcnNfY2xpLnB5) | `0.00% <0.00%> (-87.50%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5749/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.56%)` | :arrow_down: |\n| [src/transformers/commands/download.py](https://codecov.io/gh/huggingface/transformers/pull/5749/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9kb3dubG9hZC5weQ==) | `0.00% <0.00%> (-65.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/5749/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `25.82% <0.00%> (-63.85%)` | :arrow_down: |\n| [src/transformers/commands/serving.py](https://codecov.io/gh/huggingface/transformers/pull/5749/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9zZXJ2aW5nLnB5) | `0.00% <0.00%> (-55.89%)` | :arrow_down: |\n| [src/transformers/commands/run.py](https://codecov.io/gh/huggingface/transformers/pull/5749/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9ydW4ucHk=) | `0.00% <0.00%> (-53.34%)` | :arrow_down: |\n| [src/transformers/tokenization\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/5749/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY2FtZW1iZXJ0LnB5) | `37.03% <0.00%> (-53.13%)` | :arrow_down: |\n| ... 
and [185 more](https://codecov.io/gh/huggingface/transformers/pull/5749/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5749?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5749?src=pr&el=footer). Last update [5668fdb...25ff60c](https://codecov.io/gh/huggingface/transformers/pull/5749?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Do we have unit tests for that `clean_text` functionality 🤔",
"@stefan-it I'll add one, had to switch to something else in-between ",
"Do you mind making the code quality and the tests pass before we merge?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,602 | 1,602 | MEMBER | null | Signed-off-by: Morgan Funtowicz <[email protected]>
closes #7665 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5749/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5749",
"html_url": "https://github.com/huggingface/transformers/pull/5749",
"diff_url": "https://github.com/huggingface/transformers/pull/5749.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5749.patch",
"merged_at": 1602245249000
} |
https://api.github.com/repos/huggingface/transformers/issues/5748 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5748/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5748/comments | https://api.github.com/repos/huggingface/transformers/issues/5748/events | https://github.com/huggingface/transformers/issues/5748 | 656,646,801 | MDU6SXNzdWU2NTY2NDY4MDE= | 5,748 | Long BERT TypeError: forward() takes from 2 to 4 positional arguments but 7 were given | {
"login": "paulthemagno",
"id": 38130299,
"node_id": "MDQ6VXNlcjM4MTMwMjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/38130299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paulthemagno",
"html_url": "https://github.com/paulthemagno",
"followers_url": "https://api.github.com/users/paulthemagno/followers",
"following_url": "https://api.github.com/users/paulthemagno/following{/other_user}",
"gists_url": "https://api.github.com/users/paulthemagno/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paulthemagno/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paulthemagno/subscriptions",
"organizations_url": "https://api.github.com/users/paulthemagno/orgs",
"repos_url": "https://api.github.com/users/paulthemagno/repos",
"events_url": "https://api.github.com/users/paulthemagno/events{/privacy}",
"received_events_url": "https://api.github.com/users/paulthemagno/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"**Update:** I have downgraded transformers to the version `transformers==2.11.0` and it seems working, even if for now I have used little datasets for test. I will update this issue if someone is interested",
"The code in Longformer has changed quite a bit. I think a simply remedy to make your code work with the current version of `Longformer` is to add `**kwargs` to every forward function in `modeling_longformer.py` that you copied into your notebook. This way it can handle an arbitrary number of input arguments and the above error should not occur.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> The code in Longformer has changed quite a bit. I think a simply remedy to make your code work with the current version of `Longformer` is to add `**kwargs` to every forward function in `modeling_longformer.py` that you copied into your notebook. This way it can handle an arbitrary number of input arguments and the above error should not occur.\r\n\r\nEDIT: To begin pre-training, make sure you LOAD the saved model exactly the way the notebook does BEFORE pre-training! Don't try and use the model straightaway!",
"I have same issue and the problem remains. It looks the problem comes from a higher transformer version."
] | 1,594 | 1,633 | 1,602 | NONE | null | I'm having an issue with the pretraining of a BERT-like model. I used the following function twice: the first time with [bert-base-multilingual-cased](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip) and the second time with a similar version that is more efficient for **long documents**, exploiting the class [LongformerSelfAttention](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_longformer.py#L100) to make the normal BERT into a **LongBERT**.
```python
def pretrain_and_evaluate(args, model, tokenizer, eval_only, model_path):
    val_dataset = TextDataset(tokenizer=tokenizer,
                              file_path=args.val_datapath,
                              block_size=tokenizer.max_len)
    if eval_only:
        train_dataset = val_dataset
    else:
        logger.info(f'Loading and tokenizing training data is usually slow: {args.train_datapath}')
        train_dataset = TextDataset(tokenizer=tokenizer,
                                    file_path=args.train_datapath,
                                    block_size=tokenizer.max_len)
    data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
    trainer = Trainer(model=model, args=args, data_collator=data_collator,
                      train_dataset=train_dataset, eval_dataset=val_dataset, prediction_loss_only=True,)
    eval_loss = trainer.evaluate()
    eval_loss = eval_loss['eval_loss']
    logger.info(f'Initial eval bpc: {eval_loss/math.log(2)}')
    if not eval_only:
        trainer.train(model_path=model_path)
        trainer.save_model()
        eval_loss = trainer.evaluate()
        eval_loss = eval_loss['eval_loss']
        logger.info(f'Eval bpc after pretraining: {eval_loss/math.log(2)}')
```
With the [bert-base-multilingual-cased](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip) it works well: `model` and `tokenizer` passed as arguments to the function are respectively:
```python
model = BertForMaskedLM.from_pretrained('bert-base-multilingual-cased')
tokenizer = BertTokenizerFast.from_pretrained('bert-base-multilingual-cased')
```
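For context, the conversion itself follows the Longformer conversion-notebook pattern. Roughly, my swap looks like the sketch below (attribute names follow `modeling_bert`/`modeling_longformer`, and the `attention_window` value is an assumption, not my exact script):
```python
import copy
from transformers import BertForMaskedLM
from transformers.modeling_longformer import LongformerSelfAttention

class BertLongSelfAttention(LongformerSelfAttention):
    # BertLayer passes extra positional arguments (head_mask, encoder states, ...);
    # accept and ignore them, forwarding only what LongformerSelfAttention expects.
    def forward(self, hidden_states, attention_mask=None, head_mask=None,
                encoder_hidden_states=None, encoder_attention_mask=None,
                output_attentions=False, **kwargs):
        return super().forward(hidden_states,
                               attention_mask=attention_mask,
                               output_attentions=output_attentions)

model = BertForMaskedLM.from_pretrained('bert-base-multilingual-cased')
config = model.config
config.attention_window = [512] * config.num_hidden_layers  # assumed window size

for i, layer in enumerate(model.bert.encoder.layer):
    long_attn = BertLongSelfAttention(config, layer_id=i)
    # reuse the pretrained projections; global attention starts as a copy
    long_attn.query = layer.attention.self.query
    long_attn.key = layer.attention.self.key
    long_attn.value = layer.attention.self.value
    long_attn.query_global = copy.deepcopy(layer.attention.self.query)
    long_attn.key_global = copy.deepcopy(layer.attention.self.key)
    long_attn.value_global = copy.deepcopy(layer.attention.self.value)
    layer.attention.self = long_attn
# (position embeddings also need to be extended for longer inputs; omitted here)
```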
But with the modified version of BERT, this error occurs:
```
Traceback (most recent call last):
File "convert_bert_to_long_bert.py", line 172, in <module>
pretrain_and_evaluate(training_args, model, tokenizer, eval_only=False, model_path=training_args.output_dir)
File "convert_bert_to_long_bert.py", line 86, in pretrain_and_evaluate
eval_loss = trainer.evaluate()
File "/Users/user/Library/Python/3.7/lib/python/site-packages/transformers/trainer.py", line 748, in evaluate
output = self._prediction_loop(eval_dataloader, description="Evaluation")
File "/Users/user/Library/Python/3.7/lib/python/site-packages/transformers/trainer.py", line 829, in _prediction_loop
outputs = model(**inputs)
File "/Users/user/Library/Python/3.7/lib/python/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/Users/user/Library/Python/3.7/lib/python/site-packages/transformers/modeling_bert.py", line 1098, in forward
return_tuple=return_tuple,
File "/Users/user/Library/Python/3.7/lib/python/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/Users/user/Library/Python/3.7/lib/python/site-packages/transformers/modeling_bert.py", line 799, in forward
return_tuple=return_tuple,
File "/Users/user/Library/Python/3.7/lib/python/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/Users/user/Library/Python/3.7/lib/python/site-packages/transformers/modeling_bert.py", line 460, in forward
output_attentions,
File "/Users/user/Library/Python/3.7/lib/python/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/Users/user/Library/Python/3.7/lib/python/site-packages/transformers/modeling_bert.py", line 391, in forward
hidden_states, attention_mask, head_mask, output_attentions=output_attentions,
File "/Users/user/Library/Python/3.7/lib/python/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/Users/user/Library/Python/3.7/lib/python/site-packages/transformers/modeling_bert.py", line 335, in forward
hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, output_attentions,
File "/Users/user/Library/Python/3.7/lib/python/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
TypeError: forward() takes from 2 to 4 positional arguments but 7 were given
```
I made only a few modifications to a working script that obtains a Long version of RoBERTa from the RoBERTa base model. What could be the mistake? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5748/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
} | https://api.github.com/repos/huggingface/transformers/issues/5748/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5747 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5747/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5747/comments | https://api.github.com/repos/huggingface/transformers/issues/5747/events | https://github.com/huggingface/transformers/issues/5747 | 656,644,141 | MDU6SXNzdWU2NTY2NDQxNDE= | 5,747 | Unrecognized configuration class <class 'transformers.configuration_electra.ElectraConfig'> | {
"login": "cktsangal",
"id": 53075457,
"node_id": "MDQ6VXNlcjUzMDc1NDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/53075457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cktsangal",
"html_url": "https://github.com/cktsangal",
"followers_url": "https://api.github.com/users/cktsangal/followers",
"following_url": "https://api.github.com/users/cktsangal/following{/other_user}",
"gists_url": "https://api.github.com/users/cktsangal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cktsangal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cktsangal/subscriptions",
"organizations_url": "https://api.github.com/users/cktsangal/orgs",
"repos_url": "https://api.github.com/users/cktsangal/repos",
"events_url": "https://api.github.com/users/cktsangal/events{/privacy}",
"received_events_url": "https://api.github.com/users/cktsangal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Even though the ELECTRA model was added in version v2.8.0, the `ElectraForQuestionAnswering` head was only added in v3.0.0. You would need to upgrade your `transformers` version to at at least v3.0.0 for your code to work!",
"I am trying to use pretrained Distilbert in EncoderDecoderModel but i am getting this error. How can i leverage pretrained distilbert in EncoderDecoderModel.\r\n\r\n`ValueError: Unrecognized configuration class <class 'transformers.configuration_distilbert.DistilBertConfig'> for this kind of AutoModel: AutoModelForCausalLM.\r\nModel type should be one of CamembertConfig, XLMRobertaConfig, RobertaConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, XLMConfig, CTRLConfig, ReformerConfig, BertGenerationConfig.`"
] | 1,594 | 1,602 | 1,595 | NONE | null | # 🐛 Bug
## Information
Model I am using: ahotrod/electra_large_discriminator_squad2_512
This problem happened again when I used ELECTRA with the question-answering pipeline. My Transformers version is 2.11.0.
Code:
> from transformers import pipeline, AutoTokenizer, AutoModelForQuestionAnswering
>
> tokenizer = AutoTokenizer.from_pretrained("ahotrod/electra_large_discriminator_squad2_512")
> model = AutoModelForQuestionAnswering.from_pretrained("ahotrod/electra_large_discriminator_squad2_512")
Error:
> Traceback (most recent call last):
> File "albert_qa.py", line 5, in <module>
> model = AutoModelForQuestionAnswering.from_pretrained("ahotrod/electra_large_discriminator_squad2_512")
> File "/home/aim/ANDY-project/sentence-transformers-env/lib/python3.6/site-packages/transformers/modeling_auto.py", line 1004, in from_pretrained
> ", ".join(c.__name__ for c in MODEL_FOR_QUESTION_ANSWERING_MAPPING.keys()),
> ValueError: Unrecognized configuration class <class 'transformers.configuration_electra.ElectraConfig'> for this kind of AutoModel: AutoModelForQuestionAnswering.
- `transformers` version: 2.11.0
- Platform: Ubuntu
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1
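For reference, per the fix above (the `ElectraForQuestionAnswering` head only exists from v3.0.0 on), upgrading should make the same checkpoint load. A minimal sketch, not verified on my setup:
```python
# pip install --upgrade "transformers>=3.0.0"
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="ahotrod/electra_large_discriminator_squad2_512",
    tokenizer="ahotrod/electra_large_discriminator_squad2_512",
)
# hypothetical inputs, just to show the call shape
qa(question="What was added in v3.0.0?",
   context="ElectraForQuestionAnswering was added in transformers v3.0.0.")
```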
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5747/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5746 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5746/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5746/comments | https://api.github.com/repos/huggingface/transformers/issues/5746/events | https://github.com/huggingface/transformers/issues/5746 | 656,633,717 | MDU6SXNzdWU2NTY2MzM3MTc= | 5,746 | Where can I find raw code for char_to_token function. | {
"login": "jainnitk",
"id": 22263148,
"node_id": "MDQ6VXNlcjIyMjYzMTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/22263148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jainnitk",
"html_url": "https://github.com/jainnitk",
"followers_url": "https://api.github.com/users/jainnitk/followers",
"following_url": "https://api.github.com/users/jainnitk/following{/other_user}",
"gists_url": "https://api.github.com/users/jainnitk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jainnitk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jainnitk/subscriptions",
"organizations_url": "https://api.github.com/users/jainnitk/orgs",
"repos_url": "https://api.github.com/users/jainnitk/repos",
"events_url": "https://api.github.com/users/jainnitk/events{/privacy}",
"received_events_url": "https://api.github.com/users/jainnitk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I figured it out. Thanks."
] | 1,594 | 1,594 | 1,594 | NONE | null | I understand the function has been defined in tokenization_utils_base.py, but in the return statement there is a recursive call to the same function. I am unable to understand where the actual offset calculation takes place.
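For reference, a minimal sketch of how I am calling it (the model name is just an example):
```python
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
encoding = tokenizer("Hello world")
# map the character at index 6 ('w') to its token index
# -> 2 here, since with special tokens the sequence is [CLS], 'Hello', 'world', [SEP]
print(encoding.char_to_token(6))
```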
Tapan
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5746/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5745 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5745/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5745/comments | https://api.github.com/repos/huggingface/transformers/issues/5745/events | https://github.com/huggingface/transformers/issues/5745 | 656,630,486 | MDU6SXNzdWU2NTY2MzA0ODY= | 5,745 | google/reformer-enwik8 tokenizer was not found in tokenizers model name list | {
"login": "pzelasko",
"id": 15930688,
"node_id": "MDQ6VXNlcjE1OTMwNjg4",
"avatar_url": "https://avatars.githubusercontent.com/u/15930688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pzelasko",
"html_url": "https://github.com/pzelasko",
"followers_url": "https://api.github.com/users/pzelasko/followers",
"following_url": "https://api.github.com/users/pzelasko/following{/other_user}",
"gists_url": "https://api.github.com/users/pzelasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pzelasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pzelasko/subscriptions",
"organizations_url": "https://api.github.com/users/pzelasko/orgs",
"repos_url": "https://api.github.com/users/pzelasko/repos",
"events_url": "https://api.github.com/users/pzelasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/pzelasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"That's because only the crime and punishment modell has an uploaded tokenizer.\r\n",
"`google/reformer-enwik8` is the only model that is a char language model and does not need a tokenizer. If you take a look here: https://huggingface.co/google/reformer-enwik8#reformer-language-model-on-character-level-and-trained-on-enwik8 , you can see that the model does not need a tokenier but a simple python encode and decode function. \r\n\r\n@julien-c @mfuntowicz - how do you think we can include char lms to `pipelines`? Should we maybe introduce a `is_char_lm` config variable? Or just wrap a dummy tokenizer around the python encode and decode functions?",
"Add a `tokenizer_class` optional attribute to config.json which overrides the type of Tokenizer that's instantiated when calling `.from_pretrained()`?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,600 | 1,600 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Reformer
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Enter https://huggingface.co/google/reformer-enwik8
2. Look at "Hosted inference API"
The model's tokenizer cannot be found; I'm getting the same error in scripts as the one displayed on your webpage:
```
⚠️ This model could not be loaded by the inference API. ⚠️
Error loading tokenizer Model name 'google/reformer-enwik8' was not found in tokenizers model name list (google/reformer-crime-and-punishment). We assumed 'google/reformer-enwik8' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url. OSError("Model name 'google/reformer-enwik8' was not found in tokenizers model name list (google/reformer-crime-and-punishment). We assumed 'google/reformer-enwik8' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.")
```
## Expected behavior
Tokenizer loaded without issues
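In the meantime, the workaround from the model card (char-level encode/decode with no tokenizer at all) looks roughly like this. A sketch adapted from the card, where byte values are shifted by 2 to reserve ids for padding/special tokens:
```python
import torch

def encode(list_of_strings, pad_token_id=0):
    max_length = max(len(s) for s in list_of_strings)
    input_ids = torch.full((len(list_of_strings), max_length), pad_token_id, dtype=torch.long)
    attention_masks = torch.zeros((len(list_of_strings), max_length), dtype=torch.long)
    for idx, string in enumerate(list_of_strings):
        if not isinstance(string, bytes):
            string = str.encode(string)  # work on raw bytes
        input_ids[idx, : len(string)] = torch.tensor([b + 2 for b in string])
        attention_masks[idx, : len(string)] = 1
    return input_ids, attention_masks

def decode(output_ids):
    # ids 0 and 1 are padding/special and decode to the empty string
    return ["".join(chr(x - 2) if x > 1 else "" for x in ids) for ids in output_ids.tolist()]
```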
## Environment info
- `transformers` version: latest
- Platform: your own
- Python version: ?
- PyTorch version (GPU?): ?
- Tensorflow version (GPU?): ?
- Using GPU in script?: ?
- Using distributed or parallel set-up in script?: ?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5745/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5745/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5744 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5744/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5744/comments | https://api.github.com/repos/huggingface/transformers/issues/5744/events | https://github.com/huggingface/transformers/pull/5744 | 656,605,425 | MDExOlB1bGxSZXF1ZXN0NDQ4ODc1MjAy | 5,744 | Create README.md for the model card of GPorTuguese-2 model (Portuguese GPT-2 small) | {
"login": "piegu",
"id": 20000948,
"node_id": "MDQ6VXNlcjIwMDAwOTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/20000948?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/piegu",
"html_url": "https://github.com/piegu",
"followers_url": "https://api.github.com/users/piegu/followers",
"following_url": "https://api.github.com/users/piegu/following{/other_user}",
"gists_url": "https://api.github.com/users/piegu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/piegu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/piegu/subscriptions",
"organizations_url": "https://api.github.com/users/piegu/orgs",
"repos_url": "https://api.github.com/users/piegu/repos",
"events_url": "https://api.github.com/users/piegu/events{/privacy}",
"received_events_url": "https://api.github.com/users/piegu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5744?src=pr&el=h1) Report\n> Merging [#5744](https://codecov.io/gh/huggingface/transformers/pull/5744?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/838950ee44360ca427f345441502d4e7ab2772b8&el=desc) will **increase** coverage by `1.12%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5744?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5744 +/- ##\n==========================================\n+ Coverage 77.33% 78.45% +1.12% \n==========================================\n Files 146 146 \n Lines 26055 26047 -8 \n==========================================\n+ Hits 20149 20436 +287 \n+ Misses 5906 5611 -295 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5744?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.74% <100.00%> (-0.06%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5744?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5744?src=pr&el=footer). Last update [b2505f7...ae466d5](https://codecov.io/gh/huggingface/transformers/pull/5744?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5744/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5744",
"html_url": "https://github.com/huggingface/transformers/pull/5744",
"diff_url": "https://github.com/huggingface/transformers/pull/5744.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5744.patch",
"merged_at": 1594738132000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5743 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5743/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5743/comments | https://api.github.com/repos/huggingface/transformers/issues/5743/events | https://github.com/huggingface/transformers/pull/5743 | 656,528,525 | MDExOlB1bGxSZXF1ZXN0NDQ4ODExMzE4 | 5,743 | Customize inference widget input | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5743?src=pr&el=h1) Report\n> Merging [#5743](https://codecov.io/gh/huggingface/transformers/pull/5743?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/838950ee44360ca427f345441502d4e7ab2772b8&el=desc) will **decrease** coverage by `0.09%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5743?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5743 +/- ##\n==========================================\n- Coverage 77.33% 77.24% -0.10% \n==========================================\n Files 146 146 \n Lines 26055 26047 -8 \n==========================================\n- Hits 20149 20119 -30 \n- Misses 5906 5928 +22 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5743?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5743/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.74% <100.00%> (-0.06%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5743/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5743/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5743?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5743?src=pr&el=footer). Last update [b2505f7...f8d259d](https://codecov.io/gh/huggingface/transformers/pull/5743?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@JetRunner There's still the possibility to override the default inputs if it makes more sense for the model: https://twitter.com/mrm8488/status/1282778743598194688",
"(up to @mrm8488 if it does here)",
"In this case it makes more sense because in last phase the model was fine tuned on Spanish Wikipedia data"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5743/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5743",
"html_url": "https://github.com/huggingface/transformers/pull/5743",
"diff_url": "https://github.com/huggingface/transformers/pull/5743.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5743.patch",
"merged_at": 1594738726000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5742 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5742/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5742/comments | https://api.github.com/repos/huggingface/transformers/issues/5742/events | https://github.com/huggingface/transformers/issues/5742 | 656,499,473 | MDU6SXNzdWU2NTY0OTk0NzM= | 5,742 | How to use pytorch_model.bin to classify a single sentence? | {
"login": "HiroshigeAoki",
"id": 58395317,
"node_id": "MDQ6VXNlcjU4Mzk1MzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/58395317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HiroshigeAoki",
"html_url": "https://github.com/HiroshigeAoki",
"followers_url": "https://api.github.com/users/HiroshigeAoki/followers",
"following_url": "https://api.github.com/users/HiroshigeAoki/following{/other_user}",
"gists_url": "https://api.github.com/users/HiroshigeAoki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HiroshigeAoki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HiroshigeAoki/subscriptions",
"organizations_url": "https://api.github.com/users/HiroshigeAoki/orgs",
"repos_url": "https://api.github.com/users/HiroshigeAoki/repos",
"events_url": "https://api.github.com/users/HiroshigeAoki/events{/privacy}",
"received_events_url": "https://api.github.com/users/HiroshigeAoki/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Which task did you fine tune the model on? For single sentence it is probably Cola or SST-2 task right? \r\nYou must use predict function by specifying do_predict in the input parameters.",
"Thank you for answering my question!\r\nI fine-tuned my model on a original task for binary classifications of Japanese sentences. The processor for original task is below.\r\n\r\nin transformers/data/processors/glue.py\r\n```\r\nclass OriginalProcessor(DataProcessor):\r\n \"\"\"Processor for the original data set.\"\"\"\r\n def get_example_from_tensor_dict(self, tensor_dict):\r\n \"\"\"See base class.\"\"\"\r\n return InputExample(\r\n tensor_dict[\"idx\"].numpy(),\r\n tensor_dict[\"sentence\"].numpy().decode(\"utf-8\"),\r\n None,\r\n str(tensor_dict[\"label\"].numpy()),\r\n )\r\n\r\n def get_train_examples(self, data_dir):\r\n \"\"\"See base class.\"\"\"\r\n return self._create_examples(self._read_tsv(os.path.join(data_dir, \"train.tsv\")), \"train\")\r\n\r\n def get_dev_examples(self, data_dir):\r\n \"\"\"See base class.\"\"\"\r\n return self._create_examples(self._read_tsv(os.path.join(data_dir, \"dev.tsv\")), \"dev\")\r\n\r\n def get_labels(self):\r\n \"\"\"See base class.\"\"\"\r\n return [\"0\", \"1\"]\r\n\r\n def _create_examples(self, lines, set_type):\r\n \"\"\"Creates examples for the training and dev sets.\"\"\"\r\n examples = []\r\n for (i, line) in enumerate(lines):\r\n # if tsv files have a header, remove #\r\n # if i == 0:\r\n # continue\r\n guid = \"%s-%s\" % (set_type, i)\r\n text_a = line[0]\r\n label = line[1]\r\n examples.append(InputExample(guid=guid, text_a=text_a, text_b=None, label=label))\r\n return examples\r\n```\r\n\r\nAnd please let me ask two questions.\r\n1. Which is better to execute run_glue.py as an external process or to use my own script which mimic run_glue.py's predict function?\r\n2. Can I load my fine-tuned model by specifying the directory which have 'pytorch_model.bin' in parameters or my script as written below? The directory includes output of fine-turning.\r\n\r\n```\r\n#parameter\r\n--model_name_or_path=\"the path for the directory\"\r\n\r\n#script\r\nmodel = BertForSequenceClassification.from_pretrained('the path for the directory')\r\n```",
"I made it with first one. Thank you!",
"Both are doable. I would turn off training option and just use prediction option. \r\nGood to know you did it. "
] | 1,594 | 1,594 | 1,594 | NONE | null | Hi!
I fine-tuned BERT on my own datasets by using run_glue.py, and I got pytorch_model.bin as output. I want to use pytorch_model.bin in another system to classify a single sentence coming from a web browser.
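So far I imagine something like the following on the serving side. A rough sketch (the checkpoint directory is a placeholder, and I assume run_glue.py also saved the tokenizer and config there):
```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

model_dir = "./output"  # hypothetical path to the run_glue.py output directory
tokenizer = BertTokenizer.from_pretrained(model_dir)
model = BertForSequenceClassification.from_pretrained(model_dir)
model.eval()

def classify(sentence):
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs)[0]
    return logits.argmax(dim=-1).item()  # predicted label id
```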
I would be grateful if you could show me the proper usage. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5742/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5742/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5741 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5741/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5741/comments | https://api.github.com/repos/huggingface/transformers/issues/5741/events | https://github.com/huggingface/transformers/issues/5741 | 656,494,117 | MDU6SXNzdWU2NTY0OTQxMTc= | 5,741 | FileNotFoundError: File not found when running run_squad.py to fine-tune the BERT on SQuAD v1.1. | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This is a mismatch between the file location and what you're indicating to the script. Are you sure you're pointing to the correct directory? If so, can you try using absolute paths?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,601 | 1,601 | NONE | null | Hi, I am just following this tutorial https://github.com/huggingface/transformers/tree/master/examples/question-answering and created a folder named SQUAD_DIR under transformers. The train and test files were downloaded and put into the SQUAD_DIR folder. But the error says that the file cannot be found. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5741/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5740 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5740/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5740/comments | https://api.github.com/repos/huggingface/transformers/issues/5740/events | https://github.com/huggingface/transformers/pull/5740 | 656,491,822 | MDExOlB1bGxSZXF1ZXN0NDQ4NzgxNTcz | 5,740 | [ModelOutput] Proposal to fix compatibility issue with torch.DataParallel | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Update: this is actually not working because it breaks the possibility to unpack the output of the model's forward pass which is obviously a common pattern (cf failure in the tests) :-/",
"This is superseded by #6138 now."
] | 1,594 | 1,651 | 1,596 | MEMBER | null | This is a proposal to fix #5693 by making `ModelOutput` inherit from a dictionary and behave like a dictionary on iteration (i.e. iterate over the keys rather than the values).
This could break backward compatibility when users iterate over the output tuple rather than indexing it.
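Illustratively (a behavioral sketch of the trade-off, not the actual implementation; `model`/`inputs` are placeholders):
```python
outputs = model(**inputs)

# before: ModelOutput iterated like a tuple, so this unpacking worked
loss, logits = outputs

# with this proposal: iteration yields keys, like a dict
for key in outputs:
    print(key)             # e.g. "loss", "logits"
loss = outputs["loss"]     # indexing by name keeps working
```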
On the other hand, we regain backward compatibility with `torch.DataParallel`, and from a more general design point of view, the `ModelOutput` class should probably be closer to a dictionary than a tuple in the future. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5740/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5740",
"html_url": "https://github.com/huggingface/transformers/pull/5740",
"diff_url": "https://github.com/huggingface/transformers/pull/5740.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5740.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5739 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5739/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5739/comments | https://api.github.com/repos/huggingface/transformers/issues/5739/events | https://github.com/huggingface/transformers/issues/5739 | 656,467,567 | MDU6SXNzdWU2NTY0Njc1Njc= | 5,739 | TypeError: join() argument must be str or bytes, not 'NoneType' | {
"login": "marton-avrios",
"id": 59836119,
"node_id": "MDQ6VXNlcjU5ODM2MTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/59836119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marton-avrios",
"html_url": "https://github.com/marton-avrios",
"followers_url": "https://api.github.com/users/marton-avrios/followers",
"following_url": "https://api.github.com/users/marton-avrios/following{/other_user}",
"gists_url": "https://api.github.com/users/marton-avrios/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marton-avrios/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marton-avrios/subscriptions",
"organizations_url": "https://api.github.com/users/marton-avrios/orgs",
"repos_url": "https://api.github.com/users/marton-avrios/repos",
"events_url": "https://api.github.com/users/marton-avrios/events{/privacy}",
"received_events_url": "https://api.github.com/users/marton-avrios/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hopefully fixed by https://github.com/huggingface/transformers/pull/5361",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,600 | 1,600 | NONE | null | Trying to run the seq2seq example scripts with multiple GPUs and wandb logging, I get
```
Traceback (most recent call last):
File "/home/martongyorgy/projects/tmp/transformers/examples/seq2seq/finetune.py", line 364, in <module>
xsum_rouge.json
File "/home/martongyorgy/projects/tmp/transformers/examples/seq2seq/finetune.py", line 342, in main
logger=logger,
File "/home/martongyorgy/projects/tmp/transformers/examples/lightning_base.py", line 330, in generic_train
trainer.fit(model)
File "/home/martongyorgy/miniconda3/envs/transformers/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 891, in fit
self.ddp_train(task, model)
File "/home/martongyorgy/miniconda3/envs/transformers/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 530, in ddp_train
self.run_pretrain_routine(model)
File "/home/martongyorgy/miniconda3/envs/transformers/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1046, in run_pretrain_routine
self.configure_checkpoint_callback()
File "/home/martongyorgy/miniconda3/envs/transformers/lib/python3.7/site-packages/pytorch_lightning/trainer/callback_config.py", line 60, in configure_checkpoint_callback
"checkpoints"
File "/home/martongyorgy/miniconda3/envs/transformers/lib/python3.7/posixpath.py", line 94, in join
genericpath._check_arg_types('join', a, *p)
File "/home/martongyorgy/miniconda3/envs/transformers/lib/python3.7/genericpath.py", line 153, in _check_arg_types
(funcname, s.__class__.__name__)) from None
TypeError: join() argument must be str or bytes, not 'NoneType'
```
To reproduce follow the instructions in the seq2seq example Readme to download XSUM then run
```
export PYTHONPATH="../":"${PYTHONPATH}"
python finetune.py \
--learning_rate=3e-5 \
--fp16 \
--gpus 2 \
--do_train \
--n_val 1000 \
--val_check_interval 0.1 \
--data_dir xsum \
--output_dir xsum_frozen_embs \
--model_name_or_path t5-small \
--train_batch_size 16 --eval_batch_size 16 --freeze_embeds --freeze_encoder \
--num_train_epochs 6 \
--max_target_length=60 --val_max_target_length=60 --test_max_target_length=100 \
--logger wandb
```
Happens in pytorch-lightning 0.8.1, fixed in 0.8.4. But there is another problem in 0.8.4, see #5584
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5739/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5738 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5738/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5738/comments | https://api.github.com/repos/huggingface/transformers/issues/5738/events | https://github.com/huggingface/transformers/issues/5738 | 656,461,696 | MDU6SXNzdWU2NTY0NjE2OTY= | 5,738 | Unicode normalization for bert-cased models | {
"login": "alexeyr",
"id": 24733,
"node_id": "MDQ6VXNlcjI0NzMz",
"avatar_url": "https://avatars.githubusercontent.com/u/24733?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexeyr",
"html_url": "https://github.com/alexeyr",
"followers_url": "https://api.github.com/users/alexeyr/followers",
"following_url": "https://api.github.com/users/alexeyr/following{/other_user}",
"gists_url": "https://api.github.com/users/alexeyr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexeyr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexeyr/subscriptions",
"organizations_url": "https://api.github.com/users/alexeyr/orgs",
"repos_url": "https://api.github.com/users/alexeyr/repos",
"events_url": "https://api.github.com/users/alexeyr/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexeyr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,600 | 1,600 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Tokenize text containing combining marks with `BertTokenizer`. E.g.
```
In [1]: from transformers.tokenization_bert import *
In [2]: tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
In [3]: tokenizer.tokenize("\u00E1")
Out[3]: ['á']
In [4]: tokenizer.tokenize("a\u0301")
Out[4]: ['a', '##́']
In [5]: tokenizer.tokenize("a\u0300")
Out[5]: ['[UNK]']
In [6]: tokenizer.tokenize("\u00E0")
Out[6]: ['à']
```
Results for `BertTokenizerFast` are the same.
## Expected behavior
`"\u00E1"` and `"a\u0301"` should ideally be tokenized the same way; so should `"\u00E0"` and `"a\u0300"`. If existing behavior should be preserved, maybe add an optional argument.
## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.4.0-18362-Microsoft-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5738/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5738/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5737 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5737/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5737/comments | https://api.github.com/repos/huggingface/transformers/issues/5737/events | https://github.com/huggingface/transformers/pull/5737 | 656,451,065 | MDExOlB1bGxSZXF1ZXN0NDQ4NzQ4MTgx | 5,737 | Update model_summary.rst | {
"login": "xwen99",
"id": 48824317,
"node_id": "MDQ6VXNlcjQ4ODI0MzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/48824317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xwen99",
"html_url": "https://github.com/xwen99",
"followers_url": "https://api.github.com/users/xwen99/followers",
"following_url": "https://api.github.com/users/xwen99/following{/other_user}",
"gists_url": "https://api.github.com/users/xwen99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xwen99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xwen99/subscriptions",
"organizations_url": "https://api.github.com/users/xwen99/orgs",
"repos_url": "https://api.github.com/users/xwen99/repos",
"events_url": "https://api.github.com/users/xwen99/events{/privacy}",
"received_events_url": "https://api.github.com/users/xwen99/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5737?src=pr&el=h1) Report\n> Merging [#5737](https://codecov.io/gh/huggingface/transformers/pull/5737?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cd30f98fd24837f285cfc221b91cfa66b1b38c32&el=desc) will **decrease** coverage by `0.77%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5737?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5737 +/- ##\n==========================================\n- Coverage 78.02% 77.25% -0.78% \n==========================================\n Files 146 146 \n Lines 26055 26055 \n==========================================\n- Hits 20329 20128 -201 \n- Misses 5726 5927 +201 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5737?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `95.23% <0.00%> (+10.71%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5737?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5737?src=pr&el=footer). Last update [cd30f98...5347731](https://codecov.io/gh/huggingface/transformers/pull/5737?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | Add a '-' so that references to Transformer-XL are accurate and formal. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5737/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5737",
"html_url": "https://github.com/huggingface/transformers/pull/5737",
"diff_url": "https://github.com/huggingface/transformers/pull/5737.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5737.patch",
"merged_at": 1595842443000
} |
https://api.github.com/repos/huggingface/transformers/issues/5736 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5736/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5736/comments | https://api.github.com/repos/huggingface/transformers/issues/5736/events | https://github.com/huggingface/transformers/issues/5736 | 656,438,905 | MDU6SXNzdWU2NTY0Mzg5MDU= | 5,736 | TypeError: an integer is required (got type NoneType) while using run_language_modeling.py | {
"login": "cabhijith",
"id": 45108441,
"node_id": "MDQ6VXNlcjQ1MTA4NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/45108441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cabhijith",
"html_url": "https://github.com/cabhijith",
"followers_url": "https://api.github.com/users/cabhijith/followers",
"following_url": "https://api.github.com/users/cabhijith/following{/other_user}",
"gists_url": "https://api.github.com/users/cabhijith/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cabhijith/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cabhijith/subscriptions",
"organizations_url": "https://api.github.com/users/cabhijith/orgs",
"repos_url": "https://api.github.com/users/cabhijith/repos",
"events_url": "https://api.github.com/users/cabhijith/events{/privacy}",
"received_events_url": "https://api.github.com/users/cabhijith/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The error was caused because I did not specify the type of dataset (```LinebyLine``` in my case). After doing that, it worked. ",
"I'm facing the same error and below is my run command. Any pointers, @cabhijith ?\r\n`cmd = \"python run_language_modeling.py \\\r\n --output_dir ./bertout \\\r\n --model_type bert \\\r\n --do_train \\\r\n --do_eval \\\r\n --train_data_file ./test.txt \\\r\n --eval_data_file ./test.txt \\\r\n --mlm \\\r\n --line_by_line \\\r\n --learning_rate 1e-4 \\\r\n --num_train_epochs 5 \\\r\n --save_total_limit 2 \\\r\n --save_steps 2000 \\\r\n --per_gpu_train_batch_size 16 \\\r\n --evaluate_during_training \\\r\n --warmup_steps=10000 \\\r\n --logging_steps=100 \\\r\n --gradient_accumulation_steps=4 \\\r\n --seed 666 \\\r\n --block_size=512 \\\r\n --tokenizer_name ./bert2 \\\r\n --config_name ./bert2\"`",
"I still got this error after I use `LinebyLine` dataset, anyone can help me ?",
"> I still got this error after I use `LinebyLine` dataset, anyone can help me ?\r\n\r\nIt works after a close a file is opening... "
] | 1,594 | 1,617 | 1,594 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BERT
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below) Yes
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below) Own task
## To reproduce
Steps to reproduce the behavior:
Train the tokenizer:
```python
from tokenizers import BertWordPieceTokenizer
paths = 'Clean_merged.txt'
# Initialize a tokenizer
tokenizer = BertWordPieceTokenizer()
# Customize training
tokenizer.train(files=paths, vocab_size=32_000, min_frequency=4)
tokenizer.save_model('./')
```
```
%env TRAIN_FILE= Clean_merged.txt
!python transformers/examples/language-modeling/run_language_modeling.py \
--output_dir=output_from_scratch \
--model_type=bert \
--do_train \
--tokenizer_name save_tokenizer \
--save_steps 2000 \
--per_gpu_train_batch_size 8 \
--train_data_file=$TRAIN_FILE \
--mlm \
--block_size 510
```
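As noted in the resolution above, the error disappears once the dataset type is specified; for a one-example-per-line corpus that means adding `--line_by_line` so a `LineByLineTextDataset` is built. A sketch of the adjusted invocation (assuming `Clean_merged.txt` really is one example per line):
```
!python transformers/examples/language-modeling/run_language_modeling.py \
    --output_dir=output_from_scratch \
    --model_type=bert \
    --do_train \
    --line_by_line \
    --tokenizer_name save_tokenizer \
    --save_steps 2000 \
    --per_gpu_train_batch_size 8 \
    --train_data_file=$TRAIN_FILE \
    --mlm \
    --block_size 510
```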
## Expected behavior
To train a BERT model from scratch on an MLM task.
### Stack trace
```
2020-07-14 07:52:19.364573: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
07/14/2020 07:52:21 - INFO - transformers.training_args - PyTorch: setting up devices
07/14/2020 07:52:21 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: False
07/14/2020 07:52:21 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='output_from_scratch', overwrite_output_dir=False, do_train=True, do_eval=False, do_predict=False, evaluate_during_training=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, per_gpu_train_batch_size=8, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Jul14_07-52-21_cf8085fe2205', logging_first_step=False, logging_steps=500, save_steps=2000, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, past_index=-1)
07/14/2020 07:52:21 - WARNING - __main__ - You are instantiating a new config instance from scratch.
07/14/2020 07:52:21 - INFO - transformers.configuration_utils - loading configuration file save_tokenizer/config.json
07/14/2020 07:52:21 - INFO - transformers.configuration_utils - Model config BertConfig {
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 514,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"type_vocab_size": 2,
"vocab_size": 32000
}
07/14/2020 07:52:21 - INFO - transformers.tokenization_utils_base - Model name 'save_tokenizer' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). Assuming 'save_tokenizer' is a path, a model identifier, or url to a directory containing tokenizer files.
07/14/2020 07:52:21 - INFO - transformers.tokenization_utils_base - Didn't find file save_tokenizer/added_tokens.json. We won't load it.
07/14/2020 07:52:21 - INFO - transformers.tokenization_utils_base - Didn't find file save_tokenizer/special_tokens_map.json. We won't load it.
07/14/2020 07:52:21 - INFO - transformers.tokenization_utils_base - Didn't find file save_tokenizer/tokenizer_config.json. We won't load it.
07/14/2020 07:52:21 - INFO - transformers.tokenization_utils_base - Didn't find file save_tokenizer/tokenizer.json. We won't load it.
07/14/2020 07:52:21 - INFO - transformers.tokenization_utils_base - loading file save_tokenizer/vocab.txt
07/14/2020 07:52:21 - INFO - transformers.tokenization_utils_base - loading file None
07/14/2020 07:52:21 - INFO - transformers.tokenization_utils_base - loading file None
07/14/2020 07:52:21 - INFO - transformers.tokenization_utils_base - loading file None
07/14/2020 07:52:21 - INFO - transformers.tokenization_utils_base - loading file None
07/14/2020 07:52:21 - INFO - __main__ - Training new model from scratch
/usr/local/lib/python3.6/dist-packages/transformers/modeling_auto.py:709: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.
FutureWarning,
07/14/2020 07:52:26 - INFO - filelock - Lock 140270211965616 acquired on cached_lm_BertTokenizer_508_Clean_merged.txt.lock
07/14/2020 07:52:31 - INFO - transformers.data.datasets.language_modeling - Loading features from cached file cached_lm_BertTokenizer_508_Clean_merged.txt [took 3.803 s]
07/14/2020 07:52:31 - INFO - filelock - Lock 140270211965616 released on cached_lm_BertTokenizer_508_Clean_merged.txt.lock
07/14/2020 07:52:33 - INFO - transformers.trainer - You are instantiating a Trainer but W&B is not installed. To use wandb logging, run `pip install wandb; wandb login` see https://docs.wandb.com/huggingface.
07/14/2020 07:52:33 - WARNING - transformers.training_args - Using deprecated `--per_gpu_train_batch_size` argument which will be removed in a future version. Using `--per_device_train_batch_size` is preferred.
07/14/2020 07:52:33 - WARNING - transformers.training_args - Using deprecated `--per_gpu_train_batch_size` argument which will be removed in a future version. Using `--per_device_train_batch_size` is preferred.
07/14/2020 07:52:33 - INFO - transformers.trainer - ***** Running training *****
07/14/2020 07:52:33 - INFO - transformers.trainer - Num examples = 101259
07/14/2020 07:52:33 - INFO - transformers.trainer - Num Epochs = 3
07/14/2020 07:52:33 - INFO - transformers.trainer - Instantaneous batch size per device = 8
07/14/2020 07:52:33 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 8
07/14/2020 07:52:33 - INFO - transformers.trainer - Gradient Accumulation steps = 1
07/14/2020 07:52:33 - INFO - transformers.trainer - Total optimization steps = 37974
Epoch: 0% 0/3 [00:00<?, ?it/s]
Iteration: 0% 0/12658 [00:00<?, ?it/s]Traceback (most recent call last):
File "transformers/examples/language-modeling/run_language_modeling.py", line 296, in <module>
main()
File "transformers/examples/language-modeling/run_language_modeling.py", line 260, in main
trainer.train(model_path=model_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 492, in train
for step, inputs in enumerate(epoch_iterator):
File "/usr/local/lib/python3.6/dist-packages/tqdm/std.py", line 1104, in __iter__
for obj in iterable:
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 345, in __next__
data = self._next_data()
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 385, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.6/dist-packages/transformers/data/datasets/language_modeling.py", line 75, in __getitem__
return torch.tensor(self.examples[i], dtype=torch.long)
TypeError: an integer is required (got type NoneType)
Epoch: 0% 0/3 [00:00<?, ?it/s]
Iteration: 0% 0/12658 [00:00<?, ?it/s]
```
## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1+cu101 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5736/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5735 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5735/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5735/comments | https://api.github.com/repos/huggingface/transformers/issues/5735/events | https://github.com/huggingface/transformers/pull/5735 | 656,435,428 | MDExOlB1bGxSZXF1ZXN0NDQ4NzM1MjMw | 5,735 | Create README.md (Model card for Norod78/hewiki-articles-distilGPT2py-il) | {
"login": "Norod",
"id": 3617152,
"node_id": "MDQ6VXNlcjM2MTcxNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3617152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Norod",
"html_url": "https://github.com/Norod",
"followers_url": "https://api.github.com/users/Norod/followers",
"following_url": "https://api.github.com/users/Norod/following{/other_user}",
"gists_url": "https://api.github.com/users/Norod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Norod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Norod/subscriptions",
"organizations_url": "https://api.github.com/users/Norod/orgs",
"repos_url": "https://api.github.com/users/Norod/repos",
"events_url": "https://api.github.com/users/Norod/events{/privacy}",
"received_events_url": "https://api.github.com/users/Norod/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5735?src=pr&el=h1) Report\n> Merging [#5735](https://codecov.io/gh/huggingface/transformers/pull/5735?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cd30f98fd24837f285cfc221b91cfa66b1b38c32&el=desc) will **increase** coverage by `0.48%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5735?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5735 +/- ##\n==========================================\n+ Coverage 78.02% 78.51% +0.48% \n==========================================\n Files 146 146 \n Lines 26055 26055 \n==========================================\n+ Hits 20329 20456 +127 \n+ Misses 5726 5599 -127 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5735?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `63.55% <0.00%> (-31.78%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `79.94% <0.00%> (-6.27%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `95.23% <0.00%> (+10.71%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5735?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5735?src=pr&el=footer). Last update [cd30f98...148adfb](https://codecov.io/gh/huggingface/transformers/pull/5735?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi @Norod, this is our first model for Hebrew, that's awesome – thanks for sharing.\r\n\r\nDo you think you'd be up for adding default example inputs to all models for Hebrew? If you are, just open a PR against [this file](https://github.com/huggingface/widgets-server/blob/master/DefaultWidget.ts).",
"(also cc'ing @JetRunner)",
"Hello @julien-c \r\nThank you for having such awesome models and community. \r\nWhile my contribution is the first GPT2 model in Hebrew, there are several ones which were contributed by \"Helsinki-NLP\" for translating to Hebrew and from Hebrew. For example: https://huggingface.co/Helsinki-NLP/opus-mt-he-de?text=%D7%A9%D7%9C%D7%95%D7%9D \r\nThere is also one BERT model contributed by \"TurkuNLP\" [TurkuNLP/wikibert-base-he-cased](https://huggingface.co/TurkuNLP/wikibert-base-he-cased) that can generate masked predictions. \r\n\r\nThank you for pointing out the 'default example inputs' file, I will have a look.",
"@Norod Good point! We now have a clearer webpage that lists (mono-lingual) models by language: see e.g. this link for Hebrew https://huggingface.co/languages#he"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | Model card for Norod78/hewiki-articles-distilGPT2py-il
A tiny GPT2 model for generating Hebrew text. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5735/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5735/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5735",
"html_url": "https://github.com/huggingface/transformers/pull/5735",
"diff_url": "https://github.com/huggingface/transformers/pull/5735.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5735.patch",
"merged_at": 1594738245000
} |
https://api.github.com/repos/huggingface/transformers/issues/5734 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5734/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5734/comments | https://api.github.com/repos/huggingface/transformers/issues/5734/events | https://github.com/huggingface/transformers/pull/5734 | 656,343,180 | MDExOlB1bGxSZXF1ZXN0NDQ4NjU4ODQy | 5,734 | Fix typo (model saving TF) | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5734?src=pr&el=h1) Report\n> Merging [#5734](https://codecov.io/gh/huggingface/transformers/pull/5734?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f0bda06f43a0d5e4ef80ad0f1812027b658b724d&el=desc) will **increase** coverage by `0.26%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5734?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5734 +/- ##\n==========================================\n+ Coverage 77.05% 77.31% +0.26% \n==========================================\n Files 146 146 \n Lines 26012 26012 \n==========================================\n+ Hits 20043 20111 +68 \n+ Misses 5969 5901 -68 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5734?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `16.53% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <0.00%> (-0.29%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5734?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5734?src=pr&el=footer). Last update [f0bda06...74755d7](https://codecov.io/gh/huggingface/transformers/pull/5734?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5734/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5734",
"html_url": "https://github.com/huggingface/transformers/pull/5734",
"diff_url": "https://github.com/huggingface/transformers/pull/5734.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5734.patch",
"merged_at": 1595861716000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5733 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5733/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5733/comments | https://api.github.com/repos/huggingface/transformers/issues/5733/events | https://github.com/huggingface/transformers/pull/5733 | 656,322,496 | MDExOlB1bGxSZXF1ZXN0NDQ4NjQxOTQ3 | 5,733 | DataParallel fixes | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5733?src=pr&el=h1) Report\n> Merging [#5733](https://codecov.io/gh/huggingface/transformers/pull/5733?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c3c61ea01733403210a1d159114e8c3d042dabb7&el=desc) will **increase** coverage by `1.34%`.\n> The diff coverage is `50.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5733?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5733 +/- ##\n==========================================\n+ Coverage 77.22% 78.57% +1.34% \n==========================================\n Files 146 146 \n Lines 26012 26012 \n==========================================\n+ Hits 20088 20439 +351 \n+ Misses 5924 5573 -351 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5733?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5733/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <50.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5733/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5733/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `63.55% <0.00%> (-31.78%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5733/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.71% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5733/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5733/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5733?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5733?src=pr&el=footer). Last update [c3c61ea...4d61a1e](https://codecov.io/gh/huggingface/transformers/pull/5733?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Way better fix, LGTM \r\n\r\nDid the multigpu test pass?",
"> Way better fix, LGTM\r\n\r\nWell, I followed @sgugger and @thomwolf's breadcrumbs, so that was easy.\r\n\r\n> Did the multigpu test pass?\r\n\r\nYes. And I retested the glue_run.py on multi-gpu machine.\r\n",
"Pushed another fix for https://github.com/huggingface/transformers/issues/5693#issuecomment-659564678"
] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | 1. switched to a more precise check as suggested by @thomwolf
```diff
- if self.args.n_gpu > 1:
+ if isinstance(model, nn.DataParallel):
```
discussion: https://github.com/huggingface/transformers/issues/5693#issuecomment-657937349
2. fix tests - require the same fixup under DataParallel as the training module fix merged earlier today:
https://github.com/huggingface/transformers/pull/5685
discussion: https://github.com/huggingface/transformers/issues/5693#issuecomment-657938856
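For illustration only (a hypothetical helper, not the actual Trainer code), the pattern the precise check enables:
```python
import torch.nn as nn

def unwrap(model):
    # When wrapped, the real model lives under .module; checking the
    # wrapper type directly is more reliable than inferring wrapping
    # from args.n_gpu > 1.
    return model.module if isinstance(model, nn.DataParallel) else model
```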
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5733/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5733",
"html_url": "https://github.com/huggingface/transformers/pull/5733",
"diff_url": "https://github.com/huggingface/transformers/pull/5733.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5733.patch",
"merged_at": 1595251752000
} |
https://api.github.com/repos/huggingface/transformers/issues/5732 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5732/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5732/comments | https://api.github.com/repos/huggingface/transformers/issues/5732/events | https://github.com/huggingface/transformers/pull/5732 | 656,313,333 | MDExOlB1bGxSZXF1ZXN0NDQ4NjM0MzU2 | 5,732 | Add `power` argument for TF PolynomialDecay | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for the PR!!\r\n\r\nCan you just go a bit forward and create a parameter in `training_args_tf.py` for this?",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5732?src=pr&el=h1) Report\n> Merging [#5732](https://codecov.io/gh/huggingface/transformers/pull/5732?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f0bda06f43a0d5e4ef80ad0f1812027b658b724d&el=desc) will **increase** coverage by `1.06%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5732?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5732 +/- ##\n==========================================\n+ Coverage 77.05% 78.12% +1.06% \n==========================================\n Files 146 146 \n Lines 26012 26012 \n==========================================\n+ Hits 20043 20321 +278 \n+ Misses 5969 5691 -278 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5732?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `57.65% <ø> (ø)` | |\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `16.53% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5732?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5732?src=pr&el=footer). Last update [f0bda06...bdb7613](https://codecov.io/gh/huggingface/transformers/pull/5732?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"How can I see where my format is wrong ?",
"Can you run in that order:\r\n```\r\nisort --recursive examples templates tests src utils\r\nblack --line-length 119 --target-version py35 examples templates tests src utils\r\n```\r\n\r\nAnd then push.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hello @Colanim!\r\n\r\nCan you rebase on master, and I will merge once done!",
"Nice! Can you run a `make style` in order to fix the code quality test."
] | 1,594 | 1,601 | 1,601 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5732/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5732",
"html_url": "https://github.com/huggingface/transformers/pull/5732",
"diff_url": "https://github.com/huggingface/transformers/pull/5732.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5732.patch",
"merged_at": 1601889390000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5731 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5731/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5731/comments | https://api.github.com/repos/huggingface/transformers/issues/5731/events | https://github.com/huggingface/transformers/pull/5731 | 656,304,568 | MDExOlB1bGxSZXF1ZXN0NDQ4NjI3NTIy | 5,731 | [fix] mbart_en_ro_generate test now identical to fairseq | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think CI is spurious.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5731?src=pr&el=h1) Report\n> Merging [#5731](https://codecov.io/gh/huggingface/transformers/pull/5731?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c3c61ea01733403210a1d159114e8c3d042dabb7&el=desc) will **increase** coverage by `1.21%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5731?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5731 +/- ##\n==========================================\n+ Coverage 77.22% 78.43% +1.21% \n==========================================\n Files 146 146 \n Lines 26012 26012 \n==========================================\n+ Hits 20088 20403 +315 \n+ Misses 5924 5609 -315 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5731?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5731/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5731/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5731/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5731?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5731?src=pr&el=footer). Last update [c3c61ea...608369d](https://codecov.io/gh/huggingface/transformers/pull/5731?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | violentele -> violenţa
The slow test was previously failing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5731/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5731/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5731",
"html_url": "https://github.com/huggingface/transformers/pull/5731",
"diff_url": "https://github.com/huggingface/transformers/pull/5731.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5731.patch",
"merged_at": 1594721544000
} |
https://api.github.com/repos/huggingface/transformers/issues/5730 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5730/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5730/comments | https://api.github.com/repos/huggingface/transformers/issues/5730/events | https://github.com/huggingface/transformers/issues/5730 | 656,277,438 | MDU6SXNzdWU2NTYyNzc0Mzg= | 5,730 | Using pipeline('ner'), partial tokens returned when grouped_entities=True | {
"login": "JamesDeAntonis",
"id": 33379057,
"node_id": "MDQ6VXNlcjMzMzc5MDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/33379057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JamesDeAntonis",
"html_url": "https://github.com/JamesDeAntonis",
"followers_url": "https://api.github.com/users/JamesDeAntonis/followers",
"following_url": "https://api.github.com/users/JamesDeAntonis/following{/other_user}",
"gists_url": "https://api.github.com/users/JamesDeAntonis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JamesDeAntonis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JamesDeAntonis/subscriptions",
"organizations_url": "https://api.github.com/users/JamesDeAntonis/orgs",
"repos_url": "https://api.github.com/users/JamesDeAntonis/repos",
"events_url": "https://api.github.com/users/JamesDeAntonis/events{/privacy}",
"received_events_url": "https://api.github.com/users/JamesDeAntonis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,600 | 1,600 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): pipeline('ner', grouped_entities=True)
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import pipeline
ner = pipeline('ner', grouped_entities=True)
ner("the sapodilla tree is native to Central America")
```
## Expected behavior
The output reports "##di" as one of the named entities. It doesn't seem like partial WordPiece tokens should ever be returned as predicted named entities on their own. Instead, I imagine the desired result is that either the entire word "sapodilla" is identified as an entity group or nothing is. Is this a bug, or was this quirk consciously allowed?
As a side note, a similar quirk is that something like "U.S." occasionally yields just "U" or "S" as individual named entities where "U.S." is desired; I consider this related to the issue above.
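For what it's worth, a rough client-side sketch of the behavior I'd expect (this papers over the symptom rather than fixing the grouping logic): re-attach any group whose text still starts with the WordPiece continuation marker to the preceding group.
```python
from transformers import pipeline

ner = pipeline("ner", grouped_entities=True)

def merge_continuations(groups):
    merged = []
    for g in groups:
        if g["word"].startswith("##") and merged:
            # "##di" is a continuation of the previous surface word,
            # so glue it back on instead of reporting it alone.
            merged[-1]["word"] += g["word"][2:]
        else:
            merged.append(g)
    return merged

print(merge_continuations(ner("the sapodilla tree is native to Central America")))
```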
## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-5.3.0-1032-azure-x86_64-with-glibc2.10
- Python version: 3.8.1
- PyTorch version (GPU?): 1.5.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5730/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5729 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5729/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5729/comments | https://api.github.com/repos/huggingface/transformers/issues/5729/events | https://github.com/huggingface/transformers/issues/5729 | 656,276,886 | MDU6SXNzdWU2NTYyNzY4ODY= | 5,729 | [Feature request] Pass any Iterable to tokenizer.__call__() | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This should be handle in `nlp` in my opinion.\r\n\r\nIt's related to https://github.com/huggingface/nlp/issues/387\r\n\r\ncc @lhoestq ",
"Nice find on the `nlp` side! There are also other use cases when users might want to pass in a NumPy array, or other type of Iterable. Any reason we shouldn't extend to all Iterables like https://github.com/huggingface/nlp/pull/370?",
"The fast tokenizers only support python inputs at the moment. We could change it to allow numpy arrays as well but I would expect this to be potentially a significant work to update.",
"Ah yes, Rust's static typing makes this more complex. And my own benchmarks show significant memory usage when casting from a numpy array to python list. Fixing the `nlp` output format should work well then.",
"I just did the change. Let me know if it's good for you :)",
"Thanks for the fix @lhoestq! That does indeed resolve the need for a list comprehension, but I'm still hitting the same speed bottleneck. It seems that the list comprehension is just moved inside of the `to_pylist()` function. Here's a reproducible benchmark:\r\n\r\n```python\r\nimport nlp\r\nfrom transformers import BertTokenizerFast\r\n\r\ntokenizer = BertTokenizerFast.from_pretrained(\"bert-base-uncased\")\r\n\r\ndset_size = 10000\r\nmax_seq_length = 512\r\ndset = nlp.Dataset.from_dict(\r\n {\"examples\": [[str(i) for i in range(max_seq_length)] for _ in range(dset_size)]}\r\n)\r\n\r\ndset = dset.map(\r\n lambda batch: tokenizer(\r\n batch[\"examples\"], is_pretokenized=True, # rather than [ex for ex in batch[\"examples\"]]\r\n ),\r\n batched=True,\r\n remove_columns=[\"examples\"],\r\n)\r\n```\r\n\r\nThis takes 37 seconds to run, processing around 270 examples/second, or 3.7 seconds/batched iteration. At this pace it takes around 1 hour to encode 10GB of text, such as Wikipedia, and even longer for a larger dataset like C4.\r\n\r\nI would love to take full use of the tokenizers functionality of \"they can encode 1GB of text in ~20sec on a standard server's CPU\". That would allow encoding Wikipedia in 2 minutes rather than an hour. Are there any further improvements that would un-bottleneck the batched map function?",
"The bottleneck is indeed the conversion of arrow types to python lists.\r\nI've been looking for a faster way to do it but I couldn't find a satisfactory solution.\r\n\r\nIf we manage to be full rust/c++ on this we could achieve full speed:\r\n- We could add support for numpy arrays as input for tokenizers (arrow to numpy is very fast)\r\n- Or we could leverage arrow's rust API to do the processing in rust\r\nBoth options are not trivial though.\r\n\r\nIf we do so, we can expect to have the tokenization as the bottleneck, instead of the conversion from arrow to python lists.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,601 | 1,601 | CONTRIBUTOR | null | # 🚀 Feature request
Currently `tokenizer.__call__()` accepts `List[List[str]]` for pre-tokenized inputs. It should also accept `List[np.ndarray[str]]` (and ideally any `Iterable`).
## Motivation
The `nlp` library stores a batch of pre-tokenized strings as `List[np.ndarray[str]]` after creating them with `dset.map(batched=True)`. Currently it requires a memory-intensive `tokenizer([list(ex) for ex in batch])` to pass these pre-tokenized strings into `batch_encode_plus`.
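A small sketch of the current workaround versus the proposed call (the ndarray batch mimics what `nlp`'s `dset.map(batched=True)` yields):
```python
import numpy as np
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
batch = [np.array(["hello", "world"]), np.array(["foo", "bar"])]

# Today: every ndarray must be copied into a Python list first.
enc = tokenizer([list(ex) for ex in batch], is_pretokenized=True)

# Proposed: accept any Iterable[str] per example directly.
# enc = tokenizer(batch, is_pretokenized=True)
```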
## Your contribution
I could contribute this.
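For concreteness, a minimal sketch of the current workaround versus the proposed call (the tokenizer choice and the batch contents below are illustrative, not taken from a real pipeline):

```python
import numpy as np
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

# A batch as `nlp` hands it back: a list of numpy arrays of pre-tokenized strings.
batch = [np.array(["hello", "world"]), np.array(["good", "morning", "everyone"])]

# Current workaround: copy every row into a Python list first (memory-intensive).
encodings = tokenizer([list(ex) for ex in batch], is_pretokenized=True)

# Proposed: accept the numpy rows directly and skip the extra copy.
# encodings = tokenizer(batch, is_pretokenized=True)
```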
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5729/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5729/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5728 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5728/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5728/comments | https://api.github.com/repos/huggingface/transformers/issues/5728/events | https://github.com/huggingface/transformers/pull/5728 | 656,257,626 | MDExOlB1bGxSZXF1ZXN0NDQ4NTkwMzMw | 5,728 | Return tokens from tokenizer.__call__() | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5728?src=pr&el=h1) Report\n> Merging [#5728](https://codecov.io/gh/huggingface/transformers/pull/5728?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c3c61ea01733403210a1d159114e8c3d042dabb7&el=desc) will **increase** coverage by `1.18%`.\n> The diff coverage is `50.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5728?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5728 +/- ##\n==========================================\n+ Coverage 77.22% 78.41% +1.18% \n==========================================\n Files 146 146 \n Lines 26012 26014 +2 \n==========================================\n+ Hits 20088 20399 +311 \n+ Misses 5924 5615 -309 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5728?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <ø> (ø)` | |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `93.57% <50.00%> (-0.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.96% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.36% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5728?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5728?src=pr&el=footer). Last update [c3c61ea...d91c382](https://codecov.io/gh/huggingface/transformers/pull/5728?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi Jared,\r\n\r\nWe are currently trying to reduce the number of user-facing methods in the tokenizer so I would be in favor of extending `tokenize` to accept batches as well. This can be done by having `tokenize` call `__call__` which accept both single examples and batches and extracting the tokens from the results.\r\n\r\nIt's already what happening in `encode` if you dive in the code (`encode` is calling the full tokenization pipeline in `encode_plus` and filtering the output to keep only the tokens).",
"Thanks for explaining your thinking! What about adding a `return_tokens` argument to `tokenizer.__call__`? `encoding.tokens` is the only attribute in an `Encoding` that can't be accessed directly from that method.",
"Oh yes, actually you can already do `.tokens(index_in_the_batch)` on the output of `__call__` but I see the docstring was missed and it is thus not in the doc currently, we will add it (and the similar method `.words(index_in_the_batch)`).\r\n\r\nIt's here in the `BatchEncoding` class (which is the output of the encoding methods): https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py#L221\r\n\r\nSo you can do:\r\n```python\r\nfrom transformers import AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained('bert-base-cased', use_fast=True)\r\nbatch = ['hello how are you?', 'good morning everyone']\r\nencodings = tokenizer(batch)\r\n>>> encodings.tokens(1)\r\n['[CLS]', 'good', 'morning', 'everyone', '[SEP]']\r\n>>> encodings.tokens(0)\r\n['[CLS]', 'hello', 'how', 'are', 'you', '?', '[SEP]']\r\n```\r\n\r\nIs it what you were looking for?\r\n\r\nIt currently only work for \"fast\" tokenizers (i.e. most of the tokenizers except the sentencepiece ones but they should be added not too far in the future)",
"Somewhat. I still have to do `[encodings.tokens(i) for i in range(len(encodings))]`, but that's fine. I also got direct access to the underlying Rust tokenizer via `tokenizer._tokenizer.encode_batch(batch[\"sentences\"])`, so that will do. Thanks!"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | Currently, pre-tokenizing (not encoding tokens to ids, just generating the tokens) a batch of strings requires a manual for-loop. I'm adding a method to call the underlying Rust batch_encode implementation, which runs about 2x faster than a Python for-loop. I was expecting an even greater speedup, so if there's any way this could be made more efficient I would love to hear it.
Before:
```python
batch = ["Sentence 1", "Sentence 2"]
tokenized_batch = [tokenizer.tokenize(ex) for ex in batch]
# [["Sen", "##tence", "1"], ["Sen", "##tence", "2"]
```
Now:
```python
tokenized_batch = tokenizer.tokenize_batch(batch)
# [["Sen", "##tence", "1"], ["Sen", "##tence", "2"]
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5728/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5728",
"html_url": "https://github.com/huggingface/transformers/pull/5728",
"diff_url": "https://github.com/huggingface/transformers/pull/5728.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5728.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5727 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5727/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5727/comments | https://api.github.com/repos/huggingface/transformers/issues/5727/events | https://github.com/huggingface/transformers/issues/5727 | 656,256,843 | MDU6SXNzdWU2NTYyNTY4NDM= | 5,727 | t5 model card | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | Add Model Card for all t5 checkpoints.
cc @clmnt | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5727/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5726 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5726/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5726/comments | https://api.github.com/repos/huggingface/transformers/issues/5726/events | https://github.com/huggingface/transformers/issues/5726 | 656,230,983 | MDU6SXNzdWU2NTYyMzA5ODM= | 5,726 | Finetuning GPT2 with Custom Loss | {
"login": "aclifton314",
"id": 53267795,
"node_id": "MDQ6VXNlcjUzMjY3Nzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/53267795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aclifton314",
"html_url": "https://github.com/aclifton314",
"followers_url": "https://api.github.com/users/aclifton314/followers",
"following_url": "https://api.github.com/users/aclifton314/following{/other_user}",
"gists_url": "https://api.github.com/users/aclifton314/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aclifton314/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aclifton314/subscriptions",
"organizations_url": "https://api.github.com/users/aclifton314/orgs",
"repos_url": "https://api.github.com/users/aclifton314/repos",
"events_url": "https://api.github.com/users/aclifton314/events{/privacy}",
"received_events_url": "https://api.github.com/users/aclifton314/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Saw your question on `discussion.huggingface.co` => thanks for posting it there. We are trying to handle these kinds of questions (longer questions / very researchy bugs/problems in the forum) - so let's move it there :-) ",
"https://discuss.huggingface.co/t/finetuning-gpt2-with-user-defined-loss/163/12?u=patrickvonplaten"
] | 1,594 | 1,594 | 1,594 | NONE | null | ## System Info
- Ubuntu 20.04
- Pytorch: 1.5.1+cpu
- Transformers: 3.0.2
- Python: 3.7.6
## Details
Ultimately, I would like to finetune GPT2 on my dataset using a custom loss from an `NGrams` model I have created. Here is what I have for the model:
```python
from transformers import GPT2LMHeadModel
from FeatureExtraction.NGrams import *
class GPT2FinetunedWithNgrams(GPT2LMHeadModel):
def __init__(self, ngrams_model_path):
super().from_pretrained('gpt2')
self.ngrams_model = NGrams(ngrams_model_path)
def forward(
self,
input_ids=None,
past=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
labels=None,
use_cache=None,
output_attentions=None,
output_hidden_states=None,
return_tuple=None,
):
return_tuple = return_tuple if return_tuple is not None else self.config.use_return_tuple
transformer_outputs = self.transformer(
input_ids,
past=past,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
use_cache=use_cache,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_tuple=return_tuple,
)
hidden_states = transformer_outputs[0]
lm_logits = self.lm_head(hidden_states)
#use gpt2 to generate a span of text based off input_ids?
#gpt2_sent = ???
loss = self.ngrams_model.sentence_loss(gpt2_sent)
return (loss, lm_logits)
```
and here is my training script using Transformers `Trainer`:
```python
from text_gen_w_transformers.finetune_gpt2 import GPT2FinetunedWithNgrams
from transformers import Trainer, TrainingArguments
model = GPT2FinetunedWithNgrams('/path/to/ngrams/model.pkl')
training_args = TrainingArguments(
output_dir='/path/to/finetuned_gpt2',
do_train=True,
per_device_train_batch_size=16,
learning_rate=1e-3,
num_train_epochs=1,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=?????
)
trainer.train()
```
My questions are:
1. You can see from the `#gpt2_sent = ???` comment in the model code that I presume this is the place where I would generate a gpt2 sequence based off this version of gpt2 that is currently being finetuned. However, I am not sure what the best way to go about doing this is. Any recommendations?
2. In the training script, I am using the `Trainer` module. However, I don't understand what the `train_dataset` parameter is in `Trainer`. I have a csv file that contains one sequence per line, but I have a feeling I need to construct a `Dataset` object or something.
3. I haven't tried to run this code because I need to fill in the above 2 parts, but I also think I'm not setting any of the parameters for `transformer_outputs`. It looks like they are set to `None` and I don't know if that will be problematic. Any thoughts on this?
I've been reading through the documentation and really like the library. I'm also new to it and pytorch so I apologize if my questions are pretty basic. Thanks in advance for your help!
**EDIT**
When I run `model = GPT2FinetunedWithNgrams('/path/to/ngrams/model.pkl')`, I just get repeated printouts of the GPT2Config object, so I don't think `super().from_pretrained('gpt2')` is the right approach for loading a pretrained model when subclassing.
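For reference, a minimal sketch of the usual subclassing pattern, plus one stock option for the `train_dataset` question; the `NGrams` wiring and the file paths are illustrative assumptions, not a tested solution:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer, LineByLineTextDataset
from FeatureExtraction.NGrams import NGrams

class GPT2FinetunedWithNgrams(GPT2LMHeadModel):
    def __init__(self, config):
        # __init__ only builds the architecture from the config;
        # the pretrained weights are loaded by from_pretrained below.
        super().__init__(config)
        self.ngrams_model = None  # attached after loading (illustrative)

# from_pretrained instantiates the subclass and loads the gpt2 weights into it.
model = GPT2FinetunedWithNgrams.from_pretrained("gpt2")
model.ngrams_model = NGrams("/path/to/ngrams/model.pkl")

# One sequence per line -> a Dataset usable as Trainer's train_dataset.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
train_dataset = LineByLineTextDataset(
    tokenizer=tokenizer, file_path="/path/to/sequences.csv", block_size=128
)
```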
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5726/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5726/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5725 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5725/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5725/comments | https://api.github.com/repos/huggingface/transformers/issues/5725/events | https://github.com/huggingface/transformers/pull/5725 | 656,174,683 | MDExOlB1bGxSZXF1ZXN0NDQ4NTIwMzc3 | 5,725 | TPU CI testing | {
"login": "zcain117",
"id": 14796584,
"node_id": "MDQ6VXNlcjE0Nzk2NTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/14796584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zcain117",
"html_url": "https://github.com/zcain117",
"followers_url": "https://api.github.com/users/zcain117/followers",
"following_url": "https://api.github.com/users/zcain117/following{/other_user}",
"gists_url": "https://api.github.com/users/zcain117/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zcain117/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zcain117/subscriptions",
"organizations_url": "https://api.github.com/users/zcain117/orgs",
"repos_url": "https://api.github.com/users/zcain117/repos",
"events_url": "https://api.github.com/users/zcain117/events{/privacy}",
"received_events_url": "https://api.github.com/users/zcain117/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@LysandreJik I'm wondering if CircleCI does not run the pending changes to `.circleci/config.yml` if the changes came from a PR from a forked repo.\r\n\r\nWhen testing on my private repo, I used a branch on the main repo and CircleCI did include the pending changes when running. I've seen lots of differences between branches on main repo and forked repo PRs when it comes to CircleCI and Github Actions.\r\n\r\nThe action item might be to remake this PR as a branch on the repo rather than a forked PR if you'd like to see the `job` run before submit.",
"\r\n\r\n\r\n> @LysandreJik I'm wondering if CircleCI does not run the pending changes to `.circleci/config.yml` if the changes came from a PR from a forked repo.\r\n> \r\n> When testing on my private repo, I used a branch on the main repo and CircleCI did include the pending changes when running. I've seen lots of differences between branches on main repo and forked repo PRs when it comes to CircleCI and Github Actions.\r\n> \r\n> The action item might be to remake this PR as a branch on the repo rather than a forked PR if you'd like to see the `job` run before submit.\r\n\r\nI think the forked repo is irrelevant. I was able to run new CircleCI changes in a similar PR for PyTorch Lightning: https://github.com/PyTorchLightning/pytorch-lightning/pull/2486\r\n",
"Looks like this rebase went wrong - recreated the change in https://github.com/huggingface/transformers/pull/6158"
] | 1,594 | 1,596 | 1,596 | CONTRIBUTOR | null | Run TPU CI testing using CircleCI.
I sent a guide in Slack with the steps needed on the owner side for CircleCI and Google Cloud to make this work (setting env vars, creating a GKE cluster, and populating the dataset).
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5725/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5725",
"html_url": "https://github.com/huggingface/transformers/pull/5725",
"diff_url": "https://github.com/huggingface/transformers/pull/5725.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5725.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5724 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5724/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5724/comments | https://api.github.com/repos/huggingface/transformers/issues/5724/events | https://github.com/huggingface/transformers/issues/5724 | 656,144,660 | MDU6SXNzdWU2NTYxNDQ2NjA= | 5,724 | T5 ONNX Export Test Failing on GPU | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | https://github.com/huggingface/transformers/runs/863647649?check_suite_focus=true

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5724/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5723 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5723/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5723/comments | https://api.github.com/repos/huggingface/transformers/issues/5723/events | https://github.com/huggingface/transformers/issues/5723 | 656,142,497 | MDU6SXNzdWU2NTYxNDI0OTc= | 5,723 | Fix slow test_enro_generate | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | see https://user-images.githubusercontent.com/6045025/87226810-6103b400-c364-11ea-875a-dcbd7c1e49ca.png | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5723/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5722 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5722/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5722/comments | https://api.github.com/repos/huggingface/transformers/issues/5722/events | https://github.com/huggingface/transformers/issues/5722 | 656,119,783 | MDU6SXNzdWU2NTYxMTk3ODM= | 5,722 | Cannot preprocess WNUT'17 dataset for token-classification | {
"login": "kushalj001",
"id": 32245327,
"node_id": "MDQ6VXNlcjMyMjQ1MzI3",
"avatar_url": "https://avatars.githubusercontent.com/u/32245327?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kushalj001",
"html_url": "https://github.com/kushalj001",
"followers_url": "https://api.github.com/users/kushalj001/followers",
"following_url": "https://api.github.com/users/kushalj001/following{/other_user}",
"gists_url": "https://api.github.com/users/kushalj001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kushalj001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kushalj001/subscriptions",
"organizations_url": "https://api.github.com/users/kushalj001/orgs",
"repos_url": "https://api.github.com/users/kushalj001/repos",
"events_url": "https://api.github.com/users/kushalj001/events{/privacy}",
"received_events_url": "https://api.github.com/users/kushalj001/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@kushalj001 It may take a while to preprocess the whole dataset. Not sure if you waited long enough. Check if your have enough RAM when you are doing this since if the dataset is too large you might run out of memory.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,602 | 1,602 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BERT
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [x] the official example scripts: I am trying to run the run_ner.py script for the WNUT’17 dataset. I followed the preprocessing steps mentioned in the README. I just downloaded the dev dataset. The command
`python3 scripts/preprocess.py data_wnut_17/dev.txt.tmp $BERT_MODEL $MAX_LENGTH > data_wnut_17/dev.txt`
does not work and my terminal hangs.
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: Token Classification
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Move into the `token-classification` folder in the `examples` directory.
2. Run the commands for the WNUT’17 dataset as mentioned in the README.
```
mkdir -p data_wnut_17
curl -L 'https://github.com/leondz/emerging_entities_17/raw/master/emerging.dev.conll' | tr '\t' ' ' > data_wnut_17/dev.txt.tmp
export MAX_LENGTH=128
export BERT_MODEL=bert-large-cased
```
3. The terminal hangs on the next command
`python3 scripts/preprocess.py data_wnut_17/dev.txt.tmp $BERT_MODEL $MAX_LENGTH > data_wnut_17/dev.txt`
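For context, the preprocessing step mostly re-wraps sentences so that the subword count per example stays under `MAX_LENGTH`; the first run therefore downloads the `bert-large-cased` vocabulary and then tokenizes token by token, both of which can look like a hang on a slow connection or machine. A rough sketch of the logic (not the exact script):

```python
import sys
from transformers import AutoTokenizer

dataset, model_name, max_len = sys.argv[1], sys.argv[2], int(sys.argv[3])
tokenizer = AutoTokenizer.from_pretrained(model_name)  # first call downloads the vocab
max_len -= tokenizer.num_special_tokens_to_add()

subword_len = 0
with open(dataset) as f:
    for line in f:
        line = line.rstrip()
        if not line:  # empty line marks a sentence boundary, reset the counter
            print(line)
            subword_len = 0
            continue
        token = line.split()[0]
        n = len(tokenizer.tokenize(token))  # subword count for this token
        if subword_len + n > max_len:  # would overflow, start a new chunk
            print("")
            subword_len = 0
        subword_len += n
        print(line)
```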
## Expected behavior
I would expect the `preprocess.py` script to execute normally and return control of the terminal back to me. Once hung, I cannot regain control of my command line even with `ctrl-d/z/c`.
## Environment info
- `transformers` version:
- Platform: Ubuntu 18.04
- Python version: 3.7.8
- PyTorch version (GPU?): 1.5 (Yes)
- Tensorflow version (GPU?): -
- Using GPU in script?: The error does not involve GPU usage in the script.
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5722/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5721 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5721/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5721/comments | https://api.github.com/repos/huggingface/transformers/issues/5721/events | https://github.com/huggingface/transformers/issues/5721 | 656,073,054 | MDU6SXNzdWU2NTYwNzMwNTQ= | 5,721 | Unable to finetune BERT on own dataset | {
"login": "noabenefraim",
"id": 13383820,
"node_id": "MDQ6VXNlcjEzMzgzODIw",
"avatar_url": "https://avatars.githubusercontent.com/u/13383820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/noabenefraim",
"html_url": "https://github.com/noabenefraim",
"followers_url": "https://api.github.com/users/noabenefraim/followers",
"following_url": "https://api.github.com/users/noabenefraim/following{/other_user}",
"gists_url": "https://api.github.com/users/noabenefraim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/noabenefraim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/noabenefraim/subscriptions",
"organizations_url": "https://api.github.com/users/noabenefraim/orgs",
"repos_url": "https://api.github.com/users/noabenefraim/repos",
"events_url": "https://api.github.com/users/noabenefraim/events{/privacy}",
"received_events_url": "https://api.github.com/users/noabenefraim/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Seems you are using Powershell. Since this is a bash shell file, you will need to run it outside of Powershell in bash shell.\r\n\r\nHere's an example using Google Colab\r\n\r\nhttps://colab.research.google.com/github/interactive-fiction-class/interactive-fiction-class.github.io/blob/master/homeworks/language-model/hw4_transformer.ipynb\r\n\r\nSome additional examples\r\nhttps://github.com/huggingface/transformers/blob/223084e42b57cd0d8e78de38e15a42d5d6b04391/notebooks/README.md",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,600 | 1,600 | NONE | null | # ❓ Questions & Help
## Details
Hello, I am trying to use my own data to finetune transformers for summarization tasks. I have followed the instructions in the README.md by generating a text file with each article to be summarized on its own line. The text file is located in ...\workspace\hug
Below is the command that I ran and its resulting error. I couldn't find further instructions in the README, on Stack Overflow, or in current or closed issues. Could you please provide more guidance, with examples, on how to train on one's own dataset? I would greatly appreciate it. @sshleifer
Command ran + error:

Finetune.sh

Once again, I would really appreciate your help! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5721/timeline | completed | null | null |