url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/3413 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3413/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3413/comments | https://api.github.com/repos/huggingface/transformers/issues/3413/events | https://github.com/huggingface/transformers/pull/3413 | 586,999,880 | MDExOlB1bGxSZXF1ZXN0MzkzMDIzNjQ0 | 3,413 | Add t5 to pipeline(task='summarization') | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3413?src=pr&el=h1) Report\n> Merging [#3413](https://codecov.io/gh/huggingface/transformers/pull/3413?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e392ba6938f50655a195ea7ec8a260b1e9fc6058&el=desc) will **increase** coverage by `0.02%`.\n> The diff coverage is `93.75%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3413?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3413 +/- ##\n==========================================\n+ Coverage 77.56% 77.58% +0.02% \n==========================================\n Files 100 100 \n Lines 16970 16993 +23 \n==========================================\n+ Hits 13162 13184 +22 \n- Misses 3808 3809 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3413?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.71% <ø> (-0.02%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/3413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `73.05% <93.10%> (+0.52%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.44% <100.00%> (+0.52%)` | :arrow_up: |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.89% <100.00%> (+0.05%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3413?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3413?src=pr&el=footer). Last update [e392ba6...23778d1](https://codecov.io/gh/huggingface/transformers/pull/3413?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,585 | 1,585 | MEMBER | null | This PR:
- adds T5 to summarization pipelines (see the usage sketch below).
- adds warnings and better defaults to Bart/T5 summarization
- removes unnecessary assert in generate() function
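A minimal usage sketch of the feature this PR adds (assuming a `transformers` release that includes it; `t5-small` is just an example checkpoint):
```python
from transformers import pipeline

# T5 checkpoints can now be passed to the summarization pipeline directly.
summarizer = pipeline("summarization", model="t5-small", tokenizer="t5-small")
print(summarizer("My friends are cool but they eat too many carbs.",
                 min_length=5, max_length=20))
```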
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3413/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3413",
"html_url": "https://github.com/huggingface/transformers/pull/3413",
"diff_url": "https://github.com/huggingface/transformers/pull/3413.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3413.patch",
"merged_at": 1585216994000
} |
https://api.github.com/repos/huggingface/transformers/issues/3412 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3412/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3412/comments | https://api.github.com/repos/huggingface/transformers/issues/3412/events | https://github.com/huggingface/transformers/issues/3412 | 586,986,036 | MDU6SXNzdWU1ODY5ODYwMzY= | 3,412 | cannot import name 'MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING' | {
"login": "vanh17",
"id": 10501538,
"node_id": "MDQ6VXNlcjEwNTAxNTM4",
"avatar_url": "https://avatars.githubusercontent.com/u/10501538?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vanh17",
"html_url": "https://github.com/vanh17",
"followers_url": "https://api.github.com/users/vanh17/followers",
"following_url": "https://api.github.com/users/vanh17/following{/other_user}",
"gists_url": "https://api.github.com/users/vanh17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vanh17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vanh17/subscriptions",
"organizations_url": "https://api.github.com/users/vanh17/orgs",
"repos_url": "https://api.github.com/users/vanh17/repos",
"events_url": "https://api.github.com/users/vanh17/events{/privacy}",
"received_events_url": "https://api.github.com/users/vanh17/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This should be fixed on master since a8e3336a850e856188350a93e67d77c07c85b8af.\r\n\r\nFeel free to re-open if that's not the case.",
"You might want to upgrade your repo.\r\n`pip install --upgrade .`"
] | 1,585 | 1,585 | 1,585 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BERT
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQuAD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Run the run_ner.py script on examples/ner/
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3412/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3411 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3411/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3411/comments | https://api.github.com/repos/huggingface/transformers/issues/3411/events | https://github.com/huggingface/transformers/pull/3411 | 586,904,415 | MDExOlB1bGxSZXF1ZXN0MzkyOTQ1NTk3 | 3,411 | Add t5 summarization example | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> pending my comments\r\n\r\nVery much down to share the summarization code in another PR!",
"Code quality test fails because of unpinned isort library (see https://github.com/huggingface/transformers/pull/3449)"
] | 1,585 | 1,585 | 1,585 | MEMBER | null | Adds a TF 2.0 example for T5 summarization.
Adds a dataset download file via `tensorflow_datasets` and a ROUGE scorer.
The example is currently being tested with T5-large on GPU to see how the ROUGE scorer performs in comparison to the `examples/summarization/bart` ROUGE scorer. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3411/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3411",
"html_url": "https://github.com/huggingface/transformers/pull/3411",
"diff_url": "https://github.com/huggingface/transformers/pull/3411.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3411.patch",
"merged_at": 1585243076000
} |
https://api.github.com/repos/huggingface/transformers/issues/3410 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3410/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3410/comments | https://api.github.com/repos/huggingface/transformers/issues/3410/events | https://github.com/huggingface/transformers/pull/3410 | 586,900,490 | MDExOlB1bGxSZXF1ZXN0MzkyOTQyNDg0 | 3,410 | Added precisions in SciBERT-NLI model card | {
"login": "gsarti",
"id": 16674069,
"node_id": "MDQ6VXNlcjE2Njc0MDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/16674069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gsarti",
"html_url": "https://github.com/gsarti",
"followers_url": "https://api.github.com/users/gsarti/followers",
"following_url": "https://api.github.com/users/gsarti/following{/other_user}",
"gists_url": "https://api.github.com/users/gsarti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gsarti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gsarti/subscriptions",
"organizations_url": "https://api.github.com/users/gsarti/orgs",
"repos_url": "https://api.github.com/users/gsarti/repos",
"events_url": "https://api.github.com/users/gsarti/events{/privacy}",
"received_events_url": "https://api.github.com/users/gsarti/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | Sorry that I have to do a second PR for this model card, but I forgot to include some details about the training process that are undoubtedly useful in order to reproduce my results!
- Added training time and training hardware
- Added lowercasing and Max. Seq. Length to parameters table | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3410/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3410",
"html_url": "https://github.com/huggingface/transformers/pull/3410",
"diff_url": "https://github.com/huggingface/transformers/pull/3410.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3410.patch",
"merged_at": 1585062117000
} |
https://api.github.com/repos/huggingface/transformers/issues/3409 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3409/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3409/comments | https://api.github.com/repos/huggingface/transformers/issues/3409/events | https://github.com/huggingface/transformers/pull/3409 | 586,875,329 | MDExOlB1bGxSZXF1ZXN0MzkyOTIyMDk0 | 3,409 | Add right model and tokenizer path in example | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3409/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3409",
"html_url": "https://github.com/huggingface/transformers/pull/3409",
"diff_url": "https://github.com/huggingface/transformers/pull/3409.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3409.patch",
"merged_at": 1585063813000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3408 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3408/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3408/comments | https://api.github.com/repos/huggingface/transformers/issues/3408/events | https://github.com/huggingface/transformers/pull/3408 | 586,850,275 | MDExOlB1bGxSZXF1ZXN0MzkyOTAxNTA3 | 3,408 | [model_cards] 🇹🇷 Add new BERTurk models | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3408?src=pr&el=h1) Report\n> Merging [#3408](https://codecov.io/gh/huggingface/transformers/pull/3408?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e392ba6938f50655a195ea7ec8a260b1e9fc6058&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3408?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3408 +/- ##\n==========================================\n- Coverage 77.56% 77.55% -0.01% \n==========================================\n Files 100 100 \n Lines 16970 16970 \n==========================================\n- Hits 13162 13161 -1 \n- Misses 3808 3809 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3408?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3408/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.58% <0.00%> (-0.14%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3408?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3408?src=pr&el=footer). Last update [e392ba6...756792d](https://codecov.io/gh/huggingface/transformers/pull/3408?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,585 | 1,585 | COLLABORATOR | null | Hi,
this PR adds three new BERT models for Turkish:
* `dbmdz/bert-base-turkish-uncased` - uncased model with a vocab size of 32k
* `dbmdz/bert-base-turkish-128k-cased` - cased model with a vocab size of 128k
* `dbmdz/bert-base-turkish-128k-uncased` - uncased model with a vocab size of 128k
Models (incl. `tokenizer_config.json`) are already uploaded to the model hub :)
Results are coming soon in the [BERTurk repository](https://github.com/stefan-it/turkish-bert)! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3408/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3408",
"html_url": "https://github.com/huggingface/transformers/pull/3408",
"diff_url": "https://github.com/huggingface/transformers/pull/3408.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3408.patch",
"merged_at": 1585063747000
} |
https://api.github.com/repos/huggingface/transformers/issues/3407 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3407/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3407/comments | https://api.github.com/repos/huggingface/transformers/issues/3407/events | https://github.com/huggingface/transformers/issues/3407 | 586,784,494 | MDU6SXNzdWU1ODY3ODQ0OTQ= | 3,407 | AdamW in HuggingFace is different from AdamW in Pytorch | {
"login": "songsuoyuan",
"id": 1378976,
"node_id": "MDQ6VXNlcjEzNzg5NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1378976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/songsuoyuan",
"html_url": "https://github.com/songsuoyuan",
"followers_url": "https://api.github.com/users/songsuoyuan/followers",
"following_url": "https://api.github.com/users/songsuoyuan/following{/other_user}",
"gists_url": "https://api.github.com/users/songsuoyuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/songsuoyuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songsuoyuan/subscriptions",
"organizations_url": "https://api.github.com/users/songsuoyuan/orgs",
"repos_url": "https://api.github.com/users/songsuoyuan/repos",
"events_url": "https://api.github.com/users/songsuoyuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/songsuoyuan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@songsuoyuan , if you notice the line:\r\n p.data.mul_(1 - group['lr'] * group['weight_decay'])\r\nThe multiplication factor is (1 - group['lr'] * group['weight_decay']) .\r\nAll subsequent first and second order moment calculations are not using p.data anymore.\r\nThis means we would get the same result if we had skipped that multiplication and introduce an addition operation at the end if weight decay > 0 with p.data._mul(-group['lr'] * group['weight_decay']) and this what has been done in the hugginFace implementation as well.\r\nSo essentially both are same.\r\n\r\nAlso in the paper, in-fact the weight decay term is introduced at end ( line-12 :Algorithm-2). Decay term in line-6 corresponds to L2 regularization which is not used here.\r\nTherefore it looks to me both the implementation are the same and reflect what {ilya,fh}@ proposed in the paper.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I find this question too. Two codes are obviously different. \r\nBecause the `p.data` in huggingface has changed through `p.data.addcdiv_(-step_size, exp_avg, denom)`\r\nBut I can't understand why.",
"*bump*",
"Update: they are indeed the same. PyTorch's implementation is just too confusing to understand.",
"They are not equivalent. This should be reported as a bug, but I see the huggingface AdamW has been deprecated.",
"bump again.\r\nI see old code from researcher on github use AdamW with huggingface scheduler\r\n\r\n```\r\nfrom pytorch_transformers import AdamW, WarmupLinearSchedule\r\n```\r\n\r\nShould I replace AdamW of huggingface to AdamW of pytorch ?\r\n\r\n```\r\nfrom torch.optim import AdamW\r\nfrom pytorch_transformers import WarmupLinearSchedule\r\n```\r\n\r\nAny advise ?\r\n"
] | 1,585 | 1,670 | 1,590 | NONE | null | # ❓ Question
I just noticed that the implementation of AdamW in HuggingFace is different from the one in PyTorch. The HuggingFace AdamW first applies the gradient update and then the weight decay. However, in the paper (Decoupled Weight Decay Regularization, link: https://arxiv.org/abs/1711.05101) and in the PyTorch implementation, AdamW first applies the weight decay and then the gradient update.
I was wondering if the two approaches are the same. Thanks! (In my opinion, they are not the same procedure.)
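A minimal numerical check (a sketch assuming only `torch`; the values are made up) makes the difference between the two update orders concrete:
```python
import torch

# One parameter and one precomputed Adam step: step = step_size * m / denom.
p0 = torch.tensor([1.0])
step, lr, wd = 0.1, 1e-3, 0.01

# HuggingFace order: Adam update first, then decay the *updated* weight.
p_hf = (p0 - step) - lr * wd * (p0 - step)

# PyTorch order: decay the *original* weight, then apply the Adam update.
p_pt = p0 * (1 - lr * wd) - step

print((p_hf - p_pt).item())  # lr * wd * step = 1e-06, i.e. not exactly zero
```
The two orders agree only up to a term of size `lr * wd * step`, which is tiny for typical hyperparameters.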
HuggingFace:
```python
for group in self.param_groups:
for p in group["params"]:
...
# Decay the first and second moment running average coefficient
# In-place operations to update the averages at the same time
exp_avg.mul_(beta1).add_(1.0 - beta1, grad)
exp_avg_sq.mul_(beta2).addcmul_(1.0 - beta2, grad, grad)
denom = exp_avg_sq.sqrt().add_(group["eps"])
step_size = group["lr"]
if group["correct_bias"]: # No bias correction for Bert
bias_correction1 = 1.0 - beta1 ** state["step"]
bias_correction2 = 1.0 - beta2 ** state["step"]
step_size = step_size * math.sqrt(bias_correction2) / bias_correction1
p.data.addcdiv_(-step_size, exp_avg, denom)
# Just adding the square of the weights to the loss function is *not*
# the correct way of using L2 regularization/weight decay with Adam,
# since that will interact with the m and v parameters in strange ways.
#
# Instead we want to decay the weights in a manner that doesn't interact
# with the m/v parameters. This is equivalent to adding the square
# of the weights to the loss with plain (non-momentum) SGD.
# Add weight decay at the end (fixed version)
if group["weight_decay"] > 0.0:
p.data.add_(-group["lr"] * group["weight_decay"], p.data)
```
Pytorch:
```python
for group in self.param_groups:
for p in group['params']:
...
# Perform stepweight decay
p.data.mul_(1 - group['lr'] * group['weight_decay'])
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
if amsgrad:
max_exp_avg_sq = state['max_exp_avg_sq']
beta1, beta2 = group['betas']
state['step'] += 1
bias_correction1 = 1 - beta1 ** state['step']
bias_correction2 = 1 - beta2 ** state['step']
# Decay the first and second moment running average coefficient
exp_avg.mul_(beta1).add_(1 - beta1, grad)
exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
if amsgrad:
# Maintains the maximum of all 2nd moment running avg. till now
torch.max(max_exp_avg_sq, exp_avg_sq, out=max_exp_avg_sq)
# Use the max. for normalizing running avg. of gradient
denom = (max_exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(group['eps'])
else:
denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(group['eps'])
step_size = group['lr'] / bias_correction1
p.data.addcdiv_(-step_size, exp_avg, denom)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3407/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3407/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3406 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3406/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3406/comments | https://api.github.com/repos/huggingface/transformers/issues/3406/events | https://github.com/huggingface/transformers/pull/3406 | 586,757,734 | MDExOlB1bGxSZXF1ZXN0MzkyODI3ODY0 | 3,406 | Model cards for CS224n SQuAD2.0 models | {
"login": "elgeish",
"id": 6879673,
"node_id": "MDQ6VXNlcjY4Nzk2NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6879673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elgeish",
"html_url": "https://github.com/elgeish",
"followers_url": "https://api.github.com/users/elgeish/followers",
"following_url": "https://api.github.com/users/elgeish/following{/other_user}",
"gists_url": "https://api.github.com/users/elgeish/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elgeish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elgeish/subscriptions",
"organizations_url": "https://api.github.com/users/elgeish/orgs",
"repos_url": "https://api.github.com/users/elgeish/repos",
"events_url": "https://api.github.com/users/elgeish/events{/privacy}",
"received_events_url": "https://api.github.com/users/elgeish/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | For the following models:
* elgeish/cs224n-squad2.0-albert-base-v2
* elgeish/cs224n-squad2.0-albert-large-v2
* elgeish/cs224n-squad2.0-albert-xxlarge-v1
* elgeish/cs224n-squad2.0-distilbert-base-uncased
* elgeish/cs224n-squad2.0-roberta-base
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3406/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3406",
"html_url": "https://github.com/huggingface/transformers/pull/3406",
"diff_url": "https://github.com/huggingface/transformers/pull/3406.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3406.patch",
"merged_at": 1585063714000
} |
https://api.github.com/repos/huggingface/transformers/issues/3405 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3405/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3405/comments | https://api.github.com/repos/huggingface/transformers/issues/3405/events | https://github.com/huggingface/transformers/pull/3405 | 586,719,590 | MDExOlB1bGxSZXF1ZXN0MzkyNzk3ODQx | 3,405 | Glue test processors and predictions | {
"login": "shoarora",
"id": 16643856,
"node_id": "MDQ6VXNlcjE2NjQzODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/16643856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shoarora",
"html_url": "https://github.com/shoarora",
"followers_url": "https://api.github.com/users/shoarora/followers",
"following_url": "https://api.github.com/users/shoarora/following{/other_user}",
"gists_url": "https://api.github.com/users/shoarora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shoarora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shoarora/subscriptions",
"organizations_url": "https://api.github.com/users/shoarora/orgs",
"repos_url": "https://api.github.com/users/shoarora/repos",
"events_url": "https://api.github.com/users/shoarora/events{/privacy}",
"received_events_url": "https://api.github.com/users/shoarora/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"hm my local isort passes even on a clean env\r\n```\r\n(transformers) shoarora@sho-5:~/transformers ‹glue-test-processors›\r\n$ make style\r\nblack --line-length 119 --target-version py35 examples templates tests src utils\r\nAll done! ✨ 🍰 ✨\r\n243 files left unchanged.\r\nisort --recursive examples templates tests src utils\r\n(transformers) shoarora@sho-5:~/transformers ‹glue-test-processors›\r\n$ which isort\r\n/home/shoarora/miniconda3/envs/transformers/bin/isort\r\n```\r\n\r\nUltimately, I ran it in the `circleci/python:3.6` docker image to get the correct formatting. This disagrees with what happens when I style locally in a clean env. ",
"Hi @shoarora, this is a good addition but in the meantime we updated the run_glue script (and associated utilities) quite a bit in #3800.\r\n\r\nWould you like to take a stab at updating this (probably opening a new PR)? The `Trainer`'s predict method accepts non-labelled datasets now so it should be pretty straightforward to hook it. Let us know, otherwise we'll do it down the line.",
"Would love to see this updated and merged :)",
"@ZhaofengWu You can check out https://github.com/huggingface/transformers/pull/4463 which we are going to take a look at soon",
"Thanks!",
"Closed by #4463"
] | 1,585 | 1,590 | 1,590 | CONTRIBUTOR | null | Address #3176
- Adds a function to load the test dataset for each GLUE task processor.
- Updates the `run_glue.py` example script to add a `--do_test` flag for producing test-set predictions in a `.tsv` file, submittable to the [GLUE leaderboard](https://gluebenchmark.com/).
- Adds a couple extra feature flags to `run_glue.py` that don't need to stay. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3405/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3405/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3405",
"html_url": "https://github.com/huggingface/transformers/pull/3405",
"diff_url": "https://github.com/huggingface/transformers/pull/3405.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3405.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3404 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3404/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3404/comments | https://api.github.com/repos/huggingface/transformers/issues/3404/events | https://github.com/huggingface/transformers/issues/3404 | 586,656,991 | MDU6SXNzdWU1ODY2NTY5OTE= | 3,404 | [Bart]example---BartForConditionalGeneration | {
"login": "qiunlp",
"id": 24563279,
"node_id": "MDQ6VXNlcjI0NTYzMjc5",
"avatar_url": "https://avatars.githubusercontent.com/u/24563279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qiunlp",
"html_url": "https://github.com/qiunlp",
"followers_url": "https://api.github.com/users/qiunlp/followers",
"following_url": "https://api.github.com/users/qiunlp/following{/other_user}",
"gists_url": "https://api.github.com/users/qiunlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qiunlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qiunlp/subscriptions",
"organizations_url": "https://api.github.com/users/qiunlp/orgs",
"repos_url": "https://api.github.com/users/qiunlp/repos",
"events_url": "https://api.github.com/users/qiunlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/qiunlp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | NONE | null | **when I run your example:**
```python
from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig
model = BartForConditionalGeneration.from_pretrained('bart-large-cnn')
tokenizer = BartTokenizer.from_pretrained('bart-large-cnn')
ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs."
inputs = tokenizer.batch_encode_plus([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt')
print(inputs)
summary_ids = model.generate(inputs['input_ids'], attention_mask=inputs['attention_mask'], num_beams=4, max_length=5)
print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids])
```
**model:**
https://s3.amazonaws.com/models.huggingface.co/bert/facebook/bart-large-cnn/pytorch_model.bin
**the results:**
```
{'input_ids': tensor([[ 0, 1308, 964, 32, 3035, 53, 51, 3529, 350, 171,
33237, 4, 2]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
Traceback (most recent call last):
  File "/home/qwh/桌面/OpenNMT/bart.py", line 17, in <module>
    summary_ids = model.generate(inputs['input_ids'], attention_mask=inputs['attention_mask'], num_beams=4, max_length=5)
  File "/home/qwh/.local/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad
    return func(*args, **kwargs)
```
**TypeError: generate() got an unexpected keyword argument 'attention_mask'**
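A workaround sketch for releases whose `generate()` does not accept `attention_mask` (for a single unpadded input the mask is all ones, so dropping it should not change the output):
```python
# Workaround sketch: drop the attention_mask argument on older releases.
# A single unpadded input has an all-ones mask anyway.
summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=5)
print([tokenizer.decode(g, skip_special_tokens=True) for g in summary_ids])
```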
thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3404/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3403 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3403/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3403/comments | https://api.github.com/repos/huggingface/transformers/issues/3403/events | https://github.com/huggingface/transformers/pull/3403 | 586,590,983 | MDExOlB1bGxSZXF1ZXN0MzkyNjk4NDk5 | 3,403 | [examples] Use AutoModels in more examples | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | MEMBER | null | still to-do (non-exhaustive list):
- [ ] run_multiple_choice
- [ ] run_xnli
- [ ] test_hans
- [ ] run_mmimdb
- [ ] (maybe) run_generation | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3403/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3403",
"html_url": "https://github.com/huggingface/transformers/pull/3403",
"diff_url": "https://github.com/huggingface/transformers/pull/3403.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3403.patch",
"merged_at": 1585008676000
} |
https://api.github.com/repos/huggingface/transformers/issues/3402 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3402/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3402/comments | https://api.github.com/repos/huggingface/transformers/issues/3402/events | https://github.com/huggingface/transformers/pull/3402 | 586,556,097 | MDExOlB1bGxSZXF1ZXN0MzkyNjY5ODQz | 3,402 | [WIP] seq2seq example | {
"login": "mgoldey",
"id": 659477,
"node_id": "MDQ6VXNlcjY1OTQ3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/659477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mgoldey",
"html_url": "https://github.com/mgoldey",
"followers_url": "https://api.github.com/users/mgoldey/followers",
"following_url": "https://api.github.com/users/mgoldey/following{/other_user}",
"gists_url": "https://api.github.com/users/mgoldey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mgoldey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mgoldey/subscriptions",
"organizations_url": "https://api.github.com/users/mgoldey/orgs",
"repos_url": "https://api.github.com/users/mgoldey/repos",
"events_url": "https://api.github.com/users/mgoldey/events{/privacy}",
"received_events_url": "https://api.github.com/users/mgoldey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"As a non-blocking question, I do note that a lot of the examples use argparse to parse comparatively long lists of arguments. I've maintained the extant style in this PR to avoid causing noise and confusion Would it be acceptable if I broke with this style to use a JSON file to store all the arguments for an experiment?",
"Hi @mgoldey, sorry for only responding now. Thanks a lot for adding a seq2seq example :-) I will take a look early next week and maybe we can have a quick chat how to merge this PR and https://github.com/huggingface/transformers/pull/3383. \r\n\r\n",
"That sounds good. I'm still tweaking things on my end for accuracy and improved logic as I get more familiar with the code base here. I'll see if I can rebase of #3383 by then, depending on my other workload. Feel free to reach out via google hangouts if you're comfortable.",
"Sorry, to answer only now! I'll will soon add a Encoder-Decoder google colab that shows how to use seq2seq ",
"Thanks - fine to close. We've moved forward without using seq2seq due to poor overall accuracy with the scale of data in place."
] | 1,585 | 1,591 | 1,591 | CONTRIBUTOR | null | This PR presents an example seq2seq use case and bug fixes necessary for this to execute with reasonable accuracy.
The utils_seq2seq.py file defines the data format for training data, and the run_seq2seq.py file takes training, development, and test data and produces a model. The README.md discusses how to execute this toy problem. The specific toy problem in use here is formatting a date string to the American style, which is a trivial example. On my local setup using GPUs, this example executes within 5 minutes. A production model would require substantially more training.
I welcome feedback about how to strengthen performance here and the best route to increase testing.
This relies on a few bug fixes which have been incorporated in this branch
- Without a fix for #3038, PreTrainedEncoderDecoder won't instantiate at all.
- Without a fix for #2435, BERT models fail completely on this use case as the BERT decoder isn't instantiated correctly without CrossAttention in that case.
- ~I strongly suspect that the input to the decoder in the PreTrainedEncoderDecoder class is incorrect as present in the code base, and commit https://github.com/huggingface/transformers/commit/9fcf73afbcfa18918234592039da7bd409820431 has a proposed fix. It doesn't make sense to have the expected token ids as input to the decoder when the decoder needs to learn how to decode from the embeddings.~ Incomplete understanding - will fix | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3402/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/3402/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3402",
"html_url": "https://github.com/huggingface/transformers/pull/3402",
"diff_url": "https://github.com/huggingface/transformers/pull/3402.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3402.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3401 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3401/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3401/comments | https://api.github.com/repos/huggingface/transformers/issues/3401/events | https://github.com/huggingface/transformers/issues/3401 | 586,508,865 | MDU6SXNzdWU1ODY1MDg4NjU= | 3,401 | added_tokens.json is used for splitting texts | {
"login": "chuanli11",
"id": 15967400,
"node_id": "MDQ6VXNlcjE1OTY3NDAw",
"avatar_url": "https://avatars.githubusercontent.com/u/15967400?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chuanli11",
"html_url": "https://github.com/chuanli11",
"followers_url": "https://api.github.com/users/chuanli11/followers",
"following_url": "https://api.github.com/users/chuanli11/following{/other_user}",
"gists_url": "https://api.github.com/users/chuanli11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chuanli11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chuanli11/subscriptions",
"organizations_url": "https://api.github.com/users/chuanli11/orgs",
"repos_url": "https://api.github.com/users/chuanli11/repos",
"events_url": "https://api.github.com/users/chuanli11/events{/privacy}",
"received_events_url": "https://api.github.com/users/chuanli11/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
},
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,591 | 1,591 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Run this script to save a pre-trained vocabulary, add some tokens via `added_tokens.json`, then create a new tokenizer with the combined vocabulary and use it to tokenize a sentence.
```
import os
from transformers import BertTokenizer
model_name = 'bert-base-uncased'
tokenizer_path = 'tmp'
if not os.path.exists(tokenizer_path):
os.makedirs(tokenizer_path)
tokenizer = BertTokenizer.from_pretrained(model_name)
tokenizer.save_vocabulary(tokenizer_path)
with open(tokenizer_path + '/added_tokens.json', 'w') as f:
f.write('{"ver": 30522, "rw": 30523}')
tokenizer = BertTokenizer.from_pretrained(tokenizer_path)
s = "i want to overwrite ubuntu with windows"
a = tokenizer.tokenize(s)
print(a)
```
Output run 1:
```
['i', 'want', 'to', 'o', '##ve', 'rw', 'rite', 'u', '##bu', '##nt', '##u', 'with', 'windows']
```
Output run 2:
```
['i', 'want', 'to', 'o', 'ver', 'write', 'u', '##bu', '##nt', '##u', 'with', 'windows']
```
Cause of the problem:
`added_tokens.json` is [merged](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils.py#L668) with `all_special_tokens`, and then used to [split](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils.py#L815) the input text. Since the merged tokens are stored in an unordered set, the splitting process is non-deterministic from run to run.
For example, this is how the text is split in different runs:
Split Run 1 (use `rw`):
```
['i want to ove', 'rw', 'rite ubuntu with windows']
```
Split Run 2 (use `ver`):
```
['i want to o', 'ver', 'write ubuntu with windows']
```
Possible solution:
Instead of `self.unique_added_tokens_encoder`, use `set(self.all_special_tokens)` to [split the text](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils.py#L814)
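A standalone sketch of a deterministic splitter (a hypothetical helper, independent of the library internals), which tries candidate tokens longest-first so set iteration order no longer matters:
```python
def split_on_tokens(text, split_tokens):
    # Deterministic: try tokens longest-first instead of iterating a set,
    # so repeated runs always split the text the same way.
    pieces = [text]
    for tok in sorted(set(split_tokens), key=lambda t: (-len(t), t)):
        new_pieces = []
        for piece in pieces:
            if piece in split_tokens:
                new_pieces.append(piece)  # keep already-extracted tokens intact
                continue
            subs = piece.split(tok)
            for i, sub in enumerate(subs):
                if sub:
                    new_pieces.append(sub)
                if i < len(subs) - 1:
                    new_pieces.append(tok)
        pieces = new_pieces
    return pieces

print(split_on_tokens("i want to overwrite ubuntu", {"ver", "rw"}))
# ['i want to o', 'ver', 'write ubuntu'] on every run
```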
## Expected behavior
Using `added_tokens.json` to split the text seems to be a bug to me. I expect only a small set of special tokens in the `special_tokens_map.json` should be used for this purpose.
In general it is helpful for a tokenizer to behave deterministically across runs. Otherwise it hurts certain downstream tasks such as [sentence embedding](https://github.com/UKPLab/sentence-transformers), because one sentence can be encoded in many different ways. This is particularly problematic if the number of added tokens is large.
## Environment info
```
wget https://download.pytorch.org/whl/cu100/torch-1.4.0%2Bcu100-cp36-cp36m-linux_x86_64.whl
wget https://files.pythonhosted.org/packages/7e/90/6141bf41f5655c78e24f40f710fdd4f8a8aff6c8b7c6f0328240f649bdbe/torchvision-0.5.0-cp36-cp36m-manylinux1_x86_64.whl
virtualenv -p /usr/bin/python3.6 venv && . venv/bin/activate && find . -maxdepth 1 -name "*.whl" | xargs pip install && pip install -r requirements.txt
```
requirements.txt:
```
transformers==2.5.1
tensorboardX==2.0
scikit-learn==0.22.2
```
- `transformers` version: 2.5.1
- Platform: Ubuntu
- Python version: 3.6.9
- PyTorch version (GPU?): Y
- Tensorflow version (GPU?): N
- Using GPU in script?: Y
- Using distributed or parallel set-up in script?: N
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3401/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3400 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3400/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3400/comments | https://api.github.com/repos/huggingface/transformers/issues/3400/events | https://github.com/huggingface/transformers/pull/3400 | 586,488,024 | MDExOlB1bGxSZXF1ZXN0MzkyNjE0Mjcy | 3,400 | [Bart: example] drop columns that are exclusively pad_token_id from input_ids | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3400?src=pr&el=h1) Report\n> Merging [#3400](https://codecov.io/gh/huggingface/transformers/pull/3400?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f7dcf8fcea4d486544f221032625a97ad7dc5405&el=desc) will **not change** coverage by `%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3400?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3400 +/- ##\n=======================================\n Coverage 77.55% 77.55% \n=======================================\n Files 100 100 \n Lines 16970 16970 \n=======================================\n Hits 13161 13161 \n Misses 3809 3809 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3400?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3400?src=pr&el=footer). Last update [f7dcf8f...9cfd3da](https://codecov.io/gh/huggingface/transformers/pull/3400?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,584 | 1,585 | 1,585 | CONTRIBUTOR | null | reasoning: These columns slow down computation, but do not change output.
impact: this reduces the runtime to compute EVAL on the CNN examples from 2h to 1:37 before any other changes.
I'm open to putting this as a method on `PretrainedTokenizer` if others find it useful.
@joeddav you might find this useful.
### Code for easy copy paste
```python
def trim_batch(
input_ids, pad_token_id, attention_mask=None,
):
"""Remove columns that are populated exclusively by pad_token_id"""
keep_column_mask = input_ids.ne(pad_token_id).any(dim=0)
if attention_mask is None:
return input_ids[:, keep_column_mask]
else:
return (input_ids[:, keep_column_mask], attention_mask[:, keep_column_mask])
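# A quick usage sketch (made-up tensors, pad_token_id assumed to be 0):
#   batch = torch.tensor([[5, 6, 0, 0],
#                         [7, 0, 0, 0]])
#   trim_batch(batch, pad_token_id=0)  # -> tensor([[5, 6], [7, 0]])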
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3400/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3400/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3400",
"html_url": "https://github.com/huggingface/transformers/pull/3400",
"diff_url": "https://github.com/huggingface/transformers/pull/3400.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3400.patch",
"merged_at": 1585265635000
} |
https://api.github.com/repos/huggingface/transformers/issues/3399 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3399/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3399/comments | https://api.github.com/repos/huggingface/transformers/issues/3399/events | https://github.com/huggingface/transformers/issues/3399 | 586,314,065 | MDU6SXNzdWU1ODYzMTQwNjU= | 3,399 | Trying to train a GPT2 from scratch | {
"login": "CNelias",
"id": 34754896,
"node_id": "MDQ6VXNlcjM0NzU0ODk2",
"avatar_url": "https://avatars.githubusercontent.com/u/34754896?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CNelias",
"html_url": "https://github.com/CNelias",
"followers_url": "https://api.github.com/users/CNelias/followers",
"following_url": "https://api.github.com/users/CNelias/following{/other_user}",
"gists_url": "https://api.github.com/users/CNelias/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CNelias/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CNelias/subscriptions",
"organizations_url": "https://api.github.com/users/CNelias/orgs",
"repos_url": "https://api.github.com/users/CNelias/repos",
"events_url": "https://api.github.com/users/CNelias/events{/privacy}",
"received_events_url": "https://api.github.com/users/CNelias/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This [blogpost](https://huggingface.co/blog/how-to-train) might be interesting, have you seen it?",
"Yes, saddly the part in which I am interested in, namely, instantiating and training/testing from scratch with my own data are not almost not or not at all described.\r\n\r\nDo you know if it is possible to feed individual tensors to the model ? And if so, how should the dimensions (batch, sequence etc..) be ordered ? I would like to write my own training function for more flexibility.",
"To answer my own question, everything can be found in the code, by reading the docstrings : https://github.com/huggingface/transformers/blob/v2.5.1/src/transformers/modeling_gpt2.py#L99\r\n",
"@johncwok Didu succeed in training the GPT2 model on your own dataset from scratch? ",
"I did, but it didn't produce very good results. Either my data is not good enough or I need more layers, but I have reach the max. of what my computer capacity can allow me ",
"Hi @johncwok, I plan to train gpt2 on my data. Do you mind to share your training script, along with the raw data and the code to preprocess it?"
] | 1,584 | 1,625 | 1,585 | NONE | null | Hi!
I am trying to use a GPT2 architecture for musical applications and consequently need to train it from scratch. After a bit of googling I found that issue #1714 already had "solved" the question, but when I try to run
```Python
import numpy as np

from transformers import GPT2Config, GPT2Model
NUMLAYER = 4
NUMHEAD = 4
SIZEREDUCTION = 10 #the factor by which we reduce the size of the velocity argument.
VELSIZE = int(np.floor(127/SIZEREDUCTION)) + 1
SEQLEN=40 #size of data sequences.
EMBEDSIZE = 5
config = GPT2Config(vocab_size = VELSIZE, n_positions = SEQLEN, n_embd = EMBEDSIZE, n_layer = NUMLAYER, n_ctx = SEQLEN, n_head = NUMHEAD)
model = GPT2Model(config)
```
I get the following error:
```Python
Traceback (most recent call last):
File "<ipython-input-7-b043a7a2425f>", line 1, in <module>
runfile('C:/Users/cnelias/Desktop/PHD/Swing project/code/script/GPT2.py', wdir='C:/Users/cnelias/Desktop/PHD/Swing project/code/script')
File "C:\Users\cnelias\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 786, in runfile
execfile(filename, namespace)
File "C:\Users\cnelias\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/cnelias/Desktop/PHD/Swing project/code/script/GPT2.py", line 191, in <module>
model = GPT2Model(config)
File "C:\Users\cnelias\Anaconda3\lib\site-packages\transformers\modeling_gpt2.py", line 355, in __init__
self.h = nn.ModuleList([Block(config.n_ctx, config, scale=True) for _ in range(config.n_layer)])
File "C:\Users\cnelias\Anaconda3\lib\site-packages\transformers\modeling_gpt2.py", line 355, in <listcomp>
self.h = nn.ModuleList([Block(config.n_ctx, config, scale=True) for _ in range(config.n_layer)])
File "C:\Users\cnelias\Anaconda3\lib\site-packages\transformers\modeling_gpt2.py", line 223, in __init__
self.attn = Attention(nx, n_ctx, config, scale)
File "C:\Users\cnelias\Anaconda3\lib\site-packages\transformers\modeling_gpt2.py", line 109, in __init__
assert n_state % config.n_head == 0
```
What does it mean and how can I solve it?
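For context, a hedged sketch of what the failing assert enforces: in GPT-2's multi-head attention each head gets `n_embd // n_head` dimensions, so `n_embd` must be divisible by `n_head`; with the values above, `5 % 4 != 0`. An illustrative fix (EMBEDSIZE = 8 is just an example value):
```python
# Hedged sketch: choose an embedding size divisible by the head count.
EMBEDSIZE = 8  # illustrative value; 8 % 4 == 0, unlike 5 % 4
assert EMBEDSIZE % NUMHEAD == 0  # the same check that fails inside Attention

config = GPT2Config(vocab_size=VELSIZE, n_positions=SEQLEN, n_embd=EMBEDSIZE,
                    n_layer=NUMLAYER, n_ctx=SEQLEN, n_head=NUMHEAD)
model = GPT2Model(config)
```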
Also, more generally, is there documentation on how to do a forward call with GPT2? Can I define my own ```train()``` function or do I have to use the model's built-in function? Am I forced to use a ```Dataset``` for training, or can I feed it individual tensors?
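For reference, a minimal forward-call sketch, assuming the shapes documented in the model docstrings (batch dimension first; all values illustrative):
```python
import torch

input_ids = torch.randint(0, VELSIZE, (8, SEQLEN))  # (batch_size, sequence_length)
outputs = model(input_ids)
hidden_states = outputs[0]  # (batch_size, sequence_length, n_embd)
```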
I looked for it but couldn't find answers to these in the docs, but maybe I missed something.
EDIT: Yes, I have already read the blogpost on ```huggingface.co``` but it omits too much information and detail to be useful for my application :( | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3399/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3398 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3398/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3398/comments | https://api.github.com/repos/huggingface/transformers/issues/3398/events | https://github.com/huggingface/transformers/pull/3398 | 586,256,931 | MDExOlB1bGxSZXF1ZXN0MzkyNDI0MjUw | 3,398 | [Bart] Fix: put dummy_inputs on correct device | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Merging with deep suspicion that circleci failure is spurious."
] | 1,584 | 1,585 | 1,585 | CONTRIBUTOR | null | This fixes `test_dummy_inputs`, which was failing on GPU because dummy inputs were put on CPU even if the model was on GPU. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3398/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3398",
"html_url": "https://github.com/huggingface/transformers/pull/3398",
"diff_url": "https://github.com/huggingface/transformers/pull/3398.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3398.patch",
"merged_at": 1585262530000
} |
https://api.github.com/repos/huggingface/transformers/issues/3397 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3397/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3397/comments | https://api.github.com/repos/huggingface/transformers/issues/3397/events | https://github.com/huggingface/transformers/issues/3397 | 586,242,351 | MDU6SXNzdWU1ODYyNDIzNTE= | 3,397 | Supported language information by model | {
"login": "alexcombessie",
"id": 4739848,
"node_id": "MDQ6VXNlcjQ3Mzk4NDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4739848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexcombessie",
"html_url": "https://github.com/alexcombessie",
"followers_url": "https://api.github.com/users/alexcombessie/followers",
"following_url": "https://api.github.com/users/alexcombessie/following{/other_user}",
"gists_url": "https://api.github.com/users/alexcombessie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexcombessie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexcombessie/subscriptions",
"organizations_url": "https://api.github.com/users/alexcombessie/orgs",
"repos_url": "https://api.github.com/users/alexcombessie/repos",
"events_url": "https://api.github.com/users/alexcombessie/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexcombessie/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi Alex, language support is described in the metadata in the models' [model cards](https://github.com/huggingface/transformers/tree/master/model_cards) and then rendered in a searchable way on the https://huggingface.co/models website.\r\n\r\nThe mapping is not exhaustive right now because a lot of the canonical/historical models do not have a model card yet. Feel free to create them for the models you're researching. cc @thomwolf @LysandreJik @clmnt ",
"Thanks, that's a good place to start. I am more interested in the canonical/historical models as you say. I see that some README.md in model cards have a \r\n```\r\n---\r\nlanguage:\r\n- bulgarian\r\n- czech\r\n- polish\r\n- russian\r\n- ...\r\n---\r\n```\r\nAre you OK for me to do a pull request to add these?\r\n\r\nI would also like to standardize languages using the ISO 693-1 two-letter code (https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes). Do you agree on this standard? ",
"Sounds good to me!",
"Super, I have forked the repo and created a branch. Expect a pull request in the new few weeks :) so we can close this.",
"FYI @MobiusLooper",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,591 | 1,591 | CONTRIBUTOR | null | Hi there,
Another feature/documentation request. I am evaluating the language support of all pre-trained models distributed via huggingface :)
I have quickly looked into the code and happily found that XLNet and FlauBERT models have that information: https://github.com/huggingface/transformers/search?q=lang&unscoped_q=lang
Do you plan in the short term to add an `available_languages` attribute to all pre-trained models?
If not, happy to do that investigation and share results.
Cheers,
Alex
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3397/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3396 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3396/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3396/comments | https://api.github.com/repos/huggingface/transformers/issues/3396/events | https://github.com/huggingface/transformers/issues/3396 | 586,234,245 | MDU6SXNzdWU1ODYyMzQyNDU= | 3,396 | Cannot Import from transformers | {
"login": "saurabh896",
"id": 8954341,
"node_id": "MDQ6VXNlcjg5NTQzNDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8954341?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saurabh896",
"html_url": "https://github.com/saurabh896",
"followers_url": "https://api.github.com/users/saurabh896/followers",
"following_url": "https://api.github.com/users/saurabh896/following{/other_user}",
"gists_url": "https://api.github.com/users/saurabh896/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saurabh896/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saurabh896/subscriptions",
"organizations_url": "https://api.github.com/users/saurabh896/orgs",
"repos_url": "https://api.github.com/users/saurabh896/repos",
"events_url": "https://api.github.com/users/saurabh896/events{/privacy}",
"received_events_url": "https://api.github.com/users/saurabh896/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1843765959,
"node_id": "MDU6TGFiZWwxODQzNzY1OTU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Installation",
"name": "Installation",
"color": "bfdadc",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"In order to import the tensorflow models, you need to have TF2+ installed. Please update your environment info if you *do* have TF2 installed in the environment in which you're running your script.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,590 | 1,590 | NONE | null | # 🐛 Bug
## Information
I am trying to import TFGPT2LMHeadModel from transformers. Python gives the error below:

`cannot import name 'TFGPT2LMHeadModel' from 'transformers'`
## To reproduce
```python
import tensorflow as tf
from transformers import TFGPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# add the EOS token as PAD token to avoid warnings
model = TFGPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id)

# encode context the generation is conditioned on
input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='tf')

# generate text until the output length (which includes the context length) reaches 50
greedy_output = model.generate(input_ids, max_length=50)

print("Output:\n" + 100 * '-')
print(tokenizer.decode(greedy_output[0], skip_special_tokens=True))
```
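
As noted in the comments, the `TF*` classes are only exported when TensorFlow 2+ is installed. A minimal sanity check (a hedged sketch; the version guard below is illustrative):

```python
# Hedged sketch: confirm TensorFlow 2+ is present before importing TF* classes.
import tensorflow as tf

assert int(tf.__version__.split(".")[0]) >= 2, "transformers' TF* classes need TF2+"

from transformers import TFGPT2LMHeadModel  # should import cleanly under TF2
```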
## Expected behavior
## Environment info
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3396/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3395 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3395/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3395/comments | https://api.github.com/repos/huggingface/transformers/issues/3395/events | https://github.com/huggingface/transformers/issues/3395 | 586,179,127 | MDU6SXNzdWU1ODYxNzkxMjc= | 3,395 | 🚀 Feature request Multimodal BERT Models | {
"login": "ecekt",
"id": 16474496,
"node_id": "MDQ6VXNlcjE2NDc0NDk2",
"avatar_url": "https://avatars.githubusercontent.com/u/16474496?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ecekt",
"html_url": "https://github.com/ecekt",
"followers_url": "https://api.github.com/users/ecekt/followers",
"following_url": "https://api.github.com/users/ecekt/following{/other_user}",
"gists_url": "https://api.github.com/users/ecekt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ecekt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ecekt/subscriptions",
"organizations_url": "https://api.github.com/users/ecekt/orgs",
"repos_url": "https://api.github.com/users/ecekt/repos",
"events_url": "https://api.github.com/users/ecekt/events{/privacy}",
"received_events_url": "https://api.github.com/users/ecekt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"As for guidelines about making MMBT work, here is an example on the mm-imdb dataset: https://github.com/huggingface/transformers/blob/master/examples/mm-imdb/run_mmimdb.py.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi seems like above examples folder has been removed, is it because the multi modal experiment is in intermediate stage.",
"I believe it's available here: https://github.com/huggingface/transformers/tree/master/examples/contrib/mm-imdb",
"Hey is there any progress with it soon? \r\nI find only the mm-imdb example: https://github.com/huggingface/transformers/tree/master/examples/contrib/mm-imdb\r\n\r\nYour LXMERT model receives only text features from what I see (\"visual_feats - These are currently not provided by the transformers library.\")\r\n\r\nThanks :) ",
"The example for mmbt on mm-imdb is also an invalid link now. ",
"Here is a correct link for now https://github.com/huggingface/transformers/tree/master/examples/research_projects/mm-imdb",
"> As for guidelines about making MMBT work, here is an example on the mm-imdb dataset: https://github.com/huggingface/transformers/blob/master/examples/mm-imdb/run_mmimdb.py.\r\n\r\nThe link is broken!",
"> The link is broken!\r\n\r\nSee the reply above you :) That seems to work"
] | 1,584 | 1,620 | 1,592 | NONE | null | Hello, it would be great if more **multimodal** BERT models were included in the library. I have noticed that MMBT from Facebook is provided; however, I was unable to find any guidelines about how to make it work with the help of 🤗 Transformers.
Possible models include [VilBERT](https://arxiv.org/abs/1908.02265), [VL-BERT](https://arxiv.org/abs/1908.08530), [VisualBERT](https://arxiv.org/abs/1908.03557), [VideoBERT](https://arxiv.org/abs/1904.01766), and so on.
Best regards. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3395/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3395/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3394 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3394/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3394/comments | https://api.github.com/repos/huggingface/transformers/issues/3394/events | https://github.com/huggingface/transformers/pull/3394 | 586,140,228 | MDExOlB1bGxSZXF1ZXN0MzkyMzI5MTAx | 3,394 | Add comparison table with new models | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3394/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3394",
"html_url": "https://github.com/huggingface/transformers/pull/3394",
"diff_url": "https://github.com/huggingface/transformers/pull/3394.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3394.patch",
"merged_at": 1584979824000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3393 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3393/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3393/comments | https://api.github.com/repos/huggingface/transformers/issues/3393/events | https://github.com/huggingface/transformers/pull/3393 | 586,139,041 | MDExOlB1bGxSZXF1ZXN0MzkyMzI4MTAx | 3,393 | Create README.md | {
"login": "brandenchan",
"id": 33759007,
"node_id": "MDQ6VXNlcjMzNzU5MDA3",
"avatar_url": "https://avatars.githubusercontent.com/u/33759007?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brandenchan",
"html_url": "https://github.com/brandenchan",
"followers_url": "https://api.github.com/users/brandenchan/followers",
"following_url": "https://api.github.com/users/brandenchan/following{/other_user}",
"gists_url": "https://api.github.com/users/brandenchan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brandenchan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brandenchan/subscriptions",
"organizations_url": "https://api.github.com/users/brandenchan/orgs",
"repos_url": "https://api.github.com/users/brandenchan/repos",
"events_url": "https://api.github.com/users/brandenchan/events{/privacy}",
"received_events_url": "https://api.github.com/users/brandenchan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Thanks for sharing! Any way you could format the training results as a Markdown table? Might be more readable.",
"I'll merge for now"
] | 1,584 | 1,585 | 1,585 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3393/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3393",
"html_url": "https://github.com/huggingface/transformers/pull/3393",
"diff_url": "https://github.com/huggingface/transformers/pull/3393.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3393.patch",
"merged_at": 1585656035000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3392 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3392/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3392/comments | https://api.github.com/repos/huggingface/transformers/issues/3392/events | https://github.com/huggingface/transformers/pull/3392 | 586,137,187 | MDExOlB1bGxSZXF1ZXN0MzkyMzI2NTQz | 3,392 | Add comparison table with older brother in family | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3392/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3392",
"html_url": "https://github.com/huggingface/transformers/pull/3392",
"diff_url": "https://github.com/huggingface/transformers/pull/3392.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3392.patch",
"merged_at": 1584979881000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3391 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3391/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3391/comments | https://api.github.com/repos/huggingface/transformers/issues/3391/events | https://github.com/huggingface/transformers/pull/3391 | 586,131,207 | MDExOlB1bGxSZXF1ZXN0MzkyMzIxNTcz | 3,391 | Create card for the model | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3391?src=pr&el=h1) Report\n> Merging [#3391](https://codecov.io/gh/huggingface/transformers/pull/3391?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cf72479bf11bf7fbc499a518896dfd3cafdd0b21&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3391?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3391 +/- ##\n==========================================\n+ Coverage 77.55% 77.56% +0.01% \n==========================================\n Files 100 100 \n Lines 16970 16970 \n==========================================\n+ Hits 13161 13163 +2 \n+ Misses 3809 3807 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3391?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3391/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.72% <0.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3391/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.09% <0.00%> (+0.17%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3391?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3391?src=pr&el=footer). Last update [cf72479...c941e45](https://codecov.io/gh/huggingface/transformers/pull/3391?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3391/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3391",
"html_url": "https://github.com/huggingface/transformers/pull/3391",
"diff_url": "https://github.com/huggingface/transformers/pull/3391.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3391.patch",
"merged_at": 1584979842000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3390 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3390/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3390/comments | https://api.github.com/repos/huggingface/transformers/issues/3390/events | https://github.com/huggingface/transformers/issues/3390 | 586,120,611 | MDU6SXNzdWU1ODYxMjA2MTE= | 3,390 | adding --fp16 to run_language_modeling and increase batch size but cuda out of memory error | {
"login": "mahdirezaey",
"id": 34715488,
"node_id": "MDQ6VXNlcjM0NzE1NDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/34715488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mahdirezaey",
"html_url": "https://github.com/mahdirezaey",
"followers_url": "https://api.github.com/users/mahdirezaey/followers",
"following_url": "https://api.github.com/users/mahdirezaey/following{/other_user}",
"gists_url": "https://api.github.com/users/mahdirezaey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mahdirezaey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mahdirezaey/subscriptions",
"organizations_url": "https://api.github.com/users/mahdirezaey/orgs",
"repos_url": "https://api.github.com/users/mahdirezaey/repos",
"events_url": "https://api.github.com/users/mahdirezaey/events{/privacy}",
"received_events_url": "https://api.github.com/users/mahdirezaey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi all\r\n\r\nI am using colab , 1 GPU , Tesla P100-PCIE-16GB\r\n\r\ncode below ran OK \r\n\r\n!python /content/transformers/examples/run_language_modeling.py \\\r\n --output_dir=/content/outputs \\\r\n --model_type=bert \\\r\n --model_name_or_path=bert-base-cased \\\r\n --num_train_epochs 1\\\r\n --do_train \\\r\n --do_eval \\\r\n --per_gpu_train_batch_size 152\\\r\n --train_data_file=/content/input_data/trn.txt \\\r\n --eval_data_file=/content/input_data/val.txt \\\r\n --evaluate_during_training \\\r\n --learning_rate 1e-4\\\r\n --overwrite_output_dir\\\r\n --tokenizer_name /content/token/ \\\r\n --block_size 64\\\r\n --mlm\r\n\r\n\r\n\r\n\r\n\r\n(and batch_size 152 was max num i was able to run without cuda out of memory )\r\nthen installing apex by \r\n\r\n\r\n\r\n\r\n%%writefile setup.sh\r\n\r\nexport CUDA_HOME=/usr/local/cuda-10.1\r\ngit clone https://github.com/NVIDIA/apex\r\npip install -v --no-cache-dir --global-option=\"--cpp_ext\" --global-option=\"--cuda_ext\" ./apex\r\n\r\n!sh setup.sh\r\n\r\n\r\n\r\nthen adding \" --fp16\\\" to code but i was not able to increase batch size , even abit\r\n\r\ndo you know that ?\r\n@thomwolf , @VictorSanh , @aaugustin , @BramVanroy , @julien-c , @LysandreJik",
"is it also the case with GTX 1080 , any one tried ?\r\n",
"and one more thing , \r\ndoes any function in those scripts , concatenate the short lines to each other ?\r\nin order not to be enforced to pad each line so much ",
"Please don't mass-tag people — thanks.",
"solve :\r\nthat was because i was using p100"
] | 1,584 | 1,587 | 1,587 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3390/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3389 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3389/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3389/comments | https://api.github.com/repos/huggingface/transformers/issues/3389/events | https://github.com/huggingface/transformers/issues/3389 | 586,113,668 | MDU6SXNzdWU1ODYxMTM2Njg= | 3,389 | 🐛Bugs in run_tf_ner.py | {
"login": "jia-zhuang",
"id": 32734827,
"node_id": "MDQ6VXNlcjMyNzM0ODI3",
"avatar_url": "https://avatars.githubusercontent.com/u/32734827?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jia-zhuang",
"html_url": "https://github.com/jia-zhuang",
"followers_url": "https://api.github.com/users/jia-zhuang/followers",
"following_url": "https://api.github.com/users/jia-zhuang/following{/other_user}",
"gists_url": "https://api.github.com/users/jia-zhuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jia-zhuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jia-zhuang/subscriptions",
"organizations_url": "https://api.github.com/users/jia-zhuang/orgs",
"repos_url": "https://api.github.com/users/jia-zhuang/repos",
"events_url": "https://api.github.com/users/jia-zhuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/jia-zhuang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @jia-zhuang,\r\n\r\nAs you can see [here](https://github.com/huggingface/transformers/blob/master/examples/ner/run_tf_ner.py#L525) I update the classification layer to add a softmax as activation, then the `from_logits=True` is not necessary.",
"@jplu Thanks for your reply! I learn a lot from your code."
] | 1,584 | 1,585 | 1,585 | NONE | null | Found a bug in [run_tf_ner.py](https://github.com/huggingface/transformers/blob/master/examples/ner/run_tf_ner.py) at line 170 and 325:
```python
loss_fct = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
```
Refer to [tf.keras.losses.SparseCategoricalCrossentropy](https://www.tensorflow.org/api_docs/python/tf/keras/losses/SparseCategoricalCrossentropy):
```python
tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=False, reduction=losses_utils.ReductionV2.AUTO,
name='sparse_categorical_crossentropy'
)
```
> `from_logits`: Whether y_pred is expected to be a logits tensor. By default, we assume that y_pred encodes a probability distribution. Note: Using from_logits=True may be more numerically stable.
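A hedged sketch of the two consistent pairings (the tensors below are made up for illustration):
```python
import tensorflow as tf

y_true = tf.constant([0, 2])                               # token label ids
logits = tf.constant([[2.0, 0.1, 0.3], [0.2, 0.4, 3.0]])  # raw model outputs

# Pairing 1: raw logits + from_logits=True
loss_fct = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE)
loss_from_logits = loss_fct(y_true, logits)

# Pairing 2: probabilities (softmax already applied) + the default from_logits=False
loss_fct = tf.keras.losses.SparseCategoricalCrossentropy(
    reduction=tf.keras.losses.Reduction.NONE)
loss_from_probs = loss_fct(y_true, tf.nn.softmax(logits))

# Both yield the same per-example loss, up to numerical precision.
```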
So I think `loss_fct` should be initialized with `from_logits=True` if `TFBertForTokenClassification` just returns raw logits rather than softmax output. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3389/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3388 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3388/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3388/comments | https://api.github.com/repos/huggingface/transformers/issues/3388/events | https://github.com/huggingface/transformers/pull/3388 | 586,107,172 | MDExOlB1bGxSZXF1ZXN0MzkyMzAxODE0 | 3,388 | Lazy text dataset loading for language modelling with PyTorch | {
"login": "GCHQResearcher92457",
"id": 62057951,
"node_id": "MDQ6VXNlcjYyMDU3OTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/62057951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GCHQResearcher92457",
"html_url": "https://github.com/GCHQResearcher92457",
"followers_url": "https://api.github.com/users/GCHQResearcher92457/followers",
"following_url": "https://api.github.com/users/GCHQResearcher92457/following{/other_user}",
"gists_url": "https://api.github.com/users/GCHQResearcher92457/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GCHQResearcher92457/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GCHQResearcher92457/subscriptions",
"organizations_url": "https://api.github.com/users/GCHQResearcher92457/orgs",
"repos_url": "https://api.github.com/users/GCHQResearcher92457/repos",
"events_url": "https://api.github.com/users/GCHQResearcher92457/events{/privacy}",
"received_events_url": "https://api.github.com/users/GCHQResearcher92457/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can anyone advise on the failed tests? Seems to failing in parts of the code-base I haven't touched.",
"@GCHQResearcher92457 Yes, the failures are unrelated.",
"Most recent failures are unrelated.",
"Quick question in passing because I am working on something close, did you run some benchmark to see how this behaves speedwise?",
"> Quick question in passing because I am working on something close, did you run some benchmark to see how this behaves speedwise?\r\n\r\nI've found in practice so far that training iterations are the same speed using this method as the previous methods, i.e. the bottleneck seems to be later on. Just did some crude tests on a file of 100 lines to test only the data loading performance.\r\n\r\nInstantiating lazy dataset: 39.1 µs ± 408 ns\r\nInstantiating cached dataset: 17.4 ms ± 41.6 µs \r\nRandom access to single item (lazy): 3.34 µs ± 171 \r\nRandom access to single item (cached): 3.18 µs ± 86.3 ns\r\nCreating a single tokenized batch (lazy): 1.33 ms ± 3.5 µs\r\nCreating a single tokenized batch (cached): 81.5 µs ± 5.15 µs",
"I tested out the LazyLineByLineTextDataset and quickly ran out of memory.\r\n\r\nIt looks like linecache isn't capable of efficiently indexing into large files. My ~6GB training data causes linecache to stall & use up 7+ GB of RAM. \r\n\r\nSaw a similar issue [here](https://stackoverflow.com/questions/620367/how-to-jump-to-a-particular-line-in-a-huge-text-file/2727585). Might be better to use a system similar to the second answer in that post where you create a map of line breaks in the file and seek to them. \r\n\r\n```\r\nclass LineSeekableFile:\r\n def __init__(self, seekable):\r\n self.fin = seekable\r\n self.line_map = list() # Map from line index -> file position.\r\n self.line_map.append(0)\r\n while seekable.readline():\r\n self.line_map.append(seekable.tell())\r\n\r\n def __getitem__(self, index):\r\n # NOTE: This assumes that you're not reading the file sequentially. \r\n # For that, just use 'for line in file'.\r\n self.fin.seek(self.line_map[Index])\r\n return self.fin.readline()\r\n```",
"> I tested out the LazyLineByLineTextDataset and quickly ran out of memory.\r\n> \r\n> It looks like linecache isn't capable of efficiently indexing into large files. My ~6GB training data causes linecache to stall & use up 7+ GB of RAM.\r\n> \r\n> Saw a similar issue [here](https://stackoverflow.com/questions/620367/how-to-jump-to-a-particular-line-in-a-huge-text-file/2727585). Might be better to use a system similar to the second answer in that post where you create a map of line breaks in the file and seek to them.\r\n> \r\n> ```\r\n> class LineSeekableFile:\r\n> def __init__(self, seekable):\r\n> self.fin = seekable\r\n> self.line_map = list() # Map from line index -> file position.\r\n> self.line_map.append(0)\r\n> while seekable.readline():\r\n> self.line_map.append(seekable.tell())\r\n> \r\n> def __getitem__(self, index):\r\n> # NOTE: This assumes that you're not reading the file sequentially. \r\n> # For that, just use 'for line in file'.\r\n> self.fin.seek(self.line_map[Index])\r\n> return self.fin.readline()\r\n> ```\r\n\r\nDid you run into out-of-memory issues, or did the process simply use a lot of memory? It is likely to be the latter, and that is exactly what line_cache_ is supposed to do: it reads as much of the file into memory as it can for quick access as much as possible (considering the available memory), and then does its work.\r\n\r\nLineSeekableFile can be an alternative but definitely not a good replacement imo (it'll be slower, and expects a file handle to always be open which you often would not want).",
"> Did you run into out-of-memory issues, or did the process simply use a lot of memory? It is likely to be the latter, and that is exactly what line_cache_ is supposed to do: it reads as much of the file into memory as it can for quick access as much as possible (considering the available memory), and then does its work.\r\n> \r\n> LineSeekableFile can be an alternative but definitely not a good replacement imo (it'll be slower, and expects a file handle to always be open which you often would not want).\r\n\r\nI ran the code on a GCP VM instance with 13 GB of RAM. My RAM quickly went to 0 and I was kicked out of SSH. I was forced to restart the instance in order to regain access. \r\n\r\nFrom what I'm seeing, it seems like linecache is primarily designed to be used on Python source files, not large text files. From what I can tell, the [source code](https://github.com/python/cpython/blob/10dabbf8d2c1c929f6ac395e19c64b361bd58fdd/Lib/linecache.py#L82) reads all the lines in the file into memory, without any consideration for available memory. ",
"@ceremonious I tested this locally with a 50+GB file on my 32GB RAM system and it works as expected. Memory usage goes up to around 95% and stays there. Reproducible code:\r\n\r\n```python\r\nimport linecache\r\nimport random\r\n\r\n\r\ndef get_n_lines(fin, size=65536):\r\n # borrowed from https://stackoverflow.com/a/9631635/1150683\r\n def blocks(files):\r\n while True:\r\n b = files.read(size)\r\n if not b:\r\n break\r\n yield b\r\n\r\n with open(fin, encoding=\"utf-8\") as fhin:\r\n n_lines = sum(bl.count(\"\\n\") for bl in blocks(fhin))\r\n return n_lines\r\n\r\ndef main(fin):\r\n n_lines = get_n_lines(fin)\r\n while True:\r\n idx = random.randint(1, n_lines+1)\r\n line = linecache.getline(fin, idx)\r\n print(line)\r\n\r\n\r\nif __name__ == '__main__':\r\n f = r'path/to/huge/file.txt'\r\n main(f)\r\n```\r\n\r\nI haven't dug into the source code (though I do see a MemoryError check in it), but I have used this for many projects on our own servers and I can tell you that it works (it will utilise as much RAM as it can but won't throw OOM errors). It is good to know that this won't work well with GCP, though! A note should be included in the class's docstring.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3388?src=pr&el=h1) Report\n> Merging [#3388](https://codecov.io/gh/huggingface/transformers/pull/3388?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d22894dfd40d5c858e8398e2783545103d191b47&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3388?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3388 +/- ##\n=======================================\n Coverage 78.26% 78.26% \n=======================================\n Files 106 106 \n Lines 17964 17964 \n=======================================\n Hits 14060 14060 \n Misses 3904 3904 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3388?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3388?src=pr&el=footer). Last update [d22894d...1ead846](https://codecov.io/gh/huggingface/transformers/pull/3388?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"The merge conflicts are a bit of a mess because of datasets and collation being moved outside the main script in `master`. I've opened a new PR where the code used here has been slotted-in to the the more modular format of this script. Please see PR #4009.",
"Any updates on this PR? Lazy loading sounds like an important functionality for massive datasets.",
"@misrasaurabh1 This PR is closed. See https://github.com/huggingface/transformers/pull/4009 for the continuation."
] | 1,584 | 1,589 | 1,589 | NONE | null | #3083
Added a lazy text dataset using linecache to run_language_modeling.py. Slightly refactored collate_fn construction to accommodate the different collate functions needed for a lazy dataset vs an in-memory dataset. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3388/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3388",
"html_url": "https://github.com/huggingface/transformers/pull/3388",
"diff_url": "https://github.com/huggingface/transformers/pull/3388.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3388.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3387 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3387/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3387/comments | https://api.github.com/repos/huggingface/transformers/issues/3387/events | https://github.com/huggingface/transformers/issues/3387 | 586,072,773 | MDU6SXNzdWU1ODYwNzI3NzM= | 3,387 | Finetuning of T5 on SQuAD 1.1 including code examples | {
"login": "h19920918",
"id": 25819693,
"node_id": "MDQ6VXNlcjI1ODE5Njkz",
"avatar_url": "https://avatars.githubusercontent.com/u/25819693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h19920918",
"html_url": "https://github.com/h19920918",
"followers_url": "https://api.github.com/users/h19920918/followers",
"following_url": "https://api.github.com/users/h19920918/following{/other_user}",
"gists_url": "https://api.github.com/users/h19920918/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h19920918/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h19920918/subscriptions",
"organizations_url": "https://api.github.com/users/h19920918/orgs",
"repos_url": "https://api.github.com/users/h19920918/repos",
"events_url": "https://api.github.com/users/h19920918/events{/privacy}",
"received_events_url": "https://api.github.com/users/h19920918/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"**Update:**\r\n\r\nWhen I pre-process a token not applied lower case, its EM from the initial weights was almost 76.85. (I couldn't remember exact score.)\r\n\r\nIn the TensorFlow version, its initial weights outputs 76.30 EM.\r\nHowever, their pre-processing has been applied to lower case for a question, document, and answer.\r\n\r\nI got a EM 74.02 with the lower case at the initial step.\r\nAnd, I used an answer as a target inputs and outputs instead of using the answer from a spanning.\r\nIf an answer is in a extracted document, it will be an example in the training step.\r\n\r\nBut serious problem is to decrease the validation performance after a few training steps.\r\nWhen I validated my trained model at 100 steps, its score went down almost 62.XX.\r\n\r\nI think that a problem is one of them for the batch size or wrong pre-processing or bugs in the T5 model.\r\nFor the optimizer, I tested for AdaFactor and Adam optimizer but both results are same.\r\n\r\nI didn't understand one thing when implementing it.\r\nThe thing is that loaded pre-trained weights don't have a weight for the 'lm_head' layer.\r\nI guess that it is for an user who wants to implement own vocabulary.\r\nBut I think this is a reason for that validation accuracy is lower than the TensorFlow version at the initial step. (lm_head layer should be randomly initialized.)\r\n\r\n\r\nWhen I applied a mask for the outputs and inputs_embeds for the encoder and decoder, the validation accuracy goes up. [Value * (mask == 1).float().unsqueeze(2), features of (PAD) should be zero.]\r\nBut I have to train the T5 model more for proving whether correct or not.\r\nAnd, low learning rate is better than a learning rate from the original paper. (Original paper: 1e-3, but I used 5e-5 mentioned in the BERT.)\r\n\r\n\r\nLast, in the previous comment I forgot to write something for my inputs.\r\ninput_ids = ['(QUESTION)', Q_W1, Q_W2, ..., '(CONTEXT)', C_W1, C_W2, (EOS), '(PAD)', ...]\r\nattention_masks = [1, 1, 1, ..., 1, 1, 1, 1, ..., 0, ...]\r\ndecoder_input_ids = ['(PAD)', W1, W2, ..., '(PAD)', '(PAD)', ...]\r\ndecoder_attention_masks = [1, 1, 1, ..., 0, 0, ...]\r\nlm_labels = [W1, W2, ..., '(EOS)', '(PAD)', ...]\r\n\r\nEOS token should be added in a context.\r\nAnd, tokens of the input_ids are [question : Q_W1, Q_W2, ..., context : 'C_W1, C_W2, (EOS), (PAD)', ...]\r\n\r\nI hope it will be helpful for someone who is implementing it.\r\nAnd, I will write more about it when I finish to train my model.",
"Hi @h19920918, \r\n\r\nThanks for the in-detail report. Could you quickly post your environment information here as well? \r\nYou can simply run `python transformers-cli env` in the root folder of your cloned transformers repo and copy paste it below. \r\nAnd do you use T5 with Tensorflow or PyTorch? \r\nAlso it would be great if you could copy paste your code for the above experiment here :-) ",
"@patrickvonplaten Thank you for your answer.\r\n\r\nUnfortunately, I didn't use all codes in yours.\r\nI partly used your code to implement it.\r\n\r\nFirst, my environment is below:\r\nPython == 3.6.4\r\nPytorch == 1.4.0+cu92\r\nCUDA == 9.2\r\nCuDNN == 6 or 7? (I don't know exactly.)\r\nTransformer == 2.5.1\r\n\r\nActually, I solved the problem.\r\n\r\nPaper: \r\n T5-Small: EM: 79.10 || F1: 87.24\r\nOwn:\r\n T5-Small: EM: 79.03 || F1: 87.35\r\n\r\nI suspected four things:\r\n1. Batch size\r\n Original paper used 128 batch size to train the model, but I trained with small number of batch size due to the insufficient resources. In my training process, I trained my model with 72 batch size.\r\n\r\n2. Learning rate\r\n I adjust a learning rate from 1e-3 to 1e-4 with the AdaFactor optimizer.\r\n\r\n3. Masking for the 'inputs_embeds', 'encoder_outputs', and 'decoder_outputs'\r\n I masked for three things with [Value * (mask == 1).float().unsqueeze(2)].\r\n\r\n4. Loss scale\r\n Originally, a loss is calculated by dividing the number of tokens.\r\n But I changed this to diving the number of batch size.\r\n\r\nAdditionally, I changed pre-processing part to usage an example if an answer is in an extracted document.\r\nHowever, it can be a problem since some of the documents have an answer but it is not reasonable answer.\r\n\r\nA reason to do this, some of spanned answers are a little bit different with original answer. (e.g. answer, != answer)\r\nAnd, some of spanned answers are converted into (UNK) tokens. (I'm not sure it is fixed right thing after changing my pre-process code.)\r\n\r\nI will upload my code on the GitHub as soon as possible.",
"Great, happy that you solved it :-) \r\n\r\nI think this will be very useful for others. If you could link your uploaded GitHub code to this issue this would be very helpful :-) ",
"I upload my Github, you can see the code in https://github.com/h19920918/T5_SQuAD.\r\n\r\nBut it is quite dirty code..\r\nSo, I recommend which part you watch.\r\nAlmost implementation came from your code.\r\n\r\n1. Mask\r\nhttps://github.com/h19920918/T5_SQuAD/blob/c75d44544c3f18b87a4d8d09ed320742f9aaab36/models/modeling_t5.py#L556\r\nhttps://github.com/h19920918/T5_SQuAD/blob/c75d44544c3f18b87a4d8d09ed320742f9aaab36/models/modeling_t5.py#L659\r\n\r\n2. Loss\r\nhttps://github.com/h19920918/T5_SQuAD/blob/c75d44544c3f18b87a4d8d09ed320742f9aaab36/models/modeling_t5.py#L941\r\nhttps://github.com/h19920918/T5_SQuAD/blob/c75d44544c3f18b87a4d8d09ed320742f9aaab36/models/modeling_t5.py#L943\r\n\r\n3. Pre-processing\r\nhttps://github.com/h19920918/T5_SQuAD/blob/c75d44544c3f18b87a4d8d09ed320742f9aaab36/datasets/squad.py#L101\r\n\r\nAbove links are the modification by me.\r\nWith these modification, my model could be trained. \r\n\r\n\r\nI'm sorry not to provide clean code since I'm working on something in this code..\r\nI hope it will be helpful for someone.\r\n\r\np.s. I have to do ablation study for which part is the real problem.\r\n\r\n@patrickvonplaten I have a question.\r\nAre the T5 checkpoints pre-trained by TensorFlow version or yours?\r\nIt means I want to know whether the checkpoints are converted from somewhere or not.\r\n\r\nI forgot to write something.\r\nThe results from the initial checkpoint are same.\r\nHowever, I don't understand since the 'lm_head' layer should be initialized randomly. (I used different seed for each result.)",
"Thanks for linking your code! I think especially the pre-processing code can be very useful for others!\r\n\r\nThe T5 checkpoints are the official Google checkpoints pre-trained by the T5 team: https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints . \r\n\r\nThese checkpoints were attained by pretraining on both the unsupervised C4 dataset (using a denoising objective) and the mixed multi-task supervised dataset (see [paper](https://arxiv.org/abs/1910.10683)). The PyTorch weights were retrieved by conversion from these weights but correspond 1-to-1 to the same values as the original TF weights. \r\n\r\nDoes that make sense? ",
"Thank you for your detail answer.\r\n\r\nStill, I don't understand the pre-trained weights output the same results with different seeds.\r\nAs I know, the 'lm_head' layer should be used in the inference process for generating tokens.\r\nHowever, the layer is initialized randomly since it is not in the pre-trained weights.\r\n\r\nI guess one thing about it, where the pre-trained weights dominate all features, therefore, the outputs are same regardless of the 'lm_head' layer.\r\n\r\nIs my inference correct?",
"The `lm_head` layer corresponds to the \"inverse\" token embeddings. It is tied to the input embeddings. It should not be randomly initialized when loading weights from the pretrained models.",
"Thank you for your answer.\r\n\r\nSorry, it is my mistake."
] | 1,584 | 1,588 | 1,585 | NONE | null | Hi, I am implementing the T5 model on the SQuAD 1.1 dataset.
When I fine-tune the model with the Adam or AdaFactor optimizer, the validation accuracy goes down,
but the training accuracy goes up.
Could you give me any advice?
I feed inputs into the model as below:
input_ids = ['(QUESTION)', Q_W1, Q_W2, ..., '(CONTEXT)', C_W1, C_W2, ..., '(PAD)', ...]
attention_masks = [1, 1, 1, ..., 1, 1, 1, ..., 0, ...]
decoder_input_ids = ['(PAD)', W1, W2, ..., '(PAD)', '(PAD)', ...]
decoder_attention_masks = [1, 1, 1, ..., 0, 0, ...]
lm_labels = [W1, W2, ..., '(EOS)', '(PAD)', ...]
I matched the shapes of 'decoder_input_ids' and 'lm_labels'. (No shifting is applied.)
'(PAD)' tokens in 'lm_labels' are converted into -100 for the loss calculation.
For generation, 'decoder_input_ids' are produced by the decoder itself, apart from the initial '(PAD)' token.
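For concreteness, a rough sketch of how such tensors can be built (the 'question:'/'context:' prefixes, lengths, and padding here are simplifications, not my exact pre-processing):
```
import torch
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")

def encode(question, context, answer, max_src=512, max_tgt=32):
    src = tokenizer.encode_plus("question: " + question + " context: " + context,
                                max_length=max_src, pad_to_max_length=True)
    tgt = tokenizer.encode_plus(answer, max_length=max_tgt, pad_to_max_length=True)
    # pad positions in the labels become -100 so they are ignored by the loss
    lm_labels = [t if m == 1 else -100
                 for t, m in zip(tgt["input_ids"], tgt["attention_mask"])]
    return (torch.tensor([src["input_ids"]]), torch.tensor([src["attention_mask"]]),
            torch.tensor([tgt["input_ids"]]), torch.tensor([tgt["attention_mask"]]),
            torch.tensor([lm_labels]))
```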
The result from the 'T5-Small' pretrained weights on the dataset is:
{
"exact": 71.03122043519394,
"f1": 81.08158598580584,
"total": 10570,
"HasAns_exact": 71.03122043519394,
"HasAns_f1": 81.08158598580584,
"HasAns_total": 10570
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3387/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3386 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3386/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3386/comments | https://api.github.com/repos/huggingface/transformers/issues/3386/events | https://github.com/huggingface/transformers/issues/3386 | 585,987,679 | MDU6SXNzdWU1ODU5ODc2Nzk= | 3,386 | Model conversion from PyTorch to TF2 doesn't work properly for ALBERT | {
"login": "singletongue",
"id": 17107587,
"node_id": "MDQ6VXNlcjE3MTA3NTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/17107587?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/singletongue",
"html_url": "https://github.com/singletongue",
"followers_url": "https://api.github.com/users/singletongue/followers",
"following_url": "https://api.github.com/users/singletongue/following{/other_user}",
"gists_url": "https://api.github.com/users/singletongue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/singletongue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/singletongue/subscriptions",
"organizations_url": "https://api.github.com/users/singletongue/orgs",
"repos_url": "https://api.github.com/users/singletongue/repos",
"events_url": "https://api.github.com/users/singletongue/events{/privacy}",
"received_events_url": "https://api.github.com/users/singletongue/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Proposed a fix that didn't work out-of-the-box with official ALBERT models. Still looking into it, will keep you posted.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Closing since it seems solved in #4076. Thank you!"
] | 1,584 | 1,591 | 1,591 | CONTRIBUTOR | null | # 🐛 Bug
## Information
The model conversion script `convert_pytorch_checkpoint_to_tf2.py` does not seem to work properly for ALBERT models.
It fails on the pre-trained models officially released by Google after they are converted to PyTorch models with `convert_albert_original_tf_checkpoint_to_pytorch.py`.
## To reproduce
```
$ wget https://storage.googleapis.com/albert_models/albert_base_v2.tar.gz
$ tar xzf albert_base_v2.tar.gz
$ cd albert_base/
$ python -m transformers.convert_albert_original_tf_checkpoint_to_pytorch --tf_checkpoint_path model.ckpt-best --pytorch_dump_path ./pytorch_model.bin --albert_config_file albert_config.json
$ python -m transformers.convert_pytorch_checkpoint_to_tf2 --tf_dump_path ./ --model_type albert --pytorch_checkpoint_path ./pytorch_model.bin --config_file albert_config.json --compare_with_pt_model
...
Max absolute difference between models outputs 17.709423065185547
Traceback (most recent call last):
File "/home/m-suzuki/.pyenv/versions/3.7.4/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/m-suzuki/.pyenv/versions/3.7.4/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/m-suzuki/.pyenv/versions/Python-3.7.4/lib/python3.7/site-packages/transformers/convert_pytorch_checkpoint_to_tf2.py", line 499, in <module>
only_convert_finetuned_models=args.only_convert_finetuned_models,
File "/home/m-suzuki/.pyenv/versions/Python-3.7.4/lib/python3.7/site-packages/transformers/convert_pytorch_checkpoint_to_tf2.py", line 428, in convert_all_pt_checkpoints_to_tf
compare_with_pt_model=compare_with_pt_model,
File "/home/m-suzuki/.pyenv/versions/Python-3.7.4/lib/python3.7/site-packages/transformers/convert_pytorch_checkpoint_to_tf2.py", line 351, in convert_pt_checkpoint_to_tf
assert diff <= 2e-2, "Error, model absolute difference is >2e-2: {}".format(diff)
AssertionError: Error, model absolute difference is >2e-2: 17.709423065185547
```
Same error for ALBERT v1 models.
## Expected behavior
Max absolute difference between model outputs should be <= 2e-2.
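For reference, the mismatch can also be reproduced directly (a sketch; the checkpoint directory is illustrative):
```
import numpy as np
import tensorflow as tf
import torch
from transformers import AlbertModel, TFAlbertModel

pt_model = AlbertModel.from_pretrained("./")          # converted PyTorch checkpoint
tf_model = TFAlbertModel.from_pretrained("./", from_pt=True)

input_ids = [[31, 51, 99], [15, 5, 0]]
pt_out = pt_model(torch.tensor(input_ids))[0].detach().numpy()
tf_out = tf_model(tf.constant(input_ids))[0].numpy()
print(np.amax(np.abs(pt_out - tf_out)))              # should be small
```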
## Environment info
Observed on both of the following environments:
- `transformers` version: 2.5.1
- Platform: Darwin-19.3.0-x86_64-i386-64bit
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
<!-- -->
- `transformers` version: 2.5.1
- Platform: Linux-4.15.0-58-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.4
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3386/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3385 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3385/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3385/comments | https://api.github.com/repos/huggingface/transformers/issues/3385/events | https://github.com/huggingface/transformers/issues/3385 | 585,865,925 | MDU6SXNzdWU1ODU4NjU5MjU= | 3,385 | minor website fix | {
"login": "nbroad1881",
"id": 24982805,
"node_id": "MDQ6VXNlcjI0OTgyODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nbroad1881",
"html_url": "https://github.com/nbroad1881",
"followers_url": "https://api.github.com/users/nbroad1881/followers",
"following_url": "https://api.github.com/users/nbroad1881/following{/other_user}",
"gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions",
"organizations_url": "https://api.github.com/users/nbroad1881/orgs",
"repos_url": "https://api.github.com/users/nbroad1881/repos",
"events_url": "https://api.github.com/users/nbroad1881/events{/privacy}",
"received_events_url": "https://api.github.com/users/nbroad1881/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
}
] | closed | false | null | [] | [
"Indeed, those notebooks are not up-to-date and should be deprecated.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,591 | 1,591 | CONTRIBUTOR | null | On this page [https://huggingface.co/transformers/notebooks.html](https://huggingface.co/transformers/notebooks.html)
The first link is fine. The others give a 404. Just thought you would like to know. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3385/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3384 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3384/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3384/comments | https://api.github.com/repos/huggingface/transformers/issues/3384/events | https://github.com/huggingface/transformers/issues/3384 | 585,858,756 | MDU6SXNzdWU1ODU4NTg3NTY= | 3,384 | gpt2 - convert examples to features(tensorflow 2) | {
"login": "yagelardan",
"id": 30495788,
"node_id": "MDQ6VXNlcjMwNDk1Nzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/30495788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yagelardan",
"html_url": "https://github.com/yagelardan",
"followers_url": "https://api.github.com/users/yagelardan/followers",
"following_url": "https://api.github.com/users/yagelardan/following{/other_user}",
"gists_url": "https://api.github.com/users/yagelardan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yagelardan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yagelardan/subscriptions",
"organizations_url": "https://api.github.com/users/yagelardan/orgs",
"repos_url": "https://api.github.com/users/yagelardan/repos",
"events_url": "https://api.github.com/users/yagelardan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yagelardan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @yagelardan ,\r\n\r\nIn order to fine-tune gpt2 you should be able to use this example [script](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py)\r\n\r\nAlso, you can refer to this issue https://github.com/huggingface/transformers/issues/1407 where people already trained gpt2 on different languages. \r\nAnd this issue https://github.com/huggingface/transformers/issues/2008 could help you out :-) "
] | 1,584 | 1,585 | 1,585 | NONE | null | I'm trying to fine-tune GPT-2 to generate Shakespeare text.
I have a variable "train_examples", which is a list of InputExamples:
```
>> print(train_examples)
<__main__.InputExample at 0x7f55e0fafd68>,
<__main__.InputExample at 0x7f55e0fafda0>,
<__main__.InputExample at 0x7f55e0fafef0>,
<__main__.InputExample at 0x7f55e0fafeb8>,
<__main__.InputExample at 0x7f55e0f6aeb8>,
```
I created the examples using the following function:
```
def _create_examples(self, lines, set_type):
    """Creates examples for the training and dev sets."""
    examples = []
    for (i, line) in enumerate(lines):
        guid = "%s-%s" % (set_type, i)
        # guid = i
        text_a = line[1]
        examples.append(
            InputExample(guid=guid, text_a=text_a))
    return examples


class InputExample(object):
    def __init__(self, guid, text_a):
        self.guid = guid
        self.text_a = text_a
```
As I understand it, I need to convert the examples to 'features' before calling the fit function. But how can I convert the examples to features? I have seen many examples for BERT, but I couldn't find one for GPT-2.
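What I'd like to end up with is roughly the following (a sketch; the helper is made up, not a library API):
```
import tensorflow as tf
from transformers import GPT2Tokenizer

gpt2_tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

def examples_to_dataset(examples, block_size=128):
    ids = [gpt2_tokenizer.encode(ex.text_a, max_length=block_size) for ex in examples]
    # causal LM training: inputs and labels are the same token ids
    return tf.data.Dataset.from_generator(
        lambda: ((i, i) for i in ids), output_types=(tf.int32, tf.int32)
    )
```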
I tried:
```
from transformers import glue_convert_examples_to_features
input_train_tensor_data = glue_convert_examples_to_features(train_examples, gpt2_tokenizer, max_length=128, task='mrpc')
```
But got:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-118-d04221b84b4f> in <module>()
1 from transformers import glue_convert_examples_to_features
2
----> 3 input_train_tensor_data = glue_convert_examples_to_features(train_examples, gpt2_tokenizer, max_length=128, task='mrpc')
/usr/local/lib/python3.6/dist-packages/transformers/data/processors/glue.py in glue_convert_examples_to_features(examples, tokenizer, max_length, task, label_list, output_mode, pad_on_left, pad_token, pad_token_segment_id, mask_padding_with_zero)
120
121 if output_mode == "classification":
--> 122 label = label_map[example.label]
123 elif output_mode == "regression":
124 label = float(example.label)
KeyError: None
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3384/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3383 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3383/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3383/comments | https://api.github.com/repos/huggingface/transformers/issues/3383/events | https://github.com/huggingface/transformers/pull/3383 | 585,846,761 | MDExOlB1bGxSZXF1ZXN0MzkyMDk1MDky | 3,383 | Clean Encoder-Decoder models with Bart/T5-like API and add generate possibility | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3383?src=pr&el=h1) Report\n> Merging [#3383](https://codecov.io/gh/huggingface/transformers/pull/3383?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/857ccdb259b7e46c60cf86c58b7ab038c63e4d4e&el=desc) will **increase** coverage by `0.29%`.\n> The diff coverage is `88.46%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3383?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3383 +/- ##\n==========================================\n+ Coverage 78.61% 78.90% +0.29% \n==========================================\n Files 106 105 -1 \n Lines 17953 17973 +20 \n==========================================\n+ Hits 14114 14182 +68 \n+ Misses 3839 3791 -48 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3383?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `89.01% <75.00%> (+0.60%)` | :arrow_up: |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/3383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `84.41% <92.30%> (+63.36%)` | :arrow_up: |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/3383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.98% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.76% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3383?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3383?src=pr&el=footer). Last update [857ccdb...83f3d10](https://codecov.io/gh/huggingface/transformers/pull/3383?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"UPDATE:\r\n\r\nI really liked the idea of adding a encoder_decoder config (@thomwolf ) and having for both the encoder_decoder config and model 2 `from_pretrained` fn:\r\n1. The standard one which is used thanks to inheritence to `PretrainedConfig` and `PretrainedModel` \r\n2. a `from_encoder_decoder_pretrained` fn (@sshleifer)\r\n\r\nTo understand how to use the encoder decoder class please confer to the added tests.",
"Code is cleaned: added type hints, cleaned the docstring and added a encoder-decoder model page.\r\nJust need to resolve the issue with importing Bert's model tester. @sshleifer found a solution. If everybody is fine with it - I'll go for it :-) ",
"LGTM!",
"Ok Good to merge for me! If @sshleifer it's ok for you I will use your PR #4027 for the new test proposition.",
"Yes!"
] | 1,584 | 1,599 | 1,588 | MEMBER | null | Bert-Bert Encoder-Decoder models can now be used as is shown in the test cases:
`tests/test_modeling_encoder_decoder.py`.
Tests include:
- forward `input_ids` and `decoder_input_ids` for Bert-Bert
- backprop using masked language model loss for Bert-Bert
- backprop using "conventional" language model loss for Bert-Bert
- using the `generate()` fn with Bart-Bart
- saving and loading of Encoder-Decoder models.
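For illustration, a rough usage sketch (class and method names follow this PR and may still change before merge):
```python
import torch
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"  # encoder and decoder checkpoints
)

input_ids = torch.tensor([tokenizer.encode("Hello, my dog is cute")])
# training: language model loss from encoder + decoder inputs
loss = model(input_ids=input_ids, decoder_input_ids=input_ids, lm_labels=input_ids)[0]
# inference: autoregressive generation
generated = model.generate(input_ids, decoder_start_token_id=tokenizer.cls_token_id)
```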
Before merging, a couple of things have to be agreed on, as mentioned in the comments below.
UPDATE:
This branch is IMO now fully functional for Bert-2-Bert models.
I will finish the PR (clean the code, write a proper docstring, etc.) once we have agreed on the issues I mentioned further down. Would be very happy if you could review @thomwolf @LysandreJik @sshleifer @julien-c @yjernite | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3383/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3383/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3383",
"html_url": "https://github.com/huggingface/transformers/pull/3383",
"diff_url": "https://github.com/huggingface/transformers/pull/3383.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3383.patch",
"merged_at": 1588079470000
} |
https://api.github.com/repos/huggingface/transformers/issues/3382 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3382/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3382/comments | https://api.github.com/repos/huggingface/transformers/issues/3382/events | https://github.com/huggingface/transformers/issues/3382 | 585,843,024 | MDU6SXNzdWU1ODU4NDMwMjQ= | 3,382 | When I used the add_special_tokens function in the BertTokenizer, it assigns 2 different tokens with the same ID. Is this done on purpose? | {
"login": "arnavc1712",
"id": 19833834,
"node_id": "MDQ6VXNlcjE5ODMzODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/19833834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arnavc1712",
"html_url": "https://github.com/arnavc1712",
"followers_url": "https://api.github.com/users/arnavc1712/followers",
"following_url": "https://api.github.com/users/arnavc1712/following{/other_user}",
"gists_url": "https://api.github.com/users/arnavc1712/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arnavc1712/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arnavc1712/subscriptions",
"organizations_url": "https://api.github.com/users/arnavc1712/orgs",
"repos_url": "https://api.github.com/users/arnavc1712/repos",
"events_url": "https://api.github.com/users/arnavc1712/events{/privacy}",
"received_events_url": "https://api.github.com/users/arnavc1712/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"We would need a bit more information to understand the issue. A reproducible code example would be even better.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,591 | 1,591 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3382/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/3381 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3381/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3381/comments | https://api.github.com/repos/huggingface/transformers/issues/3381/events | https://github.com/huggingface/transformers/issues/3381 | 585,761,943 | MDU6SXNzdWU1ODU3NjE5NDM= | 3,381 | [BART] test_dummy_inputs fails on GPU | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,584 | 1,585 | 1,585 | CONTRIBUTOR | null | ```
RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select
```
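The likely fix, sketched (a tiny stand-in model just to show the device handling):
```
import torch
from transformers import BartConfig, BartModel

model = BartModel(BartConfig()).to("cuda" if torch.cuda.is_available() else "cpu")
device = next(model.parameters()).device
dummy_inputs = torch.tensor([[0, 6, 10, 4, 2]], device=device)  # built on the model's device
outputs = model(dummy_inputs)
```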
Easy fix, but putting this here so I don't forget! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3381/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3380 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3380/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3380/comments | https://api.github.com/repos/huggingface/transformers/issues/3380/events | https://github.com/huggingface/transformers/issues/3380 | 585,746,714 | MDU6SXNzdWU1ODU3NDY3MTQ= | 3,380 | Can't save DistilBert model. | {
"login": "sainimohit23",
"id": 26195811,
"node_id": "MDQ6VXNlcjI2MTk1ODEx",
"avatar_url": "https://avatars.githubusercontent.com/u/26195811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sainimohit23",
"html_url": "https://github.com/sainimohit23",
"followers_url": "https://api.github.com/users/sainimohit23/followers",
"following_url": "https://api.github.com/users/sainimohit23/following{/other_user}",
"gists_url": "https://api.github.com/users/sainimohit23/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sainimohit23/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sainimohit23/subscriptions",
"organizations_url": "https://api.github.com/users/sainimohit23/orgs",
"repos_url": "https://api.github.com/users/sainimohit23/repos",
"events_url": "https://api.github.com/users/sainimohit23/events{/privacy}",
"received_events_url": "https://api.github.com/users/sainimohit23/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Could you provide all the information related to your environment as the bug template recommends?\r\n\r\nCan you try installing from master? In the latest pypi version, we only handled the `save_pretrained` method, not the `save`/`save_weights` methods. This should have been changed with #3103.",
"The changes in #3103 only address serialization of `TF*MainLayer` classes, used within a general Functional/Sequential API Keras model (which was my use case). Looking at [the Network docstring](https://github.com/tensorflow/tensorflow/blob/5c4931bbf69e0f006f210c6382a234e83dd4dc8e/tensorflow/python/keras/engine/network.py#L89-L97) it seems like the `TF*Model` classes, being “subclass models”, need more work in order to support Keras serialization.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,591 | 1,591 | NONE | null | Model:
```
input_layer = tf.keras.layers.Input(shape = (attention_mask.shape[1],), dtype='int64')
bert = TFDistilBertModel.from_pretrained("distilbert-base-cased")(input_layer)
bert = bert[0][:,0,:]
bert = tf.keras.layers.Dense(units=20, activation='relu')(bert)
classifier = tf.keras.layers.Dense(units=train_y.shape[1], activation='softmax')(bert)
model = tf.keras.models.Model(inputs=input_layer, outputs=classifier)
model.summary()
```
After training, when I try to save the model using:
```
# serialize model to JSON
model_json = model.to_json()
with open("model.json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("model.h5")
print("Saved model to disk")
```
I am getting this error:
```
File "train_DistilBERT_model.py", line 138, in <module>
model_json = model.to_json()
File "/home/v-mohit.saini/anaconda3/envs/flask/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py", line 1254, in to_json
model_config = self._updated_config()
File "/home/v-mohit.saini/anaconda3/envs/flask/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py", line 1232, in _updated_config
config = self.get_config()
File "/home/v-mohit.saini/anaconda3/envs/flask/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py", line 918, in get_config
return copy.deepcopy(get_network_config(self))
File "/home/v-mohit.saini/anaconda3/envs/flask/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py", line 1993, in get_network_config
layer_config = serialize_layer_fn(layer)
File "/home/v-mohit.saini/anaconda3/envs/flask/lib/python3.6/site-packages/tensorflow_core/python/keras/utils/generic_utils.py", line 198, in serialize_keras_object
config = instance.get_config()
File "/home/v-mohit.saini/anaconda3/envs/flask/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py", line 917, in get_config
raise NotImplementedError
NotImplementedError
```
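For what it's worth, the weights can still be persisted without `to_json()` (a workaround sketch; the layer index and paths are illustrative):
```
# save the fine-tuned transformer sub-model with the library's own serialization
model.get_layer(index=1).save_pretrained("./distilbert_finetuned")
# and/or save all weights in the TensorFlow checkpoint format (no JSON config involved)
model.save_weights("model_weights_ckpt")
```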
How to fix this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3380/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3379 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3379/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3379/comments | https://api.github.com/repos/huggingface/transformers/issues/3379/events | https://github.com/huggingface/transformers/issues/3379 | 585,746,335 | MDU6SXNzdWU1ODU3NDYzMzU= | 3,379 | Data Processor should not include in the package | {
"login": "Liangtaiwan",
"id": 20909894,
"node_id": "MDQ6VXNlcjIwOTA5ODk0",
"avatar_url": "https://avatars.githubusercontent.com/u/20909894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Liangtaiwan",
"html_url": "https://github.com/Liangtaiwan",
"followers_url": "https://api.github.com/users/Liangtaiwan/followers",
"following_url": "https://api.github.com/users/Liangtaiwan/following{/other_user}",
"gists_url": "https://api.github.com/users/Liangtaiwan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Liangtaiwan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Liangtaiwan/subscriptions",
"organizations_url": "https://api.github.com/users/Liangtaiwan/orgs",
"repos_url": "https://api.github.com/users/Liangtaiwan/repos",
"events_url": "https://api.github.com/users/Liangtaiwan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Liangtaiwan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Agree and I follow you. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,590 | 1,590 | CONTRIBUTOR | null | # 🚀 Feature request
For a clear, flexible package, I think there is no need to include data processors for specific datasets.
## Motivation
In the past, it was easy to use transformers for research, since the modules were explicit and clear.
When reading an example, it was easy to trace the code and apply it to a new dataset.
However, transformers recently merged some unrelated modules into the package, such as the data processors, making it hard to modify the pre-processing stage. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3379/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3379/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3378 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3378/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3378/comments | https://api.github.com/repos/huggingface/transformers/issues/3378/events | https://github.com/huggingface/transformers/issues/3378 | 585,737,522 | MDU6SXNzdWU1ODU3Mzc1MjI= | 3,378 | test_resize_tokens_embeddings does not inspect `get_output_embeddings` | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,592 | 1,592 | CONTRIBUTOR | null | Here is the existing logic: copied below for convenience
https://github.com/huggingface/transformers/blob/bbf26c4e619cf42106163e1e2cd5ff98b936ff93/tests/test_modeling_common.py#L489)
```python
model_embed = model.resize_token_embeddings(config.vocab_size)
cloned_embeddings = model_embed.weight.clone()
# Check that resizing the token embeddings with a larger vocab size increases the model's vocab size
model_embed = model.resize_token_embeddings(model_vocab_size + 10)
self.assertEqual(model.config.vocab_size, model_vocab_size + 10)
self.assertEqual(model_embed.weight.shape[0], cloned_embeddings.shape[0] + 10)
```
Since we never test the return value of `get_output_embeddings` after resizing, the model can avoid setting it to the new vocab size.
BART did this by overwriting `tie_weights` to do nothing (fix proposed in https://github.com/huggingface/transformers/pull/3323)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3378/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3377 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3377/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3377/comments | https://api.github.com/repos/huggingface/transformers/issues/3377/events | https://github.com/huggingface/transformers/issues/3377 | 585,727,878 | MDU6SXNzdWU1ODU3Mjc4Nzg= | 3,377 | RobertaTokenizer doesn't have 'batch_encode_plus' | {
"login": "amaloraini",
"id": 24499228,
"node_id": "MDQ6VXNlcjI0NDk5MjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/24499228?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amaloraini",
"html_url": "https://github.com/amaloraini",
"followers_url": "https://api.github.com/users/amaloraini/followers",
"following_url": "https://api.github.com/users/amaloraini/following{/other_user}",
"gists_url": "https://api.github.com/users/amaloraini/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amaloraini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amaloraini/subscriptions",
"organizations_url": "https://api.github.com/users/amaloraini/orgs",
"repos_url": "https://api.github.com/users/amaloraini/repos",
"events_url": "https://api.github.com/users/amaloraini/events{/privacy}",
"received_events_url": "https://api.github.com/users/amaloraini/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1843377584,
"node_id": "MDU6TGFiZWwxODQzMzc3NTg0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Version%20mismatch",
"name": "Version mismatch",
"color": "ddea7c",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hi, this is probably because you're using an old version of `transformers`. Version 2.8 doesn't exist, the latest is 2.5.1 ...",
"Sorry, I accidently put 2.8. The version I have is: 2.5.1\r\nI am going to edit the post. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Started getting the same issue today. Is there a known solution?"
] | 1,584 | 1,624 | 1,590 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): RoBERTa
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
Following the tutorial on how to train your own RoBERTa model in this [link](https://huggingface.co/blog/how-to-train)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
Fill mask
## To reproduce
Steps to reproduce the behavior:
1. I pretrained my own tokenizer and RoBERTa model
2. The tokenizer and model load fine, and both seem to contain the training information
3. However, when I join them together in the pipeline step as in:
```
tokenizer = RobertaTokenizer.from_pretrained('./eo_data')
rmodel = RobertaForMaskedLM.from_pretrained('./output_dir')
fill_mask = pipeline(
    "fill-mask",
    model=rmodel,
    tokenizer=tokenizer
)
```
I get the following error:
```
AttributeError: 'RobertaTokenizer' object has no attribute 'batch_encode_plus'
```
It seems that RobertaTokenizer doesn't have the batch_encode_plus function that BertTokenizer has.
## Expected behavior
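`batch_encode_plus` should work on a `RobertaTokenizer`, e.g. (a quick sketch against a stock checkpoint; the sentences are arbitrary):
```
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
batch = tokenizer.batch_encode_plus(["Hello world.", "Another sentence."])
print(batch["input_ids"])
```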
## Environment info
- `transformers` version: 2.5.1
- Platform: Ubuntu
- Python version: 3.6.8
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
Thank you
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3377/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3377/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3376 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3376/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3376/comments | https://api.github.com/repos/huggingface/transformers/issues/3376/events | https://github.com/huggingface/transformers/pull/3376 | 585,718,197 | MDExOlB1bGxSZXF1ZXN0MzkyMDA1NTM2 | 3,376 | Added scibert-nli model card | {
"login": "gsarti",
"id": 16674069,
"node_id": "MDQ6VXNlcjE2Njc0MDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/16674069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gsarti",
"html_url": "https://github.com/gsarti",
"followers_url": "https://api.github.com/users/gsarti/followers",
"following_url": "https://api.github.com/users/gsarti/following{/other_user}",
"gists_url": "https://api.github.com/users/gsarti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gsarti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gsarti/subscriptions",
"organizations_url": "https://api.github.com/users/gsarti/orgs",
"repos_url": "https://api.github.com/users/gsarti/repos",
"events_url": "https://api.github.com/users/gsarti/events{/privacy}",
"received_events_url": "https://api.github.com/users/gsarti/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3376?src=pr&el=h1) Report\n> Merging [#3376](https://codecov.io/gh/huggingface/transformers/pull/3376?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cf72479bf11bf7fbc499a518896dfd3cafdd0b21&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3376?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3376 +/- ##\n=======================================\n Coverage 77.55% 77.56% \n=======================================\n Files 100 100 \n Lines 16970 16970 \n=======================================\n+ Hits 13161 13162 +1 \n+ Misses 3809 3808 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3376?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3376/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.09% <0.00%> (+0.17%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3376?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3376?src=pr&el=footer). Last update [cf72479...0c19c77](https://codecov.io/gh/huggingface/transformers/pull/3376?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3376/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3376",
"html_url": "https://github.com/huggingface/transformers/pull/3376",
"diff_url": "https://github.com/huggingface/transformers/pull/3376.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3376.patch",
"merged_at": 1584978942000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3375 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3375/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3375/comments | https://api.github.com/repos/huggingface/transformers/issues/3375/events | https://github.com/huggingface/transformers/pull/3375 | 585,709,119 | MDExOlB1bGxSZXF1ZXN0MzkxOTk5MjQy | 3,375 | Add camembert integration tests | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3375?src=pr&el=h1) Report\n> Merging [#3375](https://codecov.io/gh/huggingface/transformers/pull/3375?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8becb732931bbab5dd75cca5f5e7c75b2516d10b&el=desc) will **increase** coverage by `0.06%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3375?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3375 +/- ##\n==========================================\n+ Coverage 77.64% 77.71% +0.06% \n==========================================\n Files 100 100 \n Lines 16979 16979 \n==========================================\n+ Hits 13184 13195 +11 \n+ Misses 3795 3784 -11 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3375?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3375/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.50% <0.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3375/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.52% <0.00%> (+1.73%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3375?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3375?src=pr&el=footer). Last update [8becb73...74e09c3](https://codecov.io/gh/huggingface/transformers/pull/3375?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,584 | 1,585 | 1,585 | MEMBER | null | Add integration tests for camembert comparing results to original fairseq code. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3375/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3375",
"html_url": "https://github.com/huggingface/transformers/pull/3375",
"diff_url": "https://github.com/huggingface/transformers/pull/3375.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3375.patch",
"merged_at": 1585041517000
} |
https://api.github.com/repos/huggingface/transformers/issues/3374 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3374/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3374/comments | https://api.github.com/repos/huggingface/transformers/issues/3374/events | https://github.com/huggingface/transformers/issues/3374 | 585,642,979 | MDU6SXNzdWU1ODU2NDI5Nzk= | 3,374 | closed | {
"login": "kroscek",
"id": 26052229,
"node_id": "MDQ6VXNlcjI2MDUyMjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/26052229?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kroscek",
"html_url": "https://github.com/kroscek",
"followers_url": "https://api.github.com/users/kroscek/followers",
"following_url": "https://api.github.com/users/kroscek/following{/other_user}",
"gists_url": "https://api.github.com/users/kroscek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kroscek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kroscek/subscriptions",
"organizations_url": "https://api.github.com/users/kroscek/orgs",
"repos_url": "https://api.github.com/users/kroscek/repos",
"events_url": "https://api.github.com/users/kroscek/events{/privacy}",
"received_events_url": "https://api.github.com/users/kroscek/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,584 | 1,584 | 1,584 | NONE | null | closed | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3374/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3373 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3373/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3373/comments | https://api.github.com/repos/huggingface/transformers/issues/3373/events | https://github.com/huggingface/transformers/issues/3373 | 585,642,574 | MDU6SXNzdWU1ODU2NDI1NzQ= | 3,373 | Add example code for CRF heads | {
"login": "srush",
"id": 35882,
"node_id": "MDQ6VXNlcjM1ODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srush",
"html_url": "https://github.com/srush",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.github.com/users/srush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srush/subscriptions",
"organizations_url": "https://api.github.com/users/srush/orgs",
"repos_url": "https://api.github.com/users/srush/repos",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"received_events_url": "https://api.github.com/users/srush/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "srush",
"id": 35882,
"node_id": "MDQ6VXNlcjM1ODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srush",
"html_url": "https://github.com/srush",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.github.com/users/srush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srush/subscriptions",
"organizations_url": "https://api.github.com/users/srush/orgs",
"repos_url": "https://api.github.com/users/srush/repos",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"received_events_url": "https://api.github.com/users/srush/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "srush",
"id": 35882,
"node_id": "MDQ6VXNlcjM1ODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srush",
"html_url": "https://github.com/srush",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.github.com/users/srush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srush/subscriptions",
"organizations_url": "https://api.github.com/users/srush/orgs",
"repos_url": "https://api.github.com/users/srush/repos",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"received_events_url": "https://api.github.com/users/srush/received_events",
"type": "User",
"site_admin": false
}
] | [
"Also https://github.com/huggingface/transformers/pull/2249/",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,590 | 1,590 | CONTRIBUTOR | null |
# Add example `crf/` with heads that enforce structural output dependencies.
(Mostly a note to myself as a side project)
## Model description
As requested in https://github.com/huggingface/transformers/pull/3009, there are some tasks and languages where it is useful to have structural dependencies in the final layer.
Using https://github.com/harvardnlp/pytorch-struct/, we can add these with minimal changes to the code and no new model parameters.
Target:
* example code for NER / parsing (SOTA).
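As a rough sketch of the kind of loss such a head computes, here is a plain-PyTorch linear-chain CRF negative log-likelihood. Note that unlike the pytorch-struct route above, this version does add a `transitions` parameter, and all names are illustrative:
```python
import torch

def crf_nll(emissions, transitions, tags, mask):
    """emissions: (B, T, C) log-potentials from the encoder head,
    transitions: (C, C) label-transition scores, tags: (B, T) gold labels,
    mask: (B, T) bool marking real (non-padding) tokens."""
    B, T, C = emissions.shape
    m = mask.float()
    # Score of the gold tag sequence.
    score = emissions[:, 0].gather(1, tags[:, :1]).squeeze(1)
    for t in range(1, T):
        step = transitions[tags[:, t - 1], tags[:, t]]
        step = step + emissions[:, t].gather(1, tags[:, t:t + 1]).squeeze(1)
        score = score + step * m[:, t]
    # Log partition function via the forward algorithm.
    alpha = emissions[:, 0]  # (B, C)
    for t in range(1, T):
        # inner[b, i, j] = alpha[b, i] + transitions[i, j] + emissions[b, t, j]
        inner = alpha.unsqueeze(2) + transitions.unsqueeze(0) + emissions[:, t].unsqueeze(1)
        alpha = torch.where(mask[:, t].unsqueeze(1), torch.logsumexp(inner, dim=1), alpha)
    return (torch.logsumexp(alpha, dim=1) - score).mean()
```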
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3373/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3372 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3372/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3372/comments | https://api.github.com/repos/huggingface/transformers/issues/3372/events | https://github.com/huggingface/transformers/issues/3372 | 585,641,315 | MDU6SXNzdWU1ODU2NDEzMTU= | 3,372 | BERT pretrained checkpoints | {
"login": "yzhang123",
"id": 4204271,
"node_id": "MDQ6VXNlcjQyMDQyNzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4204271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yzhang123",
"html_url": "https://github.com/yzhang123",
"followers_url": "https://api.github.com/users/yzhang123/followers",
"following_url": "https://api.github.com/users/yzhang123/following{/other_user}",
"gists_url": "https://api.github.com/users/yzhang123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yzhang123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yzhang123/subscriptions",
"organizations_url": "https://api.github.com/users/yzhang123/orgs",
"repos_url": "https://api.github.com/users/yzhang123/repos",
"events_url": "https://api.github.com/users/yzhang123/events{/privacy}",
"received_events_url": "https://api.github.com/users/yzhang123/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"We converted them from Google's checkpoints."
] | 1,584 | 1,584 | 1,584 | NONE | null | # ❓ Questions & Help
Did you pretrain the BERT cased checkpoints with huggingface, or convert them from Google's checkpoints? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3372/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3371 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3371/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3371/comments | https://api.github.com/repos/huggingface/transformers/issues/3371/events | https://github.com/huggingface/transformers/pull/3371 | 585,632,240 | MDExOlB1bGxSZXF1ZXN0MzkxOTQ4MzU2 | 3,371 | [Bart/Memory] Two separate, smaller decoder attention masks | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3371?src=pr&el=h1) Report\n> Merging [#3371](https://codecov.io/gh/huggingface/transformers/pull/3371?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cf72479bf11bf7fbc499a518896dfd3cafdd0b21&el=desc) will **decrease** coverage by `0.02%`.\n> The diff coverage is `93.75%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3371?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3371 +/- ##\n==========================================\n- Coverage 77.55% 77.52% -0.03% \n==========================================\n Files 100 100 \n Lines 16970 16957 -13 \n==========================================\n- Hits 13161 13146 -15 \n- Misses 3809 3811 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3371?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `97.59% <93.75%> (-0.50%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3371?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3371?src=pr&el=footer). Last update [cf72479...8db65c1](https://codecov.io/gh/huggingface/transformers/pull/3371?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,584 | 1,585 | 1,585 | CONTRIBUTOR | null | ### Background
The Bart decoder requires two masks: one to ignore padding tokens, and the other (`causal_mask`) to avoid attending to future tokens during training.
Previously, `_prepare_bart_decoder_inputs` combined these two masks into one float_mask of shape `(bsz, 1, tgt_len, tgt_len)` filled with -inf for tokens that should be ignored. This mask was subsequently added to the attention activations.
Now, we return the two masks separately:
`decoder_padding_mask`: shape `(bs, tgt_len)`, `bool`
`causal_mask`: shape `(tgt_len, tgt_len)`, `float`
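A minimal sketch of how two such masks could be built (illustrative, not the merged `_prepare_bart_decoder_inputs`):
```python
import torch

def make_decoder_masks(decoder_input_ids, pad_token_id):
    bsz, tgt_len = decoder_input_ids.shape
    # bool padding mask, shape (bsz, tgt_len)
    decoder_padding_mask = decoder_input_ids.eq(pad_token_id)
    # float causal mask shared by the whole batch, shape (tgt_len, tgt_len)
    causal_mask = torch.triu(torch.full((tgt_len, tgt_len), float("-inf")), diagonal=1)
    return decoder_padding_mask, causal_mask
```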
### Impact
Saves 800 MB for bs=6, tgt_len=1024, with negligible speed impact.
### Notes
- The distinct data types (bool and float) are used to minimize code change. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3371/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3371",
"html_url": "https://github.com/huggingface/transformers/pull/3371",
"diff_url": "https://github.com/huggingface/transformers/pull/3371.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3371.patch",
"merged_at": 1585272855000
} |
https://api.github.com/repos/huggingface/transformers/issues/3370 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3370/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3370/comments | https://api.github.com/repos/huggingface/transformers/issues/3370/events | https://github.com/huggingface/transformers/pull/3370 | 585,580,968 | MDExOlB1bGxSZXF1ZXN0MzkxOTEzNjQz | 3,370 | [Seq2Seq Generation] Call encoder before expanding input_ids | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Like the change a lot! \r\nOne question I asked myself: With this change the `encoder_outputs` which are the same point to the same memory address -> could that lead to problems? Probably not because the `encoder_outputs` are never changed, right? \r\n\r\nI'd just propose some renaming."
] | 1,584 | 1,585 | 1,585 | CONTRIBUTOR | null | Proposing to call model.encoder before expanding `input_ids` to `effective_batch_size*num_beams`.
For Bart, this saves 1.5 GB of GPU mem on batch_size=6. Savings probably similar for T5 (untested).
Requires knowing which index of the encoder_outputs is associated with the batch dim (we need to expand this dimension), which is different between `Bart` and `T5`. This difference is encoded in the `self.encoder_outputs_batch_idx` variable.
This PR is WIP because `encoder_outputs_batch_idx` could be avoided if we transposed Bart's encoder_outputs, which I haven't tried.
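As an illustrative sketch of the idea (the real `generate()` plumbing is omitted, and the names here are placeholders):
```python
import torch

def encode_then_expand(encoder, input_ids, num_beams, batch_dim=0):
    # Run the encoder once on the original (small) batch...
    encoder_hidden = encoder(input_ids)[0]
    # ...then duplicate its outputs along the batch dimension for beam search.
    return encoder_hidden.repeat_interleave(num_beams, dim=batch_dim)
```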
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3370/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3370",
"html_url": "https://github.com/huggingface/transformers/pull/3370",
"diff_url": "https://github.com/huggingface/transformers/pull/3370.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3370.patch",
"merged_at": 1585262480000
} |
https://api.github.com/repos/huggingface/transformers/issues/3369 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3369/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3369/comments | https://api.github.com/repos/huggingface/transformers/issues/3369/events | https://github.com/huggingface/transformers/pull/3369 | 585,568,400 | MDExOlB1bGxSZXF1ZXN0MzkxOTAzNzEx | 3,369 | [Bart/Memory] SelfAttention only returns weights if config.output_attentions | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,584 | 1,585 | 1,585 | CONTRIBUTOR | null | **Previously**, `SelfAttention` would always return `attn_weights`, and then `BartDecoder` and `BartEncoder` would decide whether to return them to the user.
The `attn_weights` tensor is fairly large, with shape = `(bs, num_heads, tgt_len, src_len)`
This meant that the memory allocated for `attn_weights` could not be freed until after the forward pass of `BartDecoder`.
Now: `SelfAttention` returns `(output, None)` if `config.output_attentions=False`, and the memory can be freed.
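A simplified sketch of the new control flow (not the actual module code; shapes are assumed):
```python
import torch

def self_attention(q, k, v, output_attentions=False):
    # q, k, v: (bs, num_heads, seq_len, head_dim)
    attn_weights = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
    attn_output = attn_weights @ v
    if not output_attentions:
        # the big (bs, num_heads, tgt_len, src_len) tensor is dropped here,
        # so its memory can be freed right after this function returns
        return attn_output, None
    return attn_output, attn_weights
```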
Impact: memory can be freed after SelfAttention returns. -600MB peak GPU consumption for batch_size=6, tgt_len=src_len=1024, num_heads=16
Speed impact: negligible | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3369/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3369/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3369",
"html_url": "https://github.com/huggingface/transformers/pull/3369",
"diff_url": "https://github.com/huggingface/transformers/pull/3369.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3369.patch",
"merged_at": 1585262559000
} |
https://api.github.com/repos/huggingface/transformers/issues/3368 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3368/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3368/comments | https://api.github.com/repos/huggingface/transformers/issues/3368/events | https://github.com/huggingface/transformers/issues/3368 | 585,565,098 | MDU6SXNzdWU1ODU1NjUwOTg= | 3,368 | Why does huggingface bert pooler hack make mixed precision training stable? | {
"login": "krishansubudhi",
"id": 11926616,
"node_id": "MDQ6VXNlcjExOTI2NjE2",
"avatar_url": "https://avatars.githubusercontent.com/u/11926616?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krishansubudhi",
"html_url": "https://github.com/krishansubudhi",
"followers_url": "https://api.github.com/users/krishansubudhi/followers",
"following_url": "https://api.github.com/users/krishansubudhi/following{/other_user}",
"gists_url": "https://api.github.com/users/krishansubudhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krishansubudhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krishansubudhi/subscriptions",
"organizations_url": "https://api.github.com/users/krishansubudhi/orgs",
"repos_url": "https://api.github.com/users/krishansubudhi/repos",
"events_url": "https://api.github.com/users/krishansubudhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/krishansubudhi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Any update on this?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,590 | 1,590 | NONE | null | # ❓ Questions & Help
## Details
The Huggingface BERT implementation has a hack to remove the pooler from the optimizer.
https://github.com/huggingface/transformers/blob/b832d5bb8a6dfc5965015b828e577677eace601e/examples/run_squad.py#L927
```
# hack to remove pooler, which is not used
# thus it produce None grad that break apex
param_optimizer = [n for n in param_optimizer if 'pooler' not in n[0]]
```
We are trying to run pretraining on huggingface BERT models. The code always diverges later during training if this pooler hack is not applied. Every time, the reason is that the apex loss scaler becomes zero.
After applying the above hack, no divergence issue is seen.
The pooler layer is an FFN with a tanh activation:
```
class BertPooler(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        self.activation = nn.Tanh()

    def forward(self, hidden_states):
        # We "pool" the model by simply taking the hidden state corresponding
        # to the first token.
        first_token_tensor = hidden_states[:, 0]
        pooled_output = self.dense(first_token_tensor)
        pooled_output = self.activation(pooled_output)
        return pooled_output
```
I even tried replacing the tanh activation with GELU and adding layer norm in the pooler layer, but the loss scaler became zero even faster.
My question is: why does this pooler hack make mixed-precision training numerically stable?
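For background only (this does not answer why the pooler specifically is the trigger): apex's dynamic loss scaler shrinks its scale whenever fp16 values overflow, and fp16 overflows easily:
```python
import torch

x = torch.tensor([70000.0], dtype=torch.float16)  # above the fp16 max of ~65504
print(x)  # tensor([inf], dtype=torch.float16); repeated overflows drive the loss scale toward zero
```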
**https://stackoverflow.com/questions/60743907/why-does-huggingface-bert-pooler-hack-make-mixed-precission-training-stable**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3368/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3368/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3367 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3367/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3367/comments | https://api.github.com/repos/huggingface/transformers/issues/3367/events | https://github.com/huggingface/transformers/pull/3367 | 585,509,216 | MDExOlB1bGxSZXF1ZXN0MzkxODY1MTQ3 | 3,367 | [Generate] Add bad words list argument to the generate function | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3367?src=pr&el=h1) Report\n> Merging [#3367](https://codecov.io/gh/huggingface/transformers/pull/3367?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ae6834e028ecdf7fdbe886c1f86d0e02d5fef6f0&el=desc) will **increase** coverage by `0.06%`.\n> The diff coverage is `91.30%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3367?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3367 +/- ##\n==========================================\n+ Coverage 77.80% 77.87% +0.06% \n==========================================\n Files 100 100 \n Lines 17064 17127 +63 \n==========================================\n+ Hits 13277 13338 +61 \n- Misses 3787 3789 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3367?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3367/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.98% <87.50%> (+0.17%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3367/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.63% <94.44%> (+0.47%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3367/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.99% <100.00%> (+0.02%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3367?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3367?src=pr&el=footer). Last update [ae6834e...19d6acd](https://codecov.io/gh/huggingface/transformers/pull/3367?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Sadly, TF tensorflow test seems flaky see: https://github.com/huggingface/transformers/commit/b38d552a92a0a201c005afae0e1b861ae6de9ce0\r\n\r\nMight need to revert the commit. "
] | 1,584 | 1,585 | 1,585 | MEMBER | null | The `bad_words_ids` argument allows passing a list of lists of `input_ids` that cannot be generated, *e.g.* bad words.
That's a proposed feature request (I think there were actually multiple ones):
#3061
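A minimal usage sketch (the GPT-2 checkpoint and the banned strings are illustrative choices):
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# each banned phrase becomes its own list of token ids
bad_words_ids = [tokenizer.encode(w, add_special_tokens=False) for w in [" awful", " terrible"]]
input_ids = tokenizer.encode("The weather today is", return_tensors="pt")
output = model.generate(input_ids, max_length=20, bad_words_ids=bad_words_ids)
print(tokenizer.decode(output[0]))
```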
Also adds tests for all language models to verify behavior. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3367/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3367",
"html_url": "https://github.com/huggingface/transformers/pull/3367",
"diff_url": "https://github.com/huggingface/transformers/pull/3367.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3367.patch",
"merged_at": 1585672951000
} |
https://api.github.com/repos/huggingface/transformers/issues/3366 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3366/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3366/comments | https://api.github.com/repos/huggingface/transformers/issues/3366/events | https://github.com/huggingface/transformers/issues/3366 | 585,385,380 | MDU6SXNzdWU1ODUzODUzODA= | 3,366 | GPT2TokenizerFast does not preserve special tokens' ids after a save and load. | {
"login": "s-jse",
"id": 60150701,
"node_id": "MDQ6VXNlcjYwMTUwNzAx",
"avatar_url": "https://avatars.githubusercontent.com/u/60150701?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/s-jse",
"html_url": "https://github.com/s-jse",
"followers_url": "https://api.github.com/users/s-jse/followers",
"following_url": "https://api.github.com/users/s-jse/following{/other_user}",
"gists_url": "https://api.github.com/users/s-jse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/s-jse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/s-jse/subscriptions",
"organizations_url": "https://api.github.com/users/s-jse/orgs",
"repos_url": "https://api.github.com/users/s-jse/repos",
"events_url": "https://api.github.com/users/s-jse/events{/privacy}",
"received_events_url": "https://api.github.com/users/s-jse/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
},
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
},
{
"id": 1920687293,
"node_id": "MDU6TGFiZWwxOTIwNjg3Mjkz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Fast%20Tokenizers",
"name": "Fast Tokenizers",
"color": "b60205",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,591 | 1,591 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): GPT2 Fast Tokenizer
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
This problem only happens with `GPT2TokenizerFast`, not with `GPT2Tokenizer`.
## To reproduce
Steps to reproduce the behavior:
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```python
from transformers import GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
print('special tokens: ', tokenizer.additional_special_tokens, tokenizer.additional_special_tokens_ids)
tokenizer.add_special_tokens({'additional_special_tokens': ['<special_token>'], 'pad_token': '<pad>'})
print('special tokens: ', tokenizer.additional_special_tokens, tokenizer.additional_special_tokens_ids)
print(tokenizer.pad_token, tokenizer.convert_tokens_to_ids(tokenizer.pad_token))
tokenizer.save_pretrained('./save_dir/')
tokenizer = GPT2TokenizerFast.from_pretrained('./save_dir/')
print('special tokens: ', tokenizer.additional_special_tokens, tokenizer.additional_special_tokens_ids)
print(tokenizer.pad_token, tokenizer.convert_tokens_to_ids(tokenizer.pad_token))
```
It outputs
```
special tokens: [] []
special tokens: ['<special_token>'] [50257]
<pad> 50258
special tokens: ['<special_token>'] [50258]
<pad> 50257
```
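Until this is resolved, a defensive round-trip check can help; this sketch only detects the swap (run it right after `save_pretrained`, while `tokenizer` still holds the original ids):
```python
expected = {t: tokenizer.convert_tokens_to_ids(t) for t in ['<special_token>', '<pad>']}
reloaded = GPT2TokenizerFast.from_pretrained('./save_dir/')
for tok, idx in expected.items():
    got = reloaded.convert_tokens_to_ids(tok)
    assert got == idx, f"{tok} moved: expected id {idx}, got {got}"
```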
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
```
special tokens: [] []
special tokens: ['<special_token>'] [50257]
<pad> 50258
special tokens: ['<special_token>'] [50257]
<pad> 50258
```
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.1
- Platform: Linux-5.4.19-100.fc30.x86_64-x86_64-with-fedora-30-Thirty
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3366/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3365 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3365/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3365/comments | https://api.github.com/repos/huggingface/transformers/issues/3365/events | https://github.com/huggingface/transformers/pull/3365 | 585,341,657 | MDExOlB1bGxSZXF1ZXN0MzkxNzUyNDIy | 3,365 | fixes lr_scheduler warning | {
"login": "erip",
"id": 2348806,
"node_id": "MDQ6VXNlcjIzNDg4MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2348806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erip",
"html_url": "https://github.com/erip",
"followers_url": "https://api.github.com/users/erip/followers",
"following_url": "https://api.github.com/users/erip/following{/other_user}",
"gists_url": "https://api.github.com/users/erip/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erip/subscriptions",
"organizations_url": "https://api.github.com/users/erip/orgs",
"repos_url": "https://api.github.com/users/erip/repos",
"events_url": "https://api.github.com/users/erip/events{/privacy}",
"received_events_url": "https://api.github.com/users/erip/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Not sure why we missed this one. Thanks!"
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | For more details, see https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3365/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3365",
"html_url": "https://github.com/huggingface/transformers/pull/3365",
"diff_url": "https://github.com/huggingface/transformers/pull/3365.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3365.patch",
"merged_at": 1584741831000
} |
https://api.github.com/repos/huggingface/transformers/issues/3364 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3364/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3364/comments | https://api.github.com/repos/huggingface/transformers/issues/3364/events | https://github.com/huggingface/transformers/issues/3364 | 585,338,621 | MDU6SXNzdWU1ODUzMzg2MjE= | 3,364 | Generate all possible sentences using a fine-tuned GPT-2 model | {
"login": "zeyuhuan",
"id": 22648052,
"node_id": "MDQ6VXNlcjIyNjQ4MDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/22648052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zeyuhuan",
"html_url": "https://github.com/zeyuhuan",
"followers_url": "https://api.github.com/users/zeyuhuan/followers",
"following_url": "https://api.github.com/users/zeyuhuan/following{/other_user}",
"gists_url": "https://api.github.com/users/zeyuhuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zeyuhuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zeyuhuan/subscriptions",
"organizations_url": "https://api.github.com/users/zeyuhuan/orgs",
"repos_url": "https://api.github.com/users/zeyuhuan/repos",
"events_url": "https://api.github.com/users/zeyuhuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/zeyuhuan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"What do you mean exactly be all possible sentences? \r\n\r\nThe space of possible sentence that could be generated grows exponentially with the length of the sentences. Having **V** words in your vocabulary, there are **V^N** possible sentences of length **N** that could be sampled. If **V ~ 50,000** and **N = 10**, you are already at > 10^40 possibilities which is intractable.",
"Thanks for the reply. By all possible sentences I meant all possible sentences under some certain sampling technique. For example, if I want all possible sentences with top-k=1, there would be just 1 in the space. I can control the desired number of possible sentences by choosing sampling techniques at a certain level of strictness. However, I don't want to do a sampling, I want to find all of them by some BFS or DFS. What I plan to do is find some desired sampling technique so that there are say 1 million unique sentences in the space, then find out all of them. The provided util in the package only does sampling which could generate duplicate sentences. Does that make sense?"
] | 1,584 | 1,584 | 1,584 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
Is there a way to generate all possible sentences using a fine-tuned GPT-2 model given a certain sampling technique? For some reason I want to exhaust all possible combinations of tokens given a fine-tuned GPT-2 model with a certain sampling technique. Is it doable? If it is not, how do we get an estimate of how many possible sentences there are in the latent space?
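One way to make this concrete: under top-k filtering the reachable space is finite (k^N continuations of length N), and for tiny k it can be enumerated directly. A brute-force sketch (the model, prompt, and helper name are illustrative, and the search is exponential in `max_new`):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def enumerate_top_k(prefix_ids, k=2, max_new=3):
    if max_new == 0:
        return [prefix_ids]
    with torch.no_grad():
        logits = model(torch.tensor([prefix_ids]))[0][0, -1]  # next-token logits
    sequences = []
    for token_id in torch.topk(logits, k).indices.tolist():
        sequences += enumerate_top_k(prefix_ids + [token_id], k, max_new - 1)
    return sequences

seqs = enumerate_top_k(tokenizer.encode("The meaning of life is"), k=2, max_new=3)
print(len(seqs))  # 2**3 = 8 unique continuations
```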
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3364/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3363 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3363/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3363/comments | https://api.github.com/repos/huggingface/transformers/issues/3363/events | https://github.com/huggingface/transformers/pull/3363 | 585,308,290 | MDExOlB1bGxSZXF1ZXN0MzkxNzI1NDY4 | 3,363 | Added total_save_limit feature similar to run_language_modeling.py | {
"login": "oya163",
"id": 7055478,
"node_id": "MDQ6VXNlcjcwNTU0Nzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7055478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oya163",
"html_url": "https://github.com/oya163",
"followers_url": "https://api.github.com/users/oya163/followers",
"following_url": "https://api.github.com/users/oya163/following{/other_user}",
"gists_url": "https://api.github.com/users/oya163/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oya163/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oya163/subscriptions",
"organizations_url": "https://api.github.com/users/oya163/orgs",
"repos_url": "https://api.github.com/users/oya163/repos",
"events_url": "https://api.github.com/users/oya163/events{/privacy}",
"received_events_url": "https://api.github.com/users/oya163/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I feel its not necessary"
] | 1,584 | 1,585 | 1,585 | NONE | null | Added args.total_save_limit in order to save only the most recent checkpoints, similar to the feature in run_language_modeling.py. This might be helpful for a student like me who has a limited storage quota on the school's remote server. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3363/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3363",
"html_url": "https://github.com/huggingface/transformers/pull/3363",
"diff_url": "https://github.com/huggingface/transformers/pull/3363.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3363.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3362 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3362/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3362/comments | https://api.github.com/repos/huggingface/transformers/issues/3362/events | https://github.com/huggingface/transformers/pull/3362 | 585,204,333 | MDExOlB1bGxSZXF1ZXN0MzkxNjQxMTY0 | 3,362 | New model, new model cards | {
"login": "traviemcg",
"id": 37486396,
"node_id": "MDQ6VXNlcjM3NDg2Mzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/37486396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/traviemcg",
"html_url": "https://github.com/traviemcg",
"followers_url": "https://api.github.com/users/traviemcg/followers",
"following_url": "https://api.github.com/users/traviemcg/following{/other_user}",
"gists_url": "https://api.github.com/users/traviemcg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/traviemcg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/traviemcg/subscriptions",
"organizations_url": "https://api.github.com/users/traviemcg/orgs",
"repos_url": "https://api.github.com/users/traviemcg/repos",
"events_url": "https://api.github.com/users/traviemcg/events{/privacy}",
"received_events_url": "https://api.github.com/users/traviemcg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | Trained another squad model! Added details in card. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3362/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3362",
"html_url": "https://github.com/huggingface/transformers/pull/3362",
"diff_url": "https://github.com/huggingface/transformers/pull/3362.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3362.patch",
"merged_at": 1584741662000
} |
https://api.github.com/repos/huggingface/transformers/issues/3361 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3361/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3361/comments | https://api.github.com/repos/huggingface/transformers/issues/3361/events | https://github.com/huggingface/transformers/issues/3361 | 585,153,845 | MDU6SXNzdWU1ODUxNTM4NDU= | 3,361 | TF Camembert not improving over epochs | {
"login": "bourrel",
"id": 6873714,
"node_id": "MDQ6VXNlcjY4NzM3MTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6873714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bourrel",
"html_url": "https://github.com/bourrel",
"followers_url": "https://api.github.com/users/bourrel/followers",
"following_url": "https://api.github.com/users/bourrel/following{/other_user}",
"gists_url": "https://api.github.com/users/bourrel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bourrel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bourrel/subscriptions",
"organizations_url": "https://api.github.com/users/bourrel/orgs",
"repos_url": "https://api.github.com/users/bourrel/repos",
"events_url": "https://api.github.com/users/bourrel/events{/privacy}",
"received_events_url": "https://api.github.com/users/bourrel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@bourrel \r\nWhy do you use `jplu/tf-flaubert-base-cased`? (https://huggingface.co/jplu/tf-flaubert-base-cased)\r\nAny particular reason not to use `flaubert/flaubert_base_cased`? (https://huggingface.co/flaubert/flaubert_base_cased)",
"It was 2 years ago, I don't remember sorry 😅 "
] | 1,584 | 1,660 | 1,590 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
`jplu/tf-camembert-base`
Language I am using the model on (English, Chinese ...):
French
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Get a custom multi-class dataset with imbalanced data
2. Train TFCamembertForSequenceClassification on this dataset
3. Try with and without `class_weight`, or under-sample the biggest classes (accuracy and loss change but still don't improve over epochs)
```python
import tensorflow as tf
from transformers import TFCamembertForSequenceClassification, CamembertTokenizer

model = TFCamembertForSequenceClassification.from_pretrained("jplu/tf-camembert-base", num_labels=len(labels))
tokenizer = CamembertTokenizer.from_pretrained("jplu/tf-camembert-base")

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0),
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

model.fit(
    custom_generator(),  # generator yields samples encoded by the tokenizer and labels encoded by OneHotEncoder
    epochs=10,
    max_queue_size=2,
    steps_per_epoch=25,
    # class_weight=class_weights,
    validation_data=custom_generator(),
    validation_steps=4,
)
```
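For the commented-out `class_weight` argument, one common choice is inverse-frequency weighting. This is an assumption on my side; `train_labels` (an array of integer class ids) does not appear in the issue:
```python
import numpy as np

counts = np.bincount(train_labels)  # train_labels: integer class ids, hypothetical
class_weights = {i: len(train_labels) / (len(counts) * c) for i, c in enumerate(counts)}
```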
## Expected behavior
The classifier should improve over each epoch. In this case it stays at the same accuracy and loss, just varying by roughly 5% accuracy.
To compare, I tried to run the same code but with `TFFlaubertForSequenceClassification.from_pretrained("jplu/tf-flaubert-base-cased")` and it worked as expected.
## Environment info
- `transformers` version: 2.5.1
- Platform: Linux-4.9.0-12-amd64-x86_64-with-debian-9.12 (Google AI Platform)
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
For information, I already posted this problem on [Stack Overflow](https://stackoverflow.com/questions/60761761/hugging-face-transformer-classifier-fail-on-imbalance-dataset), which led me here.
"url": "https://api.github.com/repos/huggingface/transformers/issues/3361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3361/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3360 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3360/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3360/comments | https://api.github.com/repos/huggingface/transformers/issues/3360/events | https://github.com/huggingface/transformers/issues/3360 | 585,077,061 | MDU6SXNzdWU1ODUwNzcwNjE= | 3,360 | RuntimeError: CUDA out of memory. Tried to allocate 786.00 MiB (GPU 0; 14.73 GiB total capacity; 13.33 GiB already allocated; 575.88 MiB free; 13.38 GiB reserved in total by PyTorch) | {
"login": "david1983",
"id": 6210160,
"node_id": "MDQ6VXNlcjYyMTAxNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6210160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/david1983",
"html_url": "https://github.com/david1983",
"followers_url": "https://api.github.com/users/david1983/followers",
"following_url": "https://api.github.com/users/david1983/following{/other_user}",
"gists_url": "https://api.github.com/users/david1983/gists{/gist_id}",
"starred_url": "https://api.github.com/users/david1983/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david1983/subscriptions",
"organizations_url": "https://api.github.com/users/david1983/orgs",
"repos_url": "https://api.github.com/users/david1983/repos",
"events_url": "https://api.github.com/users/david1983/events{/privacy}",
"received_events_url": "https://api.github.com/users/david1983/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"@mariamabarham didn't you have a similar issue? ",
"Yes I encountered the same issue. I solved it by adding --fp16(need to install apex first). You can also reduce the block_size to 512. Both worked out for me.",
"You should probably set the `per_gpu_train_batch_size` to 1. That is the default behavior for `gpt-2-simple` to prevent OOM. (I am not a fan of the default batch_size of 4 in `run_language_modeling.py`)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"You can also try using gradient accumulation steps. \r\nBasically, if you want a batch_size of 32, but your GPU can only fit 16.\r\n\r\nSo you make two passes of 16 batches each, accumulate your gradients, and then do the backward pass after 2 batches.\r\n"
] | 1,584 | 1,591 | 1,591 | NONE | null | # 🐛 Bug
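Following up on the gradient-accumulation comment above, here is a minimal, self-contained PyTorch sketch of the technique (the toy model, optimizer, and data are placeholders, not the script's actual objects):

```
import torch
from torch import nn

# Toy stand-ins; in run_language_modeling.py these would be the GPT-2 model and the LM dataloader.
model = nn.Linear(10, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
dataloader = [(torch.randn(16, 10), torch.randint(0, 2, (16,))) for _ in range(4)]
loss_fn = nn.CrossEntropyLoss()

accumulation_steps = 2  # effective batch size = 16 * 2 = 32

optimizer.zero_grad()
for step, (inputs, labels) in enumerate(dataloader):
    loss = loss_fn(model(inputs), labels) / accumulation_steps  # scale so accumulated grads average out
    loss.backward()  # gradients add up in .grad across the passes
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()  # one optimizer update per accumulated "large" batch
        optimizer.zero_grad()
```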
## Information
Model I am using (Bert, XLNet ...): GPT2
Language I am using the model on (English, Chinese ...): eng
**The task I am working on is:**
!python /content/transformers/examples/run_language_modeling.py --train_data_file=shakespeare.txt --model_type=gpt2 --model_name_or_path=gpt2 --output_dir=output --do_train
## To reproduce
Steps to reproduce the behavior:
```
import os
import requests

file_name = "shakespeare.txt"
if not os.path.isfile(file_name):
    url = "https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt"
    data = requests.get(url)
    with open(file_name, 'w') as f:
        f.write(data.text)
```
!python /content/transformers/examples/run_language_modeling.py --train_data_file=shakespeare.txt --model_type=gpt2 --model_name_or_path=gpt2 --output_dir=output --do_train
03/20/2020 13:36:52 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: False
03/20/2020 13:36:53 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-config.json from cache at /root/.cache/torch/transformers/4be02c5697d91738003fb1685c9872f284166aa32e061576bbe6aaeb95649fcf.699bbd1c449e9861456f359d6daa51bd523ac085b4b531ab0aad5a55d091e942
03/20/2020 13:36:53 - INFO - transformers.configuration_utils - Model config GPT2Config {
"architectures": [
"GPT2LMHeadModel"
],
"attn_pdrop": 0.1,
"bos_token_id": null,
"do_sample": false,
"embd_pdrop": 0.1,
"eos_token_ids": null,
"finetuning_task": null,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"is_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_epsilon": 1e-05,
"length_penalty": 1.0,
"max_length": 20,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_layer": 12,
"n_positions": 1024,
"num_beams": 1,
"num_labels": 2,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": null,
"pruned_heads": {},
"repetition_penalty": 1.0,
"resid_pdrop": 0.1,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"use_bfloat16": false,
"vocab_size": 50257
}
03/20/2020 13:36:54 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json from cache at /root/.cache/torch/transformers/f2808208f9bec2320371a9f5f891c184ae0b674ef866b79c58177067d15732dd.1512018be4ba4e8726e41b9145129dc30651ea4fec86aa61f4b9f40bf94eac71
03/20/2020 13:36:54 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt from cache at /root/.cache/torch/transformers/d629f792e430b3c76a1291bb2766b0a047e36fae0588f9dbc1ae51decdff691b.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda
03/20/2020 13:36:54 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-pytorch_model.bin from cache at /root/.cache/torch/transformers/4295d67f022061768f4adc386234dbdb781c814c39662dd1662221c309962c55.778cf36f5c4e5d94c8cd9cefcf2a580c8643570eb327f0d4a1f007fab2acbdf1
03/20/2020 13:37:02 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, block_size=1024, cache_dir=None, config_name=None, device=device(type='cuda'), do_eval=False, do_train=True, eval_all_checkpoints=False, eval_data_file=None, evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=5e-05, line_by_line=False, local_rank=-1, logging_steps=500, max_grad_norm=1.0, max_steps=-1, mlm=False, mlm_probability=0.15, model_name_or_path='gpt2', model_type='gpt2', n_gpu=1, no_cuda=False, num_train_epochs=1.0, output_dir='output', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=4, per_gpu_train_batch_size=4, save_steps=500, save_total_limit=None, seed=42, server_ip='', server_port='', should_continue=False, tokenizer_name=None, train_data_file='shakespeare.txt', warmup_steps=0, weight_decay=0.0)
03/20/2020 13:37:02 - INFO - __main__ - Loading features from cached file gpt2_cached_lm_1024_shakespeare.txt
03/20/2020 13:37:02 - INFO - __main__ - ***** Running training *****
03/20/2020 13:37:02 - INFO - __main__ - Num examples = 330
03/20/2020 13:37:02 - INFO - __main__ - Num Epochs = 1
03/20/2020 13:37:02 - INFO - __main__ - Instantaneous batch size per GPU = 4
03/20/2020 13:37:02 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 4
03/20/2020 13:37:02 - INFO - __main__ - Gradient Accumulation steps = 1
03/20/2020 13:37:02 - INFO - __main__ - Total optimization steps = 83
Epoch: 0% 0/1 [00:00<?, ?it/s]
Iteration: 0% 0/83 [00:00<?, ?it/s]Traceback (most recent call last):
File "/content/transformers/examples/run_language_modeling.py", line 799, in <module>
main()
File "/content/transformers/examples/run_language_modeling.py", line 749, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "/content/transformers/examples/run_language_modeling.py", line 353, in train
outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 602, in forward
shift_logits = lm_logits[..., :-1, :].contiguous()
RuntimeError: CUDA out of memory. Tried to allocate 786.00 MiB (GPU 0; 14.73 GiB total capacity; 13.33 GiB already allocated; 575.88 MiB free; 13.38 GiB reserved in total by PyTorch)
Epoch: 0% 0/1 [00:00<?, ?it/s]
Iteration: 0% 0/83 [00:00<?, ?it/s]
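For reference, the memory-saving suggestions from the comments can be combined on the original command. This is a sketch only; the flags all appear in the Namespace printed above, but the values are untested assumptions:

```
python /content/transformers/examples/run_language_modeling.py \
    --train_data_file=shakespeare.txt \
    --model_type=gpt2 \
    --model_name_or_path=gpt2 \
    --output_dir=output \
    --do_train \
    --per_gpu_train_batch_size=1 \
    --gradient_accumulation_steps=4 \
    --block_size=512 \
    --fp16
```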
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3360/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3359 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3359/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3359/comments | https://api.github.com/repos/huggingface/transformers/issues/3359/events | https://github.com/huggingface/transformers/issues/3359 | 585,060,011 | MDU6SXNzdWU1ODUwNjAwMTE= | 3,359 | Some community models are broken and can't be downloaded | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"- Item `1) a) i.` is fixed (list of model ids below for reference)\r\n- For models which don't have a tokenizer, or an auto-detected model type, we'll add a notice on their model page (and remove the code sample which is misleading as it lists AutoModel and AutoConfig)\r\n\r\n```\r\nalbert-base\r\nalbert-large\r\nalbert-xlarge\r\nalbert-xxlarge\r\nbert-base-multilingual-cased-finetuned-conll03-dutch\r\nbert-base-multilingual-cased-finetuned-conll03-spanish\r\nmlm-100-1280\r\nmlm-17-1280\r\nbertabs-finetuned-cnndm-extractive-abstractive-summarization\r\nbertabs-finetuned-extractive-abstractive-summarization\r\nbertabs-finetuned-xsum-extractive-abstractive-summarization\r\n```",
"## UPDATE: \r\n\r\n### Stats\r\n\r\n1. **61** can't load either their config (n)or their tokenizer:\r\n\r\n - a) **23** models can't load their config file. The reasons for this are as follows:There is an unrecognized `model_type` in the config.json, `e.g.` \r\n> \"Error: Message: Unrecognized model in hfl/rbtl3. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: t5, distilbert, albert, camembert, xlm-roberta, bart, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl\r\n> \"\r\n\r\n - b) **38** models can load their config, but cannot load their tokenizers. The error message is always the same:\r\n\r\n> TOK ERROR: clue/roberta_chinese_base tokenizer can not be loaded\r\n> Message: Model name 'clue/roberta_chinese_base' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'clue/roberta_chinese_base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\r\n\r\n - Here: the model has neither of: \r\n - `vocab_file`\r\n - `added_tokens_file`\r\n - `special_tokens_map_file`\r\n - `tokenizer_config_file`\r\n\r\n2. For **254** models everything is fine! \r\n\r\nHere the full analysis log [here](https://github.com/patrickvonplaten/files_to_link_to/blob/master/results.txt)\r\nHere the code that created this log (simple comparison of loaded tokenizer and config with default config): [here](https://github.com/patrickvonplaten/files_to_link_to/blob/master/test_all_community_models.py)\r\n\r\n## NEXT STEPS\r\n\r\n1a) and 1b) cannot really be fixed by us because for 1a) we don't know which `model_type` is used and for 1b) if the tokenizer does not work or does not exist it should be fixed or uploaded by the author. These **61** models can probably still be used if the correct model class is used instead of `AutoModel.from_pretrained(...)`\r\n\r\nWe could contact the authors or add a `warning` sign to the model page. ",
"the problem of denpa92/bert-base-cantonese is not solved.\r\n",
"hey @liuchenbaidu , I'd recommend contacting the author of the model in this case.",
"When I use ernie model pretained by BaiDu, I had the same problem.\r\nMy solution is to add \"model_type\":\"bert\" to the configuration file, It worked, but I don't know if it's reasonable.",
"> When I use ernie model pretained by BaiDu, I had the same problem.\r\n> My solution is to add \"model_type\":\"bert\" to the configuration file, It worked, but I don't know if it's reasonable.\r\n\r\nHi, @XiangQinYu. I'm a bit of a newbie with Huggingface. Can you say more about how you did this? I guess you mean adding \"model_type\":\"bert\" to a file like [this](https://huggingface.co/adamlin/ClinicalBert_all_notes/blob/main/config.json). But how did you edit the file? Did you download the whole model repository, and edit and run it locally?\r\n\r\nEDIT: Nevermind, figured it out with help of a commenter on [a question I asked on SO](https://stackoverflow.com/questions/68682786/is-it-possible-to-use-the-allennlp-semantic-role-labeler-with-bert-large-instead?noredirect=1#comment121759210_68682786)."
] | 1,584 | 1,629 | 1,585 | MEMBER | null | # 🐛 Bug
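As suggested in the comments above, a missing `model_type` can be patched directly into the model's downloaded config.json. A minimal sketch of such a patched config; every value except `model_type` is a placeholder, not taken from any real model:

```
{
  "model_type": "bert",
  "vocab_size": 21128,
  "hidden_size": 768,
  "num_hidden_layers": 12,
  "num_attention_heads": 12
}
```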
## Information
Model I am using (Bert, XLNet ...): Community Models
Language I am using the model on (English, Chinese ...): Multiple different ones
Quite some community models can't be loaded. The stats are here:
## Stats
1. **68** can't load either their config (n)or their tokenizer:
- a) **34** models can't even load their config file. The reasons for this are either:
 - i. **11/34**: The model identifier is wrong, e.g. `albert-large` does not exist anymore; it seems it was renamed to `albert-large-v1`. These models are listed under a different name online than the one saved on AWS.
- ii. **23/34**: There is an unrecognized `model_type` in the config.json, `e.g.`
> "Error: Message: Unrecognized model in hfl/rbtl3. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: t5, distilbert, albert, camembert, xlm-roberta, bart, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl
> "
- b) **33** models can load their config, but cannot load their tokenizers. The error message is almost always the same:
> TOK ERROR: clue/roberta_chinese_base tokenizer can not be loaded
> Message: Model name 'clue/roberta_chinese_base' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'clue/roberta_chinese_base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.
 - i. Here: the model has none of:
- `vocab_file`
- `added_tokens_file`
- `special_tokens_map_file`
- `tokenizer_config_file`
2. **79** currently have wrong `pad_token_id`, `eos_token_id`, `bos_token_id` in their configs. IMPORTANT: The reason for this is that we used to have the wrong defaults saved in `PretrainedConfig()` - see e.g. [here](https://github.com/huggingface/transformers/pull/2885/commits/77d958ac7f0b008df17656e3652246f602aef095)
the default value for **any** model for `pad_token_id` was 0. People trained a model with the lib, saved it and the resulting config.json now had a `pad_token_id = 0` saved. This was then uploaded. But it's wrong and should be corrected.
3. For **162** models everything is fine!
The full analysis log is [here](https://github.com/patrickvonplaten/files_to_link_to/blob/master/results.txt).
The code that created this log (a simple comparison of the loaded tokenizer and config with the default config) is [here](https://github.com/patrickvonplaten/files_to_link_to/blob/master/test_all_community_models.py).
### HOW-TO-FIX-STEPS (in the following order):
- [x] Fix 1 a) i. first: All models that have a wrong model identifier path should get the correct one. Need to update some model identifier paths on `https://huggingface.co/models` like changing `bertabs-finetuned-xsum-extractive-abstractive-summarization` to `remi/bertabs-finetuned-xsum-extractive-abstractive-summarization`. Some of those errors are very weird, see #3358
- [ ] Fix 1 a) ii.: it should be quite easy to add the correct `model_type` to the config.json
- [ ] Fix 1 b) Not sure how to fix the missing tokenizer files most efficiently @julien-c
- [x] Fix 2) Create an automated script (a sketch follows this list) that:
  - 1. If `tokenizer.pad_token_id != default_config.pad_token_id`, sets `config.pad_token_id = tokenizer.pad_token_id`; otherwise removes `pad_token_id`.
- 2. Removes all `eos_token_ids` -> they don't exist anymore
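
A minimal sketch of the automated script described in step 2. The comparison logic is an assumption based on the description above; `MODEL_IDS` and the output path are placeholders:

```
from transformers import AutoConfig, AutoTokenizer

MODEL_IDS = ["clue/roberta_chinese_base"]  # placeholder list of community model ids

for model_id in MODEL_IDS:
    # Only applies to models whose config and tokenizer both load (category 2 / fixable configs).
    config = AutoConfig.from_pretrained(model_id)
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    default_config = type(config)()  # a fresh config of the same class, carrying the correct defaults

    if tokenizer.pad_token_id is not None and tokenizer.pad_token_id != default_config.pad_token_id:
        config.pad_token_id = tokenizer.pad_token_id
    else:
        config.pad_token_id = default_config.pad_token_id

    # eos_token_ids no longer exists; drop it if an old config still carries it.
    if hasattr(config, "eos_token_ids"):
        delattr(config, "eos_token_ids")

    config.save_pretrained(f"./fixed/{model_id}")  # placeholder output path
```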
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3359/reactions",
"total_count": 5,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3359/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3358 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3358/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3358/comments | https://api.github.com/repos/huggingface/transformers/issues/3358/events | https://github.com/huggingface/transformers/issues/3358 | 585,046,743 | MDU6SXNzdWU1ODUwNDY3NDM= | 3,358 | Downloading mlm-17-1280 community model | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"@julien-c ",
"This was a bogus model file, rm'ed it."
] | 1,584 | 1,585 | 1,585 | MEMBER | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): mlm-17-1280
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```
from transformers import AutoConfig
conf = AutoConfig.from_pretrained('mlm-17-1280')
```
## Expected behavior
The config should be loaded correctly.
All files exist and seem to be correct. There seems to be a problem with the ETag.
When debugging, the call jumps into this statement:
https://github.com/huggingface/transformers/blob/8becb732931bbab5dd75cca5f5e7c75b2516d10b/src/transformers/file_utils.py#L449
and never manages to store the `config.json` file. Not sure what's going on here.
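A quick way to inspect the ETag that the caching logic depends on; this is a hypothetical diagnostic, and the URL pattern is an assumption based on the `bert/<model>-config.json` paths seen in other download logs:

```
import requests

# Assumed S3 path; adjust if the actual config URL differs.
url = "https://s3.amazonaws.com/models.huggingface.co/bert/mlm-17-1280-config.json"
response = requests.head(url, allow_redirects=True)
print(response.status_code, response.headers.get("ETag"))
```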
## Environment info
- `transformers` version: 2.5.1
- Platform: Linux-5.3.0-40-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0+cpu (False)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3358/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3357 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3357/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3357/comments | https://api.github.com/repos/huggingface/transformers/issues/3357/events | https://github.com/huggingface/transformers/issues/3357 | 584,975,771 | MDU6SXNzdWU1ODQ5NzU3NzE= | 3,357 | License information by model | {
"login": "alexcombessie",
"id": 4739848,
"node_id": "MDQ6VXNlcjQ3Mzk4NDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4739848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexcombessie",
"html_url": "https://github.com/alexcombessie",
"followers_url": "https://api.github.com/users/alexcombessie/followers",
"following_url": "https://api.github.com/users/alexcombessie/following{/other_user}",
"gists_url": "https://api.github.com/users/alexcombessie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexcombessie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexcombessie/subscriptions",
"organizations_url": "https://api.github.com/users/alexcombessie/orgs",
"repos_url": "https://api.github.com/users/alexcombessie/repos",
"events_url": "https://api.github.com/users/alexcombessie/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexcombessie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Good question. As far as I can tell (I don't think there's a definitive a.k.a legally tested answer to that question) a model is not considered as a derivative work from the dataset(s) it was trained on, so the person who trained the model can choose whatever licensing option they want.\r\n\r\nFor instance, the original [BERT weights](https://github.com/google-research/bert) mention: \r\n> We will not be able to release the pre-processed datasets used in the paper.\r\n> [...]\r\n> These models are all released under the same license as the source code (Apache 2.0).\r\n\r\nPlease share your findings if you conduct more extensive research.",
"Thanks. That's what I assumed as well. Based on the linked you shared, it seems that all BERT models are under the same license. If that's the case for other model architectures, the investigation should be simple. I will look at the ~15 architectures supported and share my findings this week.\r\n\r\nI have a separate question on language support by model, but I will submit it as a separate issue.\r\n\r\nHave a great day,\r\n\r\nAlex",
"Hi @julien-c,\r\n\r\nI really liked your suggestion on #3397 to add it to model cards.\r\n\r\nCould I add license information in the same way, using tags on the model card?\r\n\r\nCheers,\r\n\r\nAlex",
"Yes @alexcombessie feel free to do some research and open a PR. You can add a `license: x` tag to the metadata, where `x` is an identifier found in https://help.github.com/en/github/creating-cloning-and-archiving-repositories/licensing-a-repository\r\n\r\nA few additional data points:\r\n- Camembert: MIT (source: https://camembert-model.fr/), trained on Oscar (https://traces1.inria.fr/oscar/) whose license is `cc0`\r\n- same for all models from Fairseq: https://github.com/pytorch/fairseq#license",
"FYI @MobiusLooper",
"The Distil* models trained at Hugging Face are released under Apache 2.0."
] | 1,584 | 1,588 | 1,588 | CONTRIBUTOR | null | Hi,
First of all, thanks for the good work. Very useful.
Would it be possible to add the license information for each model listed on https://huggingface.co/transformers/pretrained_models.html?
The reason is that for production, I need to know which models can be bundled in my app. Some licenses do not allow bundling...
I may have missed it, but I could not find licensing information in the doc or code.
If that information is not centralized, I am happy to do the research myself (and share results!). I would be interested in hints if you have some.
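For reference, the `license:` tag proposed in the comments above lives in the model card's YAML front matter. A sketch with placeholder values (the identifier should match the ones GitHub recognizes, as suggested in the discussion):

```
---
language: en
license: apache-2.0
---
```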
Cheers,
Alex | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3357/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3356 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3356/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3356/comments | https://api.github.com/repos/huggingface/transformers/issues/3356/events | https://github.com/huggingface/transformers/pull/3356 | 584,858,142 | MDExOlB1bGxSZXF1ZXN0MzkxMzYzODkw | 3,356 | Update run_language_modeling.py to handle writes on networked filesystem better | {
"login": "Genius1237",
"id": 15867363,
"node_id": "MDQ6VXNlcjE1ODY3MzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/15867363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Genius1237",
"html_url": "https://github.com/Genius1237",
"followers_url": "https://api.github.com/users/Genius1237/followers",
"following_url": "https://api.github.com/users/Genius1237/following{/other_user}",
"gists_url": "https://api.github.com/users/Genius1237/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Genius1237/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Genius1237/subscriptions",
"organizations_url": "https://api.github.com/users/Genius1237/orgs",
"repos_url": "https://api.github.com/users/Genius1237/repos",
"events_url": "https://api.github.com/users/Genius1237/events{/privacy}",
"received_events_url": "https://api.github.com/users/Genius1237/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"For some reference, check out https://github.com/pytorch/pytorch/issues/12042 and https://github.com/facebookresearch/maskrcnn-benchmark/pull/40. These address the same issue.\r\n\r\nAlso, one of the checks that failed, `check_code_quality`, would fail for the existing version of the script as well. There's a check for a line length of 119, and there are already many lines exceeding that.",
"I did think about the other scripts. Are those already setup with `DistributedDataParallel` cause one would theorize that those tasks aren't that heavy and wouldn't benefit much from running across multiple GPUs. \r\n\r\nAlso, I have one or 2 more fixes along the lines of this one for distributed training. I was wondering if I should rename this PR and add those in, or create a new one for each of those fixes. One of them is about loading checkpoints (of the optimizer and scheduler) while resuming training.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Closing this as `run_language_modeling.py` is now based on the trainer. Thank you for your contribution!!"
] | 1,584 | 1,594 | 1,594 | CONTRIBUTOR | null | In the case of multi-node distributed training, reads and writes typically happen to a common networked filesystem.
In the current version of the `run_language_modeling.py` script, processes that have `local_rank` 0 perform the writes to disk (tensorboard, dataset cache and model checkpointing). In the case of multi-node distributed training, there ends up being one process per node with `local_rank` 0, so multiple processes try writing to the filesystem at the same time, resulting in errors depending on the filesystem.
This pull request updates the script such that only the process with a `global_rank` of 0 does the writing. `global_rank` isn't a variable directly accessible in the script; it is obtained by calling `torch.distributed.get_rank()`.
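A minimal sketch of the gating described above; the surrounding setup and the `args`/`model` objects are placeholders, not the script's exact code:

```
import torch.distributed as dist

def is_global_master(local_rank: int) -> bool:
    """True only for the rank-0 process across all nodes; falls back to local_rank otherwise."""
    if not dist.is_available() or not dist.is_initialized():
        return local_rank in (-1, 0)  # non-distributed / single-process case
    return dist.get_rank() == 0

# Usage inside the script (args and model shown as placeholders):
#     if is_global_master(args.local_rank):
#         model.save_pretrained(args.output_dir)  # checkpoints, caches, tensorboard, etc.
```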
I've tested the script in four different cases, all of which run without error: multi-node training with DDP, single-node training with DDP, single-node training with DP, and single-GPU training. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3356/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3356",
"html_url": "https://github.com/huggingface/transformers/pull/3356",
"diff_url": "https://github.com/huggingface/transformers/pull/3356.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3356.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3355 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3355/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3355/comments | https://api.github.com/repos/huggingface/transformers/issues/3355/events | https://github.com/huggingface/transformers/issues/3355 | 584,712,038 | MDU6SXNzdWU1ODQ3MTIwMzg= | 3,355 | Bug? NaN loss after training for a while using for BERT Encoded sentences. | {
"login": "codeninja",
"id": 14914,
"node_id": "MDQ6VXNlcjE0OTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/14914?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codeninja",
"html_url": "https://github.com/codeninja",
"followers_url": "https://api.github.com/users/codeninja/followers",
"following_url": "https://api.github.com/users/codeninja/following{/other_user}",
"gists_url": "https://api.github.com/users/codeninja/gists{/gist_id}",
"starred_url": "https://api.github.com/users/codeninja/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codeninja/subscriptions",
"organizations_url": "https://api.github.com/users/codeninja/orgs",
"repos_url": "https://api.github.com/users/codeninja/repos",
"events_url": "https://api.github.com/users/codeninja/events{/privacy}",
"received_events_url": "https://api.github.com/users/codeninja/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Do you solve the problem?"
] | 1,584 | 1,608 | 1,590 | NONE | null | # ❓ Questions & Help
## Details
I have a model which is taken from the HF Examples and slightly modified.
```
import tensorflow as tf
from transformers import BertConfig, TFBertForSequenceClassification

def build_bert(batch_size=1, use_logits=True):
    # train_label / test_label come from the surrounding notebook
    num_labels = max(max(train_label), max(test_label))
    print(f"Number of labels: {num_labels}")
    optimizer = tf.keras.optimizers.Adam(learning_rate=1e-5, epsilon=1e-06, clipnorm=1.0)
    # Regression-style loss for a single label, classification loss otherwise
    if num_labels == 1:
        loss = tf.keras.losses.MeanSquaredError()
    else:
        loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=use_logits)
    print(f"loss used: {loss}")
    macc = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
    vacc = tf.keras.metrics.SparseCategoricalAccuracy('val_accuracy')
    config = BertConfig.from_pretrained("bert-base-cased", num_labels=num_labels, batch_size=batch_size)
    bert_model = TFBertForSequenceClassification.from_pretrained("bert-base-cased", config=config)
    bert_model.compile(optimizer=optimizer, loss=loss, metrics=[macc, vacc])
    return bert_model
```
I am training on sentences encoded into 48-token arrays with the HF BERT tokenizer.
```
bert_model = build_bert(1000, False)
bert_model.fit([encodings[30000:40000], train_attn_mask[30000:40000]], classes[30000:40000],
               epochs=1, validation_split=.1, shuffle=False)
```

My model will train for a while (Please pardon the output... I have no idea why jupyter lab does this.)

Then at some point (different every run) the loss drops to NaN.

**A link to original question on Stack Overflow**:
The suggested solutions for this problem on SO are varied. I tried changing the optimizer learning rate as well as altering the epsilon. I have validated that my data does not contain NaN values, negative classifications, or invalid encodings, and I have removed all non-Unicode characters.
My concern is that this has uncovered a bug within the framework.
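One way to narrow this down (a debugging sketch, not a known fix) is to stop training at the first NaN and numerically check the inputs beforehand; this reuses the variables from the snippet above:

```
import tensorflow as tf

# Stop training as soon as the loss turns NaN, so the offending step is known.
nan_guard = tf.keras.callbacks.TerminateOnNaN()

# Sanity-check the encoded inputs before fitting (raises on NaN/Inf).
tf.debugging.check_numerics(tf.cast(encodings[30000:40000], tf.float32), "encodings contain NaN/Inf")

bert_model.fit([encodings[30000:40000], train_attn_mask[30000:40000]], classes[30000:40000],
               epochs=1, validation_split=.1, shuffle=False, callbacks=[nan_guard])
```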
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3355/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3354 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3354/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3354/comments | https://api.github.com/repos/huggingface/transformers/issues/3354/events | https://github.com/huggingface/transformers/pull/3354 | 584,567,674 | MDExOlB1bGxSZXF1ZXN0MzkxMTMwMjg1 | 3,354 | Export ALBERT main layer in TensorFlow | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3354?src=pr&el=h1) Report\n> Merging [#3354](https://codecov.io/gh/huggingface/transformers/pull/3354?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3bedfd334763cb5676c2fe92705390ac57d8de5f&el=desc) will **increase** coverage by `0.07%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3354?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3354 +/- ##\n==========================================\n+ Coverage 77.61% 77.68% +0.07% \n==========================================\n Files 100 100 \n Lines 16938 16938 \n==========================================\n+ Hits 13146 13159 +13 \n+ Misses 3792 3779 -13 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3354?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/3354/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.92% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3354/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.55% <0.00%> (+2.32%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3354?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3354?src=pr&el=footer). Last update [3bedfd3...77a2a4c](https://codecov.io/gh/huggingface/transformers/pull/3354?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,584 | 1,584 | 1,584 | MEMBER | null | closes #3262 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3354/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3354",
"html_url": "https://github.com/huggingface/transformers/pull/3354",
"diff_url": "https://github.com/huggingface/transformers/pull/3354.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3354.patch",
"merged_at": 1584640386000
} |
https://api.github.com/repos/huggingface/transformers/issues/3353 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3353/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3353/comments | https://api.github.com/repos/huggingface/transformers/issues/3353/events | https://github.com/huggingface/transformers/pull/3353 | 584,413,789 | MDExOlB1bGxSZXF1ZXN0MzkxMDA0NjIz | 3,353 | Handle pinned version of isort | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "aaugustin",
"id": 788910,
"node_id": "MDQ6VXNlcjc4ODkxMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/788910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaugustin",
"html_url": "https://github.com/aaugustin",
"followers_url": "https://api.github.com/users/aaugustin/followers",
"following_url": "https://api.github.com/users/aaugustin/following{/other_user}",
"gists_url": "https://api.github.com/users/aaugustin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaugustin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaugustin/subscriptions",
"organizations_url": "https://api.github.com/users/aaugustin/orgs",
"repos_url": "https://api.github.com/users/aaugustin/repos",
"events_url": "https://api.github.com/users/aaugustin/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaugustin/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "aaugustin",
"id": 788910,
"node_id": "MDQ6VXNlcjc4ODkxMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/788910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaugustin",
"html_url": "https://github.com/aaugustin",
"followers_url": "https://api.github.com/users/aaugustin/followers",
"following_url": "https://api.github.com/users/aaugustin/following{/other_user}",
"gists_url": "https://api.github.com/users/aaugustin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaugustin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaugustin/subscriptions",
"organizations_url": "https://api.github.com/users/aaugustin/orgs",
"repos_url": "https://api.github.com/users/aaugustin/repos",
"events_url": "https://api.github.com/users/aaugustin/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaugustin/received_events",
"type": "User",
"site_admin": false
}
] | [
"I didn't know that worked, thanks @BramVanroy. @aaugustin what do you think?",
"I'm not familiar with the syntax, but if it works, go for it. I really hope we have a release of sort and we can remove this soon.",
"Works on my machine so I'll merge :)\r\n\r\nThanks @BramVanroy, this will simplify @LysandreJik and @thomwolf's lives a lot!"
] | 1,584 | 1,584 | 1,584 | COLLABORATOR | null | The CONTRIBUTING file pins to a specific version of isort, so we might as well install that in `dev`. This makes it easier for contributors, as they don't have to manually install the specific commit. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3353/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3353",
"html_url": "https://github.com/huggingface/transformers/pull/3353",
"diff_url": "https://github.com/huggingface/transformers/pull/3353.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3353.patch",
"merged_at": 1584741605000
} |
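For context on the PR above, pinning a dependency to a specific commit inside an extras group looks roughly like this in setup.py; the repository URL and `<commit-sha>` are placeholders, not the project's actual pin:

```
# setup.py (excerpt): a PEP 508 direct reference pinning isort to one commit
extras_require = {
    "dev": [
        "isort @ git+https://github.com/timothycrosley/isort.git@<commit-sha>#egg=isort",
    ],
}
```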
https://api.github.com/repos/huggingface/transformers/issues/3352 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3352/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3352/comments | https://api.github.com/repos/huggingface/transformers/issues/3352/events | https://github.com/huggingface/transformers/pull/3352 | 584,380,563 | MDExOlB1bGxSZXF1ZXN0MzkwOTc3NzI3 | 3,352 | Add model cards for huseinzol05/bert-base-bahasa-cased | {
"login": "huseinzol05",
"id": 19810909,
"node_id": "MDQ6VXNlcjE5ODEwOTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/19810909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/huseinzol05",
"html_url": "https://github.com/huseinzol05",
"followers_url": "https://api.github.com/users/huseinzol05/followers",
"following_url": "https://api.github.com/users/huseinzol05/following{/other_user}",
"gists_url": "https://api.github.com/users/huseinzol05/gists{/gist_id}",
"starred_url": "https://api.github.com/users/huseinzol05/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/huseinzol05/subscriptions",
"organizations_url": "https://api.github.com/users/huseinzol05/orgs",
"repos_url": "https://api.github.com/users/huseinzol05/repos",
"events_url": "https://api.github.com/users/huseinzol05/events{/privacy}",
"received_events_url": "https://api.github.com/users/huseinzol05/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Looks good! [**Model page**](https://huggingface.co/huseinzol05/bert-base-bahasa-cased)\r\n\r\nI've also added a filter for the Malay language here:\r\n<img width=\"792\" alt=\"Screenshot 2020-03-19 at 15 28 42\" src=\"https://user-images.githubusercontent.com/326577/77106980-58beec00-69f6-11ea-8145-4d273d605693.png\">\r\n"
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3352/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3352/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3352",
"html_url": "https://github.com/huggingface/transformers/pull/3352",
"diff_url": "https://github.com/huggingface/transformers/pull/3352.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3352.patch",
"merged_at": 1584644900000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3351 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3351/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3351/comments | https://api.github.com/repos/huggingface/transformers/issues/3351/events | https://github.com/huggingface/transformers/pull/3351 | 584,349,692 | MDExOlB1bGxSZXF1ZXN0MzkwOTUyMjE0 | 3,351 | Reformer | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Memory complexity ReformerLayer\r\n vs BertLayer: \r\n\r\n\r\n\r\n",
"Time complexity ReformerLayer vs. BertLayer:\r\n\r\n\r\n",
"## Experiment\r\n\r\nI tested training the Reformer model on 0.5M tokens per sample on the novel \"Crime and Punishment\" using conventional LM training. I essentially translated the official trax notebook: https://colab.research.google.com/github/google/trax/blob/master/trax/models/reformer/text_generation.ipynb into hugging face code: https://colab.research.google.com/drive/1jR6hA2CQXDbucJXdiDXhmxmyoQmM2Pws\r\n\r\nThe only differences to the official notebook are:\r\n\r\n- The gradient is accumulated over 8 samples and then updated whereas in the official notebook 8 TPUs are used and the gradient is calculated in parallel and then averaged together.\r\n\r\n- The learning rate is 0.005 instead of 0.01 (because already at 0.005, the gradient seems to become too big).\r\n\r\n## Results\r\n\r\nMy training starts similarly around **6.2** and goes down smoothly in the beginning.\r\nAt some point though the gradient seem to explode and the loss goes up again and that even at a learning rate of \"only\" 0.05.\r\n\r\nThe attached plots are here:\r\n\r\n### Loss\r\n\r\n\r\n### Accuracy\r\n\r\n\r\n### Learning rate (cosine scheduler)\r\n\r\n\r\nWhen lowering the learning rate more, to **0.0005** e.g. the loss keeps going down but only reaches something around 2.3 in the end. \r\n\r\n**Comparison**\r\n\r\nThe training in the official trax notebook is very smooth. \r\nLoss starts at **6.2** something and goes down smoothly to **0.8** while the accuracy reaches **>80%** in the end for a learning rate of **0.01**.\r\n\r\n## Analysis\r\n\r\n- It is confirmed that the forward pass is identical with the trax implementation thanks to integration tests. Things that are not fully tested for the backward pass are:\r\n\r\n - **Dropout**: the dropout used in the official trax library does not seem to correspond to the \"usual\" `nn.Dropout` used in PyTorch but sometimes drop specific dimensions only or whole matrices. It is tested though that the dropout used here is deterministic for both the \"normal\" forward pass and the forward pass used in the backward pass to recalculate the activations, by means of setting the random seed used for the first forward pass. Nevertheless, there could still be small bugs.\r\n\r\n - **Reversible Layers**: Because Reformer uses reversible layers, I had to fiddle with a customized backward function here. This is IMO quite prone to errors. I checked multiple times that from a logical point of view everything is correct and compared my code with: https://github.com/RobinBruegger/RevTorch and https://github.com/lucidrains/reformer-pytorch which do similar / the same architecture. IMO, it quite hard to test this for correctness. One could also write the whole code without having reversible layers and then see whether the gradient is the same (Seems actually not like a bad idea to me). \r\n\r\n - **Attention mask**: The official trax code does not seem to use a user-specific attention mask for the LSH Attn Layer, but only for the Local Attn Layer. I tested that the attn mask is correct for the local attention task by integration tests and checked that the attn mask for the LSH layer works correctly (input with mask gives the same result as input without mask), but maybe the LSH Attn mask has to be removed. But don't really see a reason why ?! \r\n\r\n - **Initialization**: The initialization scheme used in the trax library is different from what is normally done in `transformers`, so there are small changes in my code. 
But I doubt that this is an issue, especially since the training looks very similar in the beginning.\r\n\r\n - **Training parameters**: It might also be simply due to different training / optimization parameters. Maybe there are some under-the-hood training parameters that I didn't notice (special gradient clipping, ...)",
"> https://colab.research.google.com/drive/1jR6hA2CQXDbucJXdiDXhmxmyoQmM2Pws\r\n\r\nTried to train model over longer time, but getting [error](http://prntscr.com/s79e04)\r\n\r\n> Forward got unexcepted keyword \"lm_labels\" after calling trainer.train()\r\n\r\nP: Fixed the typo. I will change the model into half-precision soon so that the memory will be sufficient :-) ",
"I get some good results with the following parameters: https://gist.github.com/flozi00/b491b41a9865733e5f8bb4032c313540\r\n\r\nthe best eval loss is about 1.654, but is increasing now again the same as yours\r\nwill have a look in a few hours again\r\n\r\n\r\n\r\n",
"> I get some good results with the following parameters: https://gist.github.com/flozi00/b491b41a9865733e5f8bb4032c313540\r\n> \r\n> the best eval loss is about 1.654, but is increasing now again the same as yours\r\n> will have a look in a few hours again\r\n> \r\n> \r\n\r\nAwesome that's already much better than what I got! If you manage to get it under 1 (loss) / >75% (accuracy) that would be great. Also feel free to change the hyper-parameters as you wish! Especially the adam betas and co.\r\n\r\nI also added support for fp16 - so the notebook now only needs 8GB of RAM.\r\n\r\n(You might have to reset the environment and re-install the github branch though)",
"Sounds very great.\nTrying to decrease sequence length, cause while increasing number of hashes or heads getting memory error.\nTraining on 24GB GPU\n\nRead that 4 hashes are good and 8 brings the best quality.\n\nTrained on some configurations now and everytime the loss goes to ~1 but then increases to 4 very fast and keeps on there for minimum 1000 steps.\nAny idea about it ?",
"> Sounds very great.\r\n> Trying to decrease sequence length, cause while increasing number of hashes or heads getting memory error.\r\n> Training on 24GB GPU\r\n> \r\n> Read that 4 hashes are good and 8 brings the best quality.\r\n> \r\n> Trained on some configurations now and everytime the loss goes to ~1 but then increases to 4 very fast and keeps on there for minimum 1000 steps.\r\n> Any idea about it ?\r\n\r\nMy guess is that since it's such a small dataset (0.5M tokens is tiny) the model needs very well-calibrated hyperparameter tuning. When the learning rate is low enough, this actually does not happen anymore but also the loss only gets to about ~2. But I ran very few experiments and didn't do any hyperparameter search.\r\nAlso, I use slightly different dropouts, then were used in the official code so maybe using weight decay instead of dropout could work better. \r\n\r\nWill check that the gradients are correct in the next days and then hopefully be ready soon. ",
"@patrickvonplaten I'm excited to see a lot of progress here!\r\n\r\nThe loss curves above could be due to poor hyperparameter choice, but they're also very similar to what you see when the reverse pass of the network doesn't match the forward pass. For example, failing to cache hash bucket assignments (for exact re-use in the backward pass) leads to a failure mode with loss rebounds very similar to the figures you posted above. I also once had a bug where the wrong random seed was used for dropout in the backward pass, which IIRC manifested itself in the same way.",
"> @patrickvonplaten I'm excited to see a lot of progress here!\r\n> \r\n> The loss curves above could be due to poor hyperparameter choice, but they're also very similar to what you see when the reverse pass of the network doesn't match the forward pass. For example, failing to cache hash bucket assignments (for exact re-use in the backward pass) leads to a failure mode with loss rebounds very similar to the figures you posted above. I also once had a bug where the wrong random seed was used for dropout in the backward pass, which IIRC manifested itself in the same way.\r\n\r\nThanks for taking a look @nkitaev. I just found a bug in the `backward()`. I now have 1-to-1 the same gradients as your trax code. Will retrain tonight and should get better results :-) ",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3351?src=pr&el=h1) Report\n> Merging [#3351](https://codecov.io/gh/huggingface/transformers/pull/3351?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7b5bec373ca2b442a7ac8ac46f8eac6e8003e2ae&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3351?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3351 +/- ##\n=======================================\n Coverage 79.13% 79.13% \n=======================================\n Files 117 117 \n Lines 19517 19517 \n=======================================\n Hits 15444 15444 \n Misses 4073 4073 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3351?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3351?src=pr&el=footer). Last update [7b5bec3...7b5bec3](https://codecov.io/gh/huggingface/transformers/pull/3351?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Training looks good now on Crime and Punishment. To verify that training works, the model was trained on over 1200 steps and with little regularization.\r\n\r\n### Eval loss \r\n\r\n\r\n\r\n### Eval accuracy\r\n\r\n\r\n",
"https://gist.github.com/flozi00/b491b41a9865733e5f8bb4032c313540\r\n\r\nThis gist contains two notebooks, one of them with trainings batch = 2 --> error\r\nin the other I tried to train model with pre-configured parameters, sequence length 4096 --> error\r\n\r\nIs it mistake by me ?",
"> https://gist.github.com/flozi00/b491b41a9865733e5f8bb4032c313540\r\n> \r\n> This gist contains two notebooks, one of them with trainings batch = 2 --> error\r\n> in the other I tried to train model with pre-configured parameters, sequence length 4096 --> error\r\n> \r\n> Is it mistake by me ?\r\n\r\nhow did you get past \r\n```\r\n# get a pretrained tokenizer\r\ntokenizer = ReformerTokenizer.from_pretrained(\"patrickvonplaten/reformer-crime-and-punish\")\r\n```",
"@lapolonio\r\nI just used the notebook posted by patrickvonplaten four days ago.\r\n\r\nhttps://colab.research.google.com/drive/1jR6hA2CQXDbucJXdiDXhmxmyoQmM2Pws",
"The notebook is still under construction, so I would not waste too much time playing around with it at the moment @lapolonio @flozi00. \r\n\r\nThanks a lot for your good comments and remarks @lapolonio and @flozi00 :-) ",
"This is looking awesome, thanks! is there plans to add an encoder-decoder version? ",
"> This is looking awesome, thanks! is there plans to add an encoder-decoder version?\r\n\r\nYes, this should soon be possible with the encoder-decoder framework",
"@patrickvonplaten awesome! is there an issue or a PR I can follow for that? ",
"> @patrickvonplaten awesome! is there an issue or a PR I can follow for that?\r\n\r\nNot yet, this will probably still need 1,2 weeks :-) ",
"Is it possible or are there any plans to implement reformer for question answering too ?\r\nseq2seq and QA could be very great tasks for it",
"> Is it possible or are there any plans to implement reformer for question answering too ?\r\n> seq2seq and QA could be very great tasks for it\r\n\r\nYeah, I will add a cross attention layer in another PR and then the Reformer can be used as a seq-2-seq model with our Encoder-Decoder framework: https://huggingface.co/transformers/model_doc/encoderdecoder.html",
"I'm not familiar with the Encoder-Decoder framework after the cross attention layer is added can the decoder be BertForSequenceClassification? Where do I ask questions like this?\r\n",
"@patrickvonplaten Based on your merge, it seems like the input size for each batch is fixed in order to match the product of axial position embedding size? I am correct?",
"> @patrickvonplaten Based on your merge, it seems like the input size for each batch is fixed in order to match the product of axial position embedding size? I am correct?\r\n\r\nFor training, yes that's correct. For inference the input_size can also be smaller. Also check out: https://huggingface.co/transformers/model_doc/reformer.html",
"@patrickvonplaten , I wanted to train a language model for reformers on a custom dataset.\r\nWhat are the steps, and any sample notebooks available for the same",
"Hi @prajwal-PHAI, there are a lot of [community notebooks covering T5 finetuning](https://github.com/huggingface/transformers/tree/master/notebooks#community-notebooks).",
"Thanks @LysandreJik \r\nI was running into error loading other datasets, which were not there in the nlp library.",
"hey. thanks for your amazing work!\r\nI'm running into error while trying the colab example:\r\nhttps://colab.research.google.com/drive/1jR6hA2CQXDbucJXdiDXhmxmyoQmM2Pws#scrollTo=WskGtnXsnWdu\r\n\r\nthe problem is that it doesn't recognize the apex package:\r\n\r\nImportError Traceback (most recent call last)\r\n<ipython-input-29-30584d4c4987> in <module>()\r\n 11 \r\n 12 # train\r\n---> 13 trainer.train()\r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/trainer.py in train(self, model_path)\r\n 384 if self.args.fp16:\r\n 385 if not is_apex_available():\r\n--> 386 raise ImportError(\"Please install apex from https://www.github.com/nvidia/apex to use fp16 training.\")\r\n 387 model, optimizer = amp.initialize(model, optimizer, opt_level=self.args.fp16_opt_level)\r\n 388 \r\n\r\nImportError: Please install apex from https://www.github.com/nvidia/apex to use fp16 training.\r\n\r\nthough I installed it...anyone know what to do?\r\n",
"Linking a related git issue #16972. cc @patrickvonplaten "
] | 1,584 | 1,651 | 1,588 | MEMBER | null | ## Add the Reformer
Paper: https://arxiv.org/pdf/2001.04451.pdf
### First steps to take:
- [x] Copy Bert PT code to Reformer PT file.
- [x] Replace self-attention with LSH attention
- [x] Make forward pass work for Bert Layer
### Forward-Pass: Get 1-to-1 same outputs as original Flax code for forward pass
- [x] for LSH attention layer
- [x] for Bert Layer with RevNet
- [x] for different attention masks
- [x] for feed forward chunking layer (see the sketch after this section's checklist)
- [x] for whole Reformer model
- [x] for sinusoidal position encodings
- [x] for axial position encodings
- [x] for local blocking attention (chunked attention)
- [x] for pretrained weights from the official reformer model: a ReformerLM model was trained using https://colab.research.google.com/github/google/trax/blob/master/trax/models/reformer/text_generation.ipynb, the weights were loaded into https://huggingface.co/patrickvonplaten/reformer-crime-and-punish, and it was checked that a single forward pass is identical. `predict_mem_len` had to be adapted to make the functions equal.
- [x] Add optional attention mask
- [x] Add support for fp16
- [ ] Speed up incremental generation. This is needed for generation and will not be trivial, since the buckets have to be ordered correctly and there is a chunk length parameter.
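As a side note on the feed-forward chunking item ticked off above, a toy illustration (toy dimensions and a plain `nn.Sequential` FFN, not the actual Reformer module): because the feed-forward layer is applied position-wise, slicing the sequence gives the same result up to float tolerance while the large intermediate activation exists for only one chunk at a time.

```python
# Toy illustration of feed-forward chunking (not the actual Reformer module):
# the FFN is position-wise, so slicing the sequence changes peak memory only.
import torch
import torch.nn as nn

hidden, seq_len, chunk = 256, 4096, 512
ffn = nn.Sequential(nn.Linear(hidden, 4 * hidden), nn.ReLU(),
                    nn.Linear(4 * hidden, hidden))

x = torch.randn(1, seq_len, hidden)

full = ffn(x)  # one pass: intermediate activation is [1, 4096, 1024]
chunked = torch.cat([ffn(x[:, i:i + chunk]) for i in range(0, seq_len, chunk)],
                    dim=1)  # intermediate activation is only [1, 512, 1024]

print(torch.allclose(full, chunked, atol=1e-6))  # True
```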
### Backpropagation:
- [x] Make backpropagation work
- [x] Check that backpropagation works with chunked feed forward layers
- [x] Implement RevResLayers for backprop
- [x] Implement code using RevTorch and reformer-pytorch as references
- [x] Get identical results for forward pass
- [x] Make sure backprop works
- [x] Implement bucket caching
- [x] Implement random seed caching to have deterministic dropout for backward pass: https://github.com/RobinBruegger/RevTorch/pull/4
- [ ] Make rev resnet work for multi-gpu training
- [x] Check that RevReslayers backprop works on CPU
- [x] Check that RevReslayers backprop works on GPU
- [x] Get same gradients as original trax code
- [x] Train model on crime-and-punishment text and check that the model performs reasonably afterwards
### Tokenizer
- [x] Copy sentence piece tokenizer from T5
- [x] Add a vanilla sentencepiece tokenizer for the crime-and-punishment pretrained model (see the loading sketch after this list): https://console.cloud.google.com/storage/browser/_details/trax-ml/reformer/cp.320.model
- [ ] Check how many tokenizers are needed
- [ ] Get pretrained tokenizers
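A hedged sketch of loading the crime-and-punishment sentencepiece model from the checklist item above with the plain `sentencepiece` library; the local filename is an assumption, and no `transformers` tokenizer API is relied on since the pretrained tokenizer identifiers were still in flux at this point.

```python
# Load the cp.320.model sentencepiece file linked above with the plain
# sentencepiece library; the local path is an assumption.
import sentencepiece as spm

sp = spm.SentencePieceProcessor()
sp.load("cp.320.model")  # downloaded from the GCS bucket linked above

ids = sp.encode_as_ids("It was a bright cold day in April.")
print(ids)
print(sp.decode_ids(ids))
```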
### Optimize time and memory efficiency
- [x] Compare memory & time complexity to standard Bert: check https://github.com/huggingface/transformers/pull/3186
- [x] Check and improve memory and speed when training
- [ ] Move the masks created "on the fly" in LSHSelfAttention to being passed in as an input
- [ ] Optimize away all unnecessary calculations
### Pretrained Models
- [ ] Check if pretrained model on C4 is added soon: https://github.com/google/trax/commit/b1f0c176a281d35e285137a45ff117b8c5495173
- [ ] Add Reformer / Bert in trax
Useful code resources:
- Original trax code: https://github.com/google/trax/tree/master/trax/models/reformer
- Working trax notebook: https://github.com/google/trax/blob/master/trax/models/reformer/machine_translation.ipynb
- Working PyTorch implementation: https://github.com/lucidrains/reformer-pytorch
- Great to implement for backprop. https://github.com/lucidrains/reformer-pytorch/blob/master/reformer_pytorch/reversible.py
- Pretrained weights: https://console.cloud.google.com/storage/browser/trax-ml/reformer
Useful blog/paper resources:
- Original paper: https://arxiv.org/pdf/2001.04451.pdf
- Google AI blog: https://ai.googleblog.com/2020/01/reformer-efficient-transformer.html
- Good blog post 1: https://www.pragmatic.ml/reformer-deep-dive/
- Good blog post 2: https://towardsdatascience.com/illustrating-the-reformer-393575ac6ba0
Previous Discussions:
- #2341
## Update
The code is clean and ready for review now.
Small ToDos before merging:
- [x] Fill in TODOs in docs
- [ ] Check whether more pre-trained weights can be used
- [ ] Train on fp16 once
- [ ] Update notebook showing how to use Reformer
### Review
I added quite a few docstrings to explain the new methods introduced by the Reformer (Axial Position Encoding, LSH Attention, Local Attention, Feed Forward chunking), so it might be best to first go through the docstrings. The docstrings are easier to read when switching to this branch and building the docs locally. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3351/reactions",
"total_count": 27,
"+1": 16,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 11,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3351/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3351",
"html_url": "https://github.com/huggingface/transformers/pull/3351",
"diff_url": "https://github.com/huggingface/transformers/pull/3351.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3351.patch",
"merged_at": 1588839422000
} |
https://api.github.com/repos/huggingface/transformers/issues/3350 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3350/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3350/comments | https://api.github.com/repos/huggingface/transformers/issues/3350/events | https://github.com/huggingface/transformers/issues/3350 | 584,336,318 | MDU6SXNzdWU1ODQzMzYzMTg= | 3,350 | Reproducing SQuAD v1.1 with xlnet-base cased? | {
"login": "JJumSSu",
"id": 39372342,
"node_id": "MDQ6VXNlcjM5MzcyMzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/39372342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JJumSSu",
"html_url": "https://github.com/JJumSSu",
"followers_url": "https://api.github.com/users/JJumSSu/followers",
"following_url": "https://api.github.com/users/JJumSSu/following{/other_user}",
"gists_url": "https://api.github.com/users/JJumSSu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JJumSSu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JJumSSu/subscriptions",
"organizations_url": "https://api.github.com/users/JJumSSu/orgs",
"repos_url": "https://api.github.com/users/JJumSSu/repos",
"events_url": "https://api.github.com/users/JJumSSu/events{/privacy}",
"received_events_url": "https://api.github.com/users/JJumSSu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,590 | 1,590 | NONE | null | Hi, first of all, thanks for the great library you guys are providing. I'm currently using the latest version of huggingface/transformers, and I'm trying to get a score for SQuAD v1.1 with XLNet base-cased. However, it seems that the performance I get is only 0.10 for EM and 0.64 for F1.
When getting the score with BERT base-cased, the score comes out as expected (F1 of about 88.5).
Are there any bugs or something else I should be aware of? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3350/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3349 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3349/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3349/comments | https://api.github.com/repos/huggingface/transformers/issues/3349/events | https://github.com/huggingface/transformers/pull/3349 | 584,323,956 | MDExOlB1bGxSZXF1ZXN0MzkwOTMwNjQw | 3,349 | Create model card for bert-small-finetuned-squadv2 | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3349/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3349",
"html_url": "https://github.com/huggingface/transformers/pull/3349",
"diff_url": "https://github.com/huggingface/transformers/pull/3349.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3349.patch",
"merged_at": 1584644877000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3348 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3348/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3348/comments | https://api.github.com/repos/huggingface/transformers/issues/3348/events | https://github.com/huggingface/transformers/pull/3348 | 584,316,946 | MDExOlB1bGxSZXF1ZXN0MzkwOTI1MTk4 | 3,348 | Create card for BERT-Mini finetuned on SQuAD v2 | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3348/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3348",
"html_url": "https://github.com/huggingface/transformers/pull/3348",
"diff_url": "https://github.com/huggingface/transformers/pull/3348.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3348.patch",
"merged_at": 1584644860000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3347 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3347/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3347/comments | https://api.github.com/repos/huggingface/transformers/issues/3347/events | https://github.com/huggingface/transformers/pull/3347 | 584,305,382 | MDExOlB1bGxSZXF1ZXN0MzkwOTE1NTAx | 3,347 | Create card for BERT-Tiny fine-tuned on SQuAD v2 | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3347?src=pr&el=h1) Report\n> Merging [#3347](https://codecov.io/gh/huggingface/transformers/pull/3347?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cec3cdda1599541b033e07a9838386189a5d0010&el=desc) will **increase** coverage by `1.15%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3347?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3347 +/- ##\n==========================================\n+ Coverage 76.46% 77.61% +1.15% \n==========================================\n Files 100 100 \n Lines 16948 16948 \n==========================================\n+ Hits 12960 13155 +195 \n+ Misses 3988 3793 -195 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3347?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3347/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0.00%> (+1.34%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3347/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0.00%> (+2.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3347/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.24% <0.00%> (+2.63%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3347/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.65% <0.00%> (+5.18%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3347/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3347/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3347?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3347?src=pr&el=footer). Last update [cec3cdd...ed378f0](https://codecov.io/gh/huggingface/transformers/pull/3347?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | - Only 17 MB of model weights!
- The smallest model fine-tuned on SQuAD v2? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3347/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3347/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3347",
"html_url": "https://github.com/huggingface/transformers/pull/3347",
"diff_url": "https://github.com/huggingface/transformers/pull/3347.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3347.patch",
"merged_at": 1584644843000
} |
https://api.github.com/repos/huggingface/transformers/issues/3346 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3346/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3346/comments | https://api.github.com/repos/huggingface/transformers/issues/3346/events | https://github.com/huggingface/transformers/pull/3346 | 584,285,904 | MDExOlB1bGxSZXF1ZXN0MzkwODk5MzMz | 3,346 | Created card for spanbert-finetuned-squadv1 | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3346/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3346",
"html_url": "https://github.com/huggingface/transformers/pull/3346",
"diff_url": "https://github.com/huggingface/transformers/pull/3346.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3346.patch",
"merged_at": 1584644797000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3345 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3345/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3345/comments | https://api.github.com/repos/huggingface/transformers/issues/3345/events | https://github.com/huggingface/transformers/pull/3345 | 584,251,688 | MDExOlB1bGxSZXF1ZXN0MzkwODcxMTA1 | 3,345 | Fix input ids can be none attn mask | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for pointing out @lazarevskiVsg and @julien-c !",
"@julien-c , will merge right away - small change!",
"Why don’t you just use input_shape (which is always defined), to be consistent with other models?",
"> Why don’t you just use input_shape (which is always defined), to be consistent with other models?\r\n\r\nThe problem is that GPT2 and CTRL have a different behavior (and the `input_ids` shape changes) when the `past` variable is inserted which previously led to problem when the attention_mask is inserted as well: \r\n#3031 \r\n\r\nTherefore this slightly weird implementation.",
"But in your code in this PR, batch_size is always input_shape[0] anyways, no?",
"> But in your code in this PR, batch_size is always `input_shape[0]` anyways, no?\r\n\r\nI think in the case of CTRL and GPT2, it's actually a bigger inconsistency:\r\n\r\nLet's say we have an input_ids tensor of shape `[batch_size, sequence_length] = [5, 4]`.\r\n\r\nWe call `GPT2Model` and save the last `output embeddings = outputs[0][:, -1, :]` **and** the `past` key/value states to speed up decoding = `outputs[1]`\r\n\r\nNow if we want to use `past` GPT expects the `input_ids` to be of shape `[batch_size, 1] .squeezed(-1) = [batch_size]`. Therefore we have to adapt the attention mask here differently than in other models. Which is weird (and a bit suboptimal in my opinion in GPT's and CTLR's API) is that the shape of `input_ids` differs depending on whether `past` is None or not. \r\n\r\n@julien-c "
] | 1,584 | 1,584 | 1,584 | MEMBER | null | Make sure `batch_size` is correct for gpt2 and ctrl - these models need slightly different handling, since the shape of `input_ids` can change depending on whether the `past` variable is inserted or not.
See also:
PR: https://github.com/huggingface/transformers/pull/3033
and its issue:
https://github.com/huggingface/transformers/issues/3031 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3345/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3345",
"html_url": "https://github.com/huggingface/transformers/pull/3345",
"diff_url": "https://github.com/huggingface/transformers/pull/3345.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3345.patch",
"merged_at": 1584608117000
} |
https://api.github.com/repos/huggingface/transformers/issues/3344 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3344/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3344/comments | https://api.github.com/repos/huggingface/transformers/issues/3344/events | https://github.com/huggingface/transformers/pull/3344 | 584,235,209 | MDExOlB1bGxSZXF1ZXN0MzkwODU3NTcy | 3,344 | Fix wrong link for the notebook file | {
"login": "Kyeongpil",
"id": 6302455,
"node_id": "MDQ6VXNlcjYzMDI0NTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6302455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kyeongpil",
"html_url": "https://github.com/Kyeongpil",
"followers_url": "https://api.github.com/users/Kyeongpil/followers",
"following_url": "https://api.github.com/users/Kyeongpil/following{/other_user}",
"gists_url": "https://api.github.com/users/Kyeongpil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kyeongpil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kyeongpil/subscriptions",
"organizations_url": "https://api.github.com/users/Kyeongpil/orgs",
"repos_url": "https://api.github.com/users/Kyeongpil/repos",
"events_url": "https://api.github.com/users/Kyeongpil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kyeongpil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3344?src=pr&el=h1) Report\n> Merging [#3344](https://codecov.io/gh/huggingface/transformers/pull/3344?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f6d813aaaa96cc43fcf55f255b9439ebc22a31a0&el=desc) will **not change** coverage by `%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3344?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3344 +/- ##\n=======================================\n Coverage 77.63% 77.63% \n=======================================\n Files 100 100 \n Lines 16943 16943 \n=======================================\n Hits 13154 13154 \n Misses 3789 3789 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3344?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3344?src=pr&el=footer). Last update [f6d813a...f78b5f0](https://codecov.io/gh/huggingface/transformers/pull/3344?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks a lot for pointing this out @rudvlf0413 and @julien-c "
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | For the tutorial "How to generate text", the URL was wrong (it linked to the tutorial "How to train a language model").
I fixed the URL. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3344/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3344",
"html_url": "https://github.com/huggingface/transformers/pull/3344",
"diff_url": "https://github.com/huggingface/transformers/pull/3344.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3344.patch",
"merged_at": 1584634968000
} |
https://api.github.com/repos/huggingface/transformers/issues/3343 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3343/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3343/comments | https://api.github.com/repos/huggingface/transformers/issues/3343/events | https://github.com/huggingface/transformers/pull/3343 | 584,218,330 | MDExOlB1bGxSZXF1ZXN0MzkwODQzNTU1 | 3,343 | Update 01-training-tokenizers.ipynb (typo issue) | {
"login": "Kyeongpil",
"id": 6302455,
"node_id": "MDQ6VXNlcjYzMDI0NTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6302455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kyeongpil",
"html_url": "https://github.com/Kyeongpil",
"followers_url": "https://api.github.com/users/Kyeongpil/followers",
"following_url": "https://api.github.com/users/Kyeongpil/following{/other_user}",
"gists_url": "https://api.github.com/users/Kyeongpil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kyeongpil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kyeongpil/subscriptions",
"organizations_url": "https://api.github.com/users/Kyeongpil/orgs",
"repos_url": "https://api.github.com/users/Kyeongpil/repos",
"events_url": "https://api.github.com/users/Kyeongpil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kyeongpil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3343?src=pr&el=h1) Report\n> Merging [#3343](https://codecov.io/gh/huggingface/transformers/pull/3343?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f6d813aaaa96cc43fcf55f255b9439ebc22a31a0&el=desc) will **decrease** coverage by `0.08%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3343?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3343 +/- ##\n==========================================\n- Coverage 77.63% 77.54% -0.09% \n==========================================\n Files 100 100 \n Lines 16943 16943 \n==========================================\n- Hits 13154 13139 -15 \n- Misses 3789 3804 +15 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3343?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3343/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.68% <0.00%> (-2.69%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3343?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3343?src=pr&el=footer). Last update [f6d813a...9ecfde1](https://codecov.io/gh/huggingface/transformers/pull/3343?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks!"
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | I found there are two grammar errors or typo issues in the explanation of the encoding properties.
The original sentences:
- **If your was** made of multiple \"parts\" such as (question, context), then this would be a vector with for each token the segment it belongs to
- **If your has** been truncated into multiple subparts because of a length limit (for BERT for example the sequence length is limited to 512), this will contain all the remaining overflowing parts.
I think "**input**" should be inserted after the phrase "If your". | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3343/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3343",
"html_url": "https://github.com/huggingface/transformers/pull/3343",
"diff_url": "https://github.com/huggingface/transformers/pull/3343.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3343.patch",
"merged_at": 1584656510000
} |
https://api.github.com/repos/huggingface/transformers/issues/3342 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3342/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3342/comments | https://api.github.com/repos/huggingface/transformers/issues/3342/events | https://github.com/huggingface/transformers/issues/3342 | 584,162,892 | MDU6SXNzdWU1ODQxNjI4OTI= | 3,342 | No Module named Transformers | {
"login": "rod08018",
"id": 10190881,
"node_id": "MDQ6VXNlcjEwMTkwODgx",
"avatar_url": "https://avatars.githubusercontent.com/u/10190881?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rod08018",
"html_url": "https://github.com/rod08018",
"followers_url": "https://api.github.com/users/rod08018/followers",
"following_url": "https://api.github.com/users/rod08018/following{/other_user}",
"gists_url": "https://api.github.com/users/rod08018/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rod08018/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rod08018/subscriptions",
"organizations_url": "https://api.github.com/users/rod08018/orgs",
"repos_url": "https://api.github.com/users/rod08018/repos",
"events_url": "https://api.github.com/users/rod08018/events{/privacy}",
"received_events_url": "https://api.github.com/users/rod08018/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1843765959,
"node_id": "MDU6TGFiZWwxODQzNzY1OTU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Installation",
"name": "Installation",
"color": "bfdadc",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Well this just indicates that you didn't correctly install the library. Try creating a new environment and installing from scratch.\r\n",
"You have to install the library first to use any module from it\r\nFirst type `pip install transformers` in your terminal and then you can import the necessary modules",
"I fixed it had to uninstall it and reinstale from source. I dont know why\npip versión didnt work\n\nOn Sat, Mar 21, 2020, 8:44 AM Tanmay Pandey <[email protected]>\nwrote:\n\n> You have to install the library first to use any module from it\n> First type pip install transformers in your terminal and then you can\n> import the necessary modules\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/3342#issuecomment-602054984>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ACNYAIIOLEBX2E52BF5B4Q3RITHD3ANCNFSM4LPAPAWA>\n> .\n>\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"The error still occurs I have reinstalled it from the source, still it's not working \r\nENV details :\r\nWindows 10\r\nAnaconda \r\nPytorch ",
"I don't think `transformers` can be installed using anaconda.\r\nIn any case, please open a new issue **with the filled-in issue template** for us to properly help you.",
"> I don't think `transformers` can be installed using anaconda.\r\n> In any case, please open a new issue **with the filled-in issue template** for us to properly help you.\r\n\r\nSo how do install it on my local system ??",
"https://github.com/huggingface/transformers#installation",
"I had to downgrade to an older version to have this working frankly did not find a solution for some reason.",
"Hi dacidotor, I am having the same issue which version you downgrade? \r\nI tried upgrading tensorflow and pytorch and then installing all again and it did not work.\r\n",
"try this:\r\nfrom transformers.models.bert.modeling_bert import BertEmbeddings"
] | 1,584 | 1,606 | 1,590 | NONE | null | # 🐛 Bug
No module named 'transformers'
## Information
Package Version
------------------------ ----------
absl-py 0.9.0
astor 0.8.1
boto3 1.12.22
botocore 1.15.22
cachetools 4.0.0
certifi 2019.11.28
chardet 3.0.4
click 7.1.1
docutils 0.15.2
filelock 3.0.12
gast 0.2.2
google-auth 1.11.3
google-auth-oauthlib 0.4.1
google-pasta 0.2.0
grpcio 1.27.2
h5py 2.10.0
idna 2.9
jmespath 0.9.5
joblib 0.14.1
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.0
Markdown 3.2.1
numpy 1.18.1
oauthlib 3.1.0
opt-einsum 3.2.0
pandas 1.0.2
Pillow 7.0.0
pip 20.0.2
protobuf 3.11.3
pyasn1 0.4.8
pyasn1-modules 0.2.8
python-dateutil 2.8.1
pytorch-transformers 1.2.0
pytz 2019.3
pywin32 227
regex 2020.2.20
requests 2.23.0
requests-oauthlib 1.3.0
rsa 4.0
s3transfer 0.3.3
sacremoses 0.0.38
scipy 1.4.1
sentencepiece 0.1.85
setuptools 41.2.0
six 1.14.0
tensorboard 2.1.1
tensorflow 2.1.0
tensorflow-estimator 2.1.0
tensorflow-gpu 2.1.0
tensorflow-gpu-estimator 2.1.0
termcolor 1.1.0
tokenizers 0.5.2
torch 1.4.0
torchvision 0.5.0
tqdm 4.43.0
transformers 2.5.1
urllib3 1.25.8
Werkzeug 1.0.0
wget 3.2
wheel 0.34.2
wrapt 1.12.1
Using BERT on English-language data
## To reproduce
Steps to reproduce the behavior:
I just ran the following code.
from transformers import BertTokenizer
# Load the BERT tokenizer.
print('Loading BERT tokenizer...')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-1-ff68f42f17c9> in <module>
----> 1 from transformers import BertTokenizer
2
3 # Load the BERT tokenizer.
4 print('Loading BERT tokenizer...')
5 tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
ModuleNotFoundError: No module named 'transformers'
## Expected behavior
Do the tokenization.
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
C:\Users\David\anaconda3\python.exe: can't open file 'transformers-cli': [Errno 2] No such file or directory
- `transformers` version: 2.5.1
- Platform: Windows 10
- Python version: 3.7.3b
- PyTorch version (GPU?): 1.4
- Tensorflow version (GPU?): 2.1
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: distributed
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3342/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3341 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3341/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3341/comments | https://api.github.com/repos/huggingface/transformers/issues/3341/events | https://github.com/huggingface/transformers/pull/3341 | 584,149,848 | MDExOlB1bGxSZXF1ZXN0MzkwNzg2NTky | 3,341 | Simpler Error message when loading config/model with .from_pretrained() | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"tweaked version of #3247 ",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3341?src=pr&el=h1) Report\n> Merging [#3341](https://codecov.io/gh/huggingface/transformers/pull/3341?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f6d813aaaa96cc43fcf55f255b9439ebc22a31a0&el=desc) will **decrease** coverage by `0.19%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3341?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3341 +/- ##\n==========================================\n- Coverage 77.63% 77.44% -0.20% \n==========================================\n Files 100 100 \n Lines 16943 16943 \n==========================================\n- Hits 13154 13121 -33 \n- Misses 3789 3822 +33 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3341?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3341/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.82% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3341/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `82.46% <0.00%> (-5.91%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3341?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3341?src=pr&el=footer). Last update [f6d813a...c9ce50c](https://codecov.io/gh/huggingface/transformers/pull/3341?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,584 | 1,584 | 1,584 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3341/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3341",
"html_url": "https://github.com/huggingface/transformers/pull/3341",
"diff_url": "https://github.com/huggingface/transformers/pull/3341.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3341.patch",
"merged_at": 1584656583000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3340 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3340/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3340/comments | https://api.github.com/repos/huggingface/transformers/issues/3340/events | https://github.com/huggingface/transformers/pull/3340 | 584,131,113 | MDExOlB1bGxSZXF1ZXN0MzkwNzcxMzgw | 3,340 | Create README.md | {
"login": "DukeEnglish",
"id": 18372948,
"node_id": "MDQ6VXNlcjE4MzcyOTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/18372948?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DukeEnglish",
"html_url": "https://github.com/DukeEnglish",
"followers_url": "https://api.github.com/users/DukeEnglish/followers",
"following_url": "https://api.github.com/users/DukeEnglish/following{/other_user}",
"gists_url": "https://api.github.com/users/DukeEnglish/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DukeEnglish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DukeEnglish/subscriptions",
"organizations_url": "https://api.github.com/users/DukeEnglish/orgs",
"repos_url": "https://api.github.com/users/DukeEnglish/repos",
"events_url": "https://api.github.com/users/DukeEnglish/events{/privacy}",
"received_events_url": "https://api.github.com/users/DukeEnglish/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3340?src=pr&el=h1) Report\n> Merging [#3340](https://codecov.io/gh/huggingface/transformers/pull/3340?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/20139b7c8d88f380f1e4e0ae2baf0b0ac9351039&el=desc) will **not change** coverage by `%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3340?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3340 +/- ##\n=======================================\n Coverage 77.63% 77.63% \n=======================================\n Files 100 100 \n Lines 16943 16943 \n=======================================\n Hits 13154 13154 \n Misses 3789 3789 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3340?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3340?src=pr&el=footer). Last update [20139b7...b81687c](https://codecov.io/gh/huggingface/transformers/pull/3340?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I think the filepath for this one is incorrect.\r\n\r\nAlso could you add\r\n```\r\n---\r\nlanguage: chinese\r\n---\r\n```\r\nat the top of the file? Thanks!",
"Merged in 73d6a2f9019960c327f19689c1d9a6c0fba31d86"
] | 1,584 | 1,587 | 1,587 | CONTRIBUTOR | null | roberta_chinese_large card | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3340/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3340",
"html_url": "https://github.com/huggingface/transformers/pull/3340",
"diff_url": "https://github.com/huggingface/transformers/pull/3340.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3340.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3339 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3339/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3339/comments | https://api.github.com/repos/huggingface/transformers/issues/3339/events | https://github.com/huggingface/transformers/pull/3339 | 584,131,078 | MDExOlB1bGxSZXF1ZXN0MzkwNzcxMzU1 | 3,339 | Create README.md | {
"login": "DukeEnglish",
"id": 18372948,
"node_id": "MDQ6VXNlcjE4MzcyOTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/18372948?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DukeEnglish",
"html_url": "https://github.com/DukeEnglish",
"followers_url": "https://api.github.com/users/DukeEnglish/followers",
"following_url": "https://api.github.com/users/DukeEnglish/following{/other_user}",
"gists_url": "https://api.github.com/users/DukeEnglish/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DukeEnglish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DukeEnglish/subscriptions",
"organizations_url": "https://api.github.com/users/DukeEnglish/orgs",
"repos_url": "https://api.github.com/users/DukeEnglish/repos",
"events_url": "https://api.github.com/users/DukeEnglish/events{/privacy}",
"received_events_url": "https://api.github.com/users/DukeEnglish/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3339?src=pr&el=h1) Report\n> Merging [#3339](https://codecov.io/gh/huggingface/transformers/pull/3339?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/20139b7c8d88f380f1e4e0ae2baf0b0ac9351039&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3339?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3339 +/- ##\n==========================================\n- Coverage 77.63% 77.63% -0.01% \n==========================================\n Files 100 100 \n Lines 16943 16943 \n==========================================\n- Hits 13154 13153 -1 \n- Misses 3789 3790 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3339?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3339/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.19% <0.00%> (-0.18%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3339?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3339?src=pr&el=footer). Last update [20139b7...a6ee180](https://codecov.io/gh/huggingface/transformers/pull/3339?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"same issue as #3340 ",
"Merged in 73d6a2f9019960c327f19689c1d9a6c0fba31d86"
] | 1,584 | 1,587 | 1,587 | CONTRIBUTOR | null | xlnet_chinese_large card | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3339/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3339",
"html_url": "https://github.com/huggingface/transformers/pull/3339",
"diff_url": "https://github.com/huggingface/transformers/pull/3339.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3339.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3338 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3338/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3338/comments | https://api.github.com/repos/huggingface/transformers/issues/3338/events | https://github.com/huggingface/transformers/pull/3338 | 584,130,866 | MDExOlB1bGxSZXF1ZXN0MzkwNzcxMTk2 | 3,338 | Create README.md | {
"login": "DukeEnglish",
"id": 18372948,
"node_id": "MDQ6VXNlcjE4MzcyOTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/18372948?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DukeEnglish",
"html_url": "https://github.com/DukeEnglish",
"followers_url": "https://api.github.com/users/DukeEnglish/followers",
"following_url": "https://api.github.com/users/DukeEnglish/following{/other_user}",
"gists_url": "https://api.github.com/users/DukeEnglish/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DukeEnglish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DukeEnglish/subscriptions",
"organizations_url": "https://api.github.com/users/DukeEnglish/orgs",
"repos_url": "https://api.github.com/users/DukeEnglish/repos",
"events_url": "https://api.github.com/users/DukeEnglish/events{/privacy}",
"received_events_url": "https://api.github.com/users/DukeEnglish/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3338?src=pr&el=h1) Report\n> Merging [#3338](https://codecov.io/gh/huggingface/transformers/pull/3338?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/20139b7c8d88f380f1e4e0ae2baf0b0ac9351039?src=pr&el=desc) will **decrease** coverage by `0.18%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3338?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3338 +/- ##\n==========================================\n- Coverage 77.63% 77.44% -0.19% \n==========================================\n Files 100 100 \n Lines 16943 16943 \n==========================================\n- Hits 13154 13122 -32 \n- Misses 3789 3821 +32\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3338?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3338/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `82.46% <0%> (-5.91%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3338/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.36% <0%> (+0.13%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3338?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3338?src=pr&el=footer). Last update [20139b7...a7ff5ff](https://codecov.io/gh/huggingface/transformers/pull/3338?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | roberta_chinese_base card | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3338/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3338",
"html_url": "https://github.com/huggingface/transformers/pull/3338",
"diff_url": "https://github.com/huggingface/transformers/pull/3338.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3338.patch",
"merged_at": 1584589453000
} |
https://api.github.com/repos/huggingface/transformers/issues/3337 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3337/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3337/comments | https://api.github.com/repos/huggingface/transformers/issues/3337/events | https://github.com/huggingface/transformers/pull/3337 | 584,130,627 | MDExOlB1bGxSZXF1ZXN0MzkwNzcxMDE0 | 3,337 | Create README.md | {
"login": "DukeEnglish",
"id": 18372948,
"node_id": "MDQ6VXNlcjE4MzcyOTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/18372948?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DukeEnglish",
"html_url": "https://github.com/DukeEnglish",
"followers_url": "https://api.github.com/users/DukeEnglish/followers",
"following_url": "https://api.github.com/users/DukeEnglish/following{/other_user}",
"gists_url": "https://api.github.com/users/DukeEnglish/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DukeEnglish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DukeEnglish/subscriptions",
"organizations_url": "https://api.github.com/users/DukeEnglish/orgs",
"repos_url": "https://api.github.com/users/DukeEnglish/repos",
"events_url": "https://api.github.com/users/DukeEnglish/events{/privacy}",
"received_events_url": "https://api.github.com/users/DukeEnglish/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3337?src=pr&el=h1) Report\n> Merging [#3337](https://codecov.io/gh/huggingface/transformers/pull/3337?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/20139b7c8d88f380f1e4e0ae2baf0b0ac9351039&el=desc) will **not change** coverage by `%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3337?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3337 +/- ##\n=======================================\n Coverage 77.63% 77.63% \n=======================================\n Files 100 100 \n Lines 16943 16943 \n=======================================\n Hits 13154 13154 \n Misses 3789 3789 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3337?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3337/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.19% <0.00%> (-0.18%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3337/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.36% <0.00%> (+0.13%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3337?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3337?src=pr&el=footer). Last update [20139b7...7fbca7b](https://codecov.io/gh/huggingface/transformers/pull/3337?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | albert_chinese_tiny card | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3337/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3337",
"html_url": "https://github.com/huggingface/transformers/pull/3337",
"diff_url": "https://github.com/huggingface/transformers/pull/3337.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3337.patch",
"merged_at": 1584589430000
} |
https://api.github.com/repos/huggingface/transformers/issues/3336 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3336/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3336/comments | https://api.github.com/repos/huggingface/transformers/issues/3336/events | https://github.com/huggingface/transformers/pull/3336 | 584,130,366 | MDExOlB1bGxSZXF1ZXN0MzkwNzcwODAy | 3,336 | Create README.md | {
"login": "DukeEnglish",
"id": 18372948,
"node_id": "MDQ6VXNlcjE4MzcyOTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/18372948?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DukeEnglish",
"html_url": "https://github.com/DukeEnglish",
"followers_url": "https://api.github.com/users/DukeEnglish/followers",
"following_url": "https://api.github.com/users/DukeEnglish/following{/other_user}",
"gists_url": "https://api.github.com/users/DukeEnglish/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DukeEnglish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DukeEnglish/subscriptions",
"organizations_url": "https://api.github.com/users/DukeEnglish/orgs",
"repos_url": "https://api.github.com/users/DukeEnglish/repos",
"events_url": "https://api.github.com/users/DukeEnglish/events{/privacy}",
"received_events_url": "https://api.github.com/users/DukeEnglish/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3336?src=pr&el=h1) Report\n> Merging [#3336](https://codecov.io/gh/huggingface/transformers/pull/3336?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/20139b7c8d88f380f1e4e0ae2baf0b0ac9351039&el=desc) will **decrease** coverage by `0.03%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3336?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3336 +/- ##\n==========================================\n- Coverage 77.63% 77.60% -0.04% \n==========================================\n Files 100 100 \n Lines 16943 16943 \n==========================================\n- Hits 13154 13148 -6 \n- Misses 3789 3795 +6 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3336?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3336/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.15% <0.00%> (-0.85%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3336/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.19% <0.00%> (-0.18%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3336/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.36% <0.00%> (+0.13%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3336?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3336?src=pr&el=footer). Last update [20139b7...38b38af](https://codecov.io/gh/huggingface/transformers/pull/3336?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | albert_chinese_small card | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3336/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3336",
"html_url": "https://github.com/huggingface/transformers/pull/3336",
"diff_url": "https://github.com/huggingface/transformers/pull/3336.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3336.patch",
"merged_at": 1584589503000
} |
https://api.github.com/repos/huggingface/transformers/issues/3335 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3335/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3335/comments | https://api.github.com/repos/huggingface/transformers/issues/3335/events | https://github.com/huggingface/transformers/pull/3335 | 584,114,072 | MDExOlB1bGxSZXF1ZXN0MzkwNzU3ODE3 | 3,335 | Fix #3305: run_ner only possible on ModelForTokenClassification models | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3335?src=pr&el=h1) Report\n> Merging [#3335](https://codecov.io/gh/huggingface/transformers/pull/3335?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/20139b7c8d88f380f1e4e0ae2baf0b0ac9351039?src=pr&el=desc) will **decrease** coverage by `0.06%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3335?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3335 +/- ##\n==========================================\n- Coverage 77.63% 77.57% -0.07% \n==========================================\n Files 100 100 \n Lines 16943 16943 \n==========================================\n- Hits 13154 13143 -11 \n- Misses 3789 3800 +11\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3335?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3335/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.4% <0%> (-1.97%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3335?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3335?src=pr&el=footer). Last update [20139b7...7fc00b6](https://codecov.io/gh/huggingface/transformers/pull/3335?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Ok I'll merge this @srush @LysandreJik as it's more correct (feel free to let me know of your feedback anyways)",
"Thanks. Why does it break the pip version?",
"Because `MODEL_MAPPING` from `modeling_auto` is not exposed in the package's [`__init__.py`](https://github.com/huggingface/transformers/blob/master/src/transformers/__init__.py)"
] | 1,584 | 1,584 | 1,584 | MEMBER | null | Also, #3305 breaks (if I'm not mistaken) the ability to run the example script from a pip-installed instance of transformers (vs. from an instance installed from source). (This PR does not fix that second issue.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3335/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3335",
"html_url": "https://github.com/huggingface/transformers/pull/3335",
"diff_url": "https://github.com/huggingface/transformers/pull/3335.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3335.patch",
"merged_at": 1584650490000
} |
https://api.github.com/repos/huggingface/transformers/issues/3334 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3334/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3334/comments | https://api.github.com/repos/huggingface/transformers/issues/3334/events | https://github.com/huggingface/transformers/issues/3334 | 584,110,766 | MDU6SXNzdWU1ODQxMTA3NjY= | 3,334 | transformers.PreTrainedTokenizer.tokenize does lower case work all the time and discards space and tab. Want this changed. | {
"login": "ChineseYjh",
"id": 34715669,
"node_id": "MDQ6VXNlcjM0NzE1NjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/34715669?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChineseYjh",
"html_url": "https://github.com/ChineseYjh",
"followers_url": "https://api.github.com/users/ChineseYjh/followers",
"following_url": "https://api.github.com/users/ChineseYjh/following{/other_user}",
"gists_url": "https://api.github.com/users/ChineseYjh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChineseYjh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChineseYjh/subscriptions",
"organizations_url": "https://api.github.com/users/ChineseYjh/orgs",
"repos_url": "https://api.github.com/users/ChineseYjh/repos",
"events_url": "https://api.github.com/users/ChineseYjh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChineseYjh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,590 | 1,590 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): 'albert_xxlarge_zh'
Language I am using the model on (English, Chinese ...): Chinese
Both problems relate to the method `transformers.PreTrainedTokenizer.tokenize`:
1. How can I stop this method from automatically lower-casing the English words in the input sentence? `tokenizer.init_kwargs["do_lower_case"]=True` doesn't work...
2. How can I stop this method from discarding '\t' and spaces by default? Or is there another method that solves this problem?
## To reproduce
Steps to reproduce the behavior:
```python
tokenizer = BertTokenizer.from_pretrained("./albert_pytorch/prev_trained_model/albert_xxlarge_zh/")
print(tokenizer.init_kwargs.get("do_lower_case"))  # output: None
tokenizer.init_kwargs["do_lower_case"] = True
print(tokenizer.init_kwargs.get("do_lower_case"))  # output: True
seq = tokenizer.tokenize("我喜欢\tAPP和WIFI。")
print(seq)
```
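Note that mutating `init_kwargs` after construction only edits the recorded constructor arguments; it does not change tokenization behavior. A minimal sketch of passing the flag at load time instead (assuming `BertTokenizer`'s `do_lower_case` constructor argument, which the slow tokenizer forwards to its `basic_tokenizer`):

```python
from transformers import BertTokenizer

# do_lower_case must be set at construction time; init_kwargs is only a record
# of the arguments, so mutating it afterwards has no effect on tokenization.
tokenizer = BertTokenizer.from_pretrained(
    "./albert_pytorch/prev_trained_model/albert_xxlarge_zh/",
    do_lower_case=False,
)
print(tokenizer.basic_tokenizer.do_lower_case)  # False
print(tokenizer.tokenize("我喜欢 APP和WIFI。"))   # casing preserved
```

Even with `do_lower_case=False`, BERT-style basic tokenization splits on whitespace first, so '\t' and spaces will still not appear as tokens.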
## Expected behavior
Expected output is:
['我', '喜', '欢',**'\t'**, **'APP'**, '和', **'WIFI'**, '。']
While the actual output is:
['我', '喜', '欢', **'app'**, '和', **'wifi'**, '。']
## Environment info
- `transformers` version: 2.5.1
- Platform: Ubuntu
- Python version: 3.6.1
- PyTorch version (GPU?): 1.1.0, CUDA 9
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
**BTW**, `python transformers-cli env` didn't work; the output was:
> python: can't open file 'transformers-cli': [Errno 2] No such file or directory | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3334/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3334/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3333 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3333/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3333/comments | https://api.github.com/repos/huggingface/transformers/issues/3333/events | https://github.com/huggingface/transformers/issues/3333 | 584,071,657 | MDU6SXNzdWU1ODQwNzE2NTc= | 3,333 | Finetuning T5 Model | {
"login": "caffeinetoomuch",
"id": 5820161,
"node_id": "MDQ6VXNlcjU4MjAxNjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5820161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/caffeinetoomuch",
"html_url": "https://github.com/caffeinetoomuch",
"followers_url": "https://api.github.com/users/caffeinetoomuch/followers",
"following_url": "https://api.github.com/users/caffeinetoomuch/following{/other_user}",
"gists_url": "https://api.github.com/users/caffeinetoomuch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/caffeinetoomuch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/caffeinetoomuch/subscriptions",
"organizations_url": "https://api.github.com/users/caffeinetoomuch/orgs",
"repos_url": "https://api.github.com/users/caffeinetoomuch/repos",
"events_url": "https://api.github.com/users/caffeinetoomuch/events{/privacy}",
"received_events_url": "https://api.github.com/users/caffeinetoomuch/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @jkangsta,\r\n\r\nThanks for posting your question here. The docstring was out of date and an in-detail description for T5 will be added here #3507 ."
] | 1,584 | 1,585 | 1,585 | NONE | null | Hi there. I am trying to fine-tune T5, but I have noticed that your documentation gives conflicting instructions.
In modeling_t5.py you say
> To match pre-training, T5 input sequence should be formatted with [CLS] and [SEP] tokens
but then in tokenization_t5.py both of those tokens are set to None and the only tokens that are defined are EOS, UNK, and PAD.
Additionally, the actual T5 implementation makes no mention of SEP and CLS as far as I can tell.
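As a point of reference, here is a minimal sketch of the input format I am assuming from the original T5 code (a task prefix plus an `</s>` end-of-sequence token, with no `[CLS]`/`[SEP]`):

```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")

# Assumed format: "<task prefix>: <text> </s>" -- no [CLS]/[SEP] anywhere.
source = "summarize: studies have shown that owning a dog is good for you"
target = "dog ownership is healthy"

input_ids = tokenizer.encode(source + " </s>")
target_ids = tokenizer.encode(target + " </s>")
```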
Given that, could you clarify how we should be formatting our training data for HuggingFace's T5 implementation? Thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3333/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3333/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3332 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3332/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3332/comments | https://api.github.com/repos/huggingface/transformers/issues/3332/events | https://github.com/huggingface/transformers/issues/3332 | 583,987,314 | MDU6SXNzdWU1ODM5ODczMTQ= | 3,332 | run_tf_ner.py doesn't work with unlabelled test data | {
"login": "0dust",
"id": 29033531,
"node_id": "MDQ6VXNlcjI5MDMzNTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/29033531?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/0dust",
"html_url": "https://github.com/0dust",
"followers_url": "https://api.github.com/users/0dust/followers",
"following_url": "https://api.github.com/users/0dust/following{/other_user}",
"gists_url": "https://api.github.com/users/0dust/gists{/gist_id}",
"starred_url": "https://api.github.com/users/0dust/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/0dust/subscriptions",
"organizations_url": "https://api.github.com/users/0dust/orgs",
"repos_url": "https://api.github.com/users/0dust/repos",
"events_url": "https://api.github.com/users/0dust/events{/privacy}",
"received_events_url": "https://api.github.com/users/0dust/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I have noticed the same issue and posted a question here: https://stackoverflow.com/questions/60732509/label-handling-confusion-in-run-tf-ner-example\r\n\r\nI think `pad_token_label_id` should definitely not fall into the range of actual labels. Maybe we can make it `-1` or `num(label)` or something. Also as shown in `convert_examples_to_features()`, `pad_token_label_id` is not only used for pad tokens at the end of the sequence, but also for non-first tokens inside a word when the word is split up to multiple tokens. Accordingly, during prediction, only the label of the first token in each word is used. So I am wondering if we should modify `input_mask` so that the loss does not take into account non-first tokens in a word. \r\n\r\nI tried to set `pad_token_label_id = -1`, mask out non-first tokens in each word by changing `input_mask`, and change `num_labels` to `len(labels)` instead of `len(labels) + 1`. The training and evaluation can run, but the F1-score on the test set becomes much lower (on both conll03 English and Ontonotes English). I am still confused about this.",
"I also found the issue, `pad_token_label_id = 0` and first labels id also 0, seems a bug. @jplu ",
"Hey !\r\n\r\nThe \"fake\" pad token id must be 0 and the first \"true\" token id must be 1. This is important to make the network able to make the difference between a padding and a word that is not part of an entity.\r\n\r\nI just have tested the script on multiple NER datasets and works perfectly without any change, so I think, if there is an issue it is only with unlabeled data.\r\n\r\n@0dust: the exception you mentioned do not rely on where the script has failed, I will try with testing on unlabeled data to see if I get the same issue. To be honest when I developed this script I never tried with unlabeled testing data.\r\n\r\n@VDCN12593: -1 doesn't work because TF do not take into account negative ids.",
"@jplu Exactly! That was just a short desription I could think of.",
"@0dust Sorry I cannot reproduce your issue, everything works fine for me even with unlabeled data... Please try to reproduce the example over germeval in the README by removing the label column in the test file. For me it works like expected.\r\n\r\nIf you still get the same issue, please provide an example of data for which it doesn't works :)",
"@jplu Thank you for your reply! Here's my thoughts\r\n\r\n1. In `run_tf_ner`, it doesn't make the \"true\" labels start from 1, so that's definitely a bug.\r\n\r\n2. We can make `pad_token_label_id = -1`, as long as we also mask out all the non-first tokens inside each word (in function `convert_examples_to_features()`) so that there softmax output are not taken by the loss. I think this makes more sense because we only use the first token (wordpiece) of each word for the tagging, and don't care about the output of the other tokens. This method is also supported by some people like in here: https://www.vamvas.ch/bert-for-ner/",
"@VDCN12593 thanks for your feedback! Indeed the script to generate the feature has changed since I make the script and I did not update it to follow the changes, so yes your first bullet point is true since then. About the second, yes I know that but keeping all the values without ignoring some makes your model converging faster.\r\n\r\nAnyway, I know this script is unusual and difficult to properly follow. So, this weekend, once I have some time, I will fully review it and make it much easier to understand. I will let you know once done.",
"I have done all the changes that was raising some confusion in this PR https://github.com/huggingface/transformers/pull/3511. Basically, the pad token label id to -1 and removing the softmax. The training is a bit longer and results stay unchanged.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,591 | 1,591 | NONE | null | When running `run_tf_ner.py` in `predict` mode, if all the labels in the test data are `O`, the script errors out with
```
File "/home/himanshu/.local/lib/python3.7/site-packages/numpy/lib/function_base.py", line 423, in average
    "Weights sum to zero, can't be normalized")
ZeroDivisionError: Weights sum to zero, can't be normalized
```
This is because `pad_token_label_id` (https://github.com/huggingface/transformers/blob/cae334c43c49aa770d9dac1ee48319679ee8c72c/examples/ner/run_tf_ner.py#L511) and the `label_id` for `O` are both zero, which results in an empty `y_pred`.
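A minimal sketch of the filtering (with hypothetical toy values, mirroring the loop linked in the next line) shows how `y_pred` ends up empty when every gold label is `O`:

```python
import numpy as np

pad_token_label_id = 0
label_map = {0: "O", 1: "B-PER"}   # "O" shares id 0 with the pad label
label_ids = np.array([[0, 0, 0]])  # unlabelled test data: everything "O"
preds = np.array([[0, 1, 0]])

y_true, y_pred = [], []
for i in range(label_ids.shape[0]):
    for j in range(label_ids.shape[1]):
        if label_ids[i][j] != pad_token_label_id:  # never true here
            y_true.append(label_map[label_ids[i][j]])
            y_pred.append(label_map[preds[i][j]])

print(y_pred)  # [] -> averaging metrics over zero samples raises ZeroDivisionError
```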
https://github.com/huggingface/transformers/blob/cae334c43c49aa770d9dac1ee48319679ee8c72c/examples/ner/run_tf_ner.py#L364-L367 Shouldn't the `pad_token_label_id` be different? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3332/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3331 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3331/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3331/comments | https://api.github.com/repos/huggingface/transformers/issues/3331/events | https://github.com/huggingface/transformers/pull/3331 | 583,969,850 | MDExOlB1bGxSZXF1ZXN0MzkwNjM2Njgz | 3,331 | Add model cards for FinBERT. | {
"login": "haamis",
"id": 3799481,
"node_id": "MDQ6VXNlcjM3OTk0ODE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3799481?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haamis",
"html_url": "https://github.com/haamis",
"followers_url": "https://api.github.com/users/haamis/followers",
"following_url": "https://api.github.com/users/haamis/following{/other_user}",
"gists_url": "https://api.github.com/users/haamis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haamis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haamis/subscriptions",
"organizations_url": "https://api.github.com/users/haamis/orgs",
"repos_url": "https://api.github.com/users/haamis/repos",
"events_url": "https://api.github.com/users/haamis/events{/privacy}",
"received_events_url": "https://api.github.com/users/haamis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3331?src=pr&el=h1) Report\n> Merging [#3331](https://codecov.io/gh/huggingface/transformers/pull/3331?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/20139b7c8d88f380f1e4e0ae2baf0b0ac9351039&el=desc) will **decrease** coverage by `0.06%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3331?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3331 +/- ##\n==========================================\n- Coverage 77.63% 77.57% -0.07% \n==========================================\n Files 100 100 \n Lines 16943 16943 \n==========================================\n- Hits 13154 13143 -11 \n- Misses 3789 3800 +11 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3331?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.40% <0.00%> (-1.97%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3331?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3331?src=pr&el=footer). Last update [20139b7...d354a22](https://codecov.io/gh/huggingface/transformers/pull/3331?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great. Could you add a metadata block at the top of the file with:\r\n```\r\n---\r\nlanguage: finnish\r\n# optional thumbnail: ...\r\n---\r\n```\r\n\r\nThanks!",
"@haamis I've one question regarding to the FinBERT training corpus: would it be possible to obtain the final pre-processed data that you've used for training the BERT model 🤔\r\n\r\nI would really like to train an ELECTRA model and release it to the community :)",
"@stefan-it We can't publish the corpus due to licensing/copyright issues, but since we are also interested in training a Finnish ELECTRA maybe we could collaborate on this? Please send me an email sajvir(at)utu.fi."
] | 1,584 | 1,591 | 1,584 | CONTRIBUTOR | null | These are a copy of https://github.com/TurkuNLP/FinBERT/blob/master/README.md. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3331/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3331",
"html_url": "https://github.com/huggingface/transformers/pull/3331",
"diff_url": "https://github.com/huggingface/transformers/pull/3331.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3331.patch",
"merged_at": 1584644761000
} |
https://api.github.com/repos/huggingface/transformers/issues/3330 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3330/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3330/comments | https://api.github.com/repos/huggingface/transformers/issues/3330/events | https://github.com/huggingface/transformers/pull/3330 | 583,956,243 | MDExOlB1bGxSZXF1ZXN0MzkwNjI1MzE2 | 3,330 | Added model cards for SciBERT models uploaded under AllenAI org | {
"login": "kyleclo",
"id": 13603748,
"node_id": "MDQ6VXNlcjEzNjAzNzQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/13603748?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kyleclo",
"html_url": "https://github.com/kyleclo",
"followers_url": "https://api.github.com/users/kyleclo/followers",
"following_url": "https://api.github.com/users/kyleclo/following{/other_user}",
"gists_url": "https://api.github.com/users/kyleclo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kyleclo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kyleclo/subscriptions",
"organizations_url": "https://api.github.com/users/kyleclo/orgs",
"repos_url": "https://api.github.com/users/kyleclo/repos",
"events_url": "https://api.github.com/users/kyleclo/events{/privacy}",
"received_events_url": "https://api.github.com/users/kyleclo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3330/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3330/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3330",
"html_url": "https://github.com/huggingface/transformers/pull/3330",
"diff_url": "https://github.com/huggingface/transformers/pull/3330.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3330.patch",
"merged_at": 1584560711000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3329 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3329/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3329/comments | https://api.github.com/repos/huggingface/transformers/issues/3329/events | https://github.com/huggingface/transformers/issues/3329 | 583,839,301 | MDU6SXNzdWU1ODM4MzkzMDE= | 3,329 | CUDA Error when running run_language_modeling.py | {
"login": "IreneZihuiLi",
"id": 11585259,
"node_id": "MDQ6VXNlcjExNTg1MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/11585259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IreneZihuiLi",
"html_url": "https://github.com/IreneZihuiLi",
"followers_url": "https://api.github.com/users/IreneZihuiLi/followers",
"following_url": "https://api.github.com/users/IreneZihuiLi/following{/other_user}",
"gists_url": "https://api.github.com/users/IreneZihuiLi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IreneZihuiLi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IreneZihuiLi/subscriptions",
"organizations_url": "https://api.github.com/users/IreneZihuiLi/orgs",
"repos_url": "https://api.github.com/users/IreneZihuiLi/repos",
"events_url": "https://api.github.com/users/IreneZihuiLi/events{/privacy}",
"received_events_url": "https://api.github.com/users/IreneZihuiLi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This seems related to the classes that you use in NLLLoss, That loss function expects a torch.LongTensor with values in the range [0, nb_classes-1] with no values left out in between.",
"Which version of `transformers` do you have installed? There was a recent change from using -1 to -100 for tokens that should be ignored during the calculation of loss. For example in this line: https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py#L218. Therefore, running the latest version of run_language_modeling.py with older versions of `transformers` will give an error similar to what you are seeing.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Problem solved, it was run with Python 3.6+. Return to Python 3.5, no more errors.",
"> Problem solved, it was run with Python 3.6+. Return to Python 3.5, no more errors.\r\n\r\nThat should not be the problem. The repo officially only supports 3.6+ any way. ",
"I'm seeing exactly the same thing training Roberta using `run_mlm.py`. It's at the same step in the training cycle so I can reproduce but I've not tracked down what the issue is, either there's a problem with my input data in a single batch, or perhaps the training has diverged so forward() produces NaN's.\r\n\r\nI'll keep digging."
] | 1,584 | 1,607 | 1,590 | NONE | null | I am trying to run this script: [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py)
I can load the pre-trained roberta model without a problem. However, when training starts (`loss.backward()`), errors like these appear:
> /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [30,0,0] Assertion `t >= 0 && t < n_classes` failed.
> /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [31,0,0] Assertion `t >= 0 && t < n_classes` failed.
I assume it was caused by the `loss.backward()` line:
>
> File "2_fine_tune_bert.py", line 386, in train
> loss.backward()
> File "/home/lily/zl379/anaconda2/envs/py36/lib/python3.7/site-packages/torch/tensor.py", line 195, in backward
> torch.autograd.backward(self, gradient, retain_graph, create_graph)
> File "/home/lily/zl379/anaconda2/envs/py36/lib/python3.7/site-packages/torch/autograd/__init__.py", line 99, in backward
> allow_unreachable=True) # allow_unreachable flag
> RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`
Is it caused by the CUDA version or the PyTorch version? My PyTorch is 1.4.0; CUDA: release 10.1, V10.1.243.
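For what it's worth, a minimal sketch of the constraint behind the assert: NLLLoss/CrossEntropyLoss require every target to lie in `[0, n_classes)`, except positions equal to `ignore_index` (assuming PyTorch's default of -100):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)              # batch of 4, 10 classes
labels = torch.tensor([1, 3, -100, 5])   # -100 positions are ignored by default

loss = F.cross_entropy(logits, labels)   # fine: all non-ignored targets in [0, 10)
print(loss.item())

# A label of -1 (the older masking convention) falls outside [0, n_classes)
# and is what trips "Assertion `t >= 0 && t < n_classes` failed" on the GPU.
```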
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3329/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3329/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3328 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3328/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3328/comments | https://api.github.com/repos/huggingface/transformers/issues/3328/events | https://github.com/huggingface/transformers/pull/3328 | 583,772,801 | MDExOlB1bGxSZXF1ZXN0MzkwNDcyNzEx | 3,328 | Create README.md | {
"login": "brandenchan",
"id": 33759007,
"node_id": "MDQ6VXNlcjMzNzU5MDA3",
"avatar_url": "https://avatars.githubusercontent.com/u/33759007?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brandenchan",
"html_url": "https://github.com/brandenchan",
"followers_url": "https://api.github.com/users/brandenchan/followers",
"following_url": "https://api.github.com/users/brandenchan/following{/other_user}",
"gists_url": "https://api.github.com/users/brandenchan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brandenchan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brandenchan/subscriptions",
"organizations_url": "https://api.github.com/users/brandenchan/orgs",
"repos_url": "https://api.github.com/users/brandenchan/repos",
"events_url": "https://api.github.com/users/brandenchan/events{/privacy}",
"received_events_url": "https://api.github.com/users/brandenchan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3328/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3328",
"html_url": "https://github.com/huggingface/transformers/pull/3328",
"diff_url": "https://github.com/huggingface/transformers/pull/3328.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3328.patch",
"merged_at": 1584545838000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3327 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3327/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3327/comments | https://api.github.com/repos/huggingface/transformers/issues/3327/events | https://github.com/huggingface/transformers/pull/3327 | 583,685,990 | MDExOlB1bGxSZXF1ZXN0MzkwNDAwNjk3 | 3,327 | improve doctstring for tf and pt generate() method | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3327?src=pr&el=h1) Report\n> Merging [#3327](https://codecov.io/gh/huggingface/transformers/pull/3327?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e8f44af5bf44a79f102678f5d7bb737cd6da3b52&el=desc) will **decrease** coverage by `0.10%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3327?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3327 +/- ##\n==========================================\n- Coverage 77.10% 77.00% -0.11% \n==========================================\n Files 100 100 \n Lines 16953 16953 \n==========================================\n- Hits 13071 13054 -17 \n- Misses 3882 3899 +17 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3327?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3327/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.43% <ø> (-3.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3327/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.09% <ø> (+0.13%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3327?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3327?src=pr&el=footer). Last update [e8f44af...183952e](https://codecov.io/gh/huggingface/transformers/pull/3327?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,584 | 1,584 | 1,584 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3327/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3327",
"html_url": "https://github.com/huggingface/transformers/pull/3327",
"diff_url": "https://github.com/huggingface/transformers/pull/3327.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3327.patch",
"merged_at": 1584534250000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3326 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3326/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3326/comments | https://api.github.com/repos/huggingface/transformers/issues/3326/events | https://github.com/huggingface/transformers/pull/3326 | 583,668,011 | MDExOlB1bGxSZXF1ZXN0MzkwMzg1Njc4 | 3,326 | add link to blog post | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,584 | 1,584 | 1,584 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3326/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3326/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3326",
"html_url": "https://github.com/huggingface/transformers/pull/3326",
"diff_url": "https://github.com/huggingface/transformers/pull/3326.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3326.patch",
"merged_at": 1584534268000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3325 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3325/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3325/comments | https://api.github.com/repos/huggingface/transformers/issues/3325/events | https://github.com/huggingface/transformers/issues/3325 | 583,628,541 | MDU6SXNzdWU1ODM2Mjg1NDE= | 3,325 | Cubla Error on DistilBert | {
"login": "Ricocotam",
"id": 9447752,
"node_id": "MDQ6VXNlcjk0NDc3NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9447752?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ricocotam",
"html_url": "https://github.com/Ricocotam",
"followers_url": "https://api.github.com/users/Ricocotam/followers",
"following_url": "https://api.github.com/users/Ricocotam/following{/other_user}",
"gists_url": "https://api.github.com/users/Ricocotam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ricocotam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ricocotam/subscriptions",
"organizations_url": "https://api.github.com/users/Ricocotam/orgs",
"repos_url": "https://api.github.com/users/Ricocotam/repos",
"events_url": "https://api.github.com/users/Ricocotam/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ricocotam/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"After some investigation, I have really no clue on what's happening. This issue is only referred through Tensorflow questions such as [this](https://stackoverflow.com/questions/41117740/tensorflow-crashes-with-cublas-status-alloc-failed) or [this issue](https://github.com/tensorflow/tensorflow/issues/9489) on Tensorflow's Github",
"I've got the same issue using the same environment as you. \r\nI'm using model `\"mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es\"`\r\n My error though is at:\r\n```\r\nRuntimeError Traceback (most recent call last)\r\n<ipython-input-17-90ab5d6d4393> in <module>\r\n 13 outputs = model(**inputs, labels=labels)\r\n 14 loss, logits = outputs[:2]\r\n---> 15 loss.backward()\r\n 16 \r\n 17 \r\n/opt/conda/lib/python3.7/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)\r\n 193 products. Defaults to ``False``.\r\n 194 \"\"\"\r\n--> 195 torch.autograd.backward(self, gradient, retain_graph, create_graph)\r\n 196 \r\n 197 def register_hook(self, hook):\r\n\r\n/opt/conda/lib/python3.7/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)\r\n 97 Variable._execution_engine.run_backward(\r\n 98 tensors, grad_tensors, retain_graph, create_graph,\r\n---> 99 allow_unreachable=True) # allow_unreachable flag\r\n 100 \r\n 101 \r\nRuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`\r\n```\r\nI've tried most stuff I've found but really nothing seems to work.",
"Find similar issue when using regular BERTModel",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi, @Ricocotam and @j6e \r\nI got the same error with torch 1.6.0 GPU with a DistilBert Classification error. Would you advise if and how you fix that?",
"So, I encountered the same issue\r\n\r\nThis might sound stupid, but it happened because I was using the wrong tokenizer. So, for future readers.. check that you really use an appropriate tokenizer ?\r\n\r\nHowever, I agree the error message is rather obscure. I'm not sure exactly *why* it is triggered (I guess the other tokenizer would produce some IDs that do not exist in the distilbert vocabulary ?) , so I don't know if there is an easy fix for that "
] | 1,584 | 1,624 | 1,595 | NONE | null | # 🐛 Bug
When using DistilBert I get `CUBLAS_STATUS_ALLOC_FAILED` when calling the forward pass.
## Information
The exact error is `RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling cublasCreate(handle)`, and the traceback indicates it happens on `output = input.matmul(weight.t())`, which is probably not informative on its own, but the whole stack is filled with transformer forward calls.
I'm using distilbert-base-multilingual-cased with French
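For what it's worth, a minimal sanity check in the spirit of the later comments (a mismatched tokenizer can produce IDs outside the model's vocabulary, which can surface as opaque CUBLAS failures on GPU) — this is an illustrative sketch, not part of the original report:
```python
from transformers import DistilBertModel, DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-multilingual-cased")
model = DistilBertModel.from_pretrained("distilbert-base-multilingual-cased")

ids = tokenizer.encode("Bonjour, mon chien est mignon")
# Every id must index a valid row of the embedding matrix; out-of-range ids
# can fail inside CUDA kernels instead of raising a clear Python error.
assert max(ids) < model.config.vocab_size
```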
## Environment info
- `transformers` version: 2.5.1
- Platform: Linux-5.4.0-3-amd64-x86_64-with-debian-bullseye-sid
- Python version: 3.7.0
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3325/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3325/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3324 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3324/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3324/comments | https://api.github.com/repos/huggingface/transformers/issues/3324/events | https://github.com/huggingface/transformers/issues/3324 | 583,541,406 | MDU6SXNzdWU1ODM1NDE0MDY= | 3,324 | Error loading finetuned bert model AttributeError: 'NoneType' object has no attribute 'endswith' | {
"login": "aurooj",
"id": 14858333,
"node_id": "MDQ6VXNlcjE0ODU4MzMz",
"avatar_url": "https://avatars.githubusercontent.com/u/14858333?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aurooj",
"html_url": "https://github.com/aurooj",
"followers_url": "https://api.github.com/users/aurooj/followers",
"following_url": "https://api.github.com/users/aurooj/following{/other_user}",
"gists_url": "https://api.github.com/users/aurooj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aurooj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aurooj/subscriptions",
"organizations_url": "https://api.github.com/users/aurooj/orgs",
"repos_url": "https://api.github.com/users/aurooj/repos",
"events_url": "https://api.github.com/users/aurooj/events{/privacy}",
"received_events_url": "https://api.github.com/users/aurooj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The `from_pretrained` method should point to a directory. Could you try to point it to a directory containing both the model weights and the config.json file?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,591 | 1,591 | NONE | null | # ❓ Questions & Help
## Details
Unable to load a fine-tuned BERT model for the language modeling task using the run_language_modeling.py script.
I used the following code to load a fine-tuned model saved on my disk:
`BertForMaskedLM.from_pretrained('outputs/pytorch_model.bin', config=config, from_tf=True)`. However, I am getting the following error:
```
Traceback (most recent call last):
  File "/home/mahdi/Desktop/pycharm-community-4.5.3/helpers/pydev/pydevd_vars.py", line 342, in evaluateExpression
    compiled = compile(expression, '<string>', 'eval')
  File "<string>", line 1
    from transformers import WEIGHTS_NAME, BertForMaskedLM
    ^
SyntaxError: invalid syntax

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/mahdi/Desktop/pycharm-community-4.5.3/helpers/pydev/pydevd_comm.py", line 1071, in doIt
    result = pydevd_vars.evaluateExpression(self.thread_id, self.frame_id, self.expression, self.doExec)
  File "/home/mahdi/Desktop/pycharm-community-4.5.3/helpers/pydev/pydevd_vars.py", line 344, in evaluateExpression
    Exec(expression, updated_globals, frame.f_locals)
  File "/home/mahdi/Desktop/pycharm-community-4.5.3/helpers/pydev/pydevd_exec2.py", line 3, in Exec
    exec(exp, global_vars, local_vars)
  File "<string>", line 2, in <module>
  File "/home/mahdi/anaconda3/envs/py36/lib/python3.6/site-packages/transformers/modeling_utils.py", line 482, in from_pretrained
    if resolved_archive_file.endswith(".index"):
AttributeError: 'NoneType' object has no attribute 'endswith'
```
Transformers version: 2.5.1
PyTorch version: 1.3.0
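As the reply in the comments notes, `from_pretrained` expects a directory containing both the weights and `config.json`, not the `.bin` file itself. A minimal sketch of that fix (the path is the one from this report; `from_tf=True` is dropped since the checkpoint is a PyTorch one):
```python
from transformers import BertForMaskedLM

# Point at the directory that holds pytorch_model.bin and config.json.
model = BertForMaskedLM.from_pretrained('outputs/')
```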
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3324/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/3324/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3323 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3323/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3323/comments | https://api.github.com/repos/huggingface/transformers/issues/3323/events | https://github.com/huggingface/transformers/pull/3323 | 583,508,378 | MDExOlB1bGxSZXF1ZXN0MzkwMjUyNjc3 | 3,323 | [Bart/Memory] don't create lm_head | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@LysandreJik \r\n```python\r\n if hasattr(output_embeddings, \"out_features\") and hasattr(input_embeddings, \"num_embeddings\"):\r\n output_embeddings.out_features = input_embeddings.num_embeddings\r\n```\r\nwas breaking because bart-large-cnn doesn't have a mask token.\r\nI can investigate more deeply if it's interesting to anyone.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3323?src=pr&el=h1) Report\n> Merging [#3323](https://codecov.io/gh/huggingface/transformers/pull/3323?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5ad2ea06af898a95744a268332431f050c62a862&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3323?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3323 +/- ##\n==========================================\n- Coverage 77.83% 77.82% -0.01% \n==========================================\n Files 100 100 \n Lines 17051 17048 -3 \n==========================================\n- Hits 13272 13268 -4 \n- Misses 3779 3780 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3323?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `98.07% <100.00%> (-0.02%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.73% <0.00%> (-0.14%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3323?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3323?src=pr&el=footer). Last update [5ad2ea0...0b8c252](https://codecov.io/gh/huggingface/transformers/pull/3323?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,584 | 1,585 | 1,585 | CONTRIBUTOR | null | ### Summary
Previously we created a `self.lm_head = nn.Linear()` with the exact same weight matrix as the input embeddings, and then skipped `tie_weights`.
This presented 3 problems:
1) 200 MB of extra GPU RAM
2) Can't `tie_weights`
3) Can't `resize_token_embeddings`
This PR alleviates all the concerns by using `lm_logits = F.linear(decoder_outputs[0], self.shared)`. It also adds more aggressive test coverage that `resize_embeddings` is changing the shape of both input and output embeddings.
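For illustration, a minimal sketch of the weight-sharing pattern (shapes and names here are illustrative, not the exact code in `modeling_bart.py`):
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, d_model = 50264, 1024
shared = nn.Embedding(vocab_size, d_model)   # input embedding table

decoder_hidden = torch.randn(1, 7, d_model)  # stand-in for decoder_outputs[0]
# Reuse the embedding matrix as the output projection instead of allocating
# a separate lm_head that duplicates the same weights on the GPU.
lm_logits = F.linear(decoder_hidden, shared.weight)  # shape (1, 7, vocab_size)
```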
### Concerns
1) If I recall from an earlier PR, tying the input and output embeddings to a single parameter is unfriendly to torchscript.
However, neither `Bart` before this change nor `T5ForConditionalGeneration`, which uses the `self.lm_head = nn.Linear` technique, passes the common torchscript tests, which suggests that the weight tying in this PR does not remove functionality that existed before it.
2) The failing test here is caused by the fact that S3 has `lm_head` in `state_dict`. I will update S3 right before this PR gets merged.
3) To pass unit tests and use `.generate`, `get_output_embeddings` must return `nn.Linear`. To satisfy this constraint, this PR makes the `nn.Linear` module on the fly when `get_output_embeddings` is called. I think (but am not sure) that this is fine because `resize_token_embeddings` works by resizing the input_embeddings then calling `tie_weights`, and we have stopped skipping `tie_weights`
(Note there is a separate but related issue that `test_common.py::test_resize_embeddings` is shallow, detailed in https://github.com/huggingface/transformers/issues/3378)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3323/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3323",
"html_url": "https://github.com/huggingface/transformers/pull/3323",
"diff_url": "https://github.com/huggingface/transformers/pull/3323.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3323.patch",
"merged_at": 1585262440000
} |
https://api.github.com/repos/huggingface/transformers/issues/3322 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3322/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3322/comments | https://api.github.com/repos/huggingface/transformers/issues/3322/events | https://github.com/huggingface/transformers/pull/3322 | 583,470,952 | MDExOlB1bGxSZXF1ZXN0MzkwMjIxMzEz | 3,322 | [BART] torch 1.0 compatibility | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3322?src=pr&el=h1) Report\n> Merging [#3322](https://codecov.io/gh/huggingface/transformers/pull/3322?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/38a555a83c8aceae77895d325174af5bd576cec7&el=desc) will **decrease** coverage by `0.93%`.\n> The diff coverage is `70.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3322?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3322 +/- ##\n==========================================\n- Coverage 77.14% 76.20% -0.94% \n==========================================\n Files 100 100 \n Lines 16972 16964 -8 \n==========================================\n- Hits 13093 12928 -165 \n- Misses 3879 4036 +157 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3322?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `97.04% <70.00%> (+0.78%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0.00%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0.00%> (-10.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0.00%> (-2.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.00% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.20% <0.00%> (-1.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.19% <0.00%> (+0.71%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3322?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3322?src=pr&el=footer). Last update [38a555a...d67639d](https://codecov.io/gh/huggingface/transformers/pull/3322?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"One note: you can add an `activation_function` attribute in `BartConfig` defaulting to \"gelu\" to be used when calling the `ACT2FN`. This lets people switch to \"gelu_new\" if they want a different trade-off accuracy versus speed/memory."
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | This PR contains two minor fixes and one piece of cleanup for the BartModel.
**1.** Previously, Bart's encoder padding mask used -10000. to represent tokens that should be ignored, then called `masked_fill(mask.to(torch.bool), -inf)` to use the mask.
There are two problems with this:
- it's confusing to set a value to a large negative number and then call `bool`; it is simpler to invert the mask and cast to `bool` immediately.
- `torch.bool` was introduced in PyTorch 1.2, so this code breaks on earlier versions.
- Fix: let `torch.eq` make the mask the correct dtype at the beginning.
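A minimal sketch of the idea (variable names are illustrative, not the exact ones in `modeling_bart.py`):
```python
import torch

pad_token_id = 1
input_ids = torch.tensor([[5, 8, 42, pad_token_id]])

# torch.eq already yields a mask with the right dtype, so there is no need to
# build a float mask of -10000s and then reinterpret it via torch.bool
# (which only exists from torch 1.2 onwards).
padding_mask = input_ids.eq(pad_token_id)  # True where tokens should be ignored

scores = torch.randn(1, 4)
scores = scores.masked_fill(padding_mask, float("-inf"))
```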
**2.** explicit use of `F.gelu` is not allowed/or broken in earlier torch versions. Let `ACT2FN` handle this logic.
**3.** An unreachable code branch is deleted.
Supplementary Material: torch v 1.2.0 [release notes](https://github.com/pytorch/pytorch/releases/tag/v1.2.0)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3322/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3322",
"html_url": "https://github.com/huggingface/transformers/pull/3322",
"diff_url": "https://github.com/huggingface/transformers/pull/3322.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3322.patch",
"merged_at": 1584633414000
} |
https://api.github.com/repos/huggingface/transformers/issues/3321 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3321/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3321/comments | https://api.github.com/repos/huggingface/transformers/issues/3321/events | https://github.com/huggingface/transformers/pull/3321 | 583,438,019 | MDExOlB1bGxSZXF1ZXN0MzkwMTkzOTQ3 | 3,321 | Init card for model | {
"login": "DukeEnglish",
"id": 18372948,
"node_id": "MDQ6VXNlcjE4MzcyOTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/18372948?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DukeEnglish",
"html_url": "https://github.com/DukeEnglish",
"followers_url": "https://api.github.com/users/DukeEnglish/followers",
"following_url": "https://api.github.com/users/DukeEnglish/following{/other_user}",
"gists_url": "https://api.github.com/users/DukeEnglish/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DukeEnglish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DukeEnglish/subscriptions",
"organizations_url": "https://api.github.com/users/DukeEnglish/orgs",
"repos_url": "https://api.github.com/users/DukeEnglish/repos",
"events_url": "https://api.github.com/users/DukeEnglish/events{/privacy}",
"received_events_url": "https://api.github.com/users/DukeEnglish/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3321?src=pr&el=h1) Report\n> Merging [#3321](https://codecov.io/gh/huggingface/transformers/pull/3321?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/38a555a83c8aceae77895d325174af5bd576cec7&el=desc) will **decrease** coverage by `0.03%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3321?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3321 +/- ##\n==========================================\n- Coverage 77.14% 77.10% -0.04% \n==========================================\n Files 100 100 \n Lines 16972 16972 \n==========================================\n- Hits 13093 13087 -6 \n- Misses 3879 3885 +6 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3321?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3321/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.40% <0.00%> (-1.08%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3321?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3321?src=pr&el=footer). Last update [38a555a...1630a1f](https://codecov.io/gh/huggingface/transformers/pull/3321?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks! [`model page`](https://huggingface.co/clue/roberta_chinese_3L312_clue_tiny)\r\n\r\nBy the way, I sent an email to the address listed [in your paper](https://arxiv.org/pdf/2003.01355.pdf).\r\nLet me know if you got it.",
"Thank you.\nYes, please. And my friend has sent an e-mail to you via that email.\n\nbest,\nJunyi\n\nOn Wed, 18 Mar 2020 at 20:27, Julien Chaumond <[email protected]>\nwrote:\n\n> Thanks! model page\n> <https://huggingface.co/clue/roberta_chinese_3L312_clue_tiny>\n>\n> By the way, I sent an email to the address listed in your paper\n> <https://arxiv.org/pdf/2003.01355.pdf>.\n> Let me know if you got it.\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/3321#issuecomment-600593364>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEMFSVEG4UOVDKYZRLCCSN3RIC43XANCNFSM4LOEAGWQ>\n> .\n>\n\n\n-- \nJunyi Li\n+ 86 136 0354 2466\ndukeenglish.github.io\n"
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | We created a card for this model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3321/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3321",
"html_url": "https://github.com/huggingface/transformers/pull/3321",
"diff_url": "https://github.com/huggingface/transformers/pull/3321.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3321.patch",
"merged_at": 1584532528000
} |
https://api.github.com/repos/huggingface/transformers/issues/3320 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3320/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3320/comments | https://api.github.com/repos/huggingface/transformers/issues/3320/events | https://github.com/huggingface/transformers/issues/3320 | 583,435,169 | MDU6SXNzdWU1ODM0MzUxNjk= | 3,320 | TF BERT not FP16 compatible? | {
"login": "volker42maru",
"id": 51976664,
"node_id": "MDQ6VXNlcjUxOTc2NjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/51976664?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/volker42maru",
"html_url": "https://github.com/volker42maru",
"followers_url": "https://api.github.com/users/volker42maru/followers",
"following_url": "https://api.github.com/users/volker42maru/following{/other_user}",
"gists_url": "https://api.github.com/users/volker42maru/gists{/gist_id}",
"starred_url": "https://api.github.com/users/volker42maru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/volker42maru/subscriptions",
"organizations_url": "https://api.github.com/users/volker42maru/orgs",
"repos_url": "https://api.github.com/users/volker42maru/repos",
"events_url": "https://api.github.com/users/volker42maru/events{/privacy}",
"received_events_url": "https://api.github.com/users/volker42maru/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I've aced same issue. Maybe it's hard coded the data type somewhere? Have you found solution?",
"Tried this on Colab TPU, same error.",
"Same here, would be convenient as hell :)",
"Having the same error also for `transformers` version 2.11.0. \r\nHere some code to easily reproduce the error:\r\n\r\n```python \r\n#!/usr/bin/env python3\r\nfrom transformers import TFBertModel, BertTokenizer\r\nfrom tensorflow.keras.mixed_precision import experimental as mixed_precision\r\n\r\npolicy = mixed_precision.Policy('mixed_float16')\r\nmixed_precision.set_policy(policy)\r\n\r\ntok = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\nmodel = TFBertModel.from_pretrained(\"bert-base-uncased\")\r\ninput_ids = tok(\"The dog is cute\", return_tensors=\"tf\").input_ids\r\nmodel(input_ids) # throws error on GPU\r\n```",
"Encountering the same issue here:\r\n```python3\r\nimport tensorflow as tf\r\nfrom transformers.modeling_tf_distilbert import TFDistilBertModel\r\n\r\ntf.keras.mixed_precision.experimental.set_policy('mixed_float16')\r\nmodel = TFDistilBertModel.from_pretrained('distilbert-base-uncased')",
"Put this issue on my TF ToDo-List :-) ",
"+1",
"Hi @patrickvonplaten, is this problem fixed?\r\nI got the same error recently with version 3.0.2",
"This is still an open problem...I didn't find the time yet to take a look! Will link this issue to the TF projects.",
"This is already solved in new version.\r\n`position_embeddings = tf.cast(self.position_embeddings(position_ids), inputs_embeds.dtype)\r\n token_type_embeddings = tf.cast(self.token_type_embeddings(token_type_ids), inputs_embeds.dtype)\r\n embeddings = inputs_embeds + position_embeddings + token_type_embeddings`"
] | 1,584 | 1,605 | 1,605 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): TFBertForQuestionAnswering
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] my own modified scripts:
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SQUaD
## To reproduce
Simple example to reproduce error:
```
import tensorflow as tf
from transformers import TFBertForQuestionAnswering
# turn on mp (fp16 operations)
tf.keras.mixed_precision.experimental.set_policy('mixed_float16')
model = TFBertForQuestionAnswering.from_pretrained('bert-base-uncased')
```
The error occurs here:
```
transformers/modeling_tf_bert.py", line 174, in _embedding
    embeddings = inputs_embeds + position_embeddings + token_type_embeddings
```
And this is the error:
```
tensorflow.python.framework.errors_impl.InvalidArgumentError: cannot compute AddV2 as input #1(zero-based) was expected to be a half tensor but is a float tensor [Op:AddV2] name: tf_bert_for_question_answering/bert/embeddings/add/
```
## Expected behavior
I want to use TF BERT with mixed precision (for faster inference on tensor core GPUs). I know that full fp16 does not work out of the box, because the model weights would need to be in fp16 as well. Mixed precision, however, should work, because only the operations are performed in fp16 while the weights stay in fp32.
I get a dtype issue. It seems the model is not fp16 compatible yet? Will this be fixed in the future?
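For reference, the kind of fix quoted in the comments above casts the embedding lookups to the compute dtype before adding; a simplified sketch (not the exact library code):
```python
import tensorflow as tf

def combine_embeddings(inputs_embeds, position_embeddings, token_type_embeddings):
    # Under the mixed_float16 policy, inputs_embeds comes out as float16 while
    # the other lookups may still be float32; cast them to match before adding.
    position_embeddings = tf.cast(position_embeddings, inputs_embeds.dtype)
    token_type_embeddings = tf.cast(token_type_embeddings, inputs_embeds.dtype)
    return inputs_embeds + position_embeddings + token_type_embeddings
```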
## Environment info
- `transformers` version: 2.5.0
- Platform: ubuntu 16.04
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (GPU)
- Tensorflow version (GPU?): 2.1.0 (GPU)
- Using GPU in script?: sort of
- Using distributed or parallel set-up in script?: nope
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3320/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3320/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3319 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3319/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3319/comments | https://api.github.com/repos/huggingface/transformers/issues/3319/events | https://github.com/huggingface/transformers/pull/3319 | 583,426,127 | MDExOlB1bGxSZXF1ZXN0MzkwMTg0MDI5 | 3,319 | [BART] cleanup: remove redundant kwargs, improve docstrings | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | Small Bart code cleanups before pip release.
### Cleanup
- Deletes unused `value` argument for SelfAttention. (`value` is always the same as `key`.) This might be moderately controversial as most attention modules take query, key, value as arguments, but this change reduces the signature to just query and key (since key is always the same as value).
- Deletes redundant `static_kv` argument for SelfAttention. It is always the same as `self.encoder_decoder_attention`.
- Context: the `static_kv` variable decides whether we want to extend the keys and values in the cache or, if `True`, use them without modification.
- This PR keeps a local `static_kv` variable because that variable name describes the purpose of the variable better than `self.encoder_decoder_attention`. But `static_kv` is no longer a kwarg. This simplifies the API and avoids having the same logic in two places.
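As a rough sketch of what the simplification amounts to (an illustrative helper, not the actual module code):
```python
import torch

def attention_inputs(query, key, encoder_decoder_attention: bool):
    """`value` always equals `key`, and `static_kv` always mirrors
    `encoder_decoder_attention`, so neither needs to be a kwarg anymore."""
    value = key
    static_kv = encoder_decoder_attention  # if True, reuse cached k/v unmodified
    return query, key, value, static_kv

q, k = torch.randn(2, 5, 16), torch.randn(2, 5, 16)
print(attention_inputs(q, k, encoder_decoder_attention=True)[-1])  # True
```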
#### Two new fast tests
- test coverage for `dummy_inputs` (previously broken)
- test coverage for the default generate kwargs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3319/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3319",
"html_url": "https://github.com/huggingface/transformers/pull/3319",
"diff_url": "https://github.com/huggingface/transformers/pull/3319.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3319.patch",
"merged_at": 1584631012000
} |
https://api.github.com/repos/huggingface/transformers/issues/3318 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3318/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3318/comments | https://api.github.com/repos/huggingface/transformers/issues/3318/events | https://github.com/huggingface/transformers/issues/3318 | 583,424,759 | MDU6SXNzdWU1ODM0MjQ3NTk= | 3,318 | pipelines.ipynb mask should be [MASK] | {
"login": "shibing624",
"id": 10249622,
"node_id": "MDQ6VXNlcjEwMjQ5NjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/10249622?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shibing624",
"html_url": "https://github.com/shibing624",
"followers_url": "https://api.github.com/users/shibing624/followers",
"following_url": "https://api.github.com/users/shibing624/following{/other_user}",
"gists_url": "https://api.github.com/users/shibing624/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shibing624/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shibing624/subscriptions",
"organizations_url": "https://api.github.com/users/shibing624/orgs",
"repos_url": "https://api.github.com/users/shibing624/repos",
"events_url": "https://api.github.com/users/shibing624/events{/privacy}",
"received_events_url": "https://api.github.com/users/shibing624/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @shibing624, \r\n\r\nThanks for opening this issue.\r\n\r\n`fill-mask` pipeline uses Roberta under the hood and the mask token is actually `<mask>` which is the one used in the notebook.\r\n\r\nHowever, it's not the safest way to use the pipeline I agree. I'll update the notebook with \r\n\r\n```python\r\nnlp_fill = pipeline('fill-mask')\r\nnlp_fill('Hugging Face is a French company based in ' + nlp_fill.tokenizer.mask_token)\r\n```\r\n\r\nThis way it will be compatible with any model.\r\n\r\nMorgan",
"Dont hesitate to reopen if I missed something :) "
] | 1,584 | 1,584 | 1,584 | NONE | null | In transformers/notebooks/03-pipelines.ipynb, the fill-mask task raises an error for <mask> with the newest version; it should be [MASK]. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3318/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3318/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3317 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3317/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3317/comments | https://api.github.com/repos/huggingface/transformers/issues/3317/events | https://github.com/huggingface/transformers/issues/3317 | 583,360,248 | MDU6SXNzdWU1ODMzNjAyNDg= | 3,317 | output value of XLNetModel changes for the same input | {
"login": "mainulquraishi",
"id": 14335238,
"node_id": "MDQ6VXNlcjE0MzM1MjM4",
"avatar_url": "https://avatars.githubusercontent.com/u/14335238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mainulquraishi",
"html_url": "https://github.com/mainulquraishi",
"followers_url": "https://api.github.com/users/mainulquraishi/followers",
"following_url": "https://api.github.com/users/mainulquraishi/following{/other_user}",
"gists_url": "https://api.github.com/users/mainulquraishi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mainulquraishi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mainulquraishi/subscriptions",
"organizations_url": "https://api.github.com/users/mainulquraishi/orgs",
"repos_url": "https://api.github.com/users/mainulquraishi/repos",
"events_url": "https://api.github.com/users/mainulquraishi/events{/privacy}",
"received_events_url": "https://api.github.com/users/mainulquraishi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It seems that after fine-tuning if I don't use `model=model.eval()`, this can happen.\r\n"
] | 1,584 | 1,584 | 1,584 | NONE | null | I was trying to use the pre-trained `XLNetModel`. I found that for the same input, each time I run the model the output values are different, which is weird to me. Here is the code I used:
```
from transformers import XLNetTokenizer, XLNetModel
import torch

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetModel.from_pretrained("xlnet-base-cased")
sent = tokenizer.encode("I love my dog")
test = torch.tensor(sent)
test = test.view(1, test.shape[0])
```
Now, if I run the model, the output is different each time.
```
out=model(test)
print(out[0])
```
Why is this the behaviour?
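(As the self-answer in the comments points out, the stochasticity comes from dropout staying active in training mode; a minimal follow-up to the snippet above, reusing `model` and `test` from it:)
```python
model = model.eval()  # switch off dropout so repeated forward passes agree
with torch.no_grad():
    out1 = model(test)[0]
    out2 = model(test)[0]
print(torch.allclose(out1, out2))  # True in eval mode
```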
I have another question: isn't this model the pre-trained transformer trained on the language modeling task (i.e., the transformer without the LM head)? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3317/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3316 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3316/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3316/comments | https://api.github.com/repos/huggingface/transformers/issues/3316/events | https://github.com/huggingface/transformers/issues/3316 | 583,302,486 | MDU6SXNzdWU1ODMzMDI0ODY= | 3,316 | TextClassificationPipeline does not work with pretrained BERT model | {
"login": "ogencoglu",
"id": 8182738,
"node_id": "MDQ6VXNlcjgxODI3Mzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8182738?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ogencoglu",
"html_url": "https://github.com/ogencoglu",
"followers_url": "https://api.github.com/users/ogencoglu/followers",
"following_url": "https://api.github.com/users/ogencoglu/following{/other_user}",
"gists_url": "https://api.github.com/users/ogencoglu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ogencoglu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ogencoglu/subscriptions",
"organizations_url": "https://api.github.com/users/ogencoglu/orgs",
"repos_url": "https://api.github.com/users/ogencoglu/repos",
"events_url": "https://api.github.com/users/ogencoglu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ogencoglu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"That would be because you're using a `BertModel` instead of a `BertModelForSequenceClassification`.",
"What model should we be using, and where can we download it from?"
] | 1,584 | 1,591 | 1,585 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): 'nlptown/bert-base-multilingual-uncased-sentiment'
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
model = BertModel.from_pretrained(pretrained_model_name_or_path='nlptown/bert-base-multilingual-uncased-sentiment')
tokenizer = BertTokenizer.from_pretrained(pretrained_model_name_or_path='nlptown/bert-base-multilingual-uncased-sentiment')
sentiment_analyzer = TextClassificationPipeline(model=model, tokenizer=tokenizer)
sentiment_analyzer('This is awesome!')
```
which raises:
```
/usr/local/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)
504 def __call__(self, *args, **kwargs):
505 outputs = super().__call__(*args, **kwargs)
--> 506 scores = np.exp(outputs) / np.exp(outputs).sum(-1)
507 return [{"label": self.model.config.id2label[item.argmax()], "score": item.max()} for item in scores]
508
ValueError: operands could not be broadcast together with shapes (1,8,768) (1,8)
```
## Expected behavior
A sentiment score.
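Following the suggestion in the comments, a minimal sketch of the fix using a model with a classification head (same checkpoint; the printed output format is indicative only):
```python
from transformers import BertForSequenceClassification, BertTokenizer, TextClassificationPipeline

model = BertForSequenceClassification.from_pretrained('nlptown/bert-base-multilingual-uncased-sentiment')
tokenizer = BertTokenizer.from_pretrained('nlptown/bert-base-multilingual-uncased-sentiment')
sentiment_analyzer = TextClassificationPipeline(model=model, tokenizer=tokenizer)
print(sentiment_analyzer('This is awesome!'))  # e.g. [{'label': '5 stars', 'score': 0.9...}]
```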
## Environment info
- `transformers` version: 2.5.1
- Platform: Ubuntu 16.04
- Python version: 3.7
- PyTorch version (GPU?): 1.4.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3316/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3316/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3315 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3315/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3315/comments | https://api.github.com/repos/huggingface/transformers/issues/3315/events | https://github.com/huggingface/transformers/issues/3315 | 583,296,904 | MDU6SXNzdWU1ODMyOTY5MDQ= | 3,315 | how does masked_lm_labels work ? | {
"login": "mahdirezaey",
"id": 34715488,
"node_id": "MDQ6VXNlcjM0NzE1NDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/34715488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mahdirezaey",
"html_url": "https://github.com/mahdirezaey",
"followers_url": "https://api.github.com/users/mahdirezaey/followers",
"following_url": "https://api.github.com/users/mahdirezaey/following{/other_user}",
"gists_url": "https://api.github.com/users/mahdirezaey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mahdirezaey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mahdirezaey/subscriptions",
"organizations_url": "https://api.github.com/users/mahdirezaey/orgs",
"repos_url": "https://api.github.com/users/mahdirezaey/repos",
"events_url": "https://api.github.com/users/mahdirezaey/events{/privacy}",
"received_events_url": "https://api.github.com/users/mahdirezaey/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi all\r\nfrom hugging face{https://huggingface.co/transformers/model_doc/bert.html#bertformaskedlm}\r\n\r\nin this code :\r\n{\r\nfrom transformers import BertTokenizer, BertForMaskedLM\r\nimport torch\r\n\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\nmodel = BertForMaskedLM.from_pretrained('bert-base-uncased')\r\n\r\ninput_ids = torch.tensor(tokenizer.encode(\"Hello, my dog is cute\", add_special_tokens=True)).unsqueeze(0) # Batch size 1\r\noutputs = model(input_ids, masked_lm_labels=input_ids)\r\n\r\nloss, prediction_scores = outputs[:2]\r\n}\r\n\r\nwhat happens by {outputs = model(input_ids, masked_lm_labels=input_ids)} ?\r\n\r\nit will automatically makes [mask] 15% of all the tokens in each sentence of each bs , and calculates loss just for them ?\r\n@thomwolf @tholor",
"No I don't think so, you need to mask the tokens and then pass them to model, look here (e.g. evaluate) for an example\r\nhttps://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py",
"@blackcat84 , thanks it helped a lot",
"@blackcat84 \r\n\r\nand one more thing ,\r\ndoes any function in those scripts , concatenate the short lines to each other ?\r\nin order not to be enforced to pad each line so much",
"It's been a while so I might be wrong but I think you are correct, I don't remember in which function though. A simple way to be sure about it is to pass a dummy input to the script/function and check it by yourself",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,590 | 1,590 | NONE | null | # ❓ Questions & Help
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3315/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3314 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3314/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3314/comments | https://api.github.com/repos/huggingface/transformers/issues/3314/events | https://github.com/huggingface/transformers/issues/3314 | 583,242,412 | MDU6SXNzdWU1ODMyNDI0MTI= | 3,314 | Mismatch in the accuracy figures | {
"login": "thak123",
"id": 3891859,
"node_id": "MDQ6VXNlcjM4OTE4NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3891859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thak123",
"html_url": "https://github.com/thak123",
"followers_url": "https://api.github.com/users/thak123/followers",
"following_url": "https://api.github.com/users/thak123/following{/other_user}",
"gists_url": "https://api.github.com/users/thak123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thak123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thak123/subscriptions",
"organizations_url": "https://api.github.com/users/thak123/orgs",
"repos_url": "https://api.github.com/users/thak123/repos",
"events_url": "https://api.github.com/users/thak123/events{/privacy}",
"received_events_url": "https://api.github.com/users/thak123/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,584 | 1,584 | 1,584 | NONE | null | # ❓ Questions & Help
Hi, I just wanted to know whether the "bert-base-multilingual-cased" model and https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip are the same.
I tried using the tokenizer from multi_cased_L-12_H-768_A-12 together with the pretrained "bert-base-multilingual-cased" model, and this performed better than using both the tokenizer and the model from pretrained "bert-base-multilingual-cased".
I was fine-tuning a sentiment analysis task, and the two tokenizer choices above give different performance.
Can anybody shed light on this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3314/timeline | completed | null | null |