url (stringlengths 62-66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64 377M-2.15B) | node_id (stringlengths 18-32) | number (int64 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64 1.54k-1.71k) | updated_at (int64 1.54k-1.71k) | closed_at (int64 1.54k-1.71k ⌀) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0-234k ⌀) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses 3 values) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/3213 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3213/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3213/comments | https://api.github.com/repos/huggingface/transformers/issues/3213/events | https://github.com/huggingface/transformers/pull/3213 | 578,902,715 | MDExOlB1bGxSZXF1ZXN0Mzg2Mzk4NTEz | 3,213 | fix typo in docstring demonstrating invocation of PreTrainedEncoderDecoder.from_pretrained | {
"login": "mgoldey",
"id": 659477,
"node_id": "MDQ6VXNlcjY1OTQ3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/659477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mgoldey",
"html_url": "https://github.com/mgoldey",
"followers_url": "https://api.github.com/users/mgoldey/followers",
"following_url": "https://api.github.com/users/mgoldey/following{/other_user}",
"gists_url": "https://api.github.com/users/mgoldey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mgoldey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mgoldey/subscriptions",
"organizations_url": "https://api.github.com/users/mgoldey/orgs",
"repos_url": "https://api.github.com/users/mgoldey/repos",
"events_url": "https://api.github.com/users/mgoldey/events{/privacy}",
"received_events_url": "https://api.github.com/users/mgoldey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,583 | 1,584 | 1,584 | CONTRIBUTOR | null | | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3213/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3213/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3213",
"html_url": "https://github.com/huggingface/transformers/pull/3213",
"diff_url": "https://github.com/huggingface/transformers/pull/3213.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3213.patch",
"merged_at": 1584625675000
} |
https://api.github.com/repos/huggingface/transformers/issues/3212 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3212/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3212/comments | https://api.github.com/repos/huggingface/transformers/issues/3212/events | https://github.com/huggingface/transformers/pull/3212 | 578,885,648 | MDExOlB1bGxSZXF1ZXN0Mzg2Mzg0ODkw | 3,212 | Update README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | - Update title
- Remove metrics | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3212/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3212/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3212",
"html_url": "https://github.com/huggingface/transformers/pull/3212",
"diff_url": "https://github.com/huggingface/transformers/pull/3212.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3212.patch",
"merged_at": 1583931801000
} |
https://api.github.com/repos/huggingface/transformers/issues/3211 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3211/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3211/comments | https://api.github.com/repos/huggingface/transformers/issues/3211/events | https://github.com/huggingface/transformers/pull/3211 | 578,884,309 | MDExOlB1bGxSZXF1ZXN0Mzg2MzgzNzcw | 3,211 | Update README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | Change title to clarify the model description | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3211/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3211",
"html_url": "https://github.com/huggingface/transformers/pull/3211",
"diff_url": "https://github.com/huggingface/transformers/pull/3211.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3211.patch",
"merged_at": 1583931788000
} |
https://api.github.com/repos/huggingface/transformers/issues/3210 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3210/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3210/comments | https://api.github.com/repos/huggingface/transformers/issues/3210/events | https://github.com/huggingface/transformers/pull/3210 | 578,883,480 | MDExOlB1bGxSZXF1ZXN0Mzg2MzgzMTAx | 3,210 | Update README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | - Remove metrics until other benchmarks are used to test the model | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3210/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3210/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3210",
"html_url": "https://github.com/huggingface/transformers/pull/3210",
"diff_url": "https://github.com/huggingface/transformers/pull/3210.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3210.patch",
"merged_at": 1583931777000
} |
https://api.github.com/repos/huggingface/transformers/issues/3209 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3209/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3209/comments | https://api.github.com/repos/huggingface/transformers/issues/3209/events | https://github.com/huggingface/transformers/pull/3209 | 578,882,419 | MDExOlB1bGxSZXF1ZXN0Mzg2MzgyMjE1 | 3,209 | Update README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | - Remove metrics until tested on other xquad benchmarks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3209/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3209/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3209",
"html_url": "https://github.com/huggingface/transformers/pull/3209",
"diff_url": "https://github.com/huggingface/transformers/pull/3209.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3209.patch",
"merged_at": 1583931740000
} |
https://api.github.com/repos/huggingface/transformers/issues/3208 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3208/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3208/comments | https://api.github.com/repos/huggingface/transformers/issues/3208/events | https://github.com/huggingface/transformers/issues/3208 | 578,828,557 | MDU6SXNzdWU1Nzg4Mjg1NTc= | 3,208 | Error loading pretrained bert-base-multilingual-cased | {
"login": "lucky-bai",
"id": 123435,
"node_id": "MDQ6VXNlcjEyMzQzNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/123435?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucky-bai",
"html_url": "https://github.com/lucky-bai",
"followers_url": "https://api.github.com/users/lucky-bai/followers",
"following_url": "https://api.github.com/users/lucky-bai/following{/other_user}",
"gists_url": "https://api.github.com/users/lucky-bai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucky-bai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucky-bai/subscriptions",
"organizations_url": "https://api.github.com/users/lucky-bai/orgs",
"repos_url": "https://api.github.com/users/lucky-bai/repos",
"events_url": "https://api.github.com/users/lucky-bai/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucky-bai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You're creating a configuration based on the size of `bert-base-uncased`, so yes, it will work for that checkpoint. For any other checkpoint, however, you would need to change the values which are different (e.g. vocab size, which is what's failing in your case). It is indicated in the [documentation](https://huggingface.co/transformers/model_doc/bert.html#bertconfig):\r\n\r\n> This is the configuration class to store the configuration of a BertModel. It is used to instantiate an BERT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the BERT bert-base-uncased architecture.\r\n\r\nIn order to load a configuration automatically from a checkpoint, you may use `from_pretrained` on the configuration as well. In your case, that would be:\r\n\r\n```py\r\nimport transformers\r\n\r\nconfig = transformers.BertConfig.from_pretrained(\"bert-base-multilingual-cased\", output_hidden_states=True)\r\nbert_model = transformers.AutoModel.from_pretrained('bert-base-multilingual-cased', config=config)\r\nprint(bert_model)\r\n```"
] | 1,583 | 1,584 | 1,584 | NONE | null | # 🐛 Bug
## Information
Loading `bert-base-multilingual-cased` from pretrained gives an error:
```
Traceback (most recent call last):
File "error.py", line 4, in <module>
bert_model = transformers.AutoModel.from_pretrained('bert-base-multilingual-cased', config=config)
File "/scratch/gobi1/bai/bai-conda/lib/python3.7/site-packages/transformers/modeling_auto.py", line 380, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/scratch/gobi1/bai/bai-conda/lib/python3.7/site-packages/transformers/modeling_utils.py", line 558, in from_pretrained
model.__class__.__name__, "\n\t".join(error_msgs)
RuntimeError: Error(s) in loading state_dict for BertModel:
size mismatch for bert.embeddings.word_embeddings.weight: copying a param with shape torch.Size([119547, 768]) from checkpoint, the shape in current model is torch.Size([30522, 768]).
```
## To reproduce
The following snippet produces the error:
```
import transformers
config = transformers.BertConfig(output_hidden_states=True)
bert_model = transformers.AutoModel.from_pretrained('bert-base-multilingual-cased', config=config)
print(bert_model)
```
## Expected behavior
The model should load. Note that `bert-base-uncased` is able to load properly.
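For reference, the loading variant from the comments (reading the config from the same checkpoint, so the vocab size matches) does work:
```python
import transformers

config = transformers.BertConfig.from_pretrained("bert-base-multilingual-cased", output_hidden_states=True)
bert_model = transformers.AutoModel.from_pretrained("bert-base-multilingual-cased", config=config)
print(bert_model)
```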
## Environment info
- `transformers` version: 2.5.1
- Platform: Linux
- Python version: 3.7.5
- PyTorch version (GPU?): 1.4.0 with GPU
- Tensorflow version (GPU?): N/A
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3208/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3208/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3207 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3207/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3207/comments | https://api.github.com/repos/huggingface/transformers/issues/3207/events | https://github.com/huggingface/transformers/issues/3207 | 578,744,211 | MDU6SXNzdWU1Nzg3NDQyMTE= | 3,207 | Pipeline for Question Answering: How to return multiple correct answers output? | {
"login": "rcontesti",
"id": 13105045,
"node_id": "MDQ6VXNlcjEzMTA1MDQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/13105045?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rcontesti",
"html_url": "https://github.com/rcontesti",
"followers_url": "https://api.github.com/users/rcontesti/followers",
"following_url": "https://api.github.com/users/rcontesti/following{/other_user}",
"gists_url": "https://api.github.com/users/rcontesti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rcontesti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rcontesti/subscriptions",
"organizations_url": "https://api.github.com/users/rcontesti/orgs",
"repos_url": "https://api.github.com/users/rcontesti/repos",
"events_url": "https://api.github.com/users/rcontesti/events{/privacy}",
"received_events_url": "https://api.github.com/users/rcontesti/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
},
{
"id": 1834052129,
"node_id": "MDU6TGFiZWwxODM0MDUyMTI5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/High-Level%20feature",
"name": "High-Level feature",
"color": "f7c9a3",
"default": false,
"description": ""
},
{
"id": 1834052333,
"node_id": "MDU6TGFiZWwxODM0MDUyMzMz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Question%20Answering",
"name": "Ex: Question Answering",
"color": "86FFCF",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I need a solution to this problem as well please help\r\n\r\nThanks.",
"The solution is to use the `topk` parameter, for example, following the example in https://huggingface.co/transformers/task_summary.html about question answering, if you want 3 answer:\r\n\r\n` result = nlp(question=\"What is extractive question answering?\", context=context, topk = 3)`",
"@fumpe Can this work for sagemaker? Actually I am trying to deploy this pipeline in sagemaker, how can we customize the topk parameter in that? (As Sagemaker works a bit differently), Sagemaker currently returns only 1 answer, and am unable to modify the parameters to increase the number of returned items. Thank you.\r\n\r\nThis is my model and pipeline setup in Sagemaker:\r\nhub = {\r\n'HF_MODEL_ID':'valhalla/t5-base-qa-qg-hl',\r\n'HF_TASK':'text2text-generation'\r\n}"
] | 1,583 | 1,674 | 1,589 | NONE | null | Very simple question: I'm using the transformers question-answering pipeline on a very long piece of text, and there are multiple places where the correct answer occurs. I want them all.
I was wondering if I could retrieve the 10 best scores of output instead of just one.
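Something like the following sketch is what I am after (using the `topk` argument that the comments point to; the question string and `long_text` are placeholders):
```python
from transformers import pipeline

nlp = pipeline("question-answering")
# topk > 1 makes the pipeline return a list of the best-scoring spans
answers = nlp(question="What am I looking for?", context=long_text, topk=10)
for a in answers:
    print(a["score"], a["answer"])
```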
Many thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3207/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3207/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3206 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3206/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3206/comments | https://api.github.com/repos/huggingface/transformers/issues/3206/events | https://github.com/huggingface/transformers/issues/3206 | 578,736,258 | MDU6SXNzdWU1Nzg3MzYyNTg= | 3,206 | More details about DistilBERT experiment setting. | {
"login": "silencio94",
"id": 40610160,
"node_id": "MDQ6VXNlcjQwNjEwMTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/40610160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silencio94",
"html_url": "https://github.com/silencio94",
"followers_url": "https://api.github.com/users/silencio94/followers",
"following_url": "https://api.github.com/users/silencio94/following{/other_user}",
"gists_url": "https://api.github.com/users/silencio94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silencio94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silencio94/subscriptions",
"organizations_url": "https://api.github.com/users/silencio94/orgs",
"repos_url": "https://api.github.com/users/silencio94/repos",
"events_url": "https://api.github.com/users/silencio94/events{/privacy}",
"received_events_url": "https://api.github.com/users/silencio94/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Did you take a look at the [distillation README](https://github.com/huggingface/transformers/tree/master/examples/distillation)? It shows the command that was used to train the distilled model.",
"Hey @silencio94, if you need help with distillation, drop me an email at clement [at] huggingface [dot] co.",
"Thanks for replying. I'm planing to do some experiments on my DistilBERT after distillation. after Then I'll send an email or leave another issue. thanks!"
] | 1,583 | 1,584 | 1,584 | NONE | null | The NIPS workshop paper (http://arxiv.org/abs/1910.01108) does not provide the loss weights or other distillation hyperparameters (temperature, learning rate, epochs, steps...), and neither does the blog post (https://medium.com/huggingface/distilbert-8cf3380435b5). So when I try to experiment with these scripts (/transformers/examples/distillation/), it's hard to set the hyperparameters.
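For reference, the objective those weights multiply looks roughly like this (a sketch, not the library's exact code; the logits, the auxiliary losses, and the temperature value are placeholders):
```python
import torch
import torch.nn.functional as F

# Dummy stand-ins for one batch of student/teacher MLM logits and auxiliary losses
student_logits = torch.randn(8, 128, 30522)
teacher_logits = torch.randn(8, 128, 30522)
loss_mlm = torch.tensor(0.0)  # masked-LM loss placeholder
loss_cos = torch.tensor(0.0)  # hidden-state cosine loss placeholder
alpha_ce = alpha_mlm = alpha_cos = 0.33  # equal weights, as in my run
T = 2.0  # softmax temperature (assumed value)

# KL divergence between temperature-softened student and teacher distributions
loss_ce = F.kl_div(
    F.log_softmax(student_logits / T, dim=-1),
    F.softmax(teacher_logits / T, dim=-1),
    reduction="batchmean",
) * (T ** 2)
loss = alpha_ce * loss_ce + alpha_mlm * loss_mlm + alpha_cos * loss_cos
```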
Actually, I tried to train my model with the distillation/ scripts (loss weights all set equally to 0.33), and got an undesirable result. :sob: I think it would be helpful for others if the experimental settings of DistilBERT were written explicitly in the readme file. Have a good day. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3206/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3206/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3205 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3205/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3205/comments | https://api.github.com/repos/huggingface/transformers/issues/3205/events | https://github.com/huggingface/transformers/issues/3205 | 578,583,425 | MDU6SXNzdWU1Nzg1ODM0MjU= | 3,205 | where is the position emdeddings in bert for training a new model from scratch ? | {
"login": "2hip3ng",
"id": 38064349,
"node_id": "MDQ6VXNlcjM4MDY0MzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/38064349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/2hip3ng",
"html_url": "https://github.com/2hip3ng",
"followers_url": "https://api.github.com/users/2hip3ng/followers",
"following_url": "https://api.github.com/users/2hip3ng/following{/other_user}",
"gists_url": "https://api.github.com/users/2hip3ng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/2hip3ng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/2hip3ng/subscriptions",
"organizations_url": "https://api.github.com/users/2hip3ng/orgs",
"repos_url": "https://api.github.com/users/2hip3ng/repos",
"events_url": "https://api.github.com/users/2hip3ng/events{/privacy}",
"received_events_url": "https://api.github.com/users/2hip3ng/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Right here\r\n\r\nhttps://github.com/huggingface/transformers/blob/31f2437f07cf014a042789f52fa1519a485e8b2b/src/transformers/modeling_bert.py#L150\r\n\r\nIn the future, please make an effort to write a decent post that explains exactly what you need. This is very low quality.",
"Sry for low quality. But how about its initialization? In the paper, it uses sin and cos functions, while the code seems like using random initialization.\n\n\n\n\n| |\nWang\n|\n|\n邮箱:[email protected]\n|\n\nSignature is customized by Netease Mail Master\n\nOn 03/10/2020 22:15, Bram Vanroy wrote:\n\nRight here\n\nhttps://github.com/huggingface/transformers/blob/31f2437f07cf014a042789f52fa1519a485e8b2b/src/transformers/modeling_bert.py#L150\n\nIn the future, please make an effort to write a decent post that explains exactly what you need. This is very low quality.\n\n—\nYou are receiving this because you authored the thread.\nReply to this email directly, view it on GitHub, or unsubscribe.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,589 | 1,589 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
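For context, BERT's position embeddings are a learned `nn.Embedding` table (randomly initialized), not the fixed sin/cos table from the original Transformer paper. A minimal sketch of both, with sizes assumed for bert-base:
```python
import math
import torch
from torch import nn

max_len, hidden = 512, 768  # assumed bert-base sizes

# What modeling_bert.py uses: a learned, randomly initialized lookup table
learned_pos = nn.Embedding(max_len, hidden)

# The fixed sinusoidal alternative from "Attention Is All You Need"
pe = torch.zeros(max_len, hidden)
pos = torch.arange(max_len, dtype=torch.float).unsqueeze(1)
div = torch.exp(torch.arange(0, hidden, 2, dtype=torch.float) * (-math.log(10000.0) / hidden))
pe[:, 0::2] = torch.sin(pos * div)
pe[:, 1::2] = torch.cos(pos * div)
```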
**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3205/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3205/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3204 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3204/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3204/comments | https://api.github.com/repos/huggingface/transformers/issues/3204/events | https://github.com/huggingface/transformers/issues/3204 | 578,542,586 | MDU6SXNzdWU1Nzg1NDI1ODY= | 3,204 | UnboundLocalError: local variable 'tokenizer' referenced before assignment | {
"login": "PosoSAgapo",
"id": 33200481,
"node_id": "MDQ6VXNlcjMzMjAwNDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/33200481?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PosoSAgapo",
"html_url": "https://github.com/PosoSAgapo",
"followers_url": "https://api.github.com/users/PosoSAgapo/followers",
"following_url": "https://api.github.com/users/PosoSAgapo/following{/other_user}",
"gists_url": "https://api.github.com/users/PosoSAgapo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PosoSAgapo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PosoSAgapo/subscriptions",
"organizations_url": "https://api.github.com/users/PosoSAgapo/orgs",
"repos_url": "https://api.github.com/users/PosoSAgapo/repos",
"events_url": "https://api.github.com/users/PosoSAgapo/events{/privacy}",
"received_events_url": "https://api.github.com/users/PosoSAgapo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Plus:I have seen a similiar issue in this project,however the problem in that issue is that he did not input the right pretrain_weights.But I do not think that will be the solution in here",
"Similiarly,I aslo tried DistilBert,Roberta,XLMRoberta,these 3 models also cannot work for me,the error message is the same as the one I described above.",
"I just tried this and cannot reproduce the behaviour that you indicate. Are you running this from a notebook? Try restarting your kernel and running it again.",
 I just tried this">
"> I just tried this and cannot reproduce the behaviour that you indicate. Are you running this from a notebook? Try restarting your kernel and running it again.\r\n\r\nI run this program on the Linux GPU server; I tried restarting the Python program, but the problem still exists. Would this be a problem with downloading the model? ",
"No. UnboundLocalError simply means that Python hasn't seen this variable before, which cannot occur in your code snippet. If the models were downloaded incorrectly, you'd get another error. Even if the `tokenizer` was initialized as `None` you'd get another error. \r\n\r\nAre you sure that is your _only_ code that is running? Please post the full trace.",
"> No. UnboundLocalError simply means that Python hasn't seen this variable before, which cannot occur in your code snippet. If the models were downloaded incorrectly, you'd get another error. Even if the `tokenizer` was initialized as `None` you'd get another error.\r\n> \r\n> Are you sure that is your _only_ code that is running? Please pos the full trace.\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 3, in <module>\r\n File \"/users4/bwchen/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py\", line 302, in from_pretrained\r\n return cls._from_pretrained(*inputs, **kwargs)\r\n File \"/users4/bwchen/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py\", line 438, in _from_pretrained\r\n tokenizer = cls(*init_inputs, **init_kwargs)\r\n File \"/users4/bwchen/anaconda3/lib/python3.7/site-packages/transformers/tokenization_bert.py\", line 164, in __init__\r\n \"model use `tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`\".format(vocab_file))\r\nValueError: Can't find a vocabulary file at path '/users4/bwchen/.cache/torch/transformers/37cc1eaaea18a456726fc28ecb438852f0ca1d9e7d259e6e3747ee33065936f6'. To load the vocabulary from a Google pretrained model use `tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`\r\n```\r\nI am sure that is the only code I was running at that time , I am tring to reproduce this error.This time it is working properly when the model_class goes the aforementioned 'wrong' model XLMModel. However,when the model continues to run,I met another problem when the model was the DistillBert, does this error means that I have to use BertTokenizer instead of DistillBertTokenizer?",
"I can also attest to this error.\r\n\r\nI am using a Kaggle notebook, and I get this error after running this in my first cell. Most of it is default code, bottom two lines are the key ones. \r\n\r\n```\r\n# This Python 3 environment comes with many helpful analytics libraries installed\r\n# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python\r\n# For example, here's several helpful packages to load in \r\n\r\nimport numpy as np # linear algebra\r\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\r\n\r\n# Input data files are available in the \"../input/\" directory.\r\n# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory\r\n\r\nimport os\r\nfor dirname, _, filenames in os.walk('/kaggle/input'):\r\n for filename in filenames:\r\n print(os.path.join(dirname, filename))\r\n\r\n# Any results you write to the current directory are saved as output.\r\nprint(os.getcwd(), os.listdir())\r\n\r\nfrom transformers import RobertaTokenizer\r\ntknzr = RobertaTokenizer.from_pretrained('roberta-large')\r\n```\r\n\r\nError thrown\r\n```\r\nUnboundLocalError Traceback (most recent call last)\r\n<ipython-input-1-7957db35f110> in <module>\r\n 19 from transformers import RobertaTokenizer\r\n 20 \r\n---> 21 tknzr = RobertaTokenizer.from_pretrained('roberta-large')\r\n\r\n/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils.py in from_pretrained(cls, *inputs, **kwargs)\r\n 300 \r\n 301 \"\"\"\r\n--> 302 return cls._from_pretrained(*inputs, **kwargs)\r\n 303 \r\n 304 \r\n\r\n/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)\r\n 442 \r\n 443 # Save inputs and kwargs for saving and re-loading with ``save_pretrained``\r\n--> 444 tokenizer.init_inputs = init_inputs\r\n 445 tokenizer.init_kwargs = init_kwargs\r\n 446 \r\n\r\nUnboundLocalError: local variable 'tokenizer' referenced before assignment\r\n```\r\n\r\nKaggle runs transformers version 2.3.0 by default. After updating to 2.5.1 it worked just fine. To update on Kaggle, turn the internet option on in the settings in the right side. Then do `!pip install -U transformers`",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,590 | 1,590 | NONE | null | I am running the example code on the homepage.
However, I met this problem:
```python
import torch
from transformers import *

MODELS = [(BertModel, BertTokenizer, 'bert-base-uncased'),
          (OpenAIGPTModel, OpenAIGPTTokenizer, 'openai-gpt'),
          (GPT2Model, GPT2Tokenizer, 'gpt2'),
          (CTRLModel, CTRLTokenizer, 'ctrl'),
          (TransfoXLModel, TransfoXLTokenizer, 'transfo-xl-wt103'),
          (XLNetModel, XLNetTokenizer, 'xlnet-base-cased'),
          (XLMModel, XLMTokenizer, 'xlm-mlm-enfr-1024'),
          (DistilBertModel, DistilBertTokenizer, 'distilbert-base-cased'),
          (RobertaModel, RobertaTokenizer, 'roberta-base'),
          (XLMRobertaModel, XLMRobertaTokenizer, 'xlm-roberta-base'),
          ]

for model_class, tokenizer_class, pretrained_weights in MODELS:
    tokenizer = tokenizer_class.from_pretrained(pretrained_weights)
    model = model_class.from_pretrained(pretrained_weights)
    input_ids = torch.tensor([tokenizer.encode("Here is some text to encode", add_special_tokens=True)])
    with torch.no_grad():
        last_hidden_states = model(input_ids)[0]
```
```
UnboundLocalError: local variable 'tokenizer' referenced before assignment
```
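(Per the comment thread, this traced back to an older transformers install and a stale vocabulary cache; upgrading with `pip install -U transformers` to 2.5.1 made the error disappear.)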
This happened when the model_class goes to XLMModel. I do not quite understand why this happens, because the problem only occurs when the model is XLMModel. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3204/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3204/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3203 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3203/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3203/comments | https://api.github.com/repos/huggingface/transformers/issues/3203/events | https://github.com/huggingface/transformers/issues/3203 | 578,494,489 | MDU6SXNzdWU1Nzg0OTQ0ODk= | 3,203 | Attention mask always returns array of ones for CamembertTokenizer.batch_encode_plus | {
"login": "Wissben",
"id": 23744619,
"node_id": "MDQ6VXNlcjIzNzQ0NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23744619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wissben",
"html_url": "https://github.com/Wissben",
"followers_url": "https://api.github.com/users/Wissben/followers",
"following_url": "https://api.github.com/users/Wissben/following{/other_user}",
"gists_url": "https://api.github.com/users/Wissben/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wissben/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wissben/subscriptions",
"organizations_url": "https://api.github.com/users/Wissben/orgs",
"repos_url": "https://api.github.com/users/Wissben/repos",
"events_url": "https://api.github.com/users/Wissben/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wissben/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"**EDIT** : \r\nI've been accidentally testing this code on the 2.4.1 version in a different environment. Since I've updated to 2.5.1 the behavior is as expected."
] | 1,583 | 1,583 | 1,583 | NONE | null | # 🐛 Bug
## Information
The model I'm using is CamembertTokenizer for the French language.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
- First init the models with the following code:
```python
camembert_tokenizer = CamembertTokenizer.from_pretrained('../models/', cache_dir='./models')
camembert_tf_model = TFCamembertModel.from_pretrained('../models/', output_hidden_states=True,
cache_dir='./models'
)
camembert_tf_model.trainable = False
```
- Prepare the input data:
```python
text = ' '.join(['je' for i in range(25)])
texts = [ text, "je suis cool"]
input_ids = camembert_tokenizer.batch_encode_plus(texts,
add_special_tokens=True,
max_length=8,
return_tensors='tf')
print(input_ids)
```
## Expected behavior
What should happen is that the padded tokens should have a mask value of 0, if I've correctly understood the docs. So the output of the snippet should be:
```
{'input_ids': <tf.Tensor: shape=(2, 8), dtype=int32, numpy=
array([[ 5, 50, 50, 50, 50, 50, 50, 6],
[ 5, 50, 146, 4261, 6, 1, 1, 1]], dtype=int32)>,
'token_type_ids': <tf.Tensor: shape=(2, 8), dtype=int32, numpy=
array([[0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 1]], dtype=int32)>,
'attention_mask': <tf.Tensor: shape=(2, 8), dtype=int32, numpy=
array([[1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 0, 0, 0]], dtype=int32)>}
```
Instead, I'm always getting an attention_mask full of ones, like this:
```
{'input_ids': <tf.Tensor: shape=(2, 8), dtype=int32, numpy=
array([[ 5, 50, 50, 50, 50, 50, 50, 6],
[ 5, 50, 146, 4261, 6, 1, 1, 1]], dtype=int32)>,
'token_type_ids': <tf.Tensor: shape=(2, 8), dtype=int32, numpy=
array([[0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 1]], dtype=int32)>,
'attention_mask': <tf.Tensor: shape=(2, 8), dtype=int32, numpy=
array([[1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1]], dtype=int32)>}
```
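For what it's worth, the mask I expect can be rebuilt by hand from the pad token (a sketch continuing the snippet above; CamemBERT's pad id is 1, as the `input_ids` show):
```python
import tensorflow as tf

# 1 where the token is real, 0 where it is padding
expected_mask = tf.cast(
    tf.not_equal(input_ids["input_ids"], camembert_tokenizer.pad_token_id),
    tf.int32,
)
```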
## Environment info
- `transformers` version: 2.5.1
- Platform: Linux-5.3.0-40-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?: (False)
- Using distributed or parallel set-up in script?: (False)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3203/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3203/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3202 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3202/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3202/comments | https://api.github.com/repos/huggingface/transformers/issues/3202/events | https://github.com/huggingface/transformers/pull/3202 | 578,324,964 | MDExOlB1bGxSZXF1ZXN0Mzg1OTI3MzM5 | 3,202 | Update README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3202?src=pr&el=h1) Report\n> Merging [#3202](https://codecov.io/gh/huggingface/transformers/pull/3202?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5ca356a464e98e065488205f3fcf9247f56c3832?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3202?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3202 +/- ##\n==========================================\n+ Coverage 77.96% 77.97% +<.01% \n==========================================\n Files 98 98 \n Lines 16668 16668 \n==========================================\n+ Hits 12996 12997 +1 \n+ Misses 3672 3671 -1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3202?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.72% <0%> (+0.15%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3202?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3202?src=pr&el=footer). Last update [5ca356a...bce6ca3](https://codecov.io/gh/huggingface/transformers/pull/3202?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | - Clarify that the model is not trained on the evaluation dataset | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3202/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3202",
"html_url": "https://github.com/huggingface/transformers/pull/3202",
"diff_url": "https://github.com/huggingface/transformers/pull/3202.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3202.patch",
"merged_at": 1583852375000
} |
https://api.github.com/repos/huggingface/transformers/issues/3201 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3201/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3201/comments | https://api.github.com/repos/huggingface/transformers/issues/3201/events | https://github.com/huggingface/transformers/pull/3201 | 578,323,833 | MDExOlB1bGxSZXF1ZXN0Mzg1OTI2NDQ2 | 3,201 | Update README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | - Fix path of tokenizer
- Clarify that the model is not trained on the evaluation set | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3201/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3201",
"html_url": "https://github.com/huggingface/transformers/pull/3201",
"diff_url": "https://github.com/huggingface/transformers/pull/3201.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3201.patch",
"merged_at": 1583852358000
} |
https://api.github.com/repos/huggingface/transformers/issues/3200 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3200/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3200/comments | https://api.github.com/repos/huggingface/transformers/issues/3200/events | https://github.com/huggingface/transformers/issues/3200 | 578,281,528 | MDU6SXNzdWU1NzgyODE1Mjg= | 3,200 | TF GPT2 Language model can't be created with from_pretrained() for specific shortcut name | {
"login": "bilal2vec",
"id": 29356759,
"node_id": "MDQ6VXNlcjI5MzU2NzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/29356759?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bilal2vec",
"html_url": "https://github.com/bilal2vec",
"followers_url": "https://api.github.com/users/bilal2vec/followers",
"following_url": "https://api.github.com/users/bilal2vec/following{/other_user}",
"gists_url": "https://api.github.com/users/bilal2vec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bilal2vec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bilal2vec/subscriptions",
"organizations_url": "https://api.github.com/users/bilal2vec/orgs",
"repos_url": "https://api.github.com/users/bilal2vec/repos",
"events_url": "https://api.github.com/users/bilal2vec/events{/privacy}",
"received_events_url": "https://api.github.com/users/bilal2vec/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"For some reason there isn't a TF pretrained checkpoint for gpt2-xl [here](https://github.com/huggingface/transformers/blob/9499a3778e1b782f03bc3b15b2ae0cbd20b6391f/src/transformers/modeling_tf_gpt2.py#L39) but there is for Pytorch [here](https://github.com/huggingface/transformers/blob/4134100363e878693aa41f4a25a667ca46d80a9e/src/transformers/modeling_gpt2.py#L35)\r\n\r\nFixing this should only involve converting the pt checkpoint to a tf one. I'd be happy to do it myself if there is a conversion script that can convert Pytorch checkpoints to TF",
"Converting a pytorch checkpoint to tf works with \r\n\r\n```python\r\nmodel = GPT2LMHeadModel.from_pretrained('gpt2-xl')\r\nmodel.save_pretrained('./')\r\nmodel = TFGPT2LMHeadModel.from_pretrained('./', from_pt=True)\r\nmodel.save_pretrained('./out')\r\n```\r\n\r\nIf you can tell me where to upload the TF checkpoint to, I'll open up a pull request",
"Hi @bkkaggle thanks for pointing this out! @julien-c could you maybe help out here:\r\n\r\nWhile the model: \r\n\r\n\"gpt2-xl\": \"https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-xl-pytorch_model.bin\",\r\n\r\ndoes exist in PyTorch. It does not exist for TF 2. Could we add it as well? "
] | 1,583 | 1,584 | 1,584 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): TFGPT2LMHeadModel
The colab notebook works for all model sizes except for gpt2-xl, where it throws an error. It looks like it can't download the correct checkpoint from the model name (gpt2-xl).
I tried running the colab notebook with other gpt2-models and they all work.
Stack trace:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-8-068b0d38bee3> in <module>()
1 strategy = tf.distribute.experimental.TPUStrategy(resolver)
2 with strategy.scope():
----> 3 model = create_model()
4
5 loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
2 frames
<ipython-input-7-f6b9ea32b94a> in create_model()
1 def create_model():
----> 2 return TFGPT2LMHeadModel.from_pretrained('gpt2-xl')
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
401 model(model.dummy_inputs, training=False) # build the network with dummy inputs
402
--> 403 assert os.path.isfile(resolved_archive_file), "Error retrieving file {}".format(resolved_archive_file)
404 # 'by_name' allow us to do transfer learning by skipping/adding layers
405 # see https://github.com/tensorflow/tensorflow/blob/00fad90125b18b80fe054de1055770cfb8fe4ba3/tensorflow/python/keras/engine/network.py#L1339-L1357
/usr/lib/python3.6/genericpath.py in isfile(path)
28 """Test whether a path is a regular file"""
29 try:
---> 30 st = os.stat(path)
31 except OSError:
32 return False
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
```
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* my own modified scripts: (give details below)
See colab: https://colab.research.google.com/drive/12gEGdxUjyVLBSUjkjngAWiE_ENIUIV8o
The tasks I am working on is:
* my own task or dataset: (give details below)
Finetuning gpt2-xl on wikitext2
## To reproduce
Run the colab notebook,
## Expected behavior
All gpt2 model sizes work except for gpt2-xl
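A workaround from the comment thread is to convert the PyTorch checkpoint locally and load it with `from_pt=True` (local paths are placeholders):
```python
from transformers import GPT2LMHeadModel, TFGPT2LMHeadModel

# Download the PyTorch weights, save them locally, then load them into TF
GPT2LMHeadModel.from_pretrained('gpt2-xl').save_pretrained('./gpt2-xl-local')
model = TFGPT2LMHeadModel.from_pretrained('./gpt2-xl-local', from_pt=True)
model.save_pretrained('./gpt2-xl-tf')
```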
## Environment info
- `transformers` version: master
- Platform: google colab
- Tensorflow version (GPU?): 2.1 (TPU) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3200/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3199 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3199/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3199/comments | https://api.github.com/repos/huggingface/transformers/issues/3199/events | https://github.com/huggingface/transformers/pull/3199 | 578,246,960 | MDExOlB1bGxSZXF1ZXN0Mzg1ODY1MTk0 | 3,199 | Model card for albert-base-v2-squad2 | {
"login": "traviemcg",
"id": 37486396,
"node_id": "MDQ6VXNlcjM3NDg2Mzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/37486396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/traviemcg",
"html_url": "https://github.com/traviemcg",
"followers_url": "https://api.github.com/users/traviemcg/followers",
"following_url": "https://api.github.com/users/traviemcg/following{/other_user}",
"gists_url": "https://api.github.com/users/traviemcg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/traviemcg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/traviemcg/subscriptions",
"organizations_url": "https://api.github.com/users/traviemcg/orgs",
"repos_url": "https://api.github.com/users/traviemcg/repos",
"events_url": "https://api.github.com/users/traviemcg/events{/privacy}",
"received_events_url": "https://api.github.com/users/traviemcg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3199?src=pr&el=h1) Report\n> Merging [#3199](https://codecov.io/gh/huggingface/transformers/pull/3199?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/49debe62fdc96e161f866dd8914d5915477bb742?src=pr&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3199?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3199 +/- ##\n=========================================\n+ Coverage 77.98% 78% +0.01% \n=========================================\n Files 98 98 \n Lines 16645 16645 \n=========================================\n+ Hits 12981 12984 +3 \n+ Misses 3664 3661 -3\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3199?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3199/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68% <0%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3199/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.4% <0%> (-0.16%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3199/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.64% <0%> (+0.97%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3199?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3199?src=pr&el=footer). Last update [49debe6...f53348d](https://codecov.io/gh/huggingface/transformers/pull/3199?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks for sharing. [Model page](https://huggingface.co/twmkn9/albert-base-v2-squad2)"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | Just creating model card for new community model! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3199/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3199",
"html_url": "https://github.com/huggingface/transformers/pull/3199",
"diff_url": "https://github.com/huggingface/transformers/pull/3199.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3199.patch",
"merged_at": 1583797036000
} |
https://api.github.com/repos/huggingface/transformers/issues/3198 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3198/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3198/comments | https://api.github.com/repos/huggingface/transformers/issues/3198/events | https://github.com/huggingface/transformers/pull/3198 | 578,232,353 | MDExOlB1bGxSZXF1ZXN0Mzg1ODUyOTYw | 3,198 | XLM-R Tokenizer now passes common tests + Integration tests | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This solved a CUDA runtime error for me. Strange! Thanks for this PR!"
] | 1,583 | 1,584 | 1,584 | MEMBER | null | XLM-R Tokenizer had a lot of issues that were not identified as no testing was done on it.
closes #2993
closes #2795
closes #2741
closes #2727
closes #2508
This fixes all the above issues, and works for all official checkpoints as well as other SPM files.
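As a quick illustration of the kind of behaviour now covered by tests, here is an encode/decode round-trip sketch; the sample sentence is arbitrary and the printed ids are not asserted values:
```python
from transformers import XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")

text = "Hello, how are you?"
ids = tokenizer.encode(text, add_special_tokens=True)

# Decoding the encoded ids should recover the original text.
print(ids)
print(tokenizer.decode(ids, skip_special_tokens=True))
```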
However, there are a few things I dislike about the way things stand, which I'm detailing in the comments below. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3198/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3198",
"html_url": "https://github.com/huggingface/transformers/pull/3198",
"diff_url": "https://github.com/huggingface/transformers/pull/3198.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3198.patch",
"merged_at": 1584539570000
} |
https://api.github.com/repos/huggingface/transformers/issues/3197 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3197/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3197/comments | https://api.github.com/repos/huggingface/transformers/issues/3197/events | https://github.com/huggingface/transformers/pull/3197 | 578,202,842 | MDExOlB1bGxSZXF1ZXN0Mzg1ODI4NTM3 | 3,197 | [model upload] Support for organizations | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,583 | 1,583 | 1,583 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3197/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3197/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3197",
"html_url": "https://github.com/huggingface/transformers/pull/3197",
"diff_url": "https://github.com/huggingface/transformers/pull/3197.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3197.patch",
"merged_at": 1583789638000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3196 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3196/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3196/comments | https://api.github.com/repos/huggingface/transformers/issues/3196/events | https://github.com/huggingface/transformers/pull/3196 | 578,167,790 | MDExOlB1bGxSZXF1ZXN0Mzg1Nzk5MTE1 | 3,196 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3196?src=pr&el=h1) Report\n> Merging [#3196](https://codecov.io/gh/huggingface/transformers/pull/3196?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3aca02efb3d4ff2d6d231c55d3b9367e61b7c0c4?src=pr&el=desc) will **decrease** coverage by `0.97%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3196?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3196 +/- ##\n==========================================\n- Coverage 77.98% 77.01% -0.98% \n==========================================\n Files 98 98 \n Lines 16660 16660 \n==========================================\n- Hits 12993 12831 -162 \n- Misses 3667 3829 +162\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3196?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96% <0%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `75.84% <0%> (+0.21%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.56% <0%> (+0.31%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3196?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3196?src=pr&el=footer). Last update [3aca02e...99b8533](https://codecov.io/gh/huggingface/transformers/pull/3196?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3196/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3196",
"html_url": "https://github.com/huggingface/transformers/pull/3196",
"diff_url": "https://github.com/huggingface/transformers/pull/3196.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3196.patch",
"merged_at": 1583852294000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3195 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3195/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3195/comments | https://api.github.com/repos/huggingface/transformers/issues/3195/events | https://github.com/huggingface/transformers/issues/3195 | 578,153,641 | MDU6SXNzdWU1NzgxNTM2NDE= | 3,195 | Error reported when fine tuning on my dataset using ''run_language_modeling.py" | {
"login": "jasonachonu",
"id": 34540386,
"node_id": "MDQ6VXNlcjM0NTQwMzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/34540386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jasonachonu",
"html_url": "https://github.com/jasonachonu",
"followers_url": "https://api.github.com/users/jasonachonu/followers",
"following_url": "https://api.github.com/users/jasonachonu/following{/other_user}",
"gists_url": "https://api.github.com/users/jasonachonu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jasonachonu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jasonachonu/subscriptions",
"organizations_url": "https://api.github.com/users/jasonachonu/orgs",
"repos_url": "https://api.github.com/users/jasonachonu/repos",
"events_url": "https://api.github.com/users/jasonachonu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jasonachonu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834052847,
"node_id": "MDU6TGFiZWwxODM0MDUyODQ3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Finetuning)",
"name": "Ex: LM (Finetuning)",
"color": "26FFF8",
"default": false,
"description": "Related to language modeling fine-tuning"
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,589 | 1,589 | NONE | null | # 🐛 Bug
## Information
Model I am using (RoBERTa):
Language I am using the model on (English):
The problem arises when using:
* [ ] the official example scripts: (give details below)
Using the script provided in run_language_modeling.py, I tried to fine-tune the model on my own dataset, but it raises a `KeyError: 1` just as the first epoch and first iteration are about to run.
* [ ] my own modified scripts: (give details below)
I only modified the script to read in my dataset using the `LineByLineTextDataset` class: my sentences are in an Excel file, and I convert them to a list the way the class expects.
Please advise whether there is a proper format my dataset should be in before fine-tuning it.
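For context, here is a sketch of the kind of conversion I am doing (the file and column names are placeholders), producing the one-sentence-per-line text file that the script's `LineByLineTextDataset` reads:
```python
import pandas as pd

# Hypothetical input: an Excel sheet with one sentence per row.
df = pd.read_excel("sentences.xlsx")

# Write a plain-text file with one sentence per line, which is the
# line-by-line format the dataset class splits on.
with open("train.txt", "w", encoding="utf-8") as f:
    for sentence in df["sentence"].astype(str):
        f.write(sentence.strip() + "\n")
```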
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
Fine-tuning the BERT language model on my own data.
## To reproduce
Steps to reproduce the behavior:
## Expected behavior
## Environment info
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?): yes
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3195/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3194 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3194/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3194/comments | https://api.github.com/repos/huggingface/transformers/issues/3194/events | https://github.com/huggingface/transformers/pull/3194 | 578,100,071 | MDExOlB1bGxSZXF1ZXN0Mzg1NzQzMzM5 | 3,194 | [fix] Bart CNN Example: model.to(device) | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3194?src=pr&el=h1) Report\n> Merging [#3194](https://codecov.io/gh/huggingface/transformers/pull/3194?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5164ea91a7b4d35cb03867233527fa383a651775?src=pr&el=desc) will **decrease** coverage by `1.09%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3194?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3194 +/- ##\n========================================\n- Coverage 78.09% 77% -1.1% \n========================================\n Files 98 98 \n Lines 16660 16660 \n========================================\n- Hits 13011 12829 -182 \n- Misses 3649 3831 +182\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3194?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.67% <0%> (-3.14%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96% <0%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.4% <0%> (-0.16%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3194?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3194?src=pr&el=footer). Last update [5164ea9...f3272af](https://codecov.io/gh/huggingface/transformers/pull/3194?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3194/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3194",
"html_url": "https://github.com/huggingface/transformers/pull/3194",
"diff_url": "https://github.com/huggingface/transformers/pull/3194.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3194.patch",
"merged_at": 1583780976000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3193 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3193/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3193/comments | https://api.github.com/repos/huggingface/transformers/issues/3193/events | https://github.com/huggingface/transformers/issues/3193 | 578,075,314 | MDU6SXNzdWU1NzgwNzUzMTQ= | 3,193 | Where is the default download address for pre-trained weight | {
"login": "649459021",
"id": 49975880,
"node_id": "MDQ6VXNlcjQ5OTc1ODgw",
"avatar_url": "https://avatars.githubusercontent.com/u/49975880?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/649459021",
"html_url": "https://github.com/649459021",
"followers_url": "https://api.github.com/users/649459021/followers",
"following_url": "https://api.github.com/users/649459021/following{/other_user}",
"gists_url": "https://api.github.com/users/649459021/gists{/gist_id}",
"starred_url": "https://api.github.com/users/649459021/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/649459021/subscriptions",
"organizations_url": "https://api.github.com/users/649459021/orgs",
"repos_url": "https://api.github.com/users/649459021/repos",
"events_url": "https://api.github.com/users/649459021/events{/privacy}",
"received_events_url": "https://api.github.com/users/649459021/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It's in your torch home:\r\n\r\n```py\r\n>>> from torch.hub import _get_torch_home\r\n>>> _get_torch_home()\r\n'/home/<USER>/.cache/torch'\r\n```"
] | 1,583 | 1,583 | 1,583 | NONE | null | # ❓ Questions & Help
```
from transformers import DistilBertTokenizer, DistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertModel.from_pretrained('distilbert-base-uncased')
```
I can't find the downloaded file.
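A sketch of one way to make the location explicit (the `cache_dir` value here is just an example); without it, the files apparently land in the torch cache, typically `~/.cache/torch/transformers`:
```python
from transformers import DistilBertModel

# Passing an explicit cache_dir makes the download location obvious.
model = DistilBertModel.from_pretrained(
    "distilbert-base-uncased", cache_dir="./hf_cache"
)
```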
Thanks for your help | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3193/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3192 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3192/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3192/comments | https://api.github.com/repos/huggingface/transformers/issues/3192/events | https://github.com/huggingface/transformers/issues/3192 | 578,071,935 | MDU6SXNzdWU1NzgwNzE5MzU= | 3,192 | Provide comprehensive guide & best-practices for run_language_modeling.py | {
"login": "marrrcin",
"id": 6958772,
"node_id": "MDQ6VXNlcjY5NTg3NzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6958772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marrrcin",
"html_url": "https://github.com/marrrcin",
"followers_url": "https://api.github.com/users/marrrcin/followers",
"following_url": "https://api.github.com/users/marrrcin/following{/other_user}",
"gists_url": "https://api.github.com/users/marrrcin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marrrcin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marrrcin/subscriptions",
"organizations_url": "https://api.github.com/users/marrrcin/orgs",
"repos_url": "https://api.github.com/users/marrrcin/repos",
"events_url": "https://api.github.com/users/marrrcin/events{/privacy}",
"received_events_url": "https://api.github.com/users/marrrcin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834052847,
"node_id": "MDU6TGFiZWwxODM0MDUyODQ3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Finetuning)",
"name": "Ex: LM (Finetuning)",
"color": "26FFF8",
"default": false,
"description": "Related to language modeling fine-tuning"
},
{
"id": 1834053007,
"node_id": "MDU6TGFiZWwxODM0MDUzMDA3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Pretraining)",
"name": "Ex: LM (Pretraining)",
"color": "76FFAF",
"default": false,
"description": "Related to language modeling pre-training"
}
] | closed | false | null | [] | [
"Even I tried to follow the blog and train a LM from scratch but the instructions are ambiguous. Like for ex config file is passed as command line args but if its passed it tries to load it and throws error .",
"I've covered some of the parts here: https://zablo.net/blog/post/training-roberta-from-scratch-the-missing-guide-polish-language-model/",
"https://stackoverflow.com/questions/61232399/decoding-predictions-for-masked-language-modeling-task-using-custom-bpe\r\n\r\nI posted a question related to this on SO. Any help is appreciated! @marrrcin ",
"bump!",
"> I've covered some of the parts here: https://zablo.net/blog/post/training-roberta-from-scratch-the-missing-guide-polish-language-model/\r\n\r\nHey Marcin, Your post is very informative. Thanks for that. Could you say a few words on the reasoning for the vocab size being 32000 exactly? Are there any heuristics that helped your decision? (or) anyone here can say a few words on if there are any good heuristics you can follow to choose this hyperparameter? Thanks",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,596 | 1,596 | CONTRIBUTOR | null | # 🚀 Feature request
Provide a comprehensive guide for running the scripts included in the repository, especially `run_language_modeling.py`, its parameters, and model configurations.
## Motivation
1. The current version has `argparse`-powered help, in which many parameters are either mysterious or have variable runtime behaviour (e.g. `tokenizer_name` is sometimes a path, and the value the user provides is expected to mean different things for different models, e.g. for RoBERTa vs. BERT). Also, `tokenizer_name` claims that `If both are None, initialize a new tokenizer.`, which does not work at all, e.g. when you use a RoBERTa model. It should train the new tokenizer on the provided `train_data` right away.
1. There are a bunch of parameters that are critical to run the script at all (!), which are not even mentioned here https://huggingface.co/blog/how-to-train or in the notebook https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb. For example:
for RoBERTa, without `"max_position_embeddings": 514,` in the config, the script crashes with (a config sketch is at the end of this issue):
```
CUDA error: device-side assert triggered
```
I had to dig into GitHub to find some unresolved issues around this case and try out a few solutions before the script finally executed (https://github.com/huggingface/transformers/issues/2877).
1. Models with LM heads will train even when the head's output size differs from the tokenizer's vocabulary size; the script should warn the user or (better) raise an exception in such scenarios.
1. Describe what the input dataset should look like. Is it required to have one sentence per line, one article per line, or maybe one paragraph per line?
1. Using multi-GPU on a single machine together with the `--evaluate_during_training` parameter crashes the script. Why? It might be worth an explanation; it's probably also a bug (https://github.com/huggingface/transformers/issues/1801).
1. These are just off the top of my head; I will update this issue as I come up with more, and maybe others will add to this thread too.
Given the number of issues currently open, I suspect that I'm not the only one who struggles with the example script. **The biggest problem here is that running it without a proper configuration can cost a lot, yet the script will still execute, yielding a garbage model.**
Moreover, by improving the docs and providing a best-practices guide, you can give many people an even better toolkit for their research and business.
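For illustration, a minimal config sketch for the `max_position_embeddings` point above; the sizes here are assumptions for a small model trained from scratch, not library defaults:
```python
from transformers import RobertaConfig

config = RobertaConfig(
    vocab_size=52_000,            # must match the trained tokenizer's vocab size
    max_position_embeddings=514,  # RoBERTa offsets position ids past the padding
                                  # index, so 512-token inputs need 514 embedding
                                  # slots; without this the script can index past
                                  # the table and hit the device-side assert above
    num_attention_heads=12,
    num_hidden_layers=6,
    type_vocab_size=1,
)
```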
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3192/reactions",
"total_count": 22,
"+1": 15,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 7
} | https://api.github.com/repos/huggingface/transformers/issues/3192/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3191 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3191/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3191/comments | https://api.github.com/repos/huggingface/transformers/issues/3191/events | https://github.com/huggingface/transformers/pull/3191 | 577,932,162 | MDExOlB1bGxSZXF1ZXN0Mzg1NjA3MTA3 | 3,191 | Add integration tests lm generate torch tf | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Approx. how long do these tests take on a single V100?",
"> Approx. how long do these tests take on a single V100?\r\n\r\nDon't know how long they take on a V100. On a cpu, all tests combined (7 model tests for PT and 6 model tests for TF) take less than 10min (whereas `test_modeling_tf_xlnet.py`, `test_modeling_xlnet.py` and `test_modeling_transfo_xl.py` combined take ca. 8min)",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3191?src=pr&el=h1) Report\n> Merging [#3191](https://codecov.io/gh/huggingface/transformers/pull/3191?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e03129ad447ad7670fcc6206e5eb27a5435d4d86?src=pr&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3191?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3191 +/- ##\n==========================================\n+ Coverage 78.15% 78.16% +0.01% \n==========================================\n Files 98 98 \n Lines 16641 16641 \n==========================================\n+ Hits 13006 13008 +2 \n+ Misses 3635 3633 -2\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3191?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/3191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `90.4% <100%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.56% <0%> (+0.15%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.89% <0%> (+0.19%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3191?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3191?src=pr&el=footer). Last update [e03129a...9050ffe](https://codecov.io/gh/huggingface/transformers/pull/3191?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Good to merge for me"
] | 1,583 | 1,583 | 1,583 | MEMBER | null | Add integration tests for all LM models that are able to generate language.
- All integration tests use `do_sample=False` (greedy) generation and verify that TF 2.0 and PT yield the same results.
- Fixed a small bug with `TFXLMWithLMHeadModel`
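For reference, a sketch of the shape of such a parity check (the model and prompt here are arbitrary, not the ones used in the actual tests):
```python
import tensorflow as tf
from transformers import GPT2LMHeadModel, GPT2Tokenizer, TFGPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
prompt = tokenizer.encode("Today is a nice day", return_tensors="pt")

# Greedy decoding is deterministic, so both frameworks must agree exactly.
pt_ids = GPT2LMHeadModel.from_pretrained("gpt2").generate(
    prompt, do_sample=False, max_length=20
)
tf_ids = TFGPT2LMHeadModel.from_pretrained("gpt2").generate(
    tf.constant(prompt.numpy()), do_sample=False, max_length=20
)

assert pt_ids[0].tolist() == tf_ids[0].numpy().tolist()
```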
"url": "https://api.github.com/repos/huggingface/transformers/issues/3191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3191/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3191",
"html_url": "https://github.com/huggingface/transformers/pull/3191",
"diff_url": "https://github.com/huggingface/transformers/pull/3191.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3191.patch",
"merged_at": 1583836158000
} |
https://api.github.com/repos/huggingface/transformers/issues/3190 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3190/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3190/comments | https://api.github.com/repos/huggingface/transformers/issues/3190/events | https://github.com/huggingface/transformers/pull/3190 | 577,930,759 | MDExOlB1bGxSZXF1ZXN0Mzg1NjA1OTI3 | 3,190 | fix repetition penalty mask in tf | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3190?src=pr&el=h1) Report\n> Merging [#3190](https://codecov.io/gh/huggingface/transformers/pull/3190?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b29fed790bdaa4be38b6d2c5de88e307474ea38d?src=pr&el=desc) will **increase** coverage by `0.1%`.\n> The diff coverage is `42.85%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3190?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3190 +/- ##\n=========================================\n+ Coverage 77.98% 78.09% +0.1% \n=========================================\n Files 98 98 \n Lines 16641 16645 +4 \n=========================================\n+ Hits 12978 12999 +21 \n+ Misses 3663 3646 -17\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3190?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.74% <100%> (+0.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.38% <33.33%> (+3.59%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.56% <0%> (-0.16%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3190?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3190?src=pr&el=footer). Last update [b29fed7...847d370](https://codecov.io/gh/huggingface/transformers/pull/3190?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Good to merge for me.",
"> Small typo bug otherwise it's good to go\r\n\r\nThanks for spotting!"
] | 1,583 | 1,583 | 1,583 | MEMBER | null | Fixed a bug with TF 2.0 `repetition_penalty` during generation and made `early_stopping` an argument to the `generate()` function. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3190/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3190",
"html_url": "https://github.com/huggingface/transformers/pull/3190",
"diff_url": "https://github.com/huggingface/transformers/pull/3190.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3190.patch",
"merged_at": 1583767797000
} |
https://api.github.com/repos/huggingface/transformers/issues/3189 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3189/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3189/comments | https://api.github.com/repos/huggingface/transformers/issues/3189/events | https://github.com/huggingface/transformers/issues/3189 | 577,928,756 | MDU6SXNzdWU1Nzc5Mjg3NTY= | 3,189 | I want to import the model path on my owm computer? | {
"login": "xiongma",
"id": 30991932,
"node_id": "MDQ6VXNlcjMwOTkxOTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/30991932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiongma",
"html_url": "https://github.com/xiongma",
"followers_url": "https://api.github.com/users/xiongma/followers",
"following_url": "https://api.github.com/users/xiongma/following{/other_user}",
"gists_url": "https://api.github.com/users/xiongma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiongma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiongma/subscriptions",
"organizations_url": "https://api.github.com/users/xiongma/orgs",
"repos_url": "https://api.github.com/users/xiongma/repos",
"events_url": "https://api.github.com/users/xiongma/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiongma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"There are a lot of examples in [the documentation](https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel)."
] | 1,583 | 1,583 | 1,583 | NONE | null | Hi, I want to load a model from a path on my own computer. How should I write or change the code? Can you give me an example? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3189/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3188 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3188/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3188/comments | https://api.github.com/repos/huggingface/transformers/issues/3188/events | https://github.com/huggingface/transformers/issues/3188 | 577,887,430 | MDU6SXNzdWU1Nzc4ODc0MzA= | 3,188 | Beam search sometimes fails this assert error | {
"login": "Laksh1997",
"id": 59830552,
"node_id": "MDQ6VXNlcjU5ODMwNTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/59830552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Laksh1997",
"html_url": "https://github.com/Laksh1997",
"followers_url": "https://api.github.com/users/Laksh1997/followers",
"following_url": "https://api.github.com/users/Laksh1997/following{/other_user}",
"gists_url": "https://api.github.com/users/Laksh1997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Laksh1997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Laksh1997/subscriptions",
"organizations_url": "https://api.github.com/users/Laksh1997/orgs",
"repos_url": "https://api.github.com/users/Laksh1997/repos",
"events_url": "https://api.github.com/users/Laksh1997/events{/privacy}",
"received_events_url": "https://api.github.com/users/Laksh1997/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"@Laksh1997 thanks a lot for reporting this error. Can you provide a code snippet and maybe a link to your data to easily reproduce this error? \r\n\r\nIn the meantime: \r\n\r\n- There has been a lot of changes recently in the beam search decoding -> I would recommend using the master branch of beam search decoding! \r\n- Beam search is not really made for top_p_top_k sampling, when using beam search I recommend setting do_sample=False",
"Right, I'll try the master branch and inform any further problems.\r\n\r\nFor generation of samples (without any context or input), there is no point of having sample set to False, as one will always generate the same sample as a result.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I'm running into this issue somewhat commonly as well.\r\n\r\n- Python 3.7\r\n- transformers 3.0.2\r\n- torch 1.5.1\r\n- macOS 10.15\r\n- Running on CPU\r\n\r\nCode to reproduce error:\r\n```python\r\nimport torch\r\nfrom transformers import MarianMTModel, MarianTokenizer\r\n\r\ntorch.manual_seed(15)\r\nphrase = \"Ich verstehe nicht, was du sagen willst. Sprich doch Deutsch!\"\r\nmodel = MarianMTModel.from_pretrained(\"Helsinki-NLP/opus-mt-de-ZH\")\r\ntokenizer = MarianTokenizer.from_pretrained(\"Helsinki-NLP/opus-mt-de-ZH\")\r\n\r\n# Nucleus sampling as per https://github.com/huggingface/blog/blob/master/notebooks/02_how_to_generate.ipynb\r\ninput_ids = tokenizer.prepare_translation_batch([phrase])\r\ntoken_ids_p = model.generate(\r\n **input_ids,\r\n do_sample=True,\r\n top_p=0.9,\r\n)\r\n\r\ntranslated_p = [tokenizer.decode(string, skip_special_tokens=True) for string in token_ids_p]\r\nprint(translated_p)\r\n```\r\n\r\nError:\r\n```\r\nTraceback (most recent call last):\r\n File \"temp.py\", line 14, in <module>\r\n top_p=0.9,\r\n File \"/Users/kaz/envs/venv-3.7/lib/python3.7/site-packages/torch/autograd/grad_mode.py\", line 15, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/Users/kaz/envs/venv-3.7/lib/python3.7/site-packages/transformers/generation_utils.py\", line 459, in generate\r\n model_specific_kwargs=model_specific_kwargs,\r\n File \"/Users/kaz/envs/venv-3.7/lib/python3.7/site-packages/transformers/generation_utils.py\", line 757, in _generate_beam_search\r\n assert len(next_sent_beam) == num_beams, \"Beam should always be full\"\r\n```\r\n@patrickvonplaten Is it possible to revive this issue?"
] | 1,583 | 1,598 | 1,589 | NONE | null | # 🐛 Bug
## Information
Model I am using:
GPT2 with custom config (vocab=27)
Language I am using the model on (English, Chinese ...):
Molecules... (see https://en.wikipedia.org/wiki/Simplified_molecular-input_line-entry_system)
The problem arises when using:
Using the .generate(beam=2) function
The task I am working on is:
Generating molecules
## The Problem
Essentially, every so often the beam search fails an assertion; details below.
## To reproduce
Steps to reproduce the behavior:
Just do the generation with these args:
```args = { "num_beams": 3, "max_length": 50, "temperature": 1, "repetition_penalty": 1, "length_penalty": 1, "do_sample": true, "top_k": 50, "top_p": 1} ```
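For completeness, a sketch of how I call it — the model setup and start token below are placeholder assumptions; only the sampling arguments above are exactly what I use:
```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(vocab_size=27)  # custom SMILES-style vocabulary
model = GPT2LMHeadModel(config)

# Placeholder start token id for the custom vocab.
input_ids = torch.zeros((16, 1), dtype=torch.long)

outputs = model.generate(
    input_ids,
    num_beams=3,
    max_length=50,
    temperature=1.0,
    repetition_penalty=1.0,
    length_penalty=1.0,
    do_sample=True,
    top_k=50,
    top_p=1.0,
)
```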
The generation runs fine for several batches, but then, after hundreds of iterations, it sometimes crashes with this error:
```File "/Users/laithani/anaconda3/envs/TransformerVAE/lib/python3.7/site-packages/transformers/modeling_utils.py", line 979, in _generate_beam_search
assert len(next_batch_beam) == num_beams * (batch_ex + 1), f"{next_batch_beam}, {num_beams}, {batch_ex}"
```
I then added print statements to `modeling_utils.py` to see what is going on, changing the assert line to include the offending values:
```
assert len(next_batch_beam) == num_beams * (batch_ex + 1), f"{next_batch_beam}, {num_beams}, {batch_ex}"
```
And with this I got:
```
AssertionError: [(tensor(-19.8421), tensor(26), tensor(0)), (tensor(-20.9710), tensor(26), tensor(0)), (tensor(-30.5064), tensor(5), tensor(0)), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (tensor(-17.4236), tensor(11), tensor(9)), (tensor(-26.3645), tensor(16), tensor(9)), (tensor(-23.9410), tensor(16), tensor(9)), (0, 0, 0), (0, 0, 0), (0, 0, 0), (tensor(-58.0648), tensor(0), tensor(15))], 3, 5
```
If you count the entries in the list, its length is 16, which does not equal 3 * (5 + 1) = 18.
Not sure what is going on here; I'm looking into the code now to figure it out.
- `transformers` version: 2.4.1
- Platform: Ubuntu and Mac (problem occurs in both)
- Python version: 3.6 and 3.7 (problem occurs in both)
- PyTorch version (GPU?): 1.3.0
- Tensorflow version (GPU?): N/A
- Using GPU in script?: Yes, either V100 or K80
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3188/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3187 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3187/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3187/comments | https://api.github.com/repos/huggingface/transformers/issues/3187/events | https://github.com/huggingface/transformers/issues/3187 | 577,875,174 | MDU6SXNzdWU1Nzc4NzUxNzQ= | 3,187 | Knowing the specific data set used for DistilBertForQuestionAnswering | {
"login": "valldabo2",
"id": 10504546,
"node_id": "MDQ6VXNlcjEwNTA0NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/10504546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/valldabo2",
"html_url": "https://github.com/valldabo2",
"followers_url": "https://api.github.com/users/valldabo2/followers",
"following_url": "https://api.github.com/users/valldabo2/following{/other_user}",
"gists_url": "https://api.github.com/users/valldabo2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/valldabo2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/valldabo2/subscriptions",
"organizations_url": "https://api.github.com/users/valldabo2/orgs",
"repos_url": "https://api.github.com/users/valldabo2/repos",
"events_url": "https://api.github.com/users/valldabo2/events{/privacy}",
"received_events_url": "https://api.github.com/users/valldabo2/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834052333,
"node_id": "MDU6TGFiZWwxODM0MDUyMzMz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Question%20Answering",
"name": "Ex: Question Answering",
"color": "86FFCF",
"default": false,
"description": ""
},
{
"id": 1838876023,
"node_id": "MDU6TGFiZWwxODM4ODc2MDIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Distillation",
"name": "Distillation",
"color": "d4c5f9",
"default": false,
"description": "Related to model distillation"
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,589 | 1,589 | NONE | null | # ❓ Questions & Help
## Details
Hi,
I am using the pipeline for question answering and am wondering what dataset was used to train the underlying model. The loaded model is DistilBertForQuestionAnswering, but I need to know whether it was fine-tuned on SQuAD 1.1 or on SQuAD 2.0, which adds the possibility of unanswerable questions.
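One way to check (a sketch on my part; the class name in the comment is what gets loaded by default, and the config may or may not reveal the training data):
```python
from transformers import pipeline

qa = pipeline("question-answering")
print(type(qa.model).__name__)  # e.g. DistilBertForQuestionAnswering
print(qa.model.config)          # hoping this hints at SQuAD 1.1 vs. 2.0
```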
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3187/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3186 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3186/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3186/comments | https://api.github.com/repos/huggingface/transformers/issues/3186/events | https://github.com/huggingface/transformers/pull/3186 | 577,857,555 | MDExOlB1bGxSZXF1ZXN0Mzg1NTQ0NTc0 | 3,186 | CPU/GPU memory benchmarking utilities - Remove support for python 3.5 (now only 3.6+) | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3186?src=pr&el=h1) Report\n> Merging [#3186](https://codecov.io/gh/huggingface/transformers/pull/3186?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e03129ad447ad7670fcc6206e5eb27a5435d4d86&el=desc) will **decrease** coverage by `0.50%`.\n> The diff coverage is `30.76%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3186?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3186 +/- ##\n==========================================\n- Coverage 78.15% 77.64% -0.51% \n==========================================\n Files 98 99 +1 \n Lines 16641 16795 +154 \n==========================================\n+ Hits 13006 13041 +35 \n- Misses 3635 3754 +119 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3186?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3186/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.34% <15.38%> (-3.07%)` | :arrow_down: |\n| [src/transformers/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3186/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmtfdXRpbHMucHk=) | `31.74% <31.74%> (ø)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/3186/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.92% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/configuration\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3186/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.29% <100.00%> (+0.07%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3186/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.07% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3186/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.53% <0.00%> (-2.17%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3186?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3186?src=pr&el=footer). Last update [e03129a...cb67ca6](https://codecov.io/gh/huggingface/transformers/pull/3186?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,584 | 1,584 | MEMBER | null | This PR add some utilities to benchmark (RAM) memory consumption of the models.
This is actually a generic utility that can work with any arbitrary python code
Ex:
```python
import torch
from transformers import GPT2Model, GPT2Tokenizer
from transformers import start_memory_tracing, stop_memory_tracing
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
sequence = tokenizer.encode("Hello how are you", return_tensors='pt')
# Line by line memory tracing (all code in the module `transformers`).
trace = start_memory_tracing(modules_to_trace="transformers")
output = model(sequence)
summary = stop_memory_tracing(trace)
# Summary contains three fields:
# `sequential`: list of line-by-line consumption (with line code and location)
# `cumulative`: list of cumulative line-by-line consumption (when lines are executed several times), ordered from the most memory-consuming line to the least (also with line code and location)
# `total`: total memory consumption of the script (defaults to summing the memory increase at each line and ignoring released memory; can be set to count increases and releases as well, but that is less reliable on Ubuntu).
# Each `Memory` object contains CPU, GPU and CPU + GPU memory, each as both an int and a human-readable string
print(f"Total memory consumption: {summary.total}")
top_line = summary.cumulative[0]
print(f"Consumed {top_line.cpu_gpu}: {top_line.frame.line_text} at {top_line.frame.filename}:{top_line.frame.line_number}")
```
Incorporated in the `./examples/benchmarks.py` script. Example of a command-line run:
``` bash
(py37) bash-3.2$ python ./examples/benchmarks.py --models gpt2 --torch --batch_sizes 1 --slice_sizes 64 256 512 512 512 --no_speed --verbose
Running with arguments Namespace(amp=False, average_over=30, batch_sizes=[1], csv_filename=None, fp16=False, keras_predict=False, models=['gpt2'], no_memory=False, no_speed=True, save_to_csv=False, slice_sizes=[64, 256, 512, 512, 512], tensorflow=False, torch=True, torch_cuda=False, torchscript=False, verbose=False, xla=False)
1 / 1
Token indices sequence length is longer than the specified maximum sequence length for this model (2708 > 1024). Running this sequence through the model will result in indexing errors
....
/Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:487: mem 0.000B: presents = presents + (present,)
/Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:489: mem 0.000B: if self.output_attentions:
/Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:477: mem 0.000B: for i, (block, layer_past) in enumerate(zip(self.h, past)):
/Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:492: mem 0.000B: hidden_states = self.ln_f(hidden_states)
/Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:494: mem 0.000B: hidden_states = hidden_states.view(*output_shape)
/Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:496: mem 0.000B: if self.output_hidden_states:
/Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:499: mem 0.000B: outputs = (hidden_states,)
/Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:500: mem 0.000B: if self.output_past:
/Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:501: mem 0.000B: outputs = outputs + (presents,)
/Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:502: mem 0.000B: if self.output_hidden_states:
/Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:504: mem 0.000B: if self.output_attentions:
/Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:509: mem 0.000B: return outputs # last hidden state, (presents), (all hidden_states), (attentions)
Top 5 script lines consuming the most memory:
0 => /Users/thomwolf/Documents/GitHub/transformers/src/transformers/activations.py:31: mem 276.004MB: return 0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3))))
1 => /Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_utils.py:1311: mem 151.520MB: x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight)
2 => /Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:146: mem 146.004MB: w = w * b - 1e4 * (1 - b)
3 => /Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:143: mem 132.004MB: w = w / math.sqrt(v.size(-1))
4 => /Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:187: mem 36.000MB: present = torch.stack((key.transpose(-2, -1), value)) # transpose to have same shapes for stacking
5 => /Users/thomwolf/Documents/GitHub/transformers/src/transformers/modeling_gpt2.py:159: mem 33.000MB: outputs = [torch.matmul(w, v)]
Memory increase computed by summing traced script lines: 843.758MB
=========== RESULTS ===========
======= MODEL CHECKPOINT: gpt2 =======
===== BATCH SIZE: 1 =====
gpt2/1/64: N/A 75.176MB
gpt2/1/256: N/A 349.695MB
gpt2/1/512: N/A 843.758MB
gpt2/1/512: N/A 843.758MB
gpt2/1/512: N/A 843.758MB
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3186/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3186/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3186",
"html_url": "https://github.com/huggingface/transformers/pull/3186",
"diff_url": "https://github.com/huggingface/transformers/pull/3186.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3186.patch",
"merged_at": 1584454631000
} |
https://api.github.com/repos/huggingface/transformers/issues/3185 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3185/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3185/comments | https://api.github.com/repos/huggingface/transformers/issues/3185/events | https://github.com/huggingface/transformers/pull/3185 | 577,807,710 | MDExOlB1bGxSZXF1ZXN0Mzg1NTA0OTMx | 3,185 | Tokenizers v3.0.0 | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3185?src=pr&el=h1) Report\n> Merging [#3185](https://codecov.io/gh/huggingface/transformers/pull/3185?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7420a6a9cc1750c2bd2c2c245d00048ec36d3bf0?src=pr&el=desc) will **decrease** coverage by `0.24%`.\n> The diff coverage is `80.42%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3185?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3185 +/- ##\n==========================================\n- Coverage 77.79% 77.55% -0.25% \n==========================================\n Files 100 100 \n Lines 17025 17105 +80 \n==========================================\n+ Hits 13245 13265 +20 \n- Misses 3780 3840 +60\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3185?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `95.33% <ø> (-1.7%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `94.36% <100%> (-5.64%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/3185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `74.51% <100%> (-0.28%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `40.67% <100%> (-0.43%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `86.18% <78.61%> (-5.81%)` | :arrow_down: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/3185/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3185?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3185?src=pr&el=footer). Last update [7420a6a...860cf66](https://codecov.io/gh/huggingface/transformers/pull/3185?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Really like the new typings!"
] | 1,583 | 1,586 | 1,586 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3185/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3185/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3185",
"html_url": "https://github.com/huggingface/transformers/pull/3185",
"diff_url": "https://github.com/huggingface/transformers/pull/3185.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3185.patch",
"merged_at": 1586212156000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3184 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3184/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3184/comments | https://api.github.com/repos/huggingface/transformers/issues/3184/events | https://github.com/huggingface/transformers/issues/3184 | 577,747,855 | MDU6SXNzdWU1Nzc3NDc4NTU= | 3,184 | `Failed to build tokenizers` when installing 2.5.1 version. | {
"login": "RileyShe",
"id": 13896613,
"node_id": "MDQ6VXNlcjEzODk2NjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/13896613?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RileyShe",
"html_url": "https://github.com/RileyShe",
"followers_url": "https://api.github.com/users/RileyShe/followers",
"following_url": "https://api.github.com/users/RileyShe/following{/other_user}",
"gists_url": "https://api.github.com/users/RileyShe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RileyShe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RileyShe/subscriptions",
"organizations_url": "https://api.github.com/users/RileyShe/orgs",
"repos_url": "https://api.github.com/users/RileyShe/repos",
"events_url": "https://api.github.com/users/RileyShe/events{/privacy}",
"received_events_url": "https://api.github.com/users/RileyShe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"`tokenizers` is only required when you wish to use the new, fast [tokenizers](https://github.com/huggingface/tokenizers). By default, though, the standard (slower) tokenizers are used. So you do not actually need the `tokenizers` library to run the `transformers` library.\r\n\r\nRelated:\r\nhttps://github.com/huggingface/transformers/issues/2980\r\nhttps://github.com/huggingface/transformers/issues/2831\r\n",
"Can you please open an issue over at https://github.com/huggingface/tokenizers with your OS/Python details?",
"while I am installing transformer==4.4, I am facing issue: Failed to build tokenizers.\r\n"
] | 1,583 | 1,704 | 1,583 | NONE | null | `Failed to build tokenizers` when i try to install transformers==2.5.1
`Failed to build tokenizers` when i try to install tokenizers == 0.5.2
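A possible workaround sketch (my assumptions: a newer pip may find a prebuilt wheel, and building `tokenizers` from source needs a Rust toolchain):
```bash
pip install --upgrade pip setuptools   # newer pip may pick up a prebuilt tokenizers wheel
pip install transformers==2.5.1

# if it still tries to build from source, install Rust first:
curl https://sh.rustup.rs -sSf | sh
```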
So I want to know: is `tokenizers == 0.5.2` a must?
Thanks~ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3184/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3183 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3183/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3183/comments | https://api.github.com/repos/huggingface/transformers/issues/3183/events | https://github.com/huggingface/transformers/issues/3183 | 577,747,540 | MDU6SXNzdWU1Nzc3NDc1NDA= | 3,183 | About the examples document of bert with SQuAD 2.0 | {
"login": "senp98",
"id": 37979349,
"node_id": "MDQ6VXNlcjM3OTc5MzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/37979349?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/senp98",
"html_url": "https://github.com/senp98",
"followers_url": "https://api.github.com/users/senp98/followers",
"following_url": "https://api.github.com/users/senp98/following{/other_user}",
"gists_url": "https://api.github.com/users/senp98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/senp98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/senp98/subscriptions",
"organizations_url": "https://api.github.com/users/senp98/orgs",
"repos_url": "https://api.github.com/users/senp98/repos",
"events_url": "https://api.github.com/users/senp98/events{/privacy}",
"received_events_url": "https://api.github.com/users/senp98/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, I think this is a mistake, I think it should be `bert-base-uncased` instead. I'm updating it."
] | 1,583 | 1,583 | 1,583 | NONE | null | I'm wondering if there is a mistake.
In the README document of /examples, in the training-parameter settings for the SQuAD dataset, the first code block is:
```
export SQUAD_DIR=/path/to/SQUAD
python run_squad.py \
--model_type bert \
--model_name_or_path bert-base-cased \
--do_train \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v1.1.json \
--predict_file $SQUAD_DIR/dev-v1.1.json \
--per_gpu_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/
```
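For reference, a sketch of the invocation I would expect instead (my edit: identical, with `--do_lower_case` dropped; alternatively the flag could be kept with `bert-base-uncased`):
```
export SQUAD_DIR=/path/to/SQUAD
python run_squad.py \
  --model_type bert \
  --model_name_or_path bert-base-cased \
  --do_train \
  --do_eval \
  --train_file $SQUAD_DIR/train-v1.1.json \
  --predict_file $SQUAD_DIR/dev-v1.1.json \
  --per_gpu_train_batch_size 12 \
  --learning_rate 3e-5 \
  --num_train_epochs 2.0 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir /tmp/debug_squad/
```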
Because this model is `bert-base-cased`, which means it is cased, I think the ``--do_lower_case`` flag shouldn't be here (hence the sketch above). Is that a mistake? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3183/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3183/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3182 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3182/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3182/comments | https://api.github.com/repos/huggingface/transformers/issues/3182/events | https://github.com/huggingface/transformers/issues/3182 | 577,719,178 | MDU6SXNzdWU1Nzc3MTkxNzg= | 3,182 | How can i use pipeline with pretrained automodel ? | {
"login": "pasa13142",
"id": 56387619,
"node_id": "MDQ6VXNlcjU2Mzg3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/56387619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pasa13142",
"html_url": "https://github.com/pasa13142",
"followers_url": "https://api.github.com/users/pasa13142/followers",
"following_url": "https://api.github.com/users/pasa13142/following{/other_user}",
"gists_url": "https://api.github.com/users/pasa13142/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pasa13142/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pasa13142/subscriptions",
"organizations_url": "https://api.github.com/users/pasa13142/orgs",
"repos_url": "https://api.github.com/users/pasa13142/repos",
"events_url": "https://api.github.com/users/pasa13142/events{/privacy}",
"received_events_url": "https://api.github.com/users/pasa13142/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"There are many examples in [the docs](https://huggingface.co/transformers/usage.html). The [pipeline reference](https://huggingface.co/transformers/main_classes/pipelines.html) will also be helpful."
] | 1,583 | 1,583 | 1,583 | NONE | null | How can I use a pipeline with a pretrained AutoModel and tokenizer? (A minimal sketch follows after this record.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3182/timeline | completed | null | null |
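A minimal sketch for the question in the record above (the checkpoint name and the question-answering task are example choices of mine, not the only option; any task-specific AutoModel should work the same way):
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

name = "distilbert-base-cased-distilled-squad"  # example checkpoint
model = AutoModelForQuestionAnswering.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
print(qa(question="Where do I live?", context="I live in Berlin."))
```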
https://api.github.com/repos/huggingface/transformers/issues/3181 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3181/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3181/comments | https://api.github.com/repos/huggingface/transformers/issues/3181/events | https://github.com/huggingface/transformers/issues/3181 | 577,633,825 | MDU6SXNzdWU1Nzc2MzM4MjU= | 3,181 | The implementation of GPT2 masked attention mechanism will cause errors when the model was trained after some iterations. | {
"login": "xunzi2020",
"id": 61956095,
"node_id": "MDQ6VXNlcjYxOTU2MDk1",
"avatar_url": "https://avatars.githubusercontent.com/u/61956095?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xunzi2020",
"html_url": "https://github.com/xunzi2020",
"followers_url": "https://api.github.com/users/xunzi2020/followers",
"following_url": "https://api.github.com/users/xunzi2020/following{/other_user}",
"gists_url": "https://api.github.com/users/xunzi2020/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xunzi2020/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xunzi2020/subscriptions",
"organizations_url": "https://api.github.com/users/xunzi2020/orgs",
"repos_url": "https://api.github.com/users/xunzi2020/repos",
"events_url": "https://api.github.com/users/xunzi2020/events{/privacy}",
"received_events_url": "https://api.github.com/users/xunzi2020/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks for the bug report @xunzi2020! Did you by any chance measure the degree results would differ when replacing `-1e4` by `-1e10`? ",
"Yes, I trained with chinese text (~30GB), it happened.\n\n\nPatrick von Platen <[email protected]> 于2020年3月9日周一 下午4:11写道:\n\n> Thanks for the bug report @xunzi2020 <https://github.com/xunzi2020>! Did\n> you by any chance measure the degree results would differ when replacing\n> -1e4 by -1e10?\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/3181?email_source=notifications&email_token=AOYV777JVZ6HL4BNOKYY7HDRGSQEXA5CNFSM4LEAB3C2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEOGCLSI#issuecomment-596387273>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AOYV777AZV3K5GBW4TFVFFDRGSQEXANCNFSM4LEAB3CQ>\n> .\n>\n",
"Do you have a comparison such as: \r\n\r\n| - | GPT2 with -1e4 vs. GPT2 with -1e10 | \r\n| ------------- | ------------- | \r\n| abs mean(softmax(logits) - softmax(logits)) | ???? |\r\n| relative mean(softmax(logits) - softmax(logits)/softmax(logits)) | ???? |\r\n\r\nlets say averaged over 100 - 1000 input samples and all GPT2 logits (50256)? \r\n\r\nThat would be great to quantify the impact this change would have.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,589 | 1,589 | NONE | null | # 🐛 Bug
## Information
Model I am using: GPT2
Language I am using the model on: Chinese
In modeling_gpt2.py, line 146, we have: `w = w * b - 1e4 * (1 - b)`. Here the masking bias `-1e4` is not negative enough; I suggest replacing it with `-1e10`.
The values of `query * key` can themselves be close to, or lower than, `-1e4`; after the softmax operation, the masked positions may then receive attention weight, i.e. the model attends to "unseen" context. This will cause inference errors when evaluating the model.
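A tiny numeric sketch of the leak (my own illustration, not code from the library):
```python
import torch

# if a real (unmasked) score is itself around -1e4, the masked position
# (set to -1e4 by `w * b - 1e4 * (1 - b)`) gets the same softmax weight:
w = torch.tensor([-1.0e4, -1.0e4])  # [real score, masked score]
print(torch.softmax(w, dim=-1))     # tensor([0.5000, 0.5000]) -> the mask leaks
```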
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3181/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3180 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3180/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3180/comments | https://api.github.com/repos/huggingface/transformers/issues/3180/events | https://github.com/huggingface/transformers/pull/3180 | 577,585,880 | MDExOlB1bGxSZXF1ZXN0Mzg1MzI4Njg1 | 3,180 | NER - pl example | {
"login": "shubhamagarwal92",
"id": 7984532,
"node_id": "MDQ6VXNlcjc5ODQ1MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7984532?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shubhamagarwal92",
"html_url": "https://github.com/shubhamagarwal92",
"followers_url": "https://api.github.com/users/shubhamagarwal92/followers",
"following_url": "https://api.github.com/users/shubhamagarwal92/following{/other_user}",
"gists_url": "https://api.github.com/users/shubhamagarwal92/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shubhamagarwal92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shubhamagarwal92/subscriptions",
"organizations_url": "https://api.github.com/users/shubhamagarwal92/orgs",
"repos_url": "https://api.github.com/users/shubhamagarwal92/repos",
"events_url": "https://api.github.com/users/shubhamagarwal92/events{/privacy}",
"received_events_url": "https://api.github.com/users/shubhamagarwal92/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This looks great to me. Thanks @shubhamagarwal92. \r\n\r\nYou need to just run the black command, I believe to is `make style` to fix formatting. Otherwise it looks good.",
"@srush I ran the `make style` which changed a lot of files, however, the `check_code_quality` is still failing! \r\nDo you want me to revert the last two commits and manually change the `'logs'` to `\"logs\"`\r\n\r\nBTW, `pip install -e \".[dev]\"` is failing on both mac and linux for `tensorflow` and on `sentencepiece` for mac. I had to manually install the `[\"black\", \"isort\", \"flake8\"]` packages. Python=3.8.2 in conda env. ",
"> BTW, pip install -e \".[dev]\" is failing on both mac and linux for tensorflow and on sentencepiece for mac.\r\n\r\nThat would be because [TensorFlow only supports python 3.5-3.7](https://www.tensorflow.org/install/pip?lang=python3#system-requirements), unfortunately.",
"> > BTW, pip install -e \".[dev]\" is failing on both mac and linux for tensorflow and on sentencepiece for mac.\r\n> \r\n> That would be because [TensorFlow only supports python 3.5-3.7](https://www.tensorflow.org/install/pip?lang=python3#system-requirements), unfortunately.\r\n\r\n@LysandreJik Thanks. Installs on ubuntu with python 3.6. \r\n\r\nHowever, on mac: \r\n```\r\nconda create -n transformers_dev python=3.6 -y\r\nconda activate transformers_dev\r\npip install -e \".[dev]\"\r\n\r\n\r\nFailed to build tokenizers\r\nERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly\r\n```\r\n\r\nMac specs:\r\n```\r\nPython 3.6.10 | packaged by conda-forge | (default, Mar 5 2020, 09:56:10) \r\n[GCC Clang 9.0.1 ] on darwin\r\n```",
"Hi @shubhamagarwal92 ,\r\n\r\nthanks for that PR and fixing the issues :+1: \r\n\r\nI just ran the `run_pl.sh` script, training and final testing run without errors now. However, during training precision, recall and F1 are always 0 -> final output on the test set shows:\r\n\r\n```bash\r\nTEST RESULTS\r\n{'val_loss': tensor(7.0679), 'precision': 0.0, 'recall': 0.0, 'f1': 0}\r\n----------------------------------------------------------------------------------------------------\r\nTesting: 200it [00:07, 27.98it/s]\r\n```\r\n\r\nLast lines of the prediction output:\r\n\r\n```bash\r\nder I-OTHderiv\r\nBibliothek I-OTHderiv\r\nberufen I-OTHderiv\r\nwurde I-OTHderiv\r\n, I-OTHderiv\r\nverließ I-OTHderiv\r\nGardthausen I-OTHderiv\r\nden I-OTHderiv\r\nBibliotheksdienst I-OTHderiv\r\n. I-OTHderiv\r\n```\r\n",
"> I just ran the `run_pl.sh` script, training and final testing run without errors now. However, during training precision, recall and F1 are always 0 -> final output on the test set shows:\r\n> \r\n> ```shell\r\n> TEST RESULTS\r\n> {'val_loss': tensor(7.0679), 'precision': 0.0, 'recall': 0.0, 'f1': 0}\r\n> ----------------------------------------------------------------------------------------------------\r\n> Testing: 200it [00:07, 27.98it/s]\r\n> ```\r\n\r\nThanks for reporting this. Could you please verify the version of `pytorch-lightning`. For me it is\r\n`pytorch-lightning==0.7.1`, `transformers==2.5.1` and the results as reported in the [README](https://github.com/huggingface/transformers/pull/3180/commits/ed39624dd0d0f3bce55352a8c4c9a8f515793e29#diff-eb7fd389de7be266012669eab7db207bR119).\r\n\r\n\r\nAlso could you please check if the results in `${OUTPUT_DIR}/test_results.txt` mentioned [here](https://github.com/huggingface/transformers/pull/3180/commits/84ee92d47ee6d659edaf6a61d09b393ebeea4d5b#diff-5a6311e9856e7b0057d9c1b85cd85fadR27)\r\nalso correspond to 0. \r\n\r\nIt works for me as:\r\n\r\n\r\n",
"Hi,\r\n\r\nI'm using the same versions of both `pytorch-lightning` and `transformers` 😂\r\n\r\nOutput of `test_results` is:\r\n\r\n```bash\r\n$ cat germeval-model/test_results.txt \r\nf1 = 0\r\nprecision = 0.0\r\nrecall = 0.0\r\nval_loss = tensor(9.4173)\r\n```\r\n\r\nBut I'm going to test it on another machine :)",
"\r\n> But I'm going to test it on another machine :)\r\n\r\nI am also attaching my environment file via `pip freeze > requirements.txt`:\r\n[requirements.txt](https://github.com/huggingface/transformers/files/4307693/requirements.txt)\r\n\r\nPlease let me know if this doesn't work. ",
"@shubhamagarwal92 I think somehow you have the wrong version of our style checks installed. \r\n\r\nCan you try running under this command?\r\n```\r\nsudo pip install git+git://github.com/timothycrosley/isort.git@e63ae06ec7d70b06df9e528357650281a3d3ec22#egg=isort\r\nsudo pip install .[tf,torch,quality]\r\n```\r\n\r\n@LysandreJik we have to fix this, it is really confusing...\r\n\r\n@stefan-it would love to see your log as well. Could you also try `rm cached*` I think maybe your feature cache got messed up?\r\n\r\n\r\n\r\n",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3180?src=pr&el=h1) Report\n> Merging [#3180](https://codecov.io/gh/huggingface/transformers/pull/3180?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e03129ad447ad7670fcc6206e5eb27a5435d4d86?src=pr&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3180?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3180 +/- ##\n==========================================\n+ Coverage 78.15% 78.16% +0.01% \n==========================================\n Files 98 98 \n Lines 16641 16641 \n==========================================\n+ Hits 13006 13008 +2 \n+ Misses 3635 3633 -2\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3180?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3180/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.72% <0%> (+0.31%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3180?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3180?src=pr&el=footer). Last update [e03129a...9f949d3](https://codecov.io/gh/huggingface/transformers/pull/3180?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"> ```\r\n> sudo pip install git+git://github.com/timothycrosley/isort.git@e63ae06ec7d70b06df9e528357650281a3d3ec22#egg=isort\r\n> sudo pip install .[tf,torch,quality]\r\n> ```\r\n\r\n@srush I reverted the last 3 style related commits, force-pushed and added a small commit to pass all the checks. Please merge if everything is fine. \r\n\r\nAlso, for `isort`, I guess you meant using `git+https://` \r\n```\r\npip install git+https://github.com/timothycrosley/isort.git@e63ae06ec7d70b06df9e528357650281a3d3ec22#egg=isort\r\n```\r\nThis link is also wrong at [contributing.md](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests). Could you please also state the python version in the md. \r\n\r\nThis command `pip install .[tf,torch,quality]` is still failing on Mac as mentioned in my previous comment.\r\n\r\n```\r\nFailed to build tokenizers\r\nERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly\r\n```",
"Thanks @shubhamagarwal92. Sorry for the annoyance.\r\n\r\n@LysandreJik this lgtm. ",
"@srush Happy to help! :) \r\n\r\nThanks for approving the PR!",
"@shashwath94 I think I've found the reason for the bad evaluation results: I'm using apex and the `--fp16` parameter in the `run_pl.sh` script!\r\n\r\nDo you have any idea, why it is not working using half precision 🤔",
"Ah, I will check with pytorch-lightning. It is plausible we are not integrating correctly with them ",
"@srush While you are it, could you please check the status of my PR in pl as well. \r\n\r\nhttps://github.com/PyTorchLightning/pytorch-lightning/pull/1094\r\n\r\nBasically, I was observing memory leak on GPU0 if other GPU id (eg. [1]) was provided when running the NER example.\r\n\r\nAFAIK, the solution is `torch.cuda.set_device()` "
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | 1. Solves most of the issues raised in #3159
2. Streamlines the shell-script pipeline
3. PyTorch Lightning (pl) logging-related changes
4. Added documentation to the README | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3180/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3180/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3180",
"html_url": "https://github.com/huggingface/transformers/pull/3180",
"diff_url": "https://github.com/huggingface/transformers/pull/3180.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3180.patch",
"merged_at": 1583801019000
} |
https://api.github.com/repos/huggingface/transformers/issues/3179 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3179/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3179/comments | https://api.github.com/repos/huggingface/transformers/issues/3179/events | https://github.com/huggingface/transformers/issues/3179 | 577,520,458 | MDU6SXNzdWU1Nzc1MjA0NTg= | 3,179 | I can not import transformers | {
"login": "over-shine",
"id": 30746603,
"node_id": "MDQ6VXNlcjMwNzQ2NjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/30746603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/over-shine",
"html_url": "https://github.com/over-shine",
"followers_url": "https://api.github.com/users/over-shine/followers",
"following_url": "https://api.github.com/users/over-shine/following{/other_user}",
"gists_url": "https://api.github.com/users/over-shine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/over-shine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/over-shine/subscriptions",
"organizations_url": "https://api.github.com/users/over-shine/orgs",
"repos_url": "https://api.github.com/users/over-shine/repos",
"events_url": "https://api.github.com/users/over-shine/events{/privacy}",
"received_events_url": "https://api.github.com/users/over-shine/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I have solved it by rebooting my computer and reinstall some modules that transformers can not found but have been installed",
"This issue still exists. I have Tensorflow 2.1 with GPU and transformers version 2.8.0\r\n\r\nI tried with rebooting and creating new conda environments multiple times but still no success :(",
"I can import it and my python version is 3.7.6 , 2.8.0 for transformers and 2.1 for tf. Maybe you could try to upgrade your python if its' version is under 3.7.",
"> This issue still exists. I have Tensorflow 2.1 with GPU and transformers version 2.8.0\r\n> \r\n> I tried with rebooting and creating new conda environments multiple times but still no success :(\r\n\r\nI can import it and my python version is 3.7.6 , 2.8.0 for transformers and 2.1 for tf. Maybe you could try to upgrade your python if its' version is under 3.7.",
"Hmm, the next day I tried it again with the same environment and it works xD Probably because of rebooting or something. But I also rebooted the first day, idk..."
] | 1,583 | 1,586 | 1,583 | NONE | null | # 🐛 Bug
## Information
When I execute `from transformers import TFBertModel, BertModel` in IPython, the error `ImportError: cannot import name 'BartConfig' from 'transformers.configuration_auto'` is raised.
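Minimal repro, verbatim from the description:
```python
from transformers import TFBertModel, BertModel
# ImportError: cannot import name 'BartConfig' from 'transformers.configuration_auto'
```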
This error occurred after updating TensorFlow from version 2.0 to 2.1 and Python from version 3.6 to 3.7.
In addition, after updating Python from 3.6 to 3.7 I installed torch 1.4 and TensorFlow 2.1 in the same env, and the import always fails there; but when I `import transformers` in another env that contains only torch 1.4 and Python 3.7, it succeeds.
I want to know how to fix this. Thank you.
- `transformers` version: 2.5.1
- Platform: Windows 10
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4 (GPU)
- Tensorflow version (GPU?): 2.1 (GPU) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3179/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3178 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3178/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3178/comments | https://api.github.com/repos/huggingface/transformers/issues/3178/events | https://github.com/huggingface/transformers/issues/3178 | 577,492,306 | MDU6SXNzdWU1Nzc0OTIzMDY= | 3,178 | Pretraining QA corpora from scratch with sentence pairs | {
"login": "yuanbit",
"id": 12972261,
"node_id": "MDQ6VXNlcjEyOTcyMjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/12972261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuanbit",
"html_url": "https://github.com/yuanbit",
"followers_url": "https://api.github.com/users/yuanbit/followers",
"following_url": "https://api.github.com/users/yuanbit/following{/other_user}",
"gists_url": "https://api.github.com/users/yuanbit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuanbit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuanbit/subscriptions",
"organizations_url": "https://api.github.com/users/yuanbit/orgs",
"repos_url": "https://api.github.com/users/yuanbit/repos",
"events_url": "https://api.github.com/users/yuanbit/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuanbit/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,589 | 1,589 | NONE | null | # ❓ Questions & Help
## Details
I would like to pretrain on a non-factoid QA corpus (question and answer passages) from scratch using BERT. I have looked at both:
https://huggingface.co/blog/how-to-train and
https://gist.github.com/aditya-malte/2d4f896f471be9c38eb4d723a710768b#file-smallberta_pretraining-ipynb
I would like to confirm that what I am doing is correct:
1. I concatenated the question and answers with a [SEP] token, so each line in my input data file looks like:
question [SEP] answer
2. I am running the script with --line_by_line
I am uncertain whether this is correct, because how would the script know which is sentence A and which is sentence B? Or is this not necessary? (A sketch of the command I am running follows below.)
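For reference, a sketch of the command this setup implies (my own reconstruction from the linked tutorials; the paths and the config/tokenizer names are placeholders):
```bash
# each line of data/qa_pairs.txt is: question [SEP] answer
python run_language_modeling.py \
  --model_type bert \
  --config_name ./my-bert-config \
  --tokenizer_name bert-base-uncased \
  --do_train \
  --train_data_file data/qa_pairs.txt \
  --line_by_line \
  --mlm \
  --output_dir out/
```
As far as I can tell, this script trains with masked-LM only (no next-sentence prediction), so there may be no sentence A/B distinction at all, but I would like confirmation.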
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3178/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3177 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3177/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3177/comments | https://api.github.com/repos/huggingface/transformers/issues/3177/events | https://github.com/huggingface/transformers/pull/3177 | 577,486,120 | MDExOlB1bGxSZXF1ZXN0Mzg1MjU2MDg1 | 3,177 | Distilgpt2 finetuning and text generation | {
"login": "tripathiaakash",
"id": 15000270,
"node_id": "MDQ6VXNlcjE1MDAwMjcw",
"avatar_url": "https://avatars.githubusercontent.com/u/15000270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tripathiaakash",
"html_url": "https://github.com/tripathiaakash",
"followers_url": "https://api.github.com/users/tripathiaakash/followers",
"following_url": "https://api.github.com/users/tripathiaakash/following{/other_user}",
"gists_url": "https://api.github.com/users/tripathiaakash/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tripathiaakash/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tripathiaakash/subscriptions",
"organizations_url": "https://api.github.com/users/tripathiaakash/orgs",
"repos_url": "https://api.github.com/users/tripathiaakash/repos",
"events_url": "https://api.github.com/users/tripathiaakash/events{/privacy}",
"received_events_url": "https://api.github.com/users/tripathiaakash/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @BlackJack01 - thanks so much for this great contribution! We are still going through the notebook and discussing how to add it :-) Will let you know soon! ",
"Hi @BlackJack01, sorry for the late answer. \r\nWe now have community notebooks here: https://github.com/huggingface/transformers/tree/master/notebooks#community-notebooks\r\nFeel free to open a PR to add it there :-) "
] | 1,583 | 1,591 | 1,591 | CONTRIBUTOR | null | This ipynb notebook contains a finetuning and text-generation tutorial for distilgpt2. The tutorial also reuses code from the run_generation.py file to make generation faster than invoking the original script for every iteration. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3177/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3177",
"html_url": "https://github.com/huggingface/transformers/pull/3177",
"diff_url": "https://github.com/huggingface/transformers/pull/3177.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3177.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3176 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3176/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3176/comments | https://api.github.com/repos/huggingface/transformers/issues/3176/events | https://github.com/huggingface/transformers/issues/3176 | 577,473,552 | MDU6SXNzdWU1Nzc0NzM1NTI= | 3,176 | GLUE test set predictions | {
"login": "shoarora",
"id": 16643856,
"node_id": "MDQ6VXNlcjE2NjQzODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/16643856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shoarora",
"html_url": "https://github.com/shoarora",
"followers_url": "https://api.github.com/users/shoarora/followers",
"following_url": "https://api.github.com/users/shoarora/following{/other_user}",
"gists_url": "https://api.github.com/users/shoarora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shoarora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shoarora/subscriptions",
"organizations_url": "https://api.github.com/users/shoarora/orgs",
"repos_url": "https://api.github.com/users/shoarora/repos",
"events_url": "https://api.github.com/users/shoarora/events{/privacy}",
"received_events_url": "https://api.github.com/users/shoarora/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"hi @shoarora \r\n\r\ncan you share the script to to report performance on test set?",
"@Mahmedturk you can check out the branch in PR #3405 \r\n\r\nIt's diverge pretty heavily from master and I haven't updated it yet, but you should still be able to run `run_glue.py` off that branch with the `--do_test` flag that I added and it should produce the `.tsv` files required for submission. ",
"@shoarora I pulled the repo to update run_glue.py as I wanted to use this new feature. However, I now get an error when I run run_glue.py! Please see below the output of the error message. It looks like in previous versions, there weren't any keyword arguments named \"mode\" in GlueDataset() --possible? \r\n\r\n`Traceback (most recent call last):\r\n File \"./transformers/examples/text-classification/run_glue.py\", line 228, in <module>\r\n main()\r\n File \"./transformers/examples/text-classification/run_glue.py\", line 139, in main\r\n test_dataset = GlueDataset(data_args, tokenizer=tokenizer, mode=\"test\") if training_args.do_predict else None\r\nTypeError: __init__() got an unexpected keyword argument 'mode'`",
"@AMChierici I didn't author #4463, which is what has made it to master to enable this feature. I haven't played with it yet so sorry I can't be of more help",
"@AMChierici make sure you run from master, there's indeed a `mode` kwarg now.\r\n\r\n@shoarora Thanks for this first PR and I did check yours while merging the other (to make sure that the indices in csv parsing, etc. were correct)",
"Thanks, @julien-c . Yes, solved.. In fact, I was not running from master.\r\n",
"downloaded master right now.\r\n File \"examples/text-classification/run_glue.py\", line 143, in main\r\n if training_args.do_eval\r\nTypeError: __init__() got an unexpected keyword argument 'mode'"
] | 1,583 | 1,594 | 1,590 | CONTRIBUTOR | null | # 🚀 Feature request
## Motivation
The `run_glue` script is super helpful, but it currently doesn't implement producing predictions on the test sets of the GLUE tasks. I think this would be extremely helpful for a lot of people. I'm sure plenty of people have implemented this functionality themselves, but I haven't found any published. Since `transformers` already provides train and dev support for GLUE, it would be great to complete the feature set by providing test-set predictions.
## Your contribution
I'm personally working on a branch that extends the `glue_processors` to support the test sets (which are already downloaded by the recommended `download_glue.py` script). I'm also updating the `run_glue.py` script to produce the `*.tsv` files required by the GLUE online submission interface.
I think I'm a couple of days out from testing/completing my implementation, and I'm sure plenty of implementations of this already exist. If there are no other plans to support this in the works, I'm happy to submit a PR.
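For reference, the approach that eventually landed on master (see the `mode` kwarg discussed in the comments above) loads the test split with a single extra call. A minimal sketch, where `data_args` and `tokenizer` are the objects `run_glue.py` already builds:

```python
# Sketch, assuming the master version of run_glue.py that exposes `mode`:
test_dataset = GlueDataset(data_args, tokenizer=tokenizer, mode="test")
```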
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3176/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3176/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3175 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3175/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3175/comments | https://api.github.com/repos/huggingface/transformers/issues/3175/events | https://github.com/huggingface/transformers/pull/3175 | 577,473,168 | MDExOlB1bGxSZXF1ZXN0Mzg1MjQ2NTky | 3,175 | Updated `Tokenw ise` in print statement to `Token wise` | {
"login": "param087",
"id": 26374564,
"node_id": "MDQ6VXNlcjI2Mzc0NTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/26374564?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/param087",
"html_url": "https://github.com/param087",
"followers_url": "https://api.github.com/users/param087/followers",
"following_url": "https://api.github.com/users/param087/following{/other_user}",
"gists_url": "https://api.github.com/users/param087/gists{/gist_id}",
"starred_url": "https://api.github.com/users/param087/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/param087/subscriptions",
"organizations_url": "https://api.github.com/users/param087/orgs",
"repos_url": "https://api.github.com/users/param087/repos",
"events_url": "https://api.github.com/users/param087/events{/privacy}",
"received_events_url": "https://api.github.com/users/param087/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3175?src=pr&el=h1) Report\n> Merging [#3175](https://codecov.io/gh/huggingface/transformers/pull/3175?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e03129ad447ad7670fcc6206e5eb27a5435d4d86?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3175?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3175 +/- ##\n=======================================\n Coverage 78.15% 78.15% \n=======================================\n Files 98 98 \n Lines 16641 16641 \n=======================================\n Hits 13006 13006 \n Misses 3635 3635\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3175?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3175?src=pr&el=footer). Last update [e03129a...70d11c4](https://codecov.io/gh/huggingface/transformers/pull/3175?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3175/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3175",
"html_url": "https://github.com/huggingface/transformers/pull/3175",
"diff_url": "https://github.com/huggingface/transformers/pull/3175.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3175.patch",
"merged_at": 1583679331000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3174 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3174/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3174/comments | https://api.github.com/repos/huggingface/transformers/issues/3174/events | https://github.com/huggingface/transformers/issues/3174 | 577,451,938 | MDU6SXNzdWU1Nzc0NTE5Mzg= | 3,174 | How can I assign a specific gpu when using examples/run_language_modeling.py? | {
"login": "ridiculouz",
"id": 56992804,
"node_id": "MDQ6VXNlcjU2OTkyODA0",
"avatar_url": "https://avatars.githubusercontent.com/u/56992804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ridiculouz",
"html_url": "https://github.com/ridiculouz",
"followers_url": "https://api.github.com/users/ridiculouz/followers",
"following_url": "https://api.github.com/users/ridiculouz/following{/other_user}",
"gists_url": "https://api.github.com/users/ridiculouz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ridiculouz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ridiculouz/subscriptions",
"organizations_url": "https://api.github.com/users/ridiculouz/orgs",
"repos_url": "https://api.github.com/users/ridiculouz/repos",
"events_url": "https://api.github.com/users/ridiculouz/events{/privacy}",
"received_events_url": "https://api.github.com/users/ridiculouz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This might answer your question: https://stackoverflow.com/questions/39649102/how-do-i-select-which-gpu-to-run-a-job-on",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,589 | 1,589 | NONE | null | # ❓ Questions & Help
## Details
Hello, I'm wondering if I can assign a specific GPU when using `examples/run_language_modeling.py` to train a language model?
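One common answer (also covered by the Stack Overflow link in the comments above) is to restrict the visible devices before CUDA is initialized. A minimal sketch, where the device index `"1"` is purely illustrative:

```python
import os

# Must be set before torch initializes CUDA for it to take effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import torch
print(torch.cuda.device_count())  # now reports only the selected GPU
```

The same variable can also be set on the command line when launching `run_language_modeling.py`.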
Lots of thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3174/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3174/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3173 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3173/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3173/comments | https://api.github.com/repos/huggingface/transformers/issues/3173/events | https://github.com/huggingface/transformers/issues/3173 | 577,393,659 | MDU6SXNzdWU1NzczOTM2NTk= | 3,173 | Get the CNN/Daily Mail Data for BART | {
"login": "andr-ec",
"id": 16169185,
"node_id": "MDQ6VXNlcjE2MTY5MTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/16169185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andr-ec",
"html_url": "https://github.com/andr-ec",
"followers_url": "https://api.github.com/users/andr-ec/followers",
"following_url": "https://api.github.com/users/andr-ec/following{/other_user}",
"gists_url": "https://api.github.com/users/andr-ec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andr-ec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andr-ec/subscriptions",
"organizations_url": "https://api.github.com/users/andr-ec/orgs",
"repos_url": "https://api.github.com/users/andr-ec/repos",
"events_url": "https://api.github.com/users/andr-ec/events{/privacy}",
"received_events_url": "https://api.github.com/users/andr-ec/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Its a typo in the docs. I just used cnn.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,589 | 1,589 | CONTRIBUTOR | null | @sshleifer
In the README for the BART summarization it says:
> download both CNN and Daily Mail datasets from Kyunghyun Cho's website
`tar -xvf cnn_stories.tgz && tar -xvf dailymail_stories.tgz`
> this should make a directory called cnn_dm/ with files like test.source. To use your own data, copy that files format. Each article to be summarized is on its own line.
This doesn't produce a cnn_dm directory; it produces two different folders. The contents of the folders are `.story` files, not `test.source`.
Did you use [this repo](https://github.com/artmatsak/cnn-dailymail)?
Or did you get the data from somewhere else?
Happy to submit a PR either way!
Thanks!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3173/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3173/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3172 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3172/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3172/comments | https://api.github.com/repos/huggingface/transformers/issues/3172/events | https://github.com/huggingface/transformers/issues/3172 | 577,349,012 | MDU6SXNzdWU1NzczNDkwMTI= | 3,172 | Quick tour TF 2.0 | {
"login": "celsofranssa",
"id": 11181748,
"node_id": "MDQ6VXNlcjExMTgxNzQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/11181748?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/celsofranssa",
"html_url": "https://github.com/celsofranssa",
"followers_url": "https://api.github.com/users/celsofranssa/followers",
"following_url": "https://api.github.com/users/celsofranssa/following{/other_user}",
"gists_url": "https://api.github.com/users/celsofranssa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/celsofranssa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/celsofranssa/subscriptions",
"organizations_url": "https://api.github.com/users/celsofranssa/orgs",
"repos_url": "https://api.github.com/users/celsofranssa/repos",
"events_url": "https://api.github.com/users/celsofranssa/events{/privacy}",
"received_events_url": "https://api.github.com/users/celsofranssa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Our models are TensorFlow 2.x only, while your notebook is in TensorFlow 1.x:\r\n\r\n```\r\nThe default version of TensorFlow in Colab will soon switch to TensorFlow 2.x.\r\nWe recommend you upgrade now or ensure your notebook will continue to use TensorFlow 1.x via the %tensorflow_version 1.x magic: more info.\r\n```\r\n\r\nYou can use the following command at the start of your notebook to use TensorFlow 2.x:\r\n\r\n```\r\n%tensorflow_version 2.x\r\n```",
"@LysandreJik thank you."
] | 1,583 | 1,583 | 1,583 | NONE | null | # 🐛 Bug
## Information
I am trying the **Quick tour TF 2.0**.
The problem arises when using a quick example: **How a TensorFlow 2.0 model can be trained in 12 lines of code**:
```python
import tensorflow as tf
import tensorflow_datasets
from transformers import *
# Load dataset, tokenizer, model from pretrained model/vocabulary
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')
data = tensorflow_datasets.load('glue/mrpc')
# Prepare dataset for GLUE as a tf.data.Dataset instance
train_dataset = glue_convert_examples_to_features(data['train'], tokenizer, max_length=128, task='mrpc')
valid_dataset = glue_convert_examples_to_features(data['validation'], tokenizer, max_length=128, task='mrpc')
train_dataset = train_dataset.shuffle(100).batch(32).repeat(2)
valid_dataset = valid_dataset.batch(64)
# Prepare training: Compile tf.keras model with optimizer, loss and learning rate schedule
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
# Train and evaluate using tf.keras.Model.fit()
history = model.fit(train_dataset, epochs=2, steps_per_epoch=115,
validation_data=valid_dataset, validation_steps=7)
```
which produces the output below:
```python
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-4-ea94e17f2f79> in <module>()
1 tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
----> 2 model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')
3 data = tensorflow_datasets.load('glue/mrpc')
NameError: name 'TFBertForSequenceClassification' is not defined
```
The above behavior can be reproduced using this [Colab](https://colab.research.google.com/drive/1aAmOVlvkuP9PLOuGKVx7-k0vsVBoD506) notebook.
Any help will be much appreciated.
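For reference, the fix from the comments above is to switch the Colab runtime to TensorFlow 2.x before importing anything, e.g. as the very first cell of the notebook:

```python
# First cell in Colab (notebook magic, per the resolution in the comments above):
%tensorflow_version 2.x

import tensorflow as tf
print(tf.__version__)  # should now print a 2.x version
```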
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3172/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3171 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3171/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3171/comments | https://api.github.com/repos/huggingface/transformers/issues/3171/events | https://github.com/huggingface/transformers/issues/3171 | 577,343,390 | MDU6SXNzdWU1NzczNDMzOTA= | 3,171 | Do we have a whole-word-masked version of BERT? | {
"login": "ridiculouz",
"id": 56992804,
"node_id": "MDQ6VXNlcjU2OTkyODA0",
"avatar_url": "https://avatars.githubusercontent.com/u/56992804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ridiculouz",
"html_url": "https://github.com/ridiculouz",
"followers_url": "https://api.github.com/users/ridiculouz/followers",
"following_url": "https://api.github.com/users/ridiculouz/following{/other_user}",
"gists_url": "https://api.github.com/users/ridiculouz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ridiculouz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ridiculouz/subscriptions",
"organizations_url": "https://api.github.com/users/ridiculouz/orgs",
"repos_url": "https://api.github.com/users/ridiculouz/repos",
"events_url": "https://api.github.com/users/ridiculouz/events{/privacy}",
"received_events_url": "https://api.github.com/users/ridiculouz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, there are a few available on the hub, you can search for [whole word masking](https://huggingface.co/models?search=whole-word-masking) or [wwm](https://huggingface.co/models?search=wwm).\r\n\r\nFor your second question, you would have a better answer if you opened an issue on [huggingface/toeknizers](https://github.com/huggingface/tokenizers/issues) instead."
] | 1,583 | 1,583 | 1,583 | NONE | null | # ❓ Questions & Help
## Details
Thanks for the excellent work! But I wonder: is there a whole-word-masked version of BERT? Moreover, how can I adapt the Tokenizer class to make it support other parsing methods? (E.g., in Chinese, BertTokenizer simply parses sequences at the character level, while a parser like jieba can parse them into Chinese words. How can I keep the features of the Tokenizer class while using another parsing method?) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3171/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3170 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3170/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3170/comments | https://api.github.com/repos/huggingface/transformers/issues/3170/events | https://github.com/huggingface/transformers/issues/3170 | 577,331,756 | MDU6SXNzdWU1NzczMzE3NTY= | 3,170 | Semantic Code Retrieval using Transformers | {
"login": "celsofranssa",
"id": 11181748,
"node_id": "MDQ6VXNlcjExMTgxNzQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/11181748?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/celsofranssa",
"html_url": "https://github.com/celsofranssa",
"followers_url": "https://api.github.com/users/celsofranssa/followers",
"following_url": "https://api.github.com/users/celsofranssa/following{/other_user}",
"gists_url": "https://api.github.com/users/celsofranssa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/celsofranssa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/celsofranssa/subscriptions",
"organizations_url": "https://api.github.com/users/celsofranssa/orgs",
"repos_url": "https://api.github.com/users/celsofranssa/repos",
"events_url": "https://api.github.com/users/celsofranssa/events{/privacy}",
"received_events_url": "https://api.github.com/users/celsofranssa/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You might be interested in:\r\n- https://huggingface.co/huggingface/CodeBERTa-small-v1#codeberta and https://huggingface.co/huggingface/CodeBERTa-language-id\r\n- more generally, https://huggingface.co/blog/how-to-train",
"Wow, it helped a lot @julien-c. Basically, did you train a language model using CodeSearchNet dataset?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@julien-c,\r\n\r\nIs there an approach where this tutorial https://huggingface.co/blog/how-to-train is trained over TPU cores?",
"maybe @LysandreJik or @sgugger have a link to a notebook?",
"I haven't tried training in notebooks on TPU, only with the example scripts.",
"Unfortunately I only have notebooks that run the example scripts on TPU, nothing similar to the `how-to-train` blogpost.",
"Thanks @julien-c, @sgugger and @LysandreJik,\r\n\r\nMaybe I can adapt the [Language Modeling example script](https://github.com/huggingface/transformers/tree/master/examples/language-modeling) applying [Pytorch Lightning](https://www.pytorchlightning.ai/) approach which easily support TPUs.",
"If you're using the language modeling script, running it on TPUs is supported, just follow the instructions [here](https://github.com/huggingface/transformers/tree/master/examples#running-on-tpus).",
"> If you're using the language modeling script, running it on TPUs is supported, just follow the instructions [here](https://github.com/huggingface/transformers/tree/master/examples#running-on-tpus).\r\n\r\nGreat, but it runs on Google Colab?",
"Yes it does run on colab!",
"You can find an example running the `run_glue.py` script [here](https://colab.research.google.com/drive/15q6UUzwNugWvVXNfOkWlGCaKIUvLZpxd?usp=sharing). You can do the same with the language modeling script! (Cloning the repository and running the script from there would be cleaner than `wget`ting all the files like it's done in this colab, though)",
"> You can find an example running the `run_glue.py` script [here](https://colab.research.google.com/drive/15q6UUzwNugWvVXNfOkWlGCaKIUvLZpxd?usp=sharing). You can do the same with the language modeling script! (Cloning the repository and running the script from there would be cleaner than `wget`ting all the files like it's done in this colab, though)\r\n\r\nFantastic! That was of great help.",
"@LysandreJik, @sgugger,\r\n\r\nUnfortunately, even using a small dataset (~400MB), the Colab killed the process due to the use of all available RAM (12.72GB)."
] | 1,583 | 1,602 | 1,590 | NONE | null | I am entering the world of transformers and would like to use some architectures to create a semantic search engine to retrieve source code (Python, Javascript, Ruby, Go, Java, and PHP code).
Currently, the [dataset](https://github.com/github/CodeSearchNet#data-details) contains 2 million pairs **(code, docstring)**, where code is a list of tokens from a method or function and docstring is a short description of the code in natural language.
As a starting point, it would be interesting to construct a model architecture that receives the code and the docstring **([ [code], [docstring] ])** as an input example and outputs the code embedding and the docstring embedding. Using cosine similarity as the loss function, the model could be fine-tuned to encode both code and docstring into the same embedding space, as shown in the figure below:
<pre> <img src="https://i.stack.imgur.com/4fx3h.png" width="480">
</pre>
I started reading and tokenizing the dataset:
```python
from transformers import BertTokenizer
# reads a list of [[code], [docstring]]
reader = CodeDocstringReader(dataset_path)
# loads tokenizer
model_name = "bert-base-uncased"
tokenizer = BertTokenizer.from_pretrained(model_name, do_lower_case=True)
# returns a list of tokenized examples
# [[code_token_ids], [docstring_token_ids]]
tokenized_features = tokenizer_examples(
reader.get_examples(),
tokenizer
)
```
The definition and training of the model are still incomplete, but it is outlined as:
```python
import tensorflow as tf
from transformers import BertModel
class JointEncoder(tf.keras.Model):
"""Encodes the code and docstring into an same space of embeddings."""
def __init__(self,
path,
name="jointencoder"):
super(JointEncoder, self).__init__(name=name)
self.bert = BertModel.from_pretrained(path)
def call(self, inputs):
"""Returns code and docstring embeddings"""
...
code_embedding = ..
docstring_embedding = ..
return code_embedding, docstring_embedding
```
However, I'm stuck on how to code this simple architecture. Could you give me some directions?
Thanks in advance.
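One possible direction is sketched below, under two assumptions: a `tf.keras.Model` should wrap `TFBertModel` rather than the PyTorch `BertModel`, and the pooled `[CLS]` output (element 1 of the returned tuple) can serve as the sentence embedding. Other pooling strategies are equally valid:

```python
import tensorflow as tf
from transformers import TFBertModel

class JointEncoder(tf.keras.Model):
    """Encodes code and docstring into the same embedding space (sketch)."""

    def __init__(self, path, name="jointencoder"):
        super(JointEncoder, self).__init__(name=name)
        # TF variant, so the weights live inside this tf.keras.Model.
        self.bert = TFBertModel.from_pretrained(path)

    def call(self, inputs):
        """Returns (code_embedding, docstring_embedding)."""
        code_ids, docstring_ids = inputs
        # Element 1 of the output tuple is the pooled [CLS] representation.
        code_embedding = self.bert(code_ids)[1]
        docstring_embedding = self.bert(docstring_ids)[1]
        return code_embedding, docstring_embedding
```

The cosine objective can then be applied to the two outputs, for example via `tf.keras.losses.CosineSimilarity`.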
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3170/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3169 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3169/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3169/comments | https://api.github.com/repos/huggingface/transformers/issues/3169/events | https://github.com/huggingface/transformers/issues/3169 | 577,281,139 | MDU6SXNzdWU1NzcyODExMzk= | 3,169 | issues while modifying modeling_roberta.py file | {
"login": "nrjvarshney",
"id": 19836137,
"node_id": "MDQ6VXNlcjE5ODM2MTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/19836137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nrjvarshney",
"html_url": "https://github.com/nrjvarshney",
"followers_url": "https://api.github.com/users/nrjvarshney/followers",
"following_url": "https://api.github.com/users/nrjvarshney/following{/other_user}",
"gists_url": "https://api.github.com/users/nrjvarshney/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nrjvarshney/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nrjvarshney/subscriptions",
"organizations_url": "https://api.github.com/users/nrjvarshney/orgs",
"repos_url": "https://api.github.com/users/nrjvarshney/repos",
"events_url": "https://api.github.com/users/nrjvarshney/events{/privacy}",
"received_events_url": "https://api.github.com/users/nrjvarshney/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"That's more of a general Python question related to imports than an issue with the library, you would have more luck in trying with stack overflow.\r\n\r\nI believe the easiest way to modify files is to clone the repository and install it in your environment as an editable:\r\n\r\n```\r\ngit clone https://github.com/huggingface/transformers\r\ncd transformers\r\npip install -e .\r\n```\r\n\r\nEvery file modification will be directly reflected in your Python runtime (if you're on Jupyter you would need to restart your kernel for it to take effect).",
"Thanks a lot. \r\nActually the problem was that I was appending the path (i.e src/transformers) at the end in PYTHON_PATH because of which it was loading modules from the transformers library. I added the source path at index 0 and now it is working the way I want it to work.\r\n\r\nand thanks @LysandreJik for the restart kernel trick. \r\n"
] | 1,583 | 1,583 | 1,583 | NONE | null | # ❓ Questions & Help
## Details
I need the CLS representation from RobertaForMultipleChoice
For this, in the file https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_roberta.py I changed
`outputs = (reshaped_logits,) + outputs[2:]`
to
`outputs = (reshaped_logits,) + outputs[1:]`
since `outputs[1]` gives the CLS representation.
Now, to incorporate this change, I need to import `RobertaForMultipleChoice` from this file and replace `from transformers import ...` in https://github.com/huggingface/transformers/blob/master/examples/run_multiple_choice.py
with `from modeling_roberta.py import ...`.
I am getting import issues

Can somebody help in resolving this?
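The resolution in the comments above turned out to be a path-ordering problem. A minimal sketch of that workaround, where the path is a hypothetical stand-in for the local checkout:

```python
import sys

# Put the local source tree at the FRONT of sys.path; appending it at the
# end keeps resolving imports to the installed package instead.
sys.path.insert(0, "/path/to/local/transformers/src")  # hypothetical path

from transformers.modeling_roberta import RobertaForMultipleChoice
```

Alternatively, as suggested in the comments, an editable install (`pip install -e .` from the cloned repository) makes file modifications visible directly.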
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3169/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3168 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3168/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3168/comments | https://api.github.com/repos/huggingface/transformers/issues/3168/events | https://github.com/huggingface/transformers/issues/3168 | 577,269,494 | MDU6SXNzdWU1NzcyNjk0OTQ= | 3,168 | Can we use GPT-2 sentence embedding for classification tasks? | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"GPT-2 and BERT are both transformer networks with very similar architectures. You can use the GPT-2 embeddings the same way you used BERT embeddings.\r\n\r\nAs you said, GPT-2 only handles left context. You can read [the paper](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) where the authors showcase results on several tasks in a zero-shot setting (section 3).",
"I recently imported the GPT2Model and built a simple classifier. I think the model is too naive. And could use some improvements. If you notice any mistakes, please correct me. :) \r\n```\r\nclass SimpleGPT2SequenceClassifier(nn.Module):\r\n def __init__(self, hidden_size: int, num_classes:int ,max_seq_len:int, gpt_model_name:str, \r\n cache_dir:str):\r\n super(SimpleGPT2SequenceClassifier,self).__init__()\r\n self.gpt2model = GPT2Model.from_pretrained(\r\n gpt_model_name, cache_dir = cache_dir\r\n )\r\n self.fc1 = nn.Linear(hidden_size, num_classes)\r\n \r\n def forward(self, x_in):\r\n \"\"\"\r\n Args:\r\n x_in: encoded inputs ids of sent.\r\n \"\"\"\r\n \r\n gpt_out = self.gpt2model(x_in)[0] #returns tuple\r\n batch_size = gpt_out.shape[0]\r\n prediction_vector = self.fc1(gpt_out.view(batch_size,-1)) #(batch_size , max_len, num_classes)\r\n \r\n return prediction_vector\r\n```\r\nFor preprocessing the text before encoding them with the tokenizer.\r\n\r\n```\r\npunkt_sentence_detector = nltk.data.load('tokenizers/punkt/english.pickle')\r\nclass GPT2Preprocessor:\r\n def __init__(self, transformer_tokenizer, sentence_detector):\r\n self.transformer_tokenizer = transformer_tokenizer\r\n self.sentence_detector = sentence_detector\r\n\r\n def add_eos_tokens(self, text):\r\n eos_token = \" \" + self.transformer_tokenizer.eos_token + \" \"\r\n sentences = self.sentence_detector.tokenize(text)\r\n eos_added_text = (\r\n eos_token.join(sentences) + \" \" + self.transformer_tokenizer.eos_token\r\n )\r\n return eos_added_text\r\n```",
"I tried GPT-2 embeddings and compare them with Roberta embeddings for the task of sentiment classification (both networks were frozen during the training). GPT-2 couldn't outperform the results of Roberta.",
"@cozek from the code, it isn't obvious whether you've frozen gpt2 layers or not ?",
"> @cozek from the code, it isn't obvious whether you've frozen gpt2 layers or not ?\r\n\r\nOf course, I have not frozen any layers. It is not always necessary to freeze the layers. If required you can easily freeze the layers as necessary. ",
"> I tried GPT-2 embeddings and compare them with Roberta embeddings for the task of sentiment classification (both networks were frozen during the training). GPT-2 couldn't outperform the results of Roberta.\r\n\r\nDo you still have the notebooks? I would be interested to see how you implemented a classification head on top of gpt-2. ",
"> > I tried GPT-2 embeddings and compare them with Roberta embeddings for the task of sentiment classification (both networks were frozen during the training). GPT-2 couldn't outperform the results of Roberta.\r\n> \r\n> Do you still have the notebooks? I would be interested to see how you implemented a classification head on top of gpt-2.\r\n\r\nhttps://github.com/cozek/OffensEval2020-code/blob/master/notebooks/Eng%20Task%20A%20-%20Ensemble%20DistilGPT2.ipynb\r\n\r\nHere you go. I used it for OffenEval 2020, Hate Speech Detection. I used the distilled version. Feel free to swap it out and take the full GPT-2. We got 0.90 Macro f1 with this model. ",
"You can add a CLS token to the vocabulary \r\n\r\n`tokenizer.add_special_tokens({'cls_token': '[CLS]'})\r\n model.resize_token_embeddings(len(tokenizer))`\r\n\r\nThen append this CLS token at the end of your input \r\nThen use the representation of this CLS token for classification as done in BERT.\r\ncc @cozek ",
"> > > I tried GPT-2 embeddings and compare them with Roberta embeddings for the task of sentiment classification (both networks were frozen during the training). GPT-2 couldn't outperform the results of Roberta.\r\n> > \r\n> > \r\n> > Do you still have the notebooks? I would be interested to see how you implemented a classification head on top of gpt-2.\r\n> \r\n> https://github.com/cozek/OffensEval2020-code/blob/master/notebooks/Eng%20Task%20A%20-%20Ensemble%20DistilGPT2.ipynb\r\n> \r\n> Here you go. I used it for OffenEval 2020, Hate Speech Detection. I used the distilled version. Feel free to swap it out and take the full GPT-2. We got 0.90 Macro f1 with this model.\r\n\r\nThanks a lot. Very helpful! ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@cozek I see in your code that you concatenate all the token embeddings together to produce the sentence representation and then pass that through `fc1`:\r\n\r\n\r\n```python\r\n gpt_out = self.gpt2model(x_in)[0] #returns tuple\r\n batch_size = gpt_out.shape[0]\r\n prediction_vector = self.fc1(gpt_out.view(batch_size,-1))\r\n```\r\n\r\nInstead of concatenating all the token embeddings, did you try:\r\n\r\n1. pooling over all the tokens to get the sentence representation? For example, max pooling or mean pooling?\r\n2. using the embedding of the last token?\r\n\r\n@AsmirMumin ",
"> @cozek I see in your code that you concatenate all the token embeddings together to produce the sentence representation and then pass that through `fc1`:\r\n> \r\n> ```python\r\n> gpt_out = self.gpt2model(x_in)[0] #returns tuple\r\n> batch_size = gpt_out.shape[0]\r\n> prediction_vector = self.fc1(gpt_out.view(batch_size,-1))\r\n> ```\r\n> \r\n> Instead of concatenating all the token embeddings, did you try:\r\n> \r\n> 1. pooling over all the tokens to get the sentence representation? For example, max pooling or mean pooling?\r\n> 2. using the embedding of the last token?\r\n\r\nI did not try 1 or 2. Option 1 seems logical as it would reduce the size of the FC layer and increase training speed. \r\nI am not familiar with option 2.\r\n\r\n"
] | 1,583 | 1,600 | 1,597 | CONTRIBUTOR | null | I am experimenting with the use of transformer embeddings in sentence classification tasks **without finetuning them**. I have used BERT embeddings, and those experiments gave me very good results. Now I want to use GPT-2 embeddings (without fine-tuning). So I have a few questions:
1. Can I use GPT-2 embeddings like that (given that I know GPT-2 is trained left-to-right)?
2. Are there any example uses of GPT-2 in classification tasks, as opposed to generation tasks?
3. If I can use GPT-2 embeddings, how should I do it? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3168/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3168/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3167 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3167/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3167/comments | https://api.github.com/repos/huggingface/transformers/issues/3167/events | https://github.com/huggingface/transformers/issues/3167 | 577,254,035 | MDU6SXNzdWU1NzcyNTQwMzU= | 3,167 | padding and attention mask does not work as intended in batch input in GPT2 language model | {
"login": "mainulquraishi",
"id": 14335238,
"node_id": "MDQ6VXNlcjE0MzM1MjM4",
"avatar_url": "https://avatars.githubusercontent.com/u/14335238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mainulquraishi",
"html_url": "https://github.com/mainulquraishi",
"followers_url": "https://api.github.com/users/mainulquraishi/followers",
"following_url": "https://api.github.com/users/mainulquraishi/following{/other_user}",
"gists_url": "https://api.github.com/users/mainulquraishi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mainulquraishi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mainulquraishi/subscriptions",
"organizations_url": "https://api.github.com/users/mainulquraishi/orgs",
"repos_url": "https://api.github.com/users/mainulquraishi/repos",
"events_url": "https://api.github.com/users/mainulquraishi/events{/privacy}",
"received_events_url": "https://api.github.com/users/mainulquraishi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi mainulquraishi, \r\n\r\nAs mentioned in earlier issues #3031, #3021, #2975, #3069 it is not advised to use GPT2LMHeadModel in inference using a padded batch. To answer your two questions:\r\n\r\n1. You don't get the same output in your second code snippet as in your first code snippet because you \"argmax\" the next token from the logits corresponding to the masked token on position 3. \r\nYou would get the same output 'a' if you would take the \"argmax\" the next token from the **last non-padded token** which is at postion 2 in your example. \r\n\r\n2. The attention mask works as far as I can see. Using an attention mask means that logits at **other** positions than the masked position input are not influenced by the masked position input.\r\nThis means that if you mask position 3 you will see that changing the input for position 3 will not change the output for postion 4 - N, but changing the input for position 3 will surely influence the output of position 3 (A token cannot mask its own output).\r\n\r\nI would advise you to take a good look at Issue #3021 ",
"Hi @patrickvonplaten I ran into the same issue that you described properly in your first point.\r\nSome questions for the record:\r\na) Could you please describe why position_ids argument is not required here? It's not clear for me why it was needed in https://github.com/huggingface/transformers/issues/3021 and not here.\r\nb) Any padded batch will likely have sentences with many different lengths. Is there a way that `GPT2LMHeadModel` is able to identify the last non-padded token for each sentence (maybe via the attention-mask) so we get the corresponding logits easily? Any function to do that filtering? If not, I guess we can do it from the client side via some tensor operation to discard the last padded tokens (we can infer the last padded tokens via the attention mask). Is this correct?\r\nc) Could we apply what we are discussing here in terms of padding to run the model with Torchscript? Any advice / warning here?",
"The answer for (b) can be found in the code snippet that Patrick added in https://github.com/huggingface/transformers/issues/3021 . The following does the trick:\r\n```\r\nlast_non_masked_idx = torch.sum(attention_mask, dim=1) - 1\r\nstart_idx = inp_idx = (last_non_masked_idx).view(-1, 1).repeat(1, st.tokenizer.vocab_size).unsqueeze(1)\r\nlogits = logits.gather(1, start_idx).squeeze(1)\r\n```\r\n\r\n",
"You just saved me @Damiox! Thank you so much :)"
] | 1,583 | 1,587 | 1,583 | NONE | null | The following code works without batching:
```
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()
context=torch.tensor([tokenizer.encode("This is")])
output, past = model(context)
token = torch.argmax(output[..., -1, :])
print(tokenizer.decode(token.item()))
output: ' a'
```
Now, I extended this to the batch setting:
```
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch
from torch.nn.utils.rnn import pad_sequence  # needed for the pad_sequence call below
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()
context=[torch.tensor(tokenizer.encode("This is ")),torch.tensor(tokenizer.encode("Hello How are "))]
context=pad_sequence(context,batch_first=True)
mask=torch.tensor([[1,1,0],[1,1,1]])
output, past = model(context,attention_mask=mask)
token = torch.argmax(output[..., -1, :],dim=1)
tokenizer.decode(token)
output: '\n you'
```
Here `\n` is the next token for the first context and `you` is the next token for the second context of the batch.
But the expected next token for the first context is "a", since all the settings are the same. Furthermore, if you reduce the second context to 2 tokens, you will get `'a'` in this batch setting. So clearly, the model cannot understand the padding.
Also, **the attention mask does not work**. After padding, the next token of the sequence "`this is`" is 0 (zero), and according to the attention mask (`[1,1,0]`), this zero should be ignored and only the tokens `this` and `is` should be attended to. The evidence that this attention masking is not working:
- Use attention mask `[1,1,1]`, i.e. attend even to the padding zero: you get the same output, which is `\n`.
- Use the string `this is!`. Here `!` has index zero in the vocabulary matrix. Again you get the same output, which is `\n`.
The only time it is possible to get the desirable output is without the batch setting and attention mask (which, it now seems, does not matter anyway because it has no effect).
Then I found [this](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.pad_token), which suggests using `pad_token`. So I used it as follows:
```
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch
from torch.nn.utils.rnn import pad_sequence
tokenizer = GPT2Tokenizer.from_pretrained("gpt2",pad_token="<PAD>")
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()
context=[torch.tensor(tokenizer.encode("This is <PAD> ")),torch.tensor(tokenizer.encode("Hello How are"))]
context=torch.stack(context)
print(context)
mask=torch.tensor([[1,1,0],[1,1,1]])
output, past = model(context,attention_mask=mask)
token = torch.argmax(output[..., -1, :],dim=1)
tokenizer.decode(token)
output: 'The you'
```
Here `The` is the next token for the first context and `you` is the next token for the second context of the batch. This is also not working, because `The` is not expected for the first context.
How do I use variable length sequence in batch setting in gpt/gpt2 model?
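For reference, the trick from the comments above: with right-padding, take the logits at each sequence's last non-padded position instead of at position `-1`. A minimal sketch operating on the `output` (shape `(batch, seq_len, vocab)`) and `mask` from the snippet above:

```python
# Index of the last non-padded token per sequence, derived from the mask.
last_non_masked_idx = mask.sum(dim=1) - 1                      # (batch,)
idx = last_non_masked_idx.view(-1, 1, 1).expand(-1, 1, output.size(-1))
next_token_logits = output.gather(1, idx).squeeze(1)           # (batch, vocab)
next_tokens = next_token_logits.argmax(dim=-1)                 # ' a' / ' you'
```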
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3167/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3166 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3166/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3166/comments | https://api.github.com/repos/huggingface/transformers/issues/3166/events | https://github.com/huggingface/transformers/issues/3166 | 577,251,365 | MDU6SXNzdWU1NzcyNTEzNjU= | 3,166 | [BERT] Implementation of the sliding window for long sequences | {
"login": "wasiahmad",
"id": 17520413,
"node_id": "MDQ6VXNlcjE3NTIwNDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/17520413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wasiahmad",
"html_url": "https://github.com/wasiahmad",
"followers_url": "https://api.github.com/users/wasiahmad/followers",
"following_url": "https://api.github.com/users/wasiahmad/following{/other_user}",
"gists_url": "https://api.github.com/users/wasiahmad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wasiahmad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wasiahmad/subscriptions",
"organizations_url": "https://api.github.com/users/wasiahmad/orgs",
"repos_url": "https://api.github.com/users/wasiahmad/repos",
"events_url": "https://api.github.com/users/wasiahmad/events{/privacy}",
"received_events_url": "https://api.github.com/users/wasiahmad/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I have the same issue.. anyone has any inputs on this "
] | 1,583 | 1,614 | 1,589 | NONE | null | I was trying to find where the sliding window is implemented to process long sequences. How do we split a long sequence, and then, after getting the embeddings, how do we unpack them? I am unable to find the code segments that handle these operations.
Also, is it possible to describe the main trick? I am trying to implement it in plain PyTorch, but I am unable to implement it in batches without running any loops.
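Not an authoritative answer, but one loop-free way to build overlapping windows in plain PyTorch is `Tensor.unfold`. A minimal sketch, where `window` and `stride` are purely illustrative values:

```python
import torch

token_ids = torch.arange(20)   # stand-in for one long encoded sequence
window, stride = 8, 4          # illustrative sizes, not library defaults

# (num_windows, window): overlapping slices without a Python loop.
# Pad the sequence first if the tail matters, since unfold drops any
# remainder that does not fill a complete window.
chunks = token_ids.unfold(0, window, stride)
print(chunks.shape)  # torch.Size([4, 8])
```

Unpacking then amounts to merging (e.g. averaging) the per-token embeddings of the overlapping regions back into one sequence.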
Any help would be appreciated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3166/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3165 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3165/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3165/comments | https://api.github.com/repos/huggingface/transformers/issues/3165/events | https://github.com/huggingface/transformers/issues/3165 | 577,238,599 | MDU6SXNzdWU1NzcyMzg1OTk= | 3,165 | Reason for speedup | {
"login": "mamesmak",
"id": 61725921,
"node_id": "MDQ6VXNlcjYxNzI1OTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/61725921?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mamesmak",
"html_url": "https://github.com/mamesmak",
"followers_url": "https://api.github.com/users/mamesmak/followers",
"following_url": "https://api.github.com/users/mamesmak/following{/other_user}",
"gists_url": "https://api.github.com/users/mamesmak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mamesmak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mamesmak/subscriptions",
"organizations_url": "https://api.github.com/users/mamesmak/orgs",
"repos_url": "https://api.github.com/users/mamesmak/repos",
"events_url": "https://api.github.com/users/mamesmak/events{/privacy}",
"received_events_url": "https://api.github.com/users/mamesmak/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Depending on which version you are using, it might be that you are using the fast [tokenizers ](https://github.com/huggingface/tokenizers) library which offers a much improved tokenizer interface built on Rust. ",
"Thanks for the reply. I don't think it is because of the tokenizer. I measured the encoder part and see huge improvement in the encoder speed. Still cannot figure out the difference. comparing pytorch-transformers 1.1 with transformers 2.5.1.",
"It is unlikely that there have been architecture changes, but it might be caused by a different torch/tensorflow version? Are you testing both transformers versions on the same framework version? It is likely that this is caused by an optimisation of an activation function. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,589 | 1,589 | NONE | null | I used to develop code based on the pytorch_transformers package (v1.1) and am now transferring to the current version. I see a 2x speedup in the BERT run_glue.py. I am wondering what the major reason is. Looking at the code, I couldn't find major differences. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3165/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3164 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3164/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3164/comments | https://api.github.com/repos/huggingface/transformers/issues/3164/events | https://github.com/huggingface/transformers/issues/3164 | 577,163,002 | MDU6SXNzdWU1NzcxNjMwMDI= | 3,164 | wrong configuration of ALBERT xlarge | {
"login": "zheyuye",
"id": 37728728,
"node_id": "MDQ6VXNlcjM3NzI4NzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/37728728?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zheyuye",
"html_url": "https://github.com/zheyuye",
"followers_url": "https://api.github.com/users/zheyuye/followers",
"following_url": "https://api.github.com/users/zheyuye/following{/other_user}",
"gists_url": "https://api.github.com/users/zheyuye/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zheyuye/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zheyuye/subscriptions",
"organizations_url": "https://api.github.com/users/zheyuye/orgs",
"repos_url": "https://api.github.com/users/zheyuye/repos",
"events_url": "https://api.github.com/users/zheyuye/events{/privacy}",
"received_events_url": "https://api.github.com/users/zheyuye/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Where did you get 32 from?\r\n\r\nThe [official file ](https://tfhub.dev/google/albert_xlarge/2)says 16.",
"Well, the model configuration in tar file downloaded from TF Hub shows 32, which conflicts with the official definition.",
"Indeed, that's right. I'll follow the issue you opened https://github.com/google-research/ALBERT/issues/180 and act accordingly.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,589 | 1,589 | NONE | null | # 🐛 Bug
## Information
If you are **reproducing** [ALBERT](https://github.com/google-research/albert), the `num_attention_heads` of ALBERT xlarge should be 32 instead of 16, as at:
https://github.com/huggingface/transformers/blob/db9279dedbb9c5e7d24569a1ac3f74f9d5c3eb18/src/transformers/configuration_albert.py#L27
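For anyone who wants to check or work around this locally, a minimal sketch (assuming the `albert-xlarge-v1` checkpoint name; keyword arguments to `from_pretrained` override config fields at load time):

```python
from transformers import AlbertConfig

config = AlbertConfig.from_pretrained("albert-xlarge-v1")
print(config.num_attention_heads)  # value shipped with the library

# override the field at load time if 32 turns out to be the correct value
config = AlbertConfig.from_pretrained("albert-xlarge-v1", num_attention_heads=32)
print(config.num_attention_heads)  # 32
```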
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3164/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3163 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3163/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3163/comments | https://api.github.com/repos/huggingface/transformers/issues/3163/events | https://github.com/huggingface/transformers/issues/3163 | 577,096,973 | MDU6SXNzdWU1NzcwOTY5NzM= | 3,163 | Inference is slow with | {
"login": "simonefrancia",
"id": 7140210,
"node_id": "MDQ6VXNlcjcxNDAyMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7140210?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonefrancia",
"html_url": "https://github.com/simonefrancia",
"followers_url": "https://api.github.com/users/simonefrancia/followers",
"following_url": "https://api.github.com/users/simonefrancia/following{/other_user}",
"gists_url": "https://api.github.com/users/simonefrancia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simonefrancia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonefrancia/subscriptions",
"organizations_url": "https://api.github.com/users/simonefrancia/orgs",
"repos_url": "https://api.github.com/users/simonefrancia/repos",
"events_url": "https://api.github.com/users/simonefrancia/events{/privacy}",
"received_events_url": "https://api.github.com/users/simonefrancia/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,589 | 1,589 | CONTRIBUTOR | null | Hi @julien-c,
I know that @loretoparisi already talked with you about some problems we have with inference using the BertTokenizer and BertForSequenceClassification classes.
To start the discussion, we would like to show you the most important lines of our inference and training code, which probably contain some bugs and slowdowns.
Briefly, we are developing a language classifier covering 93 languages, so we started from multilingual BERT (both tokenizer and model) and want to fine-tune it on our dataset of about 8M lines.
# INFERENCE CODE
```
import torch  # needed below for torch.zeros / torch.tensor

# `tokenizer` and `model` are the fine-tuned BertTokenizer / BertForSequenceClassification instances
# tokenize entire text
tokens = tokenizer.encode(original_text.strip())
# remove bos and eos tokens
tokens = tokens[1:-1]
# get number of slices we have to insert in DL model
number_of_slices = len(tokens) // (MAX_SEQUENCE_LENGTH - 2)
if len(tokens) % (MAX_SEQUENCE_LENGTH - 2) != 0: number_of_slices +=1
# create slices to be inserted
slices = []
for index in range(number_of_slices):
slice_ = tokens[ index*(MAX_SEQUENCE_LENGTH - 2) : (index+1)*(MAX_SEQUENCE_LENGTH - 2)]
slice_ = [tokenizer.bos_token_id] + slice_ + [tokenizer.eos_token_id]
slices.append(slice_)
# for every slice, preprocess data creating mask and padding
texts = []
masks = []
for text in slices:
padding = [tokenizer.pad_token_id] * (MAX_SEQUENCE_LENGTH - len(text))
mask = torch.zeros(MAX_SEQUENCE_LENGTH, dtype=torch.int32).tolist()
mask[:len(text)] = [1]*len(text)
text = text + padding
texts.append(text)
masks.append(mask)
# texts to tensor pytorch
texts = torch.tensor(texts)
# masks to tensor pytorch
masks = torch.tensor(masks)
# inference from DL model (the forward pass returns a tuple of outputs)
logits = model(texts, attention_mask=masks)
# stack logits
logits = torch.stack(logits).mean(dim=0)
#sum logits in order to have a single logits
logits = torch.sum(logits, dim=0)
```
# TRAINING CODE
```
tokenizer = BertTokenizer.from_pretrained( os.path.join(data_path, 'model') )
# note: the original dict repeated the 'eos_token' key, so its first value ('[CLS]')
# was silently dropped; 'cls_token' is assumed to be what was intended for that entry
special_tokens_dict = {'cls_token': '[CLS]',
                       'unk_token': '[UNK]',
                       'eos_token': '[SEP]',
                       'bos_token': '[CLS]',
                       'pad_token': '[PAD]'}
num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
train_loader, validation_loader = load_datasets(data_path,
tokenizer,
batch_size,
max_sequence_length,
random_sequence_length,
epoch_size,
token_dropout,
seed)
model = BertForSequenceClassification.from_pretrained(os.path.join(data_path, 'model'))
print("Num_labels:")
print(model.num_labels)
if torch.cuda.is_available():
model.cuda()
if rank == 0:
summary(model)
if distributed():
dist.barrier()
if world_size > 1:
model = DistributedDataParallel(model, [rank], output_device=rank, find_unused_parameters=True)
optimizer = Adam(model.parameters(), lr=learning_rate, weight_decay=weight_decay)
epoch_loop = count(1) if max_epochs is None else range(1, max_epochs + 1)
logdir = os.environ.get("LOGDIR", "logs")
os.makedirs(logdir, exist_ok=True)
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter(logdir) if rank == 0 else None
best_validation_accuracy = 0
for epoch in epoch_loop:
try:
if world_size > 1:
train_loader.sampler.set_epoch(epoch)
validation_loader.sampler.set_epoch(epoch)
train_metrics = train(model, optimizer, device, train_loader, f'Epoch {epoch}')
validation_metrics = validate(model, device, validation_loader)
```
Do you have any suggestions about why our inference is slow and model loading takes so long?
We do not use the hub; we pre-download the model to local disk and load it from a local path.
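A minimal sketch of the eval-mode / no-grad setup around the forward pass, reusing `model`, `texts` and `masks` from the snippet above; missing either of these can explain both slow and memory-hungry inference:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.eval()  # disable dropout at inference time

with torch.no_grad():  # skip autograd bookkeeping: faster and uses far less memory
    logits = model(texts.to(device), attention_mask=masks.to(device))[0]
```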
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3163/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3163/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3162 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3162/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3162/comments | https://api.github.com/repos/huggingface/transformers/issues/3162/events | https://github.com/huggingface/transformers/issues/3162 | 577,076,625 | MDU6SXNzdWU1NzcwNzY2MjU= | 3,162 | Does this project have this function ? | {
"login": "SeekPoint",
"id": 18051187,
"node_id": "MDQ6VXNlcjE4MDUxMTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/18051187?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SeekPoint",
"html_url": "https://github.com/SeekPoint",
"followers_url": "https://api.github.com/users/SeekPoint/followers",
"following_url": "https://api.github.com/users/SeekPoint/following{/other_user}",
"gists_url": "https://api.github.com/users/SeekPoint/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SeekPoint/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SeekPoint/subscriptions",
"organizations_url": "https://api.github.com/users/SeekPoint/orgs",
"repos_url": "https://api.github.com/users/SeekPoint/repos",
"events_url": "https://api.github.com/users/SeekPoint/events{/privacy}",
"received_events_url": "https://api.github.com/users/SeekPoint/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"https://github.com/huggingface/transformers/issues/2311",
"@frankniujc it is helpful\r\nbut maybe a better way is take the all tokens in a whole, not prediction the next tokens",
"The probability of a sentence P(s0s1s2s3s4...sn) = P(s1|s0) * P(s2|s0s1) * P(s3|s0s1s2) * ... * P(sn|s0s1s2...sn-1)\r\n\r\nSo you can do something like this\r\n```Python\r\ndef sentence_probability(sent):\r\n bos = tokenizer.encode('<|endoftext|>')\r\n tokens = tokenizer.encode(sent)\r\n tokens = bos + tokens\r\n input_ids = torch.tensor(tokens).unsqueeze(0).to('cuda')\r\n\r\n sent_probs = []\r\n\r\n for i, next_word in enumerate(tokens[1:]):\r\n next_word_logits = model(input_ids[:,:i+1])[0][0, -1].detach()\r\n next_word_prob = F.log_softmax(next_word_logits, dim=0)[next_word].item()\r\n\r\n sent_probs.append(next_word_prob)\r\n\r\n return sum(sent_probs)\r\n```",
"@loveJasmine Have a look at [`lm-scorer`](https://github.com/simonepri/lm-scorer).\r\n\r\nIt is a tiny wrapper around `transformers` I wrote that allows you to get sentences probabilities using models that support it (only GPT2 models are implemented at the time of writing).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,592 | 1,592 | NONE | null | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
can we use this project to calculate the probability that an input text is a real/reasonable sentence, based on the corpus we trained?
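As a concrete illustration of the idea, a minimal sketch with GPT-2 (chain rule over next-token log-probabilities; purely illustrative, not an existing API):

```python
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence):
    # log P(s) = sum_i log P(token_i | token_0 ... token_{i-1})
    ids = torch.tensor([tokenizer.encode(sentence)])
    with torch.no_grad():
        logits = model(ids)[0]  # shape: (1, seq_len, vocab_size)
    log_probs = F.log_softmax(logits[:, :-1], dim=-1)  # predictions for positions 1..n
    targets = ids[:, 1:]                               # the tokens actually observed there
    return log_probs.gather(2, targets.unsqueeze(-1)).sum().item()

print(sentence_log_prob("The cat sat on the mat."))
```
 | {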
"url": "https://api.github.com/repos/huggingface/transformers/issues/3162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3162/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3161 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3161/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3161/comments | https://api.github.com/repos/huggingface/transformers/issues/3161/events | https://github.com/huggingface/transformers/issues/3161 | 577,053,270 | MDU6SXNzdWU1NzcwNTMyNzA= | 3,161 | urgent - ROBERTA on WSC | {
"login": "yes1234man",
"id": 59166627,
"node_id": "MDQ6VXNlcjU5MTY2NjI3",
"avatar_url": "https://avatars.githubusercontent.com/u/59166627?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yes1234man",
"html_url": "https://github.com/yes1234man",
"followers_url": "https://api.github.com/users/yes1234man/followers",
"following_url": "https://api.github.com/users/yes1234man/following{/other_user}",
"gists_url": "https://api.github.com/users/yes1234man/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yes1234man/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yes1234man/subscriptions",
"organizations_url": "https://api.github.com/users/yes1234man/orgs",
"repos_url": "https://api.github.com/users/yes1234man/repos",
"events_url": "https://api.github.com/users/yes1234man/events{/privacy}",
"received_events_url": "https://api.github.com/users/yes1234man/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The conversion script should pretty much work out of the box, so feel free to do it and we welcome a PR (and we can upload the converted weights to our S3)",
"which script you mean? sorry i did not get it. i think this is not possible\nto train it on huggingface is this?\n\nOn Fri, Mar 6, 2020, 10:12 PM Julien Chaumond <[email protected]>\nwrote:\n\n> The conversion script should pretty much work out of the box, so feel free\n> to do it and we welcome a PR (and we can upload the converted weights to\n> our S3)\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/3161?email_source=notifications&email_token=AODM7I44VQEQWKUWI5SYT3TRGFRMLA5CNFSM4LDD7F42YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEOC335Y#issuecomment-595967479>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AODM7I6VCDKQWMAKL4XDHETRGFRMLANCNFSM4LDD7F4Q>\n> .\n>\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,589 | 1,589 | NONE | null | Hi
I am looking for a way to run RoBERTa on WSC data, similar to the example in fairseq:
https://github.com/pytorch/fairseq/tree/master/examples/roberta
However, fairseq is hard to use and modify, and I would really appreciate it if you could add this to your great repo.
thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3161/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3160 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3160/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3160/comments | https://api.github.com/repos/huggingface/transformers/issues/3160/events | https://github.com/huggingface/transformers/pull/3160 | 577,031,078 | MDExOlB1bGxSZXF1ZXN0Mzg0OTAzMjYw | 3,160 | [Bart] add imports to examples | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3160?src=pr&el=h1) Report\n> Merging [#3160](https://codecov.io/gh/huggingface/transformers/pull/3160?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6ffe03a0a1d472a4e5941793fd361d2b82c8be3f?src=pr&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3160?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3160 +/- ##\n==========================================\n+ Coverage 78.11% 78.12% +0.01% \n==========================================\n Files 98 98 \n Lines 16651 16651 \n==========================================\n+ Hits 13007 13009 +2 \n+ Misses 3644 3642 -2\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3160?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3160/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.41% <ø> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3160/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.72% <0%> (+0.31%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3160?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3160?src=pr&el=footer). Last update [6ffe03a...0f206be](https://codecov.io/gh/huggingface/transformers/pull/3160?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3160/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3160",
"html_url": "https://github.com/huggingface/transformers/pull/3160",
"diff_url": "https://github.com/huggingface/transformers/pull/3160.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3160.patch",
"merged_at": 1583511334000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3159 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3159/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3159/comments | https://api.github.com/repos/huggingface/transformers/issues/3159/events | https://github.com/huggingface/transformers/issues/3159 | 577,009,536 | MDU6SXNzdWU1NzcwMDk1MzY= | 3,159 | NER: some issues in PyTorch Lightning example | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"After fixing the these issues the output predictions seems very weird, maybe this is in an label 2 id bug?\r\n\r\n```bash\r\nNachdem I-OTH\r\ner I-PERpart\r\n1907 B-ORGderiv\r\nnicht I-PER\r\nzum I-PER\r\nDirektor I-PER\r\nder I-PERderiv\r\nBibliothek B-ORGpart\r\nberufen I-PER\r\nwurde I-PER\r\n, I-PER\r\nverließ B-PERderiv\r\nGardthausen I-PERderiv\r\nden I-ORG\r\nBibliotheksdienst B-ORGpart\r\n. I-PERderiv\r\n```\r\n\r\n😂",
"cc @srush (via #3053)",
"Thanks I will take a look. Can you verify you are on the latest pytorch-lightning? `prepare_data` was just added. \r\n\r\nAlso can you post your log. What was the Val accuracy? Are you on single GPU?",
"@srush Please see the PR #3180 I have updated the bash script and README to run effortlessly. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,589 | 1,589 | COLLABORATOR | null | Hi,
I wanted to try out the new NER example script (`./ner/run_pl_ner.py`) that uses PyTorch Lightning.
Here are some bugs that I've found:
The dataset preparation method is never called. Usually, `InputBatch` batches or input features are written and stored in a file; however, the `prepare_data()` [1] method is not invoked, so no input features are written. I fixed that by calling this method from the `train_dataloader()` [2] function, but I'm not sure if that's the right place.
Model training works after that change.
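For reference, the workaround for the first bug can be sketched like this (class and method names are illustrative stand-ins for the example scripts, not exact copies):

```python
from torch.utils.data import DataLoader

class NERTransformer:  # stand-in for the Lightning module in transformer_base.py
    def train_dataloader(self) -> DataLoader:
        # workaround: write the cached input features before trying to read them
        self.prepare_data()
        ...  # then build and return the DataLoader from the cached features, as before
```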
Evaluation is currently not working correctly. The checkpoint output file name is:
```bash
# ls
'checkpointepoch=0.ckpt' 'checkpointepoch=1.ckpt' 'checkpointepoch=2.ckpt'
```
so the pattern `checkpointepoch=<number_epoch>.ckpt` is used, whereas the main script expects an output checkpoint pattern of `checkpoint_<number_epoch>.ckpt` [3]
[1] https://github.com/huggingface/transformers/blob/master/examples/ner/run_pl_ner.py#L56-L80
[2] https://github.com/huggingface/transformers/blob/master/examples/ner/transformer_base.py#L126-L139
[3] https://github.com/huggingface/transformers/blob/master/examples/ner/run_pl_ner.py#L220 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3159/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3159/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3158 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3158/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3158/comments | https://api.github.com/repos/huggingface/transformers/issues/3158/events | https://github.com/huggingface/transformers/pull/3158 | 577,008,664 | MDExOlB1bGxSZXF1ZXN0Mzg0ODg0NjE3 | 3,158 | [Bart] _prepare_decoder_inputs should use large negative | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks @tomhosking for noticing!",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3158?src=pr&el=h1) Report\n> Merging [#3158](https://codecov.io/gh/huggingface/transformers/pull/3158?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6ffe03a0a1d472a4e5941793fd361d2b82c8be3f?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3158?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3158 +/- ##\n==========================================\n+ Coverage 78.11% 78.12% +<.01% \n==========================================\n Files 98 98 \n Lines 16651 16651 \n==========================================\n+ Hits 13007 13008 +1 \n+ Misses 3644 3643 -1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3158?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3158/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.57% <100%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3158?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3158?src=pr&el=footer). Last update [6ffe03a...71e626b](https://codecov.io/gh/huggingface/transformers/pull/3158?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | Also renames some things and adds a nice test.
I suspect that this didn't break integration tests because we don't have a serious integration test with decoder_input_ids set (e.g. calculating the loss for a summarization example). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3158/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3158",
"html_url": "https://github.com/huggingface/transformers/pull/3158",
"diff_url": "https://github.com/huggingface/transformers/pull/3158.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3158.patch",
"merged_at": 1583528797000
} |
https://api.github.com/repos/huggingface/transformers/issues/3157 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3157/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3157/comments | https://api.github.com/repos/huggingface/transformers/issues/3157/events | https://github.com/huggingface/transformers/pull/3157 | 576,966,067 | MDExOlB1bGxSZXF1ZXN0Mzg0ODQ5MTM4 | 3,157 | [model_cards]Add albert chinese model | {
"login": "voidful",
"id": 10904842,
"node_id": "MDQ6VXNlcjEwOTA0ODQy",
"avatar_url": "https://avatars.githubusercontent.com/u/10904842?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/voidful",
"html_url": "https://github.com/voidful",
"followers_url": "https://api.github.com/users/voidful/followers",
"following_url": "https://api.github.com/users/voidful/following{/other_user}",
"gists_url": "https://api.github.com/users/voidful/gists{/gist_id}",
"starred_url": "https://api.github.com/users/voidful/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/voidful/subscriptions",
"organizations_url": "https://api.github.com/users/voidful/orgs",
"repos_url": "https://api.github.com/users/voidful/repos",
"events_url": "https://api.github.com/users/voidful/events{/privacy}",
"received_events_url": "https://api.github.com/users/voidful/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3157?src=pr&el=h1) Report\n> Merging [#3157](https://codecov.io/gh/huggingface/transformers/pull/3157?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6ffe03a0a1d472a4e5941793fd361d2b82c8be3f?src=pr&el=desc) will **decrease** coverage by `0.15%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3157?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3157 +/- ##\n==========================================\n- Coverage 78.11% 77.96% -0.16% \n==========================================\n Files 98 98 \n Lines 16651 16651 \n==========================================\n- Hits 13007 12982 -25 \n- Misses 3644 3669 +25\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3157?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.35% <0%> (-5.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.56% <0%> (+0.15%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3157?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3157?src=pr&el=footer). Last update [6ffe03a...491bea5](https://codecov.io/gh/huggingface/transformers/pull/3157?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks for sharing. I'll just switch the language tag to chinese (tags are case-sensitive)\r\n\r\nModel pages:\r\n[`voidful/albert_chinese_tiny`](https://huggingface.co/voidful/albert_chinese_tiny)\r\n[`voidful/albert_chinese_small`](https://huggingface.co/voidful/albert_chinese_small)\r\n[`voidful/albert_chinese_base`](https://huggingface.co/voidful/albert_chinese_base)\r\n[`voidful/albert_chinese_large`](https://huggingface.co/voidful/albert_chinese_large)\r\n[`voidful/albert_chinese_xlarge`](https://huggingface.co/voidful/albert_chinese_xlarge)\r\n[`voidful/albert_chinese_xxlarge`](https://huggingface.co/voidful/albert_chinese_xxlarge)"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | Hi,
this PR adds the model cards for the albert-chinese models:
- albert_chinese_tiny
- albert_chinese_small
- albert_chinese_base
- albert_chinese_large
- albert_chinese_xlarge
- albert_chinese_xxlarge
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3157/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3157/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3157",
"html_url": "https://github.com/huggingface/transformers/pull/3157",
"diff_url": "https://github.com/huggingface/transformers/pull/3157.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3157.patch",
"merged_at": 1583533412000
} |
https://api.github.com/repos/huggingface/transformers/issues/3156 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3156/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3156/comments | https://api.github.com/repos/huggingface/transformers/issues/3156/events | https://github.com/huggingface/transformers/pull/3156 | 576,839,737 | MDExOlB1bGxSZXF1ZXN0Mzg0NzQ2NzI5 | 3,156 | Partially fix space only input without special tokens added in the output | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [] | 1,583 | 1,651 | 1,584 | MEMBER | null | Original issue #3091.
It fixes the issue for all non-BPE-based tokenizers. For the BPE ones, the output differs between Python and Rust:
GPT2:
- Python: `[]`
- Rust: `['Ġ']`
Roberta:
- Python: `[]`
- Rust: `['<s>', 'Ġ', '</s>']`
Rust seems to be the right one here. I should have a look at RoBERTa to see why it includes the special tokens even when not asked to do so.
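A quick reproduction sketch (assuming the Rust-backed implementations are the `*TokenizerFast` classes):

```python
from transformers import GPT2Tokenizer, GPT2TokenizerFast

slow = GPT2Tokenizer.from_pretrained("gpt2")      # Python implementation
fast = GPT2TokenizerFast.from_pretrained("gpt2")  # Rust implementation

print(slow.tokenize(" "))  # per the comparison above: []
print(fast.tokenize(" "))  # per the comparison above: ['Ġ']
```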
cc @n1t0 cc @LysandreJik fyi.
Signed-off-by: Morgan Funtowicz <[email protected]> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3156/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3156",
"html_url": "https://github.com/huggingface/transformers/pull/3156",
"diff_url": "https://github.com/huggingface/transformers/pull/3156.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3156.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3155 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3155/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3155/comments | https://api.github.com/repos/huggingface/transformers/issues/3155/events | https://github.com/huggingface/transformers/issues/3155 | 576,779,125 | MDU6SXNzdWU1NzY3NzkxMjU= | 3,155 | I want to browse and store an image in a database in tkinter; can you give me suggestions? | {
"login": "vrajesh16",
"id": 60382059,
"node_id": "MDQ6VXNlcjYwMzgyMDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/60382059?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vrajesh16",
"html_url": "https://github.com/vrajesh16",
"followers_url": "https://api.github.com/users/vrajesh16/followers",
"following_url": "https://api.github.com/users/vrajesh16/following{/other_user}",
"gists_url": "https://api.github.com/users/vrajesh16/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vrajesh16/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vrajesh16/subscriptions",
"organizations_url": "https://api.github.com/users/vrajesh16/orgs",
"repos_url": "https://api.github.com/users/vrajesh16/repos",
"events_url": "https://api.github.com/users/vrajesh16/events{/privacy}",
"received_events_url": "https://api.github.com/users/vrajesh16/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"\r\n"
] | 1,583 | 1,583 | 1,583 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3155/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3154 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3154/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3154/comments | https://api.github.com/repos/huggingface/transformers/issues/3154/events | https://github.com/huggingface/transformers/issues/3154 | 576,720,004 | MDU6SXNzdWU1NzY3MjAwMDQ= | 3,154 | seed parameter for model generate() | {
"login": "minimaxir",
"id": 2179708,
"node_id": "MDQ6VXNlcjIxNzk3MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2179708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minimaxir",
"html_url": "https://github.com/minimaxir",
"followers_url": "https://api.github.com/users/minimaxir/followers",
"following_url": "https://api.github.com/users/minimaxir/following{/other_user}",
"gists_url": "https://api.github.com/users/minimaxir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minimaxir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minimaxir/subscriptions",
"organizations_url": "https://api.github.com/users/minimaxir/orgs",
"repos_url": "https://api.github.com/users/minimaxir/repos",
"events_url": "https://api.github.com/users/minimaxir/events{/privacy}",
"received_events_url": "https://api.github.com/users/minimaxir/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @minimaxir - thanks for the Feature request! \r\n\r\nNot too sure about this though. Two arguments against it from my side:\r\n\r\n1. I feel like it's very easy to set the seed parameter before calling `generate()` without any real drawback. \r\n2. Also we want all our `generate()` arguments to have default values with a lot of them defined in `configuration_utils.py`. Adding a `seed` argument would either break this logic or set a default seed value in the `PretrainedConfig` class in `configuration_utils.py` which I definitely don't want to do.",
"Same here, thanks for the proposition @minimaxir but I feel like there are many ways you can/should assigne seeds (e.g. [here](https://github.com/huggingface/transformers/blob/master/examples/run_glue.py#L104-L109)) and I don't think we would be comfortable with having this inside the model it-self.",
"That's fair; for my purposes, it's not too difficult to wrap. (and I inadvertently realized it's better to wrap it for batch generation too)\r\n\r\nThanks!"
] | 1,583 | 1,583 | 1,583 | NONE | null | # 🚀 Feature request
There should be a `seed` parameter for the `generate()` function of a model.
Although a seed can be manually set before calling `generate()` (as tested in #3063), using it as a parameter is more intuitive (and covers all the bases).
## Motivation
Generation reproducibility (also good for CI tests)
## Your contribution
The existing `set_seed()` implementations around the repo (e.g. https://github.com/huggingface/transformers/blob/6b1ff250842f52136d5159bb67a26b50ba01485d/examples/run_generation.py#L74) should be sufficient.
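A minimal user-side sketch of such a wrapper (the `generate_with_seed` name is hypothetical):

```python
import random

import numpy as np
import torch

def generate_with_seed(model, seed, **generate_kwargs):
    # seed every RNG that sampling can touch, then delegate to model.generate
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)
    return model.generate(**generate_kwargs)
```
 | {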
"url": "https://api.github.com/repos/huggingface/transformers/issues/3154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3154/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3153 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3153/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3153/comments | https://api.github.com/repos/huggingface/transformers/issues/3153/events | https://github.com/huggingface/transformers/issues/3153 | 576,659,688 | MDU6SXNzdWU1NzY2NTk2ODg= | 3,153 | Fresh macOS install errors out on import | {
"login": "Snarik",
"id": 5012544,
"node_id": "MDQ6VXNlcjUwMTI1NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5012544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Snarik",
"html_url": "https://github.com/Snarik",
"followers_url": "https://api.github.com/users/Snarik/followers",
"following_url": "https://api.github.com/users/Snarik/following{/other_user}",
"gists_url": "https://api.github.com/users/Snarik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Snarik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Snarik/subscriptions",
"organizations_url": "https://api.github.com/users/Snarik/orgs",
"repos_url": "https://api.github.com/users/Snarik/repos",
"events_url": "https://api.github.com/users/Snarik/events{/privacy}",
"received_events_url": "https://api.github.com/users/Snarik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,589 | 1,589 | NONE | null | # 🐛 Bug
Fresh Install errors out on import
## Information
Following the README instructions, I created a venv, installed torch and tensorflow, then installed transformers. The error occurs upon import.
Model I am using (Bert, XLNet ...):
transformers==2.5.1
Language I am using the model on (English, Chinese ...):
N/A
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Create a new virtual env ```$ virtualenv -p python3 venv```
2. Source your new venv ```$ source venv/bin/activate```
3. Pip install packages ``` (venv) $ pip install torch tensorflow transformers```
4. import package ``` python -m transformers```
```python
>>> import transformers
Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/runpy.py", line 183, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/runpy.py", line 142, in _get_module_details
return _get_module_details(pkg_main_name, error)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/runpy.py", line 109, in _get_module_details
__import__(pkg_name)
File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/transformers/__init__.py", line 22, in <module>
from .configuration_albert import ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, AlbertConfig
File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/transformers/configuration_albert.py", line 18, in <module>
from .configuration_utils import PretrainedConfig
File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/transformers/configuration_utils.py", line 25, in <module>
from .file_utils import CONFIG_NAME, cached_path, hf_bucket_url, is_remote_url
File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/transformers/file_utils.py", line 53, in <module>
import tensorflow as tf
File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/tensorflow/__init__.py", line 101, in <module>
from tensorflow_core import *
File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/tensorflow_core/__init__.py", line 40, in <module>
from tensorflow.python.tools import module_util as _module_util
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 959, in _find_and_load_unlocked
File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/tensorflow/__init__.py", line 50, in __getattr__
module = self._load()
File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/tensorflow/__init__.py", line 44, in _load
module = _importlib.import_module(self.__name__)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/tensorflow_core/python/__init__.py", line 64, in <module>
from tensorflow.core.framework.graph_pb2 import *
File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/tensorflow_core/core/framework/graph_pb2.py", line 7, in <module>
from google.protobuf import descriptor as _descriptor
File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/google/protobuf/__init__.py", line 37, in <module>
__import__('pkg_resources').declare_namespace(__name__)
File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/pkg_resources/__init__.py", line 84, in <module>
__import__('pkg_resources.extern.packaging.requirements')
File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/pkg_resources/_vendor/packaging/requirements.py", line 9, in <module>
from pkg_resources.extern.pyparsing import stringStart, stringEnd, originalTextFor, ParseException
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 668, in _load_unlocked
File "<frozen importlib._bootstrap>", line 638, in _load_backward_compatible
File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/pkg_resources/extern/__init__.py", line 43, in load_module
__import__(extant)
File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/pkg_resources/_vendor/pyparsing.py", line 4756, in <module>
_escapedPunc = Word( _bslash, r"\[]-*.$+^?()~ ", exact=2 ).setParseAction(lambda s,l,t:t[0][1])
File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/pkg_resources/_vendor/pyparsing.py", line 1284, in setParseAction
self.parseAction = list(map(_trim_arity, list(fns)))
File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/pkg_resources/_vendor/pyparsing.py", line 1066, in _trim_arity
this_line = extract_stack(limit=2)[-1]
File "/Users/kiran/Library/Python/3.7/lib/python/site-packages/pkg_resources/_vendor/pyparsing.py", line 1050, in extract_stack
frame_summary = traceback.extract_stack(limit=-offset+limit-1)[offset]
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/traceback.py", line 211, in extract_stack
stack = StackSummary.extract(walk_stack(f), limit=limit)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/traceback.py", line 363, in extract
f.line
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/traceback.py", line 285, in line
self._line = linecache.getline(self.filename, self.lineno).strip()
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/linecache.py", line 16, in getline
lines = getlines(filename, module_globals)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/linecache.py", line 48, in getlines
for mod in sys.modules.values():
RuntimeError: dictionary changed size during iteration
```
## Expected behavior
Importing transformers into the python runtime should import transformers.
- `transformers` version: 2.5.1
- Platform: macOS Catalina 10.15.3
- Python version: Python 3.7.3
- PyTorch version (GPU?): N/A
- Tensorflow version (GPU?): N/A
- Using GPU in script?: N/A
- Using distributed or parallel set-up in script?: N/A
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3153/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3152 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3152/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3152/comments | https://api.github.com/repos/huggingface/transformers/issues/3152/events | https://github.com/huggingface/transformers/issues/3152 | 576,652,966 | MDU6SXNzdWU1NzY2NTI5NjY= | 3,152 | BART.generate: possible to reduce time/memory? | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"1) Identical to my benchmark for speed. Hadn't tested memory but I'm not surprised that their implementation is less.\r\n\r\nFor both memory and speed, they have a lot of clever tricks that we haven't implemented yet.\r\n\r\n4) Summarization Pipeline will not help, but I will take a longer look at this tomorrow and see if we can improve.\r\n",
"On master, the gap has closed considerably!\r\n<16GB GPU RAM for fp16, bs=32, and timings much closer:\r\n\r\n\r\nMy numbers are a bit lower than yours because I am on an NVIDIA RTX GPU.\r\n",
"I tested again and I have similar results ! Thanks for the fix.\r\n\r\nI now have exact same GPU memory utilization.\r\n\r\n---\r\n\r\nAbout the (now) small difference of inference time between implementations, do you know from where it comes from ?",
"Haven't investigated. So far, I just investigated memory and the speed improvements were a happy side effect.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,591 | 1,591 | CONTRIBUTOR | null | # 🐛 Performance issues
I did a quick benchmark between HuggingFace's implementation of **BART** and FairSeq's implementation.
You can find the benchmark code [here](https://gist.github.com/Colanim/cc418d19e5e107f462bac306f53ba994).
---
Here are my results, on a single GTX 1080 GPU (12 GiB of memory):
| FP16 - Batch size 16 | s/batch | s/sample |
|----------------------|---------|----------|
| FairSeq | 8.8676 | 0.5664 |
| HuggingFace | 12.3358 | 0.7879 |
| FP16 - Batch size 32 | s/batch | s/sample |
|----------------------|---------|----------|
| FairSeq | 17.1247 | 0.5469 |
| HuggingFace | OOM | OOM |
| FP16 - Batch size 1 | s/sample |
|---------------------|----------|
| FairSeq | 1.6743 |
| HuggingFace | 1.8856 |
| FP32 - Batch size 1 | s/sample |
|---------------------|----------|
| FairSeq | 1.7865 |
| HuggingFace | 2.0670 |
---
**FairSeq is consistently faster than HuggingFace across all my experiments.**
---
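For reference, the per-batch timings were collected with a loop along these lines; this is a simplified sketch of the linked gist (checkpoint name and generation options are assumptions, not a verbatim copy):

```python
import time

import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("bart-large-cnn").half().cuda().eval()

articles = ["some long article text ..."] * 16  # one batch
batch = tokenizer.batch_encode_plus(articles, max_length=1024,
                                    pad_to_max_length=True, return_tensors="pt")

torch.cuda.synchronize()
start = time.time()
with torch.no_grad():
    model.generate(batch["input_ids"].cuda(),
                   attention_mask=batch["attention_mask"].cuda(),
                   num_beams=4, max_length=140)
torch.cuda.synchronize()
print(f"{time.time() - start:.4f} s/batch")
```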
This sparks a few questions:
* Do you have similar results on your side? Did I mess up my benchmark?
* Why is HuggingFace's implementation significantly slower?
* Why does HuggingFace's implementation take more memory (illustrated by the `OOM` with batch size 32)?
* Is the release of the `Summarization Pipeline` going to improve this?
@sshleifer
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3152/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3151 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3151/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3151/comments | https://api.github.com/repos/huggingface/transformers/issues/3151/events | https://github.com/huggingface/transformers/issues/3151 | 576,640,965 | MDU6SXNzdWU1NzY2NDA5NjU= | 3,151 | How to train a distilled gpt2 | {
"login": "cloudygoose",
"id": 1544039,
"node_id": "MDQ6VXNlcjE1NDQwMzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1544039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cloudygoose",
"html_url": "https://github.com/cloudygoose",
"followers_url": "https://api.github.com/users/cloudygoose/followers",
"following_url": "https://api.github.com/users/cloudygoose/following{/other_user}",
"gists_url": "https://api.github.com/users/cloudygoose/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cloudygoose/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cloudygoose/subscriptions",
"organizations_url": "https://api.github.com/users/cloudygoose/orgs",
"repos_url": "https://api.github.com/users/cloudygoose/repos",
"events_url": "https://api.github.com/users/cloudygoose/events{/privacy}",
"received_events_url": "https://api.github.com/users/cloudygoose/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"很简单哦。看我的代码:\r\n\r\n\r\n\"\"\"\r\nTraining the distilled model.\r\nSupported architectures include: BERT -> DistilBERT, RoBERTa -> DistilRoBERTa, GPT2 -> DistilGPT2.\r\n\"\"\"\r\nimport argparse\r\nimport json\r\nimport os\r\nimport pickle\r\nimport shutil\r\n\r\nimport numpy as np\r\nimport torch\r\n\r\nfrom distiller import Distiller\r\nfrom lm_seqs_dataset import LmSeqsDataset\r\nfrom transformers import (\r\n BertConfig,\r\n BertForMaskedLM,\r\n BertTokenizer,\r\n DistilBertConfig,\r\n DistilBertForMaskedLM,\r\n DistilBertTokenizer,\r\n GPT2Config,\r\n GPT2LMHeadModel,\r\n GPT2Tokenizer,\r\n RobertaConfig,\r\n RobertaForMaskedLM,\r\n RobertaTokenizer,\r\n)\r\nfrom utils import git_log, init_gpu_params, logger, set_seed\r\n\r\n\r\nMODEL_CLASSES = {\r\n \"distilbert\": (DistilBertConfig, DistilBertForMaskedLM, DistilBertTokenizer),\r\n \"roberta\": (RobertaConfig, RobertaForMaskedLM, RobertaTokenizer),\r\n \"bert\": (BertConfig, BertForMaskedLM, BertTokenizer),\r\n \"gpt2\": (GPT2Config, GPT2LMHeadModel, GPT2Tokenizer),\r\n}\r\n\r\n\r\ndef sanity_checks(args):\r\n \"\"\"\r\n A bunch of args sanity checks to perform even starting...\r\n \"\"\"\r\n assert (args.mlm and args.alpha_mlm > 0.0) or (not args.mlm and args.alpha_mlm == 0.0)\r\n assert (args.alpha_mlm > 0.0 and args.alpha_clm == 0.0) or (args.alpha_mlm == 0.0 and args.alpha_clm > 0.0)\r\n if args.mlm:\r\n assert os.path.isfile(args.token_counts)\r\n assert (args.student_type in [\"roberta\", \"distilbert\"]) and (args.teacher_type in [\"roberta\", \"bert\"])\r\n else:\r\n assert (args.student_type in [\"gpt2\"]) and (args.teacher_type in [\"gpt2\"])\r\n\r\n assert args.teacher_type == args.student_type or (\r\n args.student_type == \"distilbert\" and args.teacher_type == \"bert\"\r\n )\r\n assert os.path.isfile(args.student_config)\r\n if args.student_pretrained_weights is not None:\r\n assert os.path.isfile(args.student_pretrained_weights)\r\n\r\n if args.freeze_token_type_embds:\r\n assert args.student_type in [\"roberta\"]\r\n\r\n assert args.alpha_ce >= 0.0\r\n assert args.alpha_mlm >= 0.0\r\n assert args.alpha_clm >= 0.0\r\n assert args.alpha_mse >= 0.0\r\n assert args.alpha_cos >= 0.0\r\n assert args.alpha_ce + args.alpha_mlm + args.alpha_clm + args.alpha_mse + args.alpha_cos > 0.0\r\n\r\n\r\ndef freeze_pos_embeddings(student, args):\r\n if args.student_type == \"roberta\":\r\n student.roberta.embeddings.position_embeddings.weight.requires_grad = False\r\n elif args.student_type == \"gpt2\":\r\n student.transformer.wpe.weight.requires_grad = False\r\n\r\n\r\ndef freeze_token_type_embeddings(student, args):\r\n if args.student_type == \"roberta\":\r\n student.roberta.embeddings.token_type_embeddings.weight.requires_grad = False\r\n\r\n\r\ndef main():\r\n parser = argparse.ArgumentParser(description=\"Training\")\r\n parser.add_argument(\"--force\", \r\n action=\"store_true\", \r\n default=True,\r\n help=\"Overwrite dump_path if it already exists.\")\r\n\r\n parser.add_argument(\r\n \"--dump_path\", \r\n type=str, \r\n #required=True, \r\n default=r'D:\\2020.03.02distillgpt2' ,\r\n help=\"The output directory (log, checkpoints, parameters, etc.)\"\r\n )\r\n parser.add_argument(\r\n \"--data_file\",\r\n type=str,\r\n #required=True,\r\n default=r'scripts\\gpt2.pickle' ,\r\n help=\"The binarized file (tokenized + tokens_to_ids) and grouped by sequence.\",\r\n )\r\n\r\n parser.add_argument(\r\n \"--student_type\",\r\n type=str,\r\n choices=[\"distilbert\", \"roberta\", \"gpt2\"],\r\n #required=True,\r\n default='gpt2',\r\n 
help=\"The student type (DistilBERT, RoBERTa).\",\r\n )\r\n parser.add_argument(\"--student_config\", \r\n type=str, \r\n #required=True,\r\n default=r'training_configs\\distilgpt2.json',\r\n help=\"Path to the student configuration.\")\r\n parser.add_argument(\r\n \"--student_pretrained_weights\", default=None, type=str, help=\"Load student initialization checkpoint.\"\r\n )\r\n\r\n parser.add_argument(\r\n \"--teacher_type\", \r\n choices=[\"bert\", \"roberta\", \"gpt2\"], \r\n #required=True, \r\n default='gpt2',\r\n help=\"Teacher type (BERT, RoBERTa).\"\r\n )\r\n parser.add_argument(\"--teacher_name\", \r\n type=str, \r\n #required=True,\r\n default= r'D:\\checkpoint-652500',\r\n help=\"The teacher model.\")\r\n\r\n parser.add_argument(\"--temperature\", \r\n default=1.5, \r\n type=float, help=\"Temperature for the softmax temperature.\")\r\n \r\n parser.add_argument(\r\n \"--alpha_ce\", \r\n default=0.5,\r\n type=float, \r\n help=\"Linear weight for the distillation loss. Must be >=0.\"\r\n )\r\n parser.add_argument(\r\n \"--alpha_mlm\",\r\n default=0.0,\r\n type=float,\r\n help=\"Linear weight for the MLM loss. Must be >=0. Should be used in coonjunction with `mlm` flag.\",\r\n )\r\n parser.add_argument(\"--alpha_clm\", default=0.5, type=float, help=\"Linear weight for the CLM loss. Must be >=0.\")\r\n parser.add_argument(\"--alpha_mse\", default=0.0, type=float, help=\"Linear weight of the MSE loss. Must be >=0.\")\r\n parser.add_argument(\r\n \"--alpha_cos\", default=0.0, type=float, help=\"Linear weight of the cosine embedding loss. Must be >=0.\"\r\n )\r\n\r\n parser.add_argument(\r\n \"--mlm\", action=\"store_true\", help=\"The LM step: MLM or CLM. If `mlm` is True, the MLM is used over CLM.\"\r\n )\r\n parser.add_argument(\r\n \"--mlm_mask_prop\",\r\n default=0.15,\r\n type=float,\r\n help=\"Proportion of tokens for which we need to make a prediction.\",\r\n )\r\n parser.add_argument(\"--word_mask\", default=0.8, type=float, help=\"Proportion of tokens to mask out.\")\r\n parser.add_argument(\"--word_keep\", default=0.1, type=float, help=\"Proportion of tokens to keep.\")\r\n parser.add_argument(\"--word_rand\", default=0.1, type=float, help=\"Proportion of tokens to randomly replace.\")\r\n parser.add_argument(\r\n \"--mlm_smoothing\",\r\n default=0.7,\r\n type=float,\r\n help=\"Smoothing parameter to emphasize more rare tokens (see XLM, similar to word2vec).\",\r\n )\r\n parser.add_argument(\"--token_counts\", \r\n type=str, \r\n default=r'scripts\\gpt2_token_counts.pickle' ,\r\n help=\"The token counts in the data_file for MLM.\")\r\n\r\n parser.add_argument(\r\n \"--restrict_ce_to_mask\",\r\n action=\"store_true\",\r\n help=\"If true, compute the distilation loss only the [MLM] prediction distribution.\",\r\n )\r\n parser.add_argument(\r\n \"--freeze_pos_embs\",\r\n action=\"store_true\",\r\n help=\"Freeze positional embeddings during distillation. For student_type in ['roberta', 'gpt2'] only.\",\r\n )\r\n parser.add_argument(\r\n \"--freeze_token_type_embds\",\r\n action=\"store_true\",\r\n help=\"Freeze token type embeddings during distillation if existent. For student_type in ['roberta'] only.\",\r\n )\r\n\r\n parser.add_argument(\"--n_epoch\", type=int, default=3, help=\"Number of pass on the whole dataset.\")\r\n parser.add_argument(\"--batch_size\", type=int, default=4, help=\"Batch size (for each process).\")\r\n parser.add_argument(\r\n \"--group_by_size\",\r\n action=\"store_false\",\r\n help=\"If true, group sequences that have similar length into the same batch. 
Default is true.\",\r\n )\r\n\r\n parser.add_argument(\r\n \"--gradient_accumulation_steps\",\r\n type=int,\r\n default=50,\r\n help=\"Gradient accumulation for larger training batches.\",\r\n )\r\n parser.add_argument(\"--warmup_prop\", default=0.05, type=float, help=\"Linear warmup proportion.\")\r\n parser.add_argument(\"--weight_decay\", default=0.0, type=float, help=\"Weight deay if we apply some.\")\r\n parser.add_argument(\"--learning_rate\", default=5e-4, type=float, help=\"The initial learning rate for Adam.\")\r\n parser.add_argument(\"--adam_epsilon\", default=1e-6, type=float, help=\"Epsilon for Adam optimizer.\")\r\n parser.add_argument(\"--max_grad_norm\", default=5.0, type=float, help=\"Max gradient norm.\")\r\n parser.add_argument(\"--initializer_range\", default=0.02, type=float, help=\"Random initialization range.\")\r\n\r\n parser.add_argument(\r\n \"--fp16\",\r\n action=\"store_true\",\r\n help=\"Whether to use 16-bit (mixed) precision (through NVIDIA apex) instead of 32-bit\",\r\n )\r\n parser.add_argument(\r\n \"--fp16_opt_level\",\r\n type=str,\r\n default=\"O1\",\r\n help=\"For fp16: Apex AMP optimization level selected in ['O0', 'O1', 'O2', and 'O3'].\"\r\n \"See details at https://nvidia.github.io/apex/amp.html\",\r\n )\r\n parser.add_argument(\"--n_gpu\", type=int, default=1, help=\"Number of GPUs in the node.\")\r\n parser.add_argument(\"--local_rank\", type=int, default=-1, help=\"Distributed training - Local rank\")\r\n parser.add_argument(\"--seed\", type=int, default=2020, help=\"Random seed\")\r\n\r\n parser.add_argument(\"--log_interval\", type=int, default=500, help=\"Tensorboard logging interval.\")\r\n parser.add_argument(\"--checkpoint_interval\", type=int, default=1500, help=\"Checkpoint interval.\")\r\n args = parser.parse_args([])\r\n sanity_checks(args)\r\n\r\n # ARGS #\r\n init_gpu_params(args)\r\n set_seed(args)\r\n if args.is_master:\r\n if os.path.exists(args.dump_path):\r\n if not args.force:\r\n raise ValueError(\r\n f\"Serialization dir {args.dump_path} already exists, but you have not precised wheter to overwrite it\"\r\n \"Use `--force` if you want to overwrite it\"\r\n )\r\n else:\r\n shutil.rmtree(args.dump_path)\r\n\r\n if not os.path.exists(args.dump_path):\r\n os.makedirs(args.dump_path)\r\n logger.info(f\"Experiment will be dumped and logged in {args.dump_path}\")\r\n\r\n # SAVE PARAMS #\r\n logger.info(f\"Param: {args}\")\r\n with open(os.path.join(args.dump_path, \"parameters.json\"), \"w\",encoding = 'utf-8') as f:\r\n json.dump(vars(args), f, indent=4)\r\n #git_log(args.dump_path)\r\n\r\n student_config_class, student_model_class, _ = MODEL_CLASSES[args.student_type]\r\n teacher_config_class, teacher_model_class, teacher_tokenizer_class = MODEL_CLASSES[args.teacher_type]\r\n\r\n # TOKENIZER #\r\n from transformers import BertTokenizer \r\n tokenizer = BertTokenizer(\r\n vocab_file = r\"scripts\\vocab.txt\",\r\n unk_token='<unk>',\r\n sep_token='<sep>',\r\n pad_token='<pad>',\r\n cls_token='</s>',\r\n mask_token='<mask>') \r\n \r\n special_tokens_dict = {\"bos_token\": \"<s>\", \"eos_token\": \"</s>\"}\r\n tokenizer.add_special_tokens(special_tokens_dict)\r\n special_tok_ids = {}\r\n for tok_name, tok_symbol in tokenizer.special_tokens_map.items():\r\n idx = tokenizer.all_special_tokens.index(tok_symbol)\r\n special_tok_ids[tok_name] = tokenizer.all_special_ids[idx]\r\n logger.info(f\"Special tokens {special_tok_ids}\")\r\n args.special_tok_ids = special_tok_ids\r\n args.max_model_input_size = 512\r\n\r\n # DATA LOADER #\r\n 
logger.info(f\"Loading data from {args.data_file}\")\r\n with open(args.data_file, \"rb\") as fp:\r\n data = pickle.load(fp)\r\n\r\n if args.mlm:\r\n logger.info(f\"Loading token counts from {args.token_counts} (already pre-computed)\")\r\n with open(args.token_counts, \"rb\") as fp:\r\n counts = pickle.load(fp)\r\n\r\n token_probs = np.maximum(counts, 1) ** -args.mlm_smoothing\r\n for idx in special_tok_ids.values():\r\n token_probs[idx] = 0.0 # do not predict special tokens\r\n token_probs = torch.from_numpy(token_probs)\r\n else:\r\n token_probs = None\r\n\r\n train_lm_seq_dataset = LmSeqsDataset(params=args, data=data)\r\n logger.info(f\"Data loader created.\")\r\n\r\n # STUDENT #\r\n logger.info(f\"Loading student config from {args.student_config}\")\r\n stu_architecture_config = student_config_class.from_pretrained(args.student_config)\r\n stu_architecture_config.output_hidden_states = True\r\n\r\n if args.student_pretrained_weights is not None:\r\n logger.info(f\"Loading pretrained weights from {args.student_pretrained_weights}\")\r\n student = student_model_class.from_pretrained(args.student_pretrained_weights, config=stu_architecture_config)\r\n else:\r\n student = student_model_class(stu_architecture_config)\r\n\r\n if args.n_gpu > 0:\r\n student.to(f\"cuda:{args.local_rank}\")\r\n logger.info(f\"Student loaded.\")\r\n\r\n # TEACHER #\r\n teacher = teacher_model_class.from_pretrained(args.teacher_name, output_hidden_states=True)\r\n teacher.resize_token_embeddings(len(tokenizer)) # Update the model embeddings with the new vocabulary size\r\n teacher.to('cuda')\r\n teacher.eval()\r\n if args.n_gpu > 0:\r\n teacher.to(f\"cuda:{args.local_rank}\")\r\n logger.info(f\"Teacher loaded from {args.teacher_name}.\")\r\n\r\n # FREEZING #\r\n if args.freeze_pos_embs:\r\n freeze_pos_embeddings(student, args)\r\n if args.freeze_token_type_embds:\r\n freeze_token_type_embeddings(student, args)\r\n\r\n # SANITY CHECKS #\r\n assert student.config.vocab_size == teacher.config.vocab_size\r\n assert student.config.hidden_size == teacher.config.hidden_size\r\n assert student.config.max_position_embeddings == teacher.config.max_position_embeddings\r\n if args.mlm:\r\n assert token_probs.size(0) == stu_architecture_config.vocab_size\r\n\r\n # DISTILLER #\r\n torch.cuda.empty_cache()\r\n distiller = Distiller(\r\n params=args, dataset=train_lm_seq_dataset, token_probs=token_probs, student=student, teacher=teacher\r\n )\r\n distiller.train()\r\n logger.info(\"Let's go get some drinks.\")\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n",
"Hi, not sure where to put this but there might be some small errors in the training code provided in this repo.\r\n1. `n_gpus` is used when initializing GPUs but the actual name of this variable is `gpus`\r\n2. It seems that now `return_dict` is default to `True`, so the function`step` fails because the results are unpacked to keys rather than values. The easiest fix I guess is to turn off `return_dict` in `train.py` like the following\r\n ```python\r\n student.config.update(dict(return_dict=False))\r\n teacher.config.update(dict(return_dict=False))\r\n ```",
"I followed the guide in README to train a distill gpt2, but the performance of my distilgpt2 is not good as huggingface, actually, the performance is bad. Did you trained a nice performance distilgpt2?"
] | 1,583 | 1,662 | 1,584 | NONE | null | Hi! Thanks for everything.
I'm interested in training a distilled version of gpt2, because I want it to be even smaller than the distilled gpt2 model.
In https://github.com/huggingface/transformers/tree/master/examples/distillation, there is a tutorial on how to obtain a distilled BERT. Could you give instructions on how to train a distilled GPT-2? A sketch of the core objective, as I understand it, is below.
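For reference, the heart of that recipe is a distillation objective that mixes a softened teacher/student cross-entropy with the usual causal LM loss. A minimal sketch with toy tensors (the temperature and loss weights are assumed values, not the official hyperparameters; `clm_loss` stands in for the ordinary next-token cross-entropy):

```python
import torch
import torch.nn.functional as F

# Toy stand-ins for one forward pass of student and teacher
# (hypothetical shapes: batch=2, seq_len=8, vocab=50257).
student_logits = torch.randn(2, 8, 50257)
teacher_logits = torch.randn(2, 8, 50257)
clm_loss = torch.tensor(3.2)  # placeholder for the usual language-modeling loss

T = 2.0                         # softmax temperature (assumed)
alpha_ce, alpha_clm = 0.5, 0.5  # loss weights (assumed)

# KL divergence between softened distributions, scaled by T^2 as is standard
loss_ce = F.kl_div(
    F.log_softmax(student_logits / T, dim=-1),
    F.softmax(teacher_logits / T, dim=-1),
    reduction="batchmean",
) * (T ** 2)
loss = alpha_ce * loss_ce + alpha_clm * clm_loss
```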
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3151/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3150 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3150/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3150/comments | https://api.github.com/repos/huggingface/transformers/issues/3150/events | https://github.com/huggingface/transformers/issues/3150 | 576,599,037 | MDU6SXNzdWU1NzY1OTkwMzc= | 3,150 | Padding changes model outputs (even with attention_mask) | {
"login": "thashim",
"id": 2808358,
"node_id": "MDQ6VXNlcjI4MDgzNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2808358?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thashim",
"html_url": "https://github.com/thashim",
"followers_url": "https://api.github.com/users/thashim/followers",
"following_url": "https://api.github.com/users/thashim/following{/other_user}",
"gists_url": "https://api.github.com/users/thashim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thashim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thashim/subscriptions",
"organizations_url": "https://api.github.com/users/thashim/orgs",
"repos_url": "https://api.github.com/users/thashim/repos",
"events_url": "https://api.github.com/users/thashim/events{/privacy}",
"received_events_url": "https://api.github.com/users/thashim/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | null | [] | [
"I tried to replicate the input as close as I could given the output you gave:\r\n\r\n```py\r\nfrom transformers import BertModel\r\nimport torch\r\n\r\ntoken_id = torch.tensor([[ 101, 5292, 3270, 102, 8638, 2060, 102]])\r\ntoken_types = torch.tensor([[0, 0, 0, 0, 1, 1, 1]])\r\nmask = torch.tensor([[1., 1., 1., 1., 1., 1., 1.]])\r\n\r\npadded_id = torch.tensor([[ 101, 5292, 3270, 102, 8638, 2060, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])\r\npadded_type = torch.tensor([[0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])\r\npadded_mask = torch.tensor([[1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])\r\n\r\nbertModel = BertModel.from_pretrained('bert-base-uncased')\r\nbertModel = bertModel.eval()\r\n\r\noutput = bertModel(token_id, token_type_ids=token_types, attention_mask=mask)\r\npadded_output = bertModel(padded_id, token_type_ids=padded_type, attention_mask=padded_mask)\r\n```\r\n\r\nI then print the maximum difference between the output of the model with the non-padded input and the output with the padded input:\r\n\r\n```py\r\nprint(torch.max(output[0] - padded_output[0][:, :7]))\r\nprint(torch.max(output[1] - padded_output[1]))\r\n```\r\n\r\nWhich outputs the following (negligible) difference:\r\n\r\n```py\r\ntensor(3.6359e-06, grad_fn=<MaxBackward1>)\r\ntensor(5.9605e-07, grad_fn=<MaxBackward1>)\r\n```\r\n\r\nWould it be possible for you to give a completely reproducible script so that I may see where the issue lies?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,589 | 1,589 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Consider the following example code, which computes a BERT embedding and then compares it with the embedding of the same input after padding + masking.
```
from transformers import BertModel
import torch

# inputs reconstructed from the printed output below
# (note: `id` shadows the Python built-in; kept to match the prints)
id = torch.tensor([[101, 5292, 3270, 102, 8638, 2060, 102]])
typeout = torch.tensor([[0, 0, 0, 0, 1, 1, 1]])
mask = torch.ones(1, 7)

bertModel = BertModel.from_pretrained('bert-base-uncased')
bertModel = bertModel.eval()
print('token id:' + str(id))
print('token types:' + str(typeout))
bert_out = torch.mean(bertModel(id, token_type_ids=typeout, attention_mask=mask)[0], 1)
add_pad = lambda x: torch.cat((x, torch.zeros(1, 10, dtype=x.dtype)), 1)
print('mask:' + str(mask))
print('padded id:' + str(add_pad(id)))
print('padded type:' + str(add_pad(typeout)))
print('padded mask:' + str(add_pad(mask)))
bert_out_2 = torch.mean(bertModel(add_pad(id), token_type_ids=add_pad(typeout), attention_mask=add_pad(mask))[0], 1)
print(bert_out[0, 0:10])
print(bert_out_2[0, 0:10])
```
The output here is
```
token id:tensor([[ 101, 5292, 3270, 102, 8638, 2060, 102]])
token types:tensor([[0, 0, 0, 0, 1, 1, 1]])
mask:tensor([[1., 1., 1., 1., 1., 1., 1.]])
padded id:tensor([[ 101, 5292, 3270, 102, 8638, 2060, 102, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0]])
padded type:tensor([[0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
padded mask:tensor([[1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
tensor([ 0.2600, 0.1568, -0.2576, -0.0110, 0.1021, -0.1336, 0.2546, -0.5071,
0.0462, -0.2283], grad_fn=<SliceBackward>)
tensor([ 0.0996, 0.0061, -0.3331, -0.0237, -0.1110, -0.0050, 0.2755, -0.3335,
-0.0565, -0.2542], grad_fn=<SliceBackward>)
```
The two tensors, generated from the same input but with different padding, have drastically different embeddings.
## Expected behavior
The last two lines of the output should be the same
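Part of the gap likely comes from the pooling itself: `torch.mean(..., 1)` averages over the padded positions too, and their hidden states are not zero even when `attention_mask` excludes them from attention. A minimal sketch of a masked mean that pools only the real tokens (a hypothetical helper, not a library function):

```python
import torch

def masked_mean(hidden_states, attention_mask):
    # hidden_states: (batch, seq_len, hidden); attention_mask: (batch, seq_len) of 0./1.
    m = attention_mask.unsqueeze(-1)             # (batch, seq_len, 1)
    summed = (hidden_states * m).sum(dim=1)      # padded positions contribute zero
    return summed / m.sum(dim=1).clamp(min=1.0)  # divide by the number of real tokens
```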
## Environment info
- `transformers` version: 2.1.1
- Platform: osx
- Python version: 3.7.3
- PyTorch version (GPU?): 1.2.0
- Tensorflow version (GPU?): N/A
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3150/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3149 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3149/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3149/comments | https://api.github.com/repos/huggingface/transformers/issues/3149/events | https://github.com/huggingface/transformers/pull/3149 | 576,595,391 | MDExOlB1bGxSZXF1ZXN0Mzg0NTUzNTc5 | 3,149 | fix missed BartForMaskedLM renaming | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3149?src=pr&el=h1) Report\n> Merging [#3149](https://codecov.io/gh/huggingface/transformers/pull/3149?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/857e0a0d3ba39be6259961524a730d3f106cec9c?src=pr&el=desc) will **decrease** coverage by `0.05%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3149?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3149 +/- ##\n==========================================\n- Coverage 78.03% 77.97% -0.06% \n==========================================\n Files 98 98 \n Lines 16588 16588 \n==========================================\n- Hits 12944 12935 -9 \n- Misses 3644 3653 +9\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3149?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3149/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.08% <0%> (-2.13%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3149/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.56% <0%> (+0.15%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3149?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3149?src=pr&el=footer). Last update [857e0a0...58fc8f9](https://codecov.io/gh/huggingface/transformers/pull/3149?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks!"
] | 1,583 | 1,583 | 1,583 | MEMBER | null | Quick fix @sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3149/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3149",
"html_url": "https://github.com/huggingface/transformers/pull/3149",
"diff_url": "https://github.com/huggingface/transformers/pull/3149.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3149.patch",
"merged_at": 1583451908000
} |
https://api.github.com/repos/huggingface/transformers/issues/3148 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3148/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3148/comments | https://api.github.com/repos/huggingface/transformers/issues/3148/events | https://github.com/huggingface/transformers/pull/3148 | 576,582,792 | MDExOlB1bGxSZXF1ZXN0Mzg0NTQzMTQy | 3,148 | refactored beam search according to torch implementation | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"good to merge for me",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3148?src=pr&el=h1) Report\n> Merging [#3148](https://codecov.io/gh/huggingface/transformers/pull/3148?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0001d056861bb1ec7bd6a825006f578629a101fc?src=pr&el=desc) will **decrease** coverage by `1.04%`.\n> The diff coverage is `95.83%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3148?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3148 +/- ##\n==========================================\n- Coverage 78.03% 76.98% -1.05% \n==========================================\n Files 98 98 \n Lines 16573 16583 +10 \n==========================================\n- Hits 12932 12766 -166 \n- Misses 3641 3817 +176\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3148?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3148/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.47% <95.83%> (-1.95%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3148/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3148/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3148/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3148/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96% <0%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3148/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3148/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.56% <0%> (-0.16%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3148?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3148?src=pr&el=footer). Last update [0001d05...2861c9d](https://codecov.io/gh/huggingface/transformers/pull/3148?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,583 | 1,583 | MEMBER | null | 1:1 translation of PR #3135 from PyTorch to TF. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3148/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3148",
"html_url": "https://github.com/huggingface/transformers/pull/3148",
"diff_url": "https://github.com/huggingface/transformers/pull/3148.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3148.patch",
"merged_at": 1583528507000
} |
https://api.github.com/repos/huggingface/transformers/issues/3147 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3147/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3147/comments | https://api.github.com/repos/huggingface/transformers/issues/3147/events | https://github.com/huggingface/transformers/pull/3147 | 576,534,941 | MDExOlB1bGxSZXF1ZXN0Mzg0NTAzOTIw | 3,147 | Pass kwargs to configuration | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3147?src=pr&el=h1) Report\n> Merging [#3147](https://codecov.io/gh/huggingface/transformers/pull/3147?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7ac47bfe69f25fc7381be65870b2f4e5cdb8cb6a?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3147?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3147 +/- ##\n==========================================\n+ Coverage 78% 78.01% +<.01% \n==========================================\n Files 98 98 \n Lines 16561 16569 +8 \n==========================================\n+ Hits 12919 12926 +7 \n- Misses 3642 3643 +1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3147?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.72% <100%> (+0.23%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.41% <0%> (-0.22%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3147?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3147?src=pr&el=footer). Last update [7ac47bf...1159cff](https://codecov.io/gh/huggingface/transformers/pull/3147?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,585 | 1,583 | MEMBER | null | `**kwargs` were not passed to the pretrained configuration when using `from_pretrained`
closes #3093 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3147/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3147",
"html_url": "https://github.com/huggingface/transformers/pull/3147",
"diff_url": "https://github.com/huggingface/transformers/pull/3147.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3147.patch",
"merged_at": 1583446617000
} |
https://api.github.com/repos/huggingface/transformers/issues/3146 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3146/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3146/comments | https://api.github.com/repos/huggingface/transformers/issues/3146/events | https://github.com/huggingface/transformers/pull/3146 | 576,453,011 | MDExOlB1bGxSZXF1ZXN0Mzg0NDM2NDQ5 | 3,146 | Create README.md for mrm8488/bert-multi-uncased-finetuned-xquadv1 | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3146/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3146",
"html_url": "https://github.com/huggingface/transformers/pull/3146",
"diff_url": "https://github.com/huggingface/transformers/pull/3146.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3146.patch",
"merged_at": 1583533221000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3145 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3145/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3145/comments | https://api.github.com/repos/huggingface/transformers/issues/3145/events | https://github.com/huggingface/transformers/pull/3145 | 576,433,210 | MDExOlB1bGxSZXF1ZXN0Mzg0NDIwNzI4 | 3,145 | [Bart] FP16 Support | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3145?src=pr&el=h1) Report\n> Merging [#3145](https://codecov.io/gh/huggingface/transformers/pull/3145?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7ac47bfe69f25fc7381be65870b2f4e5cdb8cb6a?src=pr&el=desc) will **decrease** coverage by `0.06%`.\n> The diff coverage is `66.66%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3145?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3145 +/- ##\n==========================================\n- Coverage 78% 77.94% -0.07% \n==========================================\n Files 98 98 \n Lines 16561 16560 -1 \n==========================================\n- Hits 12919 12907 -12 \n- Misses 3642 3653 +11\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3145?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3145/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.37% <66.66%> (-0.02%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3145/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3145/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3145/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.29% <0%> (-2.34%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3145/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3145/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.22% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3145/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0%> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3145?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3145?src=pr&el=footer). Last update [7ac47bf...1360dac](https://codecov.io/gh/huggingface/transformers/pull/3145?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3145/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3145",
"html_url": "https://github.com/huggingface/transformers/pull/3145",
"diff_url": "https://github.com/huggingface/transformers/pull/3145.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3145.patch",
"merged_at": 1583442876000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3144 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3144/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3144/comments | https://api.github.com/repos/huggingface/transformers/issues/3144/events | https://github.com/huggingface/transformers/issues/3144 | 576,428,433 | MDU6SXNzdWU1NzY0Mjg0MzM= | 3,144 | [Question]: Why does model.__call__ return the loss too? | {
"login": "dclong",
"id": 824507,
"node_id": "MDQ6VXNlcjgyNDUwNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/824507?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dclong",
"html_url": "https://github.com/dclong",
"followers_url": "https://api.github.com/users/dclong/followers",
"following_url": "https://api.github.com/users/dclong/following{/other_user}",
"gists_url": "https://api.github.com/users/dclong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dclong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dclong/subscriptions",
"organizations_url": "https://api.github.com/users/dclong/orgs",
"repos_url": "https://api.github.com/users/dclong/repos",
"events_url": "https://api.github.com/users/dclong/events{/privacy}",
"received_events_url": "https://api.github.com/users/dclong/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,589 | 1,589 | NONE | null | # ❓ Questions & Help
In PyTorch, `model.__call__` returns the output tensor and users have to call a loss function to get the loss. I wonder why models in transformers don't follow this convention. Is there a specific reason?
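For illustration, here are the two conventions side by side (a minimal, self-contained sketch; the toy linear layer and the randomly initialized BERT are placeholders, not anyone's recommended usage):

```python
import torch
import torch.nn.functional as F
from transformers import BertConfig, BertForSequenceClassification

# Plain PyTorch convention: the module returns logits; the loss is computed separately.
plain = torch.nn.Linear(8, 2)
x, labels = torch.randn(4, 8), torch.tensor([0, 1, 0, 1])
loss_plain = F.cross_entropy(plain(x), labels)

# transformers convention: passing `labels` makes the forward call return the loss too.
model = BertForSequenceClassification(BertConfig())
input_ids = torch.randint(0, 100, (4, 16))
loss_hf, logits = model(input_ids, labels=labels)[:2]
```
| {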
"url": "https://api.github.com/repos/huggingface/transformers/issues/3144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3144/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3143 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3143/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3143/comments | https://api.github.com/repos/huggingface/transformers/issues/3143/events | https://github.com/huggingface/transformers/pull/3143 | 576,397,594 | MDExOlB1bGxSZXF1ZXN0Mzg0MzkyMDYx | 3,143 | Correct missing keys | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Very good catch"
] | 1,583 | 1,583 | 1,583 | MEMBER | null | closes #3142
The problem came from the fact that `from_pretrained` set the model on which to load the weights to the base model: it detected that the state dict was made for the base, therefore loading only onto the base.
It didn't look for the weights it didn't load. Here, both state dicts are analyzed, and the difference (keys present in the state dict of the model with the head but absent from the base state dict) is added to the missing keys. A rough sketch of this comparison is below.
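Roughly, the comparison amounts to the following (a simplified sketch of the idea, not the exact library code; `"bert."` is the base-model prefix under which the head model stores the base weights):

```python
from transformers import BertConfig, BertForSequenceClassification, BertModel

config = BertConfig()
base_state = BertModel(config).state_dict()         # what a base-only checkpoint contains
head_model = BertForSequenceClassification(config)  # what we are loading into

expected = set(head_model.state_dict())
provided = {"bert." + k for k in base_state}        # base keys live under the prefix
print(sorted(expected - provided))                  # ['classifier.bias', 'classifier.weight']
```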
Added a test. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3143/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3143",
"html_url": "https://github.com/huggingface/transformers/pull/3143",
"diff_url": "https://github.com/huggingface/transformers/pull/3143.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3143.patch",
"merged_at": 1583445715000
} |
https://api.github.com/repos/huggingface/transformers/issues/3142 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3142/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3142/comments | https://api.github.com/repos/huggingface/transformers/issues/3142/events | https://github.com/huggingface/transformers/issues/3142 | 576,357,760 | MDU6SXNzdWU1NzYzNTc3NjA= | 3,142 | Missing `missing_keys` when loading from saved base model checkpoint | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,583 | 1,583 | 1,583 | MEMBER | null | # 🐛 Bug
## Information
If a base model (e.g. `BertModel`, `DistilBertModel`, ...) is saved using `save_pretrained` and a model with an additional head (e.g. `BertForSequenceClassification`, `DistilBertForQuestionAnswering`, ...) is loaded from that checkpoint, it will not detect that it is missing layers.
## To reproduce
Steps to reproduce the behavior:
1. Instantiate base model from configuration or from `from_pretrained`
2. Save model using `save_pretrained`
3. Load checkpoint in model with head
4. No warning is output. Furthermore, if `output_loading_info=True` in step 3), will output `{'missing_keys': [], 'unexpected_keys': [], 'error_msgs': []}`
Here's a reproducible example:
```py
from transformers import BertForSequenceClassification, BertModel, BertConfig
config = BertConfig()
base_model = BertModel(config)
base_model.save_pretrained(directory)
model, loading_info = BertForSequenceClassification.from_pretrained(directory, output_loading_info=True)
print(loading_info)
# {'missing_keys': [], 'unexpected_keys': [], 'error_msgs': []}
# Should output {'missing_keys': ['classifier.weight', 'classifier.bias'], 'unexpected_keys': [], 'error_msgs': []}
```
## Expected behavior
Should detect the missing keys, as it does when loading from a full checkpoint:
```py
from transformers import BertForSequenceClassification
model, loading_info = BertForSequenceClassification.from_pretrained("bert-base-cased", output_loading_info=True)
print(loading_info)
# {'missing_keys': ['classifier.weight', 'classifier.bias'], 'unexpected_keys': ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias'], 'error_msgs': []}
```
## Environment info
- `transformers` version: master branch
- Platform: Linux-5.5.7-arch1-1-x86_64-with-arch
- Python version: 3.6.10
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3142/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3141 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3141/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3141/comments | https://api.github.com/repos/huggingface/transformers/issues/3141/events | https://github.com/huggingface/transformers/issues/3141 | 576,335,370 | MDU6SXNzdWU1NzYzMzUzNzA= | 3,141 | GPU memory getting out of bound | {
"login": "mainulquraishi",
"id": 14335238,
"node_id": "MDQ6VXNlcjE0MzM1MjM4",
"avatar_url": "https://avatars.githubusercontent.com/u/14335238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mainulquraishi",
"html_url": "https://github.com/mainulquraishi",
"followers_url": "https://api.github.com/users/mainulquraishi/followers",
"following_url": "https://api.github.com/users/mainulquraishi/following{/other_user}",
"gists_url": "https://api.github.com/users/mainulquraishi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mainulquraishi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mainulquraishi/subscriptions",
"organizations_url": "https://api.github.com/users/mainulquraishi/orgs",
"repos_url": "https://api.github.com/users/mainulquraishi/repos",
"events_url": "https://api.github.com/users/mainulquraishi/events{/privacy}",
"received_events_url": "https://api.github.com/users/mainulquraishi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | null | [] | [
"Hi, could you provide a reproducible example so that we may test on our side?",
"Thank you for your reply. \r\nHere is the code and my 32GB GPU memory getting out of bound before 500 iteration. \r\n```\r\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer\r\nimport torch\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\nmodel = GPT2LMHeadModel.from_pretrained('gpt2')\r\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\r\nmodel = model.to(device)\r\nmodel.eval()\r\n\r\ntext=\"The Manhattan Bridge is a suspension bridge that crosses the East River in New York City, connecting Lower Manhattan at Canal Street with Downtown Brooklyn at the Flatbush Avenue Extension. The main span is 1,470 ft (448 m) long, with the suspension cables being 3,224 ft (983 m) long. The bridge's total length is 6,855 ft (2,089 m). It is one of four toll-free vehicular bridges connecting Manhattan Island to Long Island; the nearby Brooklyn Bridge is just slightly further downtown, while the Queensboro and Williamsburg Bridges are to the north.\"\r\n\r\ngenerated1= tokenizer.encode(text)\r\ngenerated2=tokenizer.encode(text)\r\ncontext = torch.tensor([generated1,generated2])\r\ncontext =context.to(device)\r\nprint(context.shape)\r\npast = None\r\n\r\nfor i in range(500):\r\n before=torch.cuda.max_memory_allocated(device=device)\r\n output, past = model(context, past=past)\r\n after=torch.cuda.max_memory_allocated(device=device)\r\n print(after-before)\r\n token = torch.argmax(output[..., -1, :],dim=1)\r\n \r\n context = token.view(2,-1)\r\n\r\n```\r\n\r\nIf I use a small initial context, this can survive. But problem happens when I use a long initial context. Please try with a small initial context and you will see difference in memory allocation in each iteration. \r\n \r\n",
"I guess this is because the past requires a lot of memory to be saved. It speeds up the sequential decoding but requires a lot of memory. Your script crashes for me at iteration 483, but a script that doesn't make use of the past can reach the maximum length of 1024 tokens on my 24GB of VRAM.\r\n\r\nDropping the past when it becomes too large may be a good idea, same as you would do if it were to go over the max sequence length.",
"Hi, Thanks for the reply. \r\nBy \"script that does not make use of past\", you mean in each iteration the input is (previous context+ generated token id)? \r\n\r\nI did the following code. For batch size=8, it does work. No memory out of bound error. But for batch size=16, the error comes back. \r\n\r\n```\r\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer\r\nimport torch\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\nmodel = GPT2LMHeadModel.from_pretrained('gpt2')\r\n\r\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\r\nn_gpu = torch.cuda.device_count()\r\ntorch.cuda.get_device_name()\r\n\r\nmodel = model.to(device)\r\nmodel.eval() \r\n\r\ntext=\"Construction began on the bridge in 1901 under the instruction of the New York City Department of Bridges commissioner Gustav Lindenthal and the chief engineer R.S. Buck. Just three years later, however, local politicking was responsible for the pair being replaced with George E. Best and Othniel Foster Nichols, respectively. The bridge design was based on deflection theory, a new concept at the time that was developed by Joseph Melan and applied to the bridge by the chief engineer Leon Moisseiff. This design saved in cost, material, and construction time. The bridge was officially opened to traffic on Dec. 31, 1909. Renovations in 1940 revealed significant wear on the structure, with the subway trains partly responsible for the wear. Those trains, upon entering the bridge at the same time from opposite sides, would cause the bridge to shift up to 8 feet (approximately 2.5 metres). Additional renovations were undertaken in 1978. Since then the Manhattan Bridge has been featured in movies, has undergone regular repairs and retrofitting, and remains one of the most graceful bridges in New York City.\"\r\ngenerated1= tokenizer.encode(text)\r\ngenerated2=tokenizer.encode(text)\r\ngenerated3= tokenizer.encode(text)\r\ngenerated4=tokenizer.encode(text)\r\ngenerated5= tokenizer.encode(text)\r\ngenerated6=tokenizer.encode(text)\r\ngenerated7= tokenizer.encode(text)\r\ngenerated8=tokenizer.encode(text)\r\n# generated9= tokenizer.encode(text)\r\n# generated10=tokenizer.encode(text)\r\n# generated11= tokenizer.encode(text)\r\n# generated12=tokenizer.encode(text)\r\n# generated13= tokenizer.encode(text)\r\n# generated14=tokenizer.encode(text)\r\n# generated15= tokenizer.encode(text)\r\n# generated16=tokenizer.encode(text)\r\n\r\ncontext=torch.tensor([generated1,generated2,generated3,generated4,generated5,generated6,generated7,generated8])\r\n# context =generated\r\n# generated =generated.to(device)\r\ncontext =context.to(device)\r\nprint(context.shape)\r\n\r\nimport time\r\nbatch_size=8\r\nstart_time = time.time()\r\nfor i in range(500): \r\n output, past = model(context)\r\n new_tokens = torch.argmax(output[..., -1, :],dim=1)\r\n new_tokens = new_tokens.view(batch_size,-1)\r\n context=torch.cat([context,new_tokens],dim=1)\r\nelapsed_time = time.time() - start_time\r\nprint(\"time\")\r\nprint(elapsed_time)\r\n\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> \r\n\r\nWhat did you mean by dropping the past? Any example?"
] | 1,583 | 1,667 | 1,589 | NONE | null | I am trying to run the pre-trained small GPT-2 model with a language modeling head with batch size 16. The problem is that after each iteration about 440MB of memory is allocated, and the GPU quickly runs out of memory. I am not running the pre-trained model in training mode.
In my understanding, in each iteration a single token per sequence (16 tokens for batch size 16) goes in as input (from the second iteration onward), the new attention is computed, and the `past` variable is updated, growing by 16 tokens. So some memory growth is expected, but I don't understand why it is almost half a GB per iteration. I ran the following code to measure the memory usage in each iteration:
```
# model, device, past, and the input batch b_train_contexts are set up beforehand
before = torch.cuda.max_memory_allocated(device=device)
output, past = model(b_train_contexts, past=past)
print("memory usage")
after = torch.cuda.max_memory_allocated(device=device)
print(after - before)
```
Output:
```
memory
0
memory
270742528
memory
442328576
memory
443433472
memory
444525056
memory
445629952
memory
446721536
memory
447826432
memory
448918016
.
.
.
```
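A minimal sketch of the "drop the past" idea suggested in the comments above: re-feed a truncated context instead of accumulating `past`. Note that `max_context` is an assumed cap, not a library parameter:
```
# sketch: discard `past` and cap the context so memory stays bounded
max_context = 512  # assumed cap, tune to the GPU

for i in range(500):
    context = context[:, -max_context:]
    output, _ = model(context)  # past is recomputed each step, not accumulated
    token = torch.argmax(output[..., -1, :], dim=1)
    context = torch.cat([context, token.view(context.size(0), -1)], dim=1)
```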
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3141/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3140 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3140/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3140/comments | https://api.github.com/repos/huggingface/transformers/issues/3140/events | https://github.com/huggingface/transformers/pull/3140 | 576,327,375 | MDExOlB1bGxSZXF1ZXN0Mzg0MzM1MzEz | 3,140 | Merge bart generate into default generate | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"clarification:\r\n\r\n`bart.generate` doesn't add EOS if `max_length` is hit, or require EOS to pass integration tests. It just \"finalizes\" a hypothesis when the model predicts EOS for a beam.\r\n\r\nExample:\r\n\r\n```python\r\nARTICLE_TO_SUMMARIZE = \"code doesnt generate EOS if max_length is hit\"\r\ninputs = tokenizer.batch_encode_plus([ARTICLE_TO_SUMMARIZE], return_tensors='pt')\r\n\r\ngenerated_ids = model.generate(inputs['input_ids'], attention_mask=inputs['attention_mask'], num_beams=4, max_length=5)\r\nsummary_text = tokenizer.decode(generated_ids[0])\r\nprint(generated_ids[0], summary_text)\r\n# (tensor([ 0, 2387, 964, 32, 3035]), '<s>My friends are cool')\r\n```\r\n",
"> I would thus like to propose the following workflow for the forward pass of all models:\r\n> [...]\r\n> What do you think especially @LysandreJik and @julien-c\r\n\r\nSounds good to me",
"By the way, my workflow proposition actually implied that we should use the same workflow and inputs for the `generate()` method as well (I could have been more explicit)",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3140?src=pr&el=h1) Report\n> Merging [#3140](https://codecov.io/gh/huggingface/transformers/pull/3140?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d6de6423baf02a971d38ee69824104a1f0f85ad2?src=pr&el=desc) will **decrease** coverage by `0.15%`.\n> The diff coverage is `74.71%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3140?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3140 +/- ##\n==========================================\n- Coverage 78.14% 77.99% -0.16% \n==========================================\n Files 98 98 \n Lines 16668 16665 -3 \n==========================================\n- Hits 13026 12998 -28 \n- Misses 3642 3667 +25\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3140?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3140/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `100% <ø> (ø)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3140/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.82% <100%> (+0.07%)` | :arrow_up: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3140/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.27% <100%> (+2.69%)` | :arrow_up: |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3140/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.55% <100%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3140/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.47% <46.96%> (-6.27%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3140/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.84% <90.32%> (-0.57%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3140?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3140?src=pr&el=footer). Last update [d6de642...bc9d5d9](https://codecov.io/gh/huggingface/transformers/pull/3140?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"UPDATE: This version passes all integration tests now. There are two things which I quite hacky we probably should implement in a cleaner way:\r\n\r\n- the function `prepare_scores_for_generation` is a hacky function to make Bart pass the integration tests. See in code-line comment\r\n- initializing all encoder-decoder models with the EOS token is probably not the right thing to do. See in code-line comment.\r\n\r\n@sshleifer @thomwolf @LysandreJik ",
"@thomwolf, @sshleifer Thanks for your reviews guys. I think we are on the same page except for two things:\r\n\r\n1. Force tokens to be generated. The BOS token is generated at `step=0` and the EOS token is generated at `step=max_length`. This is necessary to reproduce the original fairseq results. Should we keep that as a model specific \"prepare\" function?\r\n2. Use the EOS token for the Bart models. This is also necessary to reproduce the original fairseq results. Should we keep that?\r\n\r\nMy opinion is: \r\n\r\n1. I would remove both statements that force a certain token to be predicted. Even without doing it, Bart produces (in my opinion) good summarization results. \r\n2. Same goes for this point. Would always use BOS token as the starting `decoder_input_ids` for encoder-decoder models and force encoder-decoder models to have a BOS token to be able to do language generation. \r\n\r\nDoing this would means, we would have to change the integration tests and we won't produce 1:1 the same results as fairseq anymore.\r\n\r\n\r\n@sshleifer you can probably estimate the consequences for the Bart summarization quality much better than me! From the examples in the `test_modeling_bart.py`, I ran the summarization and the output looked good to me, but didn't measure ROUGE scores or anything...\r\n\r\nWhat do you guys think? ",
"I ran eval on the full CNN test set with these simplifications, and Rouge decreases from .21072 to .20285.\r\n\r\n\r\nFor context, here are the published Rouge-2 scores of a bunch of different models: \r\n\r\n\r\n\r\nNote: the published bart score is a bit higher (.2128) because there are even [more tricks](https://github.com/pytorch/fairseq/issues/1765#issuecomment-593720522) I didn't implement.\r\n",
"> core is a bit higher (.2128) because the\r\n\r\nAwesome! Thanks a lot. I'm not super familiar with Rouge - is that drop in performance very significant? @sshleifer @thomwolf ",
"> I ran eval on the full CNN test set with these simplifications, and Rouge decreases from .21072 to .20285.\r\n> \r\n> For context, here are the published Rouge-2 scores of a bunch of different models:\r\n> \r\n> \r\n> \r\n> Note: the published bart score is a bit higher (.2128) because there are even [more tricks](https://github.com/pytorch/fairseq/issues/1765#issuecomment-593720522) I didn't implement.\r\n\r\n@sshleifer what is exactly the trick you mentioned you didn't implemented? to \"force the second token to not be bos\"?\r\n\r\nOverall I think I'm fine with have a clean distinction between post filtering method that will be optionally called for a model and will store all these post filtering tricks in specific models and a generic `generate()` that (for now) will be cleaner. The weirdest trick to me is to initialize with the EOS token (can we maybe use two times the BOS token here for instance?), the other tricks are less shocking.",
"Ok let's merge this to be able to move forward with T5 cc @craffel\r\n@julien-c @LysandreJik we think the self-hosted failing tests are not related to this.\r\nCan you have a look later maybe?",
"@patrickvonplaten You mentioned a way to perform batch inference with `GPT2LMHeadModel` using an attention mask here: https://github.com/huggingface/transformers/issues/3021#issuecomment-591418233.\r\n\r\nDoes this PR make this possible by calling `model.generate(input_ids=..., attention_mask=...)`?",
"> @patrickvonplaten You mentioned a way to perform batch inference with `GPT2LMHeadModel` using an attention mask here: [#3021 (comment)](https://github.com/huggingface/transformers/issues/3021#issuecomment-591418233).\r\n> \r\n> Does this PR make this possible by calling `model.generate(input_ids=..., attention_mask=...)`?\r\n\r\nHi @thesamuel, \r\n\r\nNot yet completely. It's one step to make the generation possible as shown in #3021, but there are still two things that are not considered yet in the generate fn:\r\n1. The position embeddings have to be updated - which generate() does not do yet\r\n2. And this is the hard one: If a padded batch is given as an input, it should not be sampled from the last token, but from the last non-padded token and this can be quite hacky. \r\n\r\nWe are currently thinking about how to implement this!",
"@patrickvonplaten Got it, thanks!",
"(Investigating) This PR may introduce a BOS bug that reduces rouge to 15.068 from 21.28",
"Simple bug caused by `do_sample` (which for some reason defaults to True). Anyway, I'm rerunning rouge but it will likely be at a reasonable level.",
"Possible follow-up states are explained in PR: https://github.com/huggingface/transformers/pull/3225"
] | 1,583 | 1,584 | 1,583 | MEMBER | null | This PR is a first version of how the bart generation() code can be merged into the "default" generation function. I think it's actually much more feasible than we originally thought.
Please note that this change also includes all the changes from PR #3135, so the diff will be much less cluttered after #3135 is merged.
This version passes the general random language generation tests found in `test_modeling_common.py` and the easy integration test with the original fairseq model (renamed to `test_cnn_summarization_same_as_fairseq_easy` in `test_modeling_bart`).
There are a couple of things we should discuss:
1. In both the Bart generate() and the default generate(), encoder-decoder models **must** have a BOS token and an EOS token.
2. Two new parameters were added: `min_length` and `no_repeat_ngram_size`. I think these parameters should be added generally, as is done now.
3. There was one hack which initializes the `decoder_input_ids` to the EOS token and then forces the model to generate the BOS token afterwards (see the in-code comment). I changed it to simply start with the BOS token (which makes more sense) and it also passed the "easy integration tests". This hack might be needed to pass the hard integration test though.
4. Fairseq forces the last token of all beam hypotheses to be the EOS token (see the in-code comment). This is probably necessary to pass the integration tests. It's up for debate whether this is the correct way. I would prefer not to do it this way because it would then be impossible to generate unfinished sentences (sentences that end because they hit `max_length`). If one really wants all beam hypotheses to be finished, one could set `max_length` higher than usual and set the parameter
`self.early_stopping` in the Beam Hypotheses class to `True`. Up for debate how to handle this.
5. In order to also pass the hard integration tests (which have a padded batch as input), we will have to add `attention_masks` to the `generate()` function. Here I see three possibilities:
a) add the `attention_mask` as a parameter to the generation() function.
b) automatically calculate the `attention_mask` from the `input_ids` **if** the model has a `pad_token_id` **and** there is a `pad_token_id` in the input_ids.
c) Not allow padded batches for the moment.
I would prefer option b) because some models do have a set `pad_token_id` (such as Bart), so we should be able to allow padded generation. A rough sketch of option b) follows below.
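A minimal sketch of what option b) could look like inside generate(); this is just the idea, not the final implementation:
```python
# sketch for option b): derive the attention mask from the inputs when possible
if attention_mask is None:
    if self.config.pad_token_id is not None and (input_ids == self.config.pad_token_id).any():
        attention_mask = (input_ids != self.config.pad_token_id).long()
    else:
        attention_mask = torch.ones_like(input_ids)
```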
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3140/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3140",
"html_url": "https://github.com/huggingface/transformers/pull/3140",
"diff_url": "https://github.com/huggingface/transformers/pull/3140.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3140.patch",
"merged_at": 1583929314000
} |
https://api.github.com/repos/huggingface/transformers/issues/3139 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3139/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3139/comments | https://api.github.com/repos/huggingface/transformers/issues/3139/events | https://github.com/huggingface/transformers/pull/3139 | 576,315,567 | MDExOlB1bGxSZXF1ZXN0Mzg0MzI1ODQx | 3,139 | Remove excess line breaks in DeepPavlov model cards | {
"login": "yoptar",
"id": 5615053,
"node_id": "MDQ6VXNlcjU2MTUwNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5615053?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yoptar",
"html_url": "https://github.com/yoptar",
"followers_url": "https://api.github.com/users/yoptar/followers",
"following_url": "https://api.github.com/users/yoptar/following{/other_user}",
"gists_url": "https://api.github.com/users/yoptar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yoptar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yoptar/subscriptions",
"organizations_url": "https://api.github.com/users/yoptar/orgs",
"repos_url": "https://api.github.com/users/yoptar/repos",
"events_url": "https://api.github.com/users/yoptar/events{/privacy}",
"received_events_url": "https://api.github.com/users/yoptar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3139?src=pr&el=h1) Report\n> Merging [#3139](https://codecov.io/gh/huggingface/transformers/pull/3139?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8a2d9bc9ef38452e80ce872505a5ad5623c12657?src=pr&el=desc) will **decrease** coverage by `0.51%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3139?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3139 +/- ##\n==========================================\n- Coverage 78.45% 77.94% -0.52% \n==========================================\n Files 98 98 \n Lines 16561 16561 \n==========================================\n- Hits 12993 12908 -85 \n- Misses 3568 3653 +85\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3139?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/3139/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.89% <0%> (-27.6%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3139?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3139?src=pr&el=footer). Last update [8a2d9bc...129d1ab](https://codecov.io/gh/huggingface/transformers/pull/3139?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3139/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3139",
"html_url": "https://github.com/huggingface/transformers/pull/3139",
"diff_url": "https://github.com/huggingface/transformers/pull/3139.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3139.patch",
"merged_at": 1583533177000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3138 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3138/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3138/comments | https://api.github.com/repos/huggingface/transformers/issues/3138/events | https://github.com/huggingface/transformers/pull/3138 | 576,287,423 | MDExOlB1bGxSZXF1ZXN0Mzg0MzAyOTc3 | 3,138 | Add model cards for DeepPavlov models | {
"login": "yoptar",
"id": 5615053,
"node_id": "MDQ6VXNlcjU2MTUwNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5615053?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yoptar",
"html_url": "https://github.com/yoptar",
"followers_url": "https://api.github.com/users/yoptar/followers",
"following_url": "https://api.github.com/users/yoptar/following{/other_user}",
"gists_url": "https://api.github.com/users/yoptar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yoptar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yoptar/subscriptions",
"organizations_url": "https://api.github.com/users/yoptar/orgs",
"repos_url": "https://api.github.com/users/yoptar/repos",
"events_url": "https://api.github.com/users/yoptar/events{/privacy}",
"received_events_url": "https://api.github.com/users/yoptar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3138?src=pr&el=h1) Report\n> Merging [#3138](https://codecov.io/gh/huggingface/transformers/pull/3138?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e9e6efdc452b74947d40a5a2e8af2fc444c63b5b?src=pr&el=desc) will **decrease** coverage by `0.52%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3138?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3138 +/- ##\n==========================================\n- Coverage 78.35% 77.83% -0.53% \n==========================================\n Files 98 98 \n Lines 16422 16422 \n==========================================\n- Hits 12868 12782 -86 \n- Misses 3554 3640 +86\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3138?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/3138/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.89% <0%> (-27.6%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3138/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.29% <0%> (-0.16%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3138?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3138?src=pr&el=footer). Last update [e9e6efd...fe1854d](https://codecov.io/gh/huggingface/transformers/pull/3138?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"You can also add a link to a thumbnail image from the `thumbnail:` attribute of the YAML front matter metadata, if you want.\r\n\r\nThank you!"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | Sorry about the Cyrillic `с` yesterday. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3138/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3138",
"html_url": "https://github.com/huggingface/transformers/pull/3138",
"diff_url": "https://github.com/huggingface/transformers/pull/3138.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3138.patch",
"merged_at": 1583418884000
} |
https://api.github.com/repos/huggingface/transformers/issues/3137 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3137/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3137/comments | https://api.github.com/repos/huggingface/transformers/issues/3137/events | https://github.com/huggingface/transformers/pull/3137 | 576,267,439 | MDExOlB1bGxSZXF1ZXN0Mzg0Mjg2ODUw | 3,137 | Refactor BartModel so that input checks are handled within enc/dec | {
"login": "tomhosking",
"id": 9419158,
"node_id": "MDQ6VXNlcjk0MTkxNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9419158?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomhosking",
"html_url": "https://github.com/tomhosking",
"followers_url": "https://api.github.com/users/tomhosking/followers",
"following_url": "https://api.github.com/users/tomhosking/following{/other_user}",
"gists_url": "https://api.github.com/users/tomhosking/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomhosking/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomhosking/subscriptions",
"organizations_url": "https://api.github.com/users/tomhosking/orgs",
"repos_url": "https://api.github.com/users/tomhosking/repos",
"events_url": "https://api.github.com/users/tomhosking/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomhosking/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3137?src=pr&el=h1) Report\n> Merging [#3137](https://codecov.io/gh/huggingface/transformers/pull/3137?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/30624f7056ae3b607ba1d02f474f2c7986e87dff?src=pr&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3137?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3137 +/- ##\n==========================================\n+ Coverage 77.94% 77.95% +0.01% \n==========================================\n Files 98 98 \n Lines 16561 16565 +4 \n==========================================\n+ Hits 12908 12913 +5 \n+ Misses 3653 3652 -1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3137?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3137/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.43% <100%> (+0.04%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3137/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3137/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3137/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3137/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.22% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3137/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3137/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.45% <0%> (+0.15%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3137?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3137?src=pr&el=footer). Last update [30624f7...31acb8d](https://codecov.io/gh/huggingface/transformers/pull/3137?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"have you checked whether this breaks `RUN_SLOW=1 pytest tests/test_modeling_bart.py`? There is some subtletly with caching+remaking the attention mask everytime.",
"Looks good I think?\r\n\r\n```\r\n========================================================================== test session starts ===========================================================================\r\nplatform darwin -- Python 3.7.6, pytest-5.3.5, py-1.8.1, pluggy-0.13.1\r\nrootdir: /Users/tom/dev/transformers\r\nplugins: xdist-1.31.0, forked-1.1.3\r\ncollected 30 items \r\n\r\ntests/test_modeling_bart.py ...........s.................s [100%]\r\n\r\n============================================================================ warnings summary ============================================================================\r\n.venv/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15\r\n /Users/tom/dev/transformers/.venv/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\r\n import imp\r\n\r\n-- Docs: https://docs.pytest.org/en/latest/warnings.html\r\n========================================================== 28 passed, 2 skipped, 1 warning in 393.65s (0:06:33) ==========================================================\r\n```",
"Awesome, LGTM. Will wait for @thomwolf"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | Implementing #3133
I've left the code that creates dummy inputs and checks/filters the outputs; this could potentially also be moved to `BartEncoder` and `BartDecoder`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3137/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3137",
"html_url": "https://github.com/huggingface/transformers/pull/3137",
"diff_url": "https://github.com/huggingface/transformers/pull/3137.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3137.patch",
"merged_at": 1583496395000
} |
https://api.github.com/repos/huggingface/transformers/issues/3136 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3136/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3136/comments | https://api.github.com/repos/huggingface/transformers/issues/3136/events | https://github.com/huggingface/transformers/issues/3136 | 576,245,911 | MDU6SXNzdWU1NzYyNDU5MTE= | 3,136 | links of the model's pretrained weights | {
"login": "hzg0601",
"id": 22924096,
"node_id": "MDQ6VXNlcjIyOTI0MDk2",
"avatar_url": "https://avatars.githubusercontent.com/u/22924096?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hzg0601",
"html_url": "https://github.com/hzg0601",
"followers_url": "https://api.github.com/users/hzg0601/followers",
"following_url": "https://api.github.com/users/hzg0601/following{/other_user}",
"gists_url": "https://api.github.com/users/hzg0601/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hzg0601/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hzg0601/subscriptions",
"organizations_url": "https://api.github.com/users/hzg0601/orgs",
"repos_url": "https://api.github.com/users/hzg0601/repos",
"events_url": "https://api.github.com/users/hzg0601/events{/privacy}",
"received_events_url": "https://api.github.com/users/hzg0601/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"We have a `HfApi.model_list()` method on this PR https://github.com/huggingface/transformers/pull/3132 that might be of interest to you.\r\n\r\nDo let us know if it solves your use case or not.",
"I use the following script in an interactive enviroment:\r\n\r\n```python\r\n>> from transformers.hf_api import HfApi\r\n\r\n>> HfApi.model_list()\r\n```\r\nBut I got an error:` AttributeError: type object 'HfApi' has no attribute 'model_list'`\r\n\r\nAnd my `transformers.__version__` is 2.5.1\r\n\r\nDid I miss something ?",
 We have a `HfApi.model_list()`">
"> We have a `HfApi.model_list()` method on this PR #3132 that might be of interest to you.\r\n> \r\n> Do let us know if it solves your use case or not.\r\n\r\nSame problem as @sevenights; it remains unsolved. Did we miss something?\r\n",
"#3132 was just merged on master, you should be able to try on master now.",
"I get it, thank you :)"
] | 1,583 | 1,583 | 1,583 | NONE | null | Where can I find all of the links to the models' pretrained weights, please? Downloading them through an IDE is too slow. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3136/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3135 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3135/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3135/comments | https://api.github.com/repos/huggingface/transformers/issues/3135/events | https://github.com/huggingface/transformers/pull/3135 | 576,209,777 | MDExOlB1bGxSZXF1ZXN0Mzg0MjQxMDQw | 3,135 | Refactoring and bug fixing beam search generate | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Good to merge for me"
] | 1,583 | 1,583 | 1,583 | MEMBER | null | This PR cleans the beam_search decoding part of language generation. It simplifies the code and fixes a small bug for `do_sample=True` (see comments in code).
It was also tested on all language generation slow tests.
### Future PR
- [x] Do the same change for TF 2.0 if ok -> #3148 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3135/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3135",
"html_url": "https://github.com/huggingface/transformers/pull/3135",
"diff_url": "https://github.com/huggingface/transformers/pull/3135.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3135.patch",
"merged_at": 1583442777000
} |
https://api.github.com/repos/huggingface/transformers/issues/3134 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3134/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3134/comments | https://api.github.com/repos/huggingface/transformers/issues/3134/events | https://github.com/huggingface/transformers/issues/3134 | 576,198,549 | MDU6SXNzdWU1NzYxOTg1NDk= | 3,134 | Cant import my pretrained bert model from NVIDIA/DeepLearningExamples/ | {
"login": "Limtle",
"id": 47511735,
"node_id": "MDQ6VXNlcjQ3NTExNzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/47511735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Limtle",
"html_url": "https://github.com/Limtle",
"followers_url": "https://api.github.com/users/Limtle/followers",
"following_url": "https://api.github.com/users/Limtle/following{/other_user}",
"gists_url": "https://api.github.com/users/Limtle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Limtle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Limtle/subscriptions",
"organizations_url": "https://api.github.com/users/Limtle/orgs",
"repos_url": "https://api.github.com/users/Limtle/repos",
"events_url": "https://api.github.com/users/Limtle/events{/privacy}",
"received_events_url": "https://api.github.com/users/Limtle/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1897896961,
"node_id": "MDU6TGFiZWwxODk3ODk2OTYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Migration",
"name": "Migration",
"color": "e99695",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Try:\r\n\r\n```\r\nmodel.eval()\r\n```\r\n\r\nbefore you access the weights.",
"Its seems not work. \r\n```\r\nembeddings.word_embeddings.weight\r\ntensor([[ 0.0073, 0.0080, 0.0307, ..., -0.0172, 0.0148, -0.0401],\r\n [-0.0271, 0.0110, 0.0011, ..., -0.0079, 0.0236, -0.0037],\r\n [ 0.0005, 0.0066, -0.0009, ..., -0.0065, 0.0167, 0.0301],\r\n ...,\r\n [ 0.0062, -0.0385, -0.0091, ..., -0.0022, 0.0043, 0.0018],\r\n [-0.0188, 0.0154, -0.0023, ..., -0.0049, -0.0108, 0.0393],\r\n [-0.0257, -0.0056, 0.0155, ..., -0.0198, 0.0280, -0.0143]])```\r\n```\r\n```\r\nembeddings.word_embeddings.weight\r\ntensor([[-0.0011, -0.0345, 0.0094, ..., 0.0024, 0.0229, 0.0194],\r\n [-0.0246, 0.0329, 0.0231, ..., 0.0436, 0.0246, -0.0012],\r\n [ 0.0069, 0.0050, -0.0020, ..., -0.0002, 0.0043, 0.0208],\r\n ...,\r\n [-0.0139, -0.0091, 0.0110, ..., -0.0128, -0.0015, -0.0027],\r\n [ 0.0297, 0.0063, -0.0066, ..., 0.0070, 0.0157, 0.0417],\r\n [-0.0341, 0.0458, 0.0054, ..., -0.0525, 0.0003, -0.0122]])\r\n```",
"Solved! I find the layer_name of NVIDIA/DeepLearningExamples/ is dismatch huggingface/transformers",
"@Limtle This seems like it is important information. What exactly do you mean? The layer names are not the same in NVIDIA's examples and the layers here in the HuggingFace repo?",
"I print the weights_name of bertmodel trained with NVIDIA/DeepLearningExamples/\r\n```\r\nbert.embeddings.word_embeddings.weight\r\nbert.embeddings.position_embeddings.weight\r\nbert.embeddings.token_type_embeddings.weight\r\n...\r\nbert.encoder.layer.11.output.dense.weight\r\nbert.encoder.layer.11.output.dense.bias\r\nbert.encoder.layer.11.output.LayerNorm.weight\r\nbert.encoder.layer.11.output.LayerNorm.bias\r\nbert.pooler.dense_act.weight\r\nbert.pooler.dense_act.bias\r\ncls.predictions.bias\r\ncls.predictions.transform.dense_act.weight\r\ncls.predictions.transform.dense_act.bias\r\ncls.predictions.transform.LayerNorm.weight\r\ncls.predictions.transform.LayerNorm.bias\r\ncls.predictions.decoder.weight\r\ncls.seq_relationship.weight\r\ncls.seq_relationship.bias\r\n```\r\nAnd the weights_name of huggingface/transformers/BertModel()\r\n```\r\nembeddings.word_embeddings.weight\r\nembeddings.position_embeddings.weight\r\nembeddings.token_type_embeddings.weight\r\n...\r\nencoder.layer.11.output.dense.weight\r\nencoder.layer.11.output.dense.bias\r\nencoder.layer.11.output.LayerNorm.weight\r\nencoder.layer.11.output.LayerNorm.bias\r\npooler.dense.weight\r\npooler.dense.bias\r\n```\r\nThis is my code \r\nload model from NVIDIA/DeepLearningExamples/ to huggingface/transformers\r\n```\r\nconfiguration = BertConfig.from_json_file(config_path)\r\ntokenizer = BertTokenizer.from_pretrained(vocab_path)\r\nmodel = BertModel(configuration)\r\nstate_dict = {k.replace('bert.','').replace('.dense_act','.dense'):v for k,v in torch.load(os.path.join(pytorch_pretrained_model_path, 'pytorch_model.bin'))['model'].items()}\r\nmodel.load_state_dict(state_dict, strict= False)\r\n#model = model.from_pretrained(pytorch_pretrained_model_path, state_dict= state_dict)\r\n```\r\n\r\n> Is it the right way to solve this problem?\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,589 | 1,589 | NONE | null |
Trained with **NVIDIA/DeepLearningExamples/** and got my_bert_model/ckpt_1000.pt
**.pt to .bin**
Renamed my_bert_model/ckpt_1000.pt to my_bert_model/pytorch_model.bin
**My code**
```
from transformers import BertConfig, BertModel

configuration = BertConfig(3258, 768, 12, 12, 3072)
model = BertModel(configuration)
# state_dict was loaded from my_bert_model/pytorch_model.bin beforehand
model = model.from_pretrained('/home/my_bert_model', state_dict=state_dict)
for key, weight in model.state_dict().items():
    print(weight)
```
**Output**
Every execution of this code produces different output.
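For reference, the key-renaming workaround that resolved this (see the comments above) as a sketch; `pytorch_pretrained_model_path` is a placeholder for the checkpoint directory, and `model` is the `BertModel` from above:
```
import os
import torch

# NVIDIA checkpoints prefix weights with "bert." and use "dense_act" where
# transformers uses "dense", so the keys have to be remapped before loading
raw = torch.load(os.path.join(pytorch_pretrained_model_path, 'pytorch_model.bin'))
state_dict = {k.replace('bert.', '').replace('.dense_act', '.dense'): v
              for k, v in raw['model'].items()}
model.load_state_dict(state_dict, strict=False)
```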
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3134/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3133 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3133/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3133/comments | https://api.github.com/repos/huggingface/transformers/issues/3133/events | https://github.com/huggingface/transformers/issues/3133 | 576,194,591 | MDU6SXNzdWU1NzYxOTQ1OTE= | 3,133 | BART: move boilerplate code inside encoder/decoder | {
"login": "tomhosking",
"id": 9419158,
"node_id": "MDQ6VXNlcjk0MTkxNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9419158?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomhosking",
"html_url": "https://github.com/tomhosking",
"followers_url": "https://api.github.com/users/tomhosking/followers",
"following_url": "https://api.github.com/users/tomhosking/following{/other_user}",
"gists_url": "https://api.github.com/users/tomhosking/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomhosking/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomhosking/subscriptions",
"organizations_url": "https://api.github.com/users/tomhosking/orgs",
"repos_url": "https://api.github.com/users/tomhosking/repos",
"events_url": "https://api.github.com/users/tomhosking/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomhosking/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sounds good to me"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | # 🚀 Feature request
Could we move the boilerplate code in the `BartModel` `forward()` method inside the `forward()` methods for the encoder and decoder? That way, the encoder and decoder can be called separately as independent modules more easily.
## Motivation
Currently, there's some cleanup work that is done before calling `BartEncoder.forward()` and `BartDecoder.forward()` (looks like attention mask flipping and preparing dummy outputs). If we want to call the encoder and decoder separately from our own code (eg to use Bart as an encoder, or limit which parts are fine tuned) we currently have to re-implement this code.
If we move this logic inside the encoder and decoder, such that `BartModel.forward()` is a thin wrapper around the encoder+decoder, this kind of work would be much easier.
Example usage:
```
model = BartModel.from_pretrained(...)
tokenizer = BartTokenizer.from_pretrained(...)
inputs = tokenizer.encode('I want to classify this text.', return_tensors='pt')
encoding = model.encoder(inputs)
predictions = my_classifier(encoding)  # my_classifier is a user-defined head
```
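This would also make the "limit which parts are fine-tuned" case from the motivation straightforward. A minimal sketch (same `model` as above; the freezing logic is an assumption, not existing library API):
```
# sketch: freeze everything, then unfreeze only the encoder for fine-tuning
for p in model.parameters():
    p.requires_grad = False
for p in model.encoder.parameters():
    p.requires_grad = True
```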
## Your contribution
I could put together a PR for this if you agree?
(cc @sshleifer)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3133/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3132 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3132/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3132/comments | https://api.github.com/repos/huggingface/transformers/issues/3132/events | https://github.com/huggingface/transformers/pull/3132 | 575,991,592 | MDExOlB1bGxSZXF1ZXN0Mzg0MDY2Mzc3 | 3,132 | [hf_api] Get the public list of all the models on huggingface | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3132?src=pr&el=h1) Report\n> Merging [#3132](https://codecov.io/gh/huggingface/transformers/pull/3132?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ff9e79ba3a3dd35c1a7edbd669cf78e082b2f7dc?src=pr&el=desc) will **decrease** coverage by `0.03%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3132?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3132 +/- ##\n==========================================\n- Coverage 78% 77.97% -0.04% \n==========================================\n Files 98 98 \n Lines 16561 16581 +20 \n==========================================\n+ Hits 12919 12929 +10 \n- Misses 3642 3652 +10\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3132?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/hf\\_api.py](https://codecov.io/gh/huggingface/transformers/pull/3132/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcGkucHk=) | `98% <100%> (+0.5%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3132/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.29% <0%> (-2.34%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3132/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.45% <0%> (+0.15%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3132?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3132?src=pr&el=footer). Last update [ff9e79b...3f067f4](https://codecov.io/gh/huggingface/transformers/pull/3132?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,583 | 1,583 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3132/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3132",
"html_url": "https://github.com/huggingface/transformers/pull/3132",
"diff_url": "https://github.com/huggingface/transformers/pull/3132.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3132.patch",
"merged_at": 1583496353000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3131 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3131/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3131/comments | https://api.github.com/repos/huggingface/transformers/issues/3131/events | https://github.com/huggingface/transformers/issues/3131 | 575,950,245 | MDU6SXNzdWU1NzU5NTAyNDU= | 3,131 | Converting tf weights: AttributeError: 'GPT2Model' object has no attribute 'zeLoss' | {
"login": "fabrahman",
"id": 22799593,
"node_id": "MDQ6VXNlcjIyNzk5NTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/22799593?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fabrahman",
"html_url": "https://github.com/fabrahman",
"followers_url": "https://api.github.com/users/fabrahman/followers",
"following_url": "https://api.github.com/users/fabrahman/following{/other_user}",
"gists_url": "https://api.github.com/users/fabrahman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fabrahman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fabrahman/subscriptions",
"organizations_url": "https://api.github.com/users/fabrahman/orgs",
"repos_url": "https://api.github.com/users/fabrahman/repos",
"events_url": "https://api.github.com/users/fabrahman/events{/privacy}",
"received_events_url": "https://api.github.com/users/fabrahman/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1897896961,
"node_id": "MDU6TGFiZWwxODk3ODk2OTYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Migration",
"name": "Migration",
"color": "e99695",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"@LysandreJik Any update on this? Thanks",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,593 | 1,593 | NONE | null | # 🐛 Bug
## Information
Model I am using: gpt-2
I am trying to convert a fine-tuned gpt-2 model from tensorflow to a pytorch state_dict. I used the nice script [here](https://github.com/huggingface/transformers/blob/ce50305e5b8c8748b81b0c8f5539a337b6a995b9/src/transformers/convert_gpt2_original_tf_checkpoint_to_pytorch.py).
## To reproduce
Steps to reproduce the behavior:
1. I run the following command:
```
python convert_gpt2_original_tf_checkpoint_to_pytorch.py --gpt2_checkpoint_path /home/finetune/model_best.ckpt --pytorch_dump_folder_path /home/gpt2-fine-tf2torch/
```
After running this command, I see logs showing that the tf weights are being loaded, but then I suddenly get the following error:
```
INFO:transformers.modeling_gpt2:Loading TF weight transformer_decoder/layer_9/self_attention/multihead_attention/value/bias with shape [1024]
INFO:transformers.modeling_gpt2:Loading TF weight transformer_decoder/layer_9/self_attention/multihead_attention/value/kernel with shape [1024, 1024]
INFO:transformers.modeling_gpt2:Loading TF weight word_embedder/w with shape [50257, 1024]
INFO:transformers.modeling_gpt2:Loading TF weight word_embedder_1/w with shape [50257, 1024]
Traceback (most recent call last):
File "convert_gpt2_original_tf_checkpoint_to_pytorch.py", line 67, in <module>
convert_gpt2_checkpoint_to_pytorch(args.gpt2_checkpoint_path, args.gpt2_config_file, args.pytorch_dump_folder_path)
File "convert_gpt2_original_tf_checkpoint_to_pytorch.py", line 38, in convert_gpt2_checkpoint_to_pytorch
load_tf_weights_in_gpt2(model, config, gpt2_checkpoint_path)
File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/transformers/modeling_gpt2.py", line 84, in load_tf_weights_in_gpt2
pointer = getattr(pointer, scope_names[0])
File "/home/anaconda3/envs/torch03/lib/python3.6/site-packages/torch/nn/modules/module.py", line 585, in __getattr__
type(self).__name__, name))
AttributeError: 'GPT2Model' object has no attribute 'zeLoss'
```
I tried to load the tf checkpoint and inspect its variables; there is no 'zeLoss', only 'OptimizeLoss', from which I guess the script mistakenly extracted the 'zeLoss' part:
```
>>> import tensorflow as tf
>>> import os
>>> tf_path = os.path.abspath('model_best.ckpt')
>>> tf_vars = tf.train.list_variables(tf_path)
```
here is part of the ```tf_vars```:
```
[('OptimizeLoss/beta1_power', []), ('OptimizeLoss/beta2_power', []), ('OptimizeLoss/position_embedder/w/Adam', [1024, 1024]), ('OptimizeLoss/position_embedder/w/Adam_1', [1024, 1024]), ('OptimizeLoss/transformer_decoder/beta/Adam', [1024]), ('OptimizeLoss/transformer_decoder/beta/Adam_1', [1024]), ('OptimizeLoss/transformer_decoder/gamma/Adam', [1024]), ('OptimizeLoss/transformer_decoder/gamma/Adam_1', [1024]), ('OptimizeLoss/transformer_decoder/layer_0/beta/Adam', [1024]), ('OptimizeLoss/transformer_decoder/layer_0/beta/Adam_1', [1024]), ('OptimizeLoss/transformer_decoder/layer_0/ffn/conv1/bias/Adam', [4096]), ('OptimizeLoss/transformer_decoder/layer_0/ffn/conv1/bias/Adam_1', [4096]), ('OptimizeLoss/transformer_decoder/layer_0/ffn/conv1/kernel/Adam', [1024, 4096]),
```
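As a side note, the truncated name seems to come from the converter assuming OpenAI-style variable names with a fixed `model/` prefix: as of the version in the traceback, `load_tf_weights_in_gpt2` drops the first 6 characters of every variable name. A minimal sketch of the symptom, plus a possible (untested) workaround that skips the optimizer state:
```
# The converter expects names like "model/h0/attn/...", so it slices name[6:]:
name = "OptimizeLoss/beta1_power"
print(name[6:])  # -> "zeLoss/beta1_power", hence getattr(model, "zeLoss")

# Possible workaround sketch: drop optimizer slots before converting,
# reusing tf_vars from the snippet above.
tf_vars = [(n, s) for n, s in tf_vars if not n.startswith("OptimizeLoss/")]
```
Note that even with the optimizer variables filtered out, texar-style names like `transformer_decoder/...` would still need to be mapped onto the `model/h<i>/...` layout the converter expects.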
I would appreciate it if you could help me fix this.
Thanks
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3131/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3130 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3130/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3130/comments | https://api.github.com/repos/huggingface/transformers/issues/3130/events | https://github.com/huggingface/transformers/issues/3130 | 575,937,199 | MDU6SXNzdWU1NzU5MzcxOTk= | 3,130 | Performance Issue about pretrained bert migration from tensorflow to pytorch | {
"login": "eagle705",
"id": 7252598,
"node_id": "MDQ6VXNlcjcyNTI1OTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7252598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eagle705",
"html_url": "https://github.com/eagle705",
"followers_url": "https://api.github.com/users/eagle705/followers",
"following_url": "https://api.github.com/users/eagle705/following{/other_user}",
"gists_url": "https://api.github.com/users/eagle705/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eagle705/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eagle705/subscriptions",
"organizations_url": "https://api.github.com/users/eagle705/orgs",
"repos_url": "https://api.github.com/users/eagle705/repos",
"events_url": "https://api.github.com/users/eagle705/events{/privacy}",
"received_events_url": "https://api.github.com/users/eagle705/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1897896961,
"node_id": "MDU6TGFiZWwxODk3ODk2OTYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Migration",
"name": "Migration",
"color": "e99695",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Are you using the latest version? `transformers` and not `pytorch-transformers`. It works with both TF2 and PT. You can test both and compare.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,589 | 1,589 | NONE | null | # ❓ Questions & Help
Hi,
I have a question about migration from tensorflow to pytorch.
If I convert a pretrained BERT from the Google TensorFlow 1.x version to a PyTorch transformers BERT through transformers-cli, performance drops a lot. I wonder why.
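One way to sanity-check such a conversion (a sketch with placeholder paths; the API names follow the library at the time) is to compare the PyTorch and TensorFlow outputs on the same input:
```
import numpy as np
from transformers import BertModel, BertTokenizer, TFBertModel

path = "/path/to/converted-model"  # placeholder
tokenizer = BertTokenizer.from_pretrained(path)
pt_model = BertModel.from_pretrained(path)
tf_model = TFBertModel.from_pretrained(path, from_pt=True)

input_ids = tokenizer.encode("Hello world", return_tensors="pt")
pt_out = pt_model(input_ids)[0].detach().numpy()
tf_out = tf_model(input_ids.numpy())[0].numpy()
print(np.abs(pt_out - tf_out).max())  # should be tiny (~1e-5) if weights match
```
A large difference here would point to the weight conversion itself rather than the fine-tuning.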
Are there any similar cases? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3130/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3129 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3129/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3129/comments | https://api.github.com/repos/huggingface/transformers/issues/3129/events | https://github.com/huggingface/transformers/issues/3129 | 575,927,895 | MDU6SXNzdWU1NzU5Mjc4OTU= | 3,129 | Load pretrained roberta model from fairseq? | {
"login": "frankxu2004",
"id": 6738274,
"node_id": "MDQ6VXNlcjY3MzgyNzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6738274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frankxu2004",
"html_url": "https://github.com/frankxu2004",
"followers_url": "https://api.github.com/users/frankxu2004/followers",
"following_url": "https://api.github.com/users/frankxu2004/following{/other_user}",
"gists_url": "https://api.github.com/users/frankxu2004/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frankxu2004/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frankxu2004/subscriptions",
"organizations_url": "https://api.github.com/users/frankxu2004/orgs",
"repos_url": "https://api.github.com/users/frankxu2004/repos",
"events_url": "https://api.github.com/users/frankxu2004/events{/privacy}",
"received_events_url": "https://api.github.com/users/frankxu2004/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Try https://github.com/huggingface/transformers/blob/master/src/transformers/convert_roberta_original_pytorch_checkpoint_to_pytorch.py\r\n\r\nYou might have to make some modifications, I have never tried this. Good luck!\r\n",
"huggingface/transformers can be used to train from scratch. See [how to train a LM from scratch](https://huggingface.co/blog/how-to-train).",
"Feel free to open another issue if more specific ",
"related official blog: \"Porting fairseq wmt19 translation system to transformers\" by @stas00 \r\nhttps://huggingface.co/blog/porting-fsmt\r\n\r\nMight be able to convert fairseq language models following similar steps.",
"> Try https://github.com/huggingface/transformers/blob/master/src/transformers/convert_roberta_original_pytorch_checkpoint_to_pytorch.py\r\n> \r\n> You might have to make some modifications, I have never tried this. Good luck!\r\n\r\nHey :) I'm interested in this but the link doesn't seem to work.",
"when pasting links to a repo one needs to hit `y` to get a fixed link to the current revision which would never break (as long as the repo is in place), here you go:\r\n\r\nhttps://github.com/huggingface/transformers/blob/7c6d63298f27b4a386f4603262a4603a2a6bf057/src/transformers/models/roberta/convert_roberta_original_pytorch_checkpoint_to_pytorch.py\r\n"
] | 1,583 | 1,611 | 1,583 | NONE | null | # ❓ Questions & Help
Would it be possible to load a pretrained roberta-base model that I trained myself using fairseq, following the instructions in https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.pretraining.md?
Since the current transformers library is not suitable for pretraining from scratch, I think it would be nice to be able to load a pretrained model trained with fairseq. I think it might be possible, but I am not sure how the current transformers roberta pretrained model is translated/loaded.
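For context, a hedged sketch of invoking the conversion script that transformers ships for fairseq roberta checkpoints (the flag names follow that script's argparse at the time and may differ in your version):
```
python convert_roberta_original_pytorch_checkpoint_to_pytorch.py \
    --roberta_checkpoint_path /path/to/fairseq/checkpoint_dir \
    --pytorch_dump_folder_path /path/to/output_dir
```
If the fairseq model carries a classification head, the script also accepts a `--classification_head` flag.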
thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3129/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3128 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3128/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3128/comments | https://api.github.com/repos/huggingface/transformers/issues/3128/events | https://github.com/huggingface/transformers/pull/3128 | 575,871,362 | MDExOlB1bGxSZXF1ZXN0MzgzOTY1ODAy | 3,128 | Add Summarization to Pipelines | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3128?src=pr&el=h1) Report\n> Merging [#3128](https://codecov.io/gh/huggingface/transformers/pull/3128?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3814e167d99c4b2e135b250d73deaa3f63ebef0c&el=desc) will **increase** coverage by `0.05%`.\n> The diff coverage is `95.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3128?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3128 +/- ##\n==========================================\n+ Coverage 78.02% 78.08% +0.05% \n==========================================\n Files 98 98 \n Lines 16670 16689 +19 \n==========================================\n+ Hits 13007 13031 +24 \n+ Misses 3663 3658 -5 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3128?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/3128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.91% <ø> (ø)` | |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/3128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `72.53% <95.00%> (+1.57%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.99% <0.00%> (+0.14%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.37% <0.00%> (+0.17%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.14% <0.00%> (+0.27%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3128?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3128?src=pr&el=footer). Last update [3814e16...a123599](https://codecov.io/gh/huggingface/transformers/pull/3128?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I addressed all comments, and am ready for review @julien-c @thomwolf. ",
"Is this pipeline ready to go? When I tried to run an example it said that the summarization pipeline is not one of the options.",
"Hey, @Weilin37 .\r\nCould you send a snippet of code so that I can reproduce your error?\r\nThanks!",
"@Weilin37 are you running from master?",
"> @Weilin37 are you running from master?\r\n\r\nHi, yes it is resolved now. I thought I upgraded but it didn't"
] | 1,583 | 1,585 | 1,584 | CONTRIBUTOR | null | Choices:
1) This is not TextGenerationPipeline, so it only supports bart-large-cnn.
2) It doesn't return the input back to the caller because it is annoyingly long. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3128/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3128",
"html_url": "https://github.com/huggingface/transformers/pull/3128",
"diff_url": "https://github.com/huggingface/transformers/pull/3128.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3128.patch",
"merged_at": 1584482662000
} |
https://api.github.com/repos/huggingface/transformers/issues/3127 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3127/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3127/comments | https://api.github.com/repos/huggingface/transformers/issues/3127/events | https://github.com/huggingface/transformers/pull/3127 | 575,588,268 | MDExOlB1bGxSZXF1ZXN0MzgzNzE0MDk0 | 3,127 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3127?src=pr&el=h1) Report\n> Merging [#3127](https://codecov.io/gh/huggingface/transformers/pull/3127?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ec60e0ae7a88e46ac2bfbf6234d14381a01be06a?src=pr&el=desc) will **increase** coverage by `0.04%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3127?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3127 +/- ##\n==========================================\n+ Coverage 77.79% 77.84% +0.04% \n==========================================\n Files 98 98 \n Lines 16422 16422 \n==========================================\n+ Hits 12776 12783 +7 \n+ Misses 3646 3639 -7\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3127?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3127/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.45% <0%> (+0.15%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3127/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.72% <0%> (+0.85%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3127?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3127?src=pr&el=footer). Last update [ec60e0a...5e9f364](https://codecov.io/gh/huggingface/transformers/pull/3127?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3127/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3127",
"html_url": "https://github.com/huggingface/transformers/pull/3127",
"diff_url": "https://github.com/huggingface/transformers/pull/3127.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3127.patch",
"merged_at": 1583348244000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3126 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3126/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3126/comments | https://api.github.com/repos/huggingface/transformers/issues/3126/events | https://github.com/huggingface/transformers/issues/3126 | 575,567,005 | MDU6SXNzdWU1NzU1NjcwMDU= | 3,126 | BartTokenizer and 'bart-large-cnn' out of sync | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1845609017,
"node_id": "MDU6TGFiZWwxODQ1NjA5MDE3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq",
"name": "seq2seq",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,589 | 1,589 | CONTRIBUTOR | null | `tok.encode('<mask>')` -> 52064, but `BartForMaskedLM.from_pretrained('bart-large-cnn')` does not support a mask token. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3126/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3125 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3125/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3125/comments | https://api.github.com/repos/huggingface/transformers/issues/3125/events | https://github.com/huggingface/transformers/issues/3125 | 575,563,170 | MDU6SXNzdWU1NzU1NjMxNzA= | 3,125 | NER tutorial: run_tf_ner.py reports an entity number not matching the one in the .txt files | {
"login": "ChessMateK",
"id": 48825535,
"node_id": "MDQ6VXNlcjQ4ODI1NTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/48825535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChessMateK",
"html_url": "https://github.com/ChessMateK",
"followers_url": "https://api.github.com/users/ChessMateK/followers",
"following_url": "https://api.github.com/users/ChessMateK/following{/other_user}",
"gists_url": "https://api.github.com/users/ChessMateK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChessMateK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChessMateK/subscriptions",
"organizations_url": "https://api.github.com/users/ChessMateK/orgs",
"repos_url": "https://api.github.com/users/ChessMateK/repos",
"events_url": "https://api.github.com/users/ChessMateK/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChessMateK/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
},
{
"id": 1834060867,
"node_id": "MDU6TGFiZWwxODM0MDYwODY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Named%20Entity%20Recognition",
"name": "Ex: Named Entity Recognition",
"color": "06FFD8",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"I'm currently not able to run the TF ner training script - I'm using TF in version 2.0.0b1 and the following error message is thrown:\r\n\r\n```bash\r\n...\r\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/init_ops_v2.py\", line 769, in _assert_float_dtype\r\n raise ValueError(\"Expected floating point type, got %s.\" % dtype)\r\nValueError: Expected floating point type, got <dtype: 'int32'>.\r\n```\r\n\r\n🤔",
"I met the same problem. The script \"run_tf_ner.py\" can not get the same report result, have you solved it?",
"I have solved this issue. Because I evaluate the ner model with train.txt, and when the mode is \"train\", the dataset will be repeated and shuffled, so the support item of metric report is not same with the last report.\r\n\r\nWhen I copy train.txt as dev.txt and change mode from \"train\" to \"dev\" during evaluation in the load_and_cache_examples, the dataset will not be repeated and shuffled and the report is reproductive.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,589 | 1,589 | NONE | null | Hi everyone,
I'm following this tutorial https://github.com/huggingface/transformers/tree/master/examples/ner .
The generated results are the same, but I noticed that the reported entity numbers (the "support" column) do not match the B-entity counts I get by counting in Notepad++ in the *.txt files used as the source for training, validation, or testing.
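For reference, here is a minimal way to count gold B- entities per label in a CoNLL-style file (a sketch; it assumes the label is the last whitespace-separated column):
```
def count_b_entities(path):
    counts = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.strip().split()
            if len(parts) >= 2 and parts[-1].startswith("B-"):
                label = parts[-1][2:]
                counts[label] = counts.get(label, 0) + 1
    return counts

print(count_b_entities("train.txt"))
```
These are the counts I compare against the "support" column.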
What is my mistake?
Thank you.
UPDATE: run_ner.py works correctly. I suppose there is a bug in run_tf_ner.py.
I also tag @stefan-it, who is the only person mentioned in the tutorial. Thank you Stefan. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3125/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3124 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3124/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3124/comments | https://api.github.com/repos/huggingface/transformers/issues/3124/events | https://github.com/huggingface/transformers/pull/3124 | 575,561,287 | MDExOlB1bGxSZXF1ZXN0MzgzNjkxMDg3 | 3,124 | [Broken Proposal] CircleCI runs tests with torch=1.0.0 | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"need to avoid the torchscript tests",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,651 | 1,589 | CONTRIBUTOR | null | Goal: maintain backwards compatibility! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3124/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3124",
"html_url": "https://github.com/huggingface/transformers/pull/3124",
"diff_url": "https://github.com/huggingface/transformers/pull/3124.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3124.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3123 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3123/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3123/comments | https://api.github.com/repos/huggingface/transformers/issues/3123/events | https://github.com/huggingface/transformers/pull/3123 | 575,541,208 | MDExOlB1bGxSZXF1ZXN0MzgzNjc0MzU2 | 3,123 | fix sklearn release circle ci [temporary] | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@julien-c this should fix the sklearn problem for the moment",
"👍 \r\n",
"reverted on master as they pushed a fixed release right after that one."
] | 1,583 | 1,585 | 1,583 | MEMBER | null | The new sklearn release (https://github.com/scikit-learn/scikit-learn/releases) seems to be broken or leads to errors for PRs since this morning; go back to the previous version for now to avoid CircleCI errors. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3123/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3123",
"html_url": "https://github.com/huggingface/transformers/pull/3123",
"diff_url": "https://github.com/huggingface/transformers/pull/3123.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3123.patch",
"merged_at": 1583339124000
} |
https://api.github.com/repos/huggingface/transformers/issues/3122 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3122/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3122/comments | https://api.github.com/repos/huggingface/transformers/issues/3122/events | https://github.com/huggingface/transformers/pull/3122 | 575,406,308 | MDExOlB1bGxSZXF1ZXN0MzgzNTU5NTg0 | 3,122 | include tf gpt2 tests for attn mask and past variable | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3122?src=pr&el=h1) Report\n> Merging [#3122](https://codecov.io/gh/huggingface/transformers/pull/3122?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/34de670dbe70a9ead31d0692ad9dc726d3ea4edb?src=pr&el=desc) will **decrease** coverage by `0.04%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3122?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3122 +/- ##\n=========================================\n- Coverage 77.84% 77.8% -0.05% \n=========================================\n Files 98 98 \n Lines 16422 16422 \n=========================================\n- Hits 12784 12777 -7 \n- Misses 3638 3645 +7\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3122?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.87% <0%> (-0.86%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.45% <0%> (-0.16%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3122?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3122?src=pr&el=footer). Last update [34de670...1305f35](https://codecov.io/gh/huggingface/transformers/pull/3122?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,583 | 1,583 | MEMBER | null | Test TF GPT2 for correct behavior regarding the `past` and attention mask variables. Translated the code from torch to TF 2.0.
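A sketch of the behavior under test, i.e. one-token decoding that reuses `past` (the argument and output names follow the library at the time; treat them as assumptions):
```
import tensorflow as tf
from transformers import TFGPT2LMHeadModel

model = TFGPT2LMHeadModel.from_pretrained("gpt2")
input_ids = tf.constant([[464, 3290]])  # arbitrary token ids

logits, past = model(input_ids)[:2]
next_logits = model(tf.constant([[318]]), past=past)[0]
# next_logits should match the last position of a full forward pass
# over the concatenated sequence run without `past`.
```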
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3122/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3122",
"html_url": "https://github.com/huggingface/transformers/pull/3122",
"diff_url": "https://github.com/huggingface/transformers/pull/3122.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3122.patch",
"merged_at": 1583341427000
} |
https://api.github.com/repos/huggingface/transformers/issues/3121 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3121/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3121/comments | https://api.github.com/repos/huggingface/transformers/issues/3121/events | https://github.com/huggingface/transformers/issues/3121 | 575,348,562 | MDU6SXNzdWU1NzUzNDg1NjI= | 3,121 | A better way to process extended_attention_mask in BertModel.forward() | {
"login": "erikchwang",
"id": 16256959,
"node_id": "MDQ6VXNlcjE2MjU2OTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/16256959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erikchwang",
"html_url": "https://github.com/erikchwang",
"followers_url": "https://api.github.com/users/erikchwang/followers",
"following_url": "https://api.github.com/users/erikchwang/following{/other_user}",
"gists_url": "https://api.github.com/users/erikchwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erikchwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erikchwang/subscriptions",
"organizations_url": "https://api.github.com/users/erikchwang/orgs",
"repos_url": "https://api.github.com/users/erikchwang/repos",
"events_url": "https://api.github.com/users/erikchwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/erikchwang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions."
] | 1,583 | 1,589 | 1,589 | NONE | null | In the `forward()` method of the `BertModel` (https://huggingface.co/transformers/_modules/transformers/modeling_bert.html#BertModel.forward), the `extended_attention_mask` is processed in the following way:
> extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0
I know this is to make an additive mask so that the unmasked positions will be unchanged (by adding 0) and the masked positions will be very small (by subtracting 10000).
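A quick numeric check of the current form and of the log form proposed below, on a toy {0, 1} mask:
```
import torch

mask = torch.tensor([1.0, 1.0, 0.0])  # 1 = attend, 0 = mask out

additive = (1.0 - mask) * -10000.0    # tensor([0., 0., -10000.])
log_mask = torch.log(mask)            # tensor([0., 0., -inf])
```
Both leave unmasked positions unchanged when added to the attention scores. One practical difference: `log` yields an exact `-inf`, which turns a fully masked row into NaNs after softmax, while the finite -10000 keeps the softmax well-defined.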
But I think it is better to achieve this goal in the following way:
> extended_attention_mask = torch.log(extended_attention_mask) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3121/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3120 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3120/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3120/comments | https://api.github.com/repos/huggingface/transformers/issues/3120/events | https://github.com/huggingface/transformers/issues/3120 | 575,318,964 | MDU6SXNzdWU1NzUzMTg5NjQ= | 3,120 | Making past and mems variables have batch size as their first output dimension. | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Up for discussion @LysandreJik @thomwolf @julien-c ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,589 | 1,589 | MEMBER | null | # 🚀 Feature request
At the moment, the variables **past** / **mems** have the shapes:
`(2, batch_size, num_heads, sequence_length, embed_size_per_head)` and `(mem_len, batch_size, embed_size)`, respectively,
meaning that the `batch_size` dim is not in the first position. Change the variables' structure to have `batch_size` in the first position.
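As a sketch of what the change would mean for a caller today with the GPT-2 layout (shapes here are illustrative):
```
import torch

batch, heads, seq, head_dim = 2, 12, 5, 64
past_layer = torch.zeros(2, batch, heads, seq, head_dim)  # current layout

batch_first = past_layer.permute(1, 0, 2, 3, 4)  # (batch, 2, heads, seq, head_dim)
```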
## Motivation
This might be confusing, as all other variables have the `batch_size` dim in the first position.
Being certain that the first dimension is always the `batch_size` would be very helpful for the user.
Normally the `mems` and `past` variables are just used to speed up decoding and are not changed much (or even looked at) by the user, I think, but consistency would be good anyway.
## Your contribution
Changing this for GPT2/CTRL is very straightforward (changing three lines of code), but for xlnet and transfo-xl it would probably take a slightly bigger change.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3120/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3119 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3119/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3119/comments | https://api.github.com/repos/huggingface/transformers/issues/3119/events | https://github.com/huggingface/transformers/pull/3119 | 575,263,191 | MDExOlB1bGxSZXF1ZXN0MzgzNDQwNzky | 3,119 | rename variables named 'word' to 'token' in generate fn | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3119?src=pr&el=h1) Report\n> Merging [#3119](https://codecov.io/gh/huggingface/transformers/pull/3119?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/34de670dbe70a9ead31d0692ad9dc726d3ea4edb?src=pr&el=desc) will **decrease** coverage by `1.01%`.\n> The diff coverage is `90%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3119?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3119 +/- ##\n==========================================\n- Coverage 77.84% 76.83% -1.02% \n==========================================\n Files 98 98 \n Lines 16422 16422 \n==========================================\n- Hits 12784 12618 -166 \n- Misses 3638 3804 +166\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3119?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.29% <90%> (-0.32%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96% <0%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.86% <0%> (+0.3%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3119?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3119?src=pr&el=footer). Last update [34de670...2caa33f](https://codecov.io/gh/huggingface/transformers/pull/3119?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Good to merge for me"
] | 1,583 | 1,583 | 1,583 | MEMBER | null | Rename `word` to `token` in generate() function. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3119/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3119",
"html_url": "https://github.com/huggingface/transformers/pull/3119",
"diff_url": "https://github.com/huggingface/transformers/pull/3119.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3119.patch",
"merged_at": 1583341277000
} |
https://api.github.com/repos/huggingface/transformers/issues/3118 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3118/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3118/comments | https://api.github.com/repos/huggingface/transformers/issues/3118/events | https://github.com/huggingface/transformers/pull/3118 | 575,262,425 | MDExOlB1bGxSZXF1ZXN0MzgzNDQwMTcw | 3,118 | Add beam search to generation tf 2 0 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3118?src=pr&el=h1) Report\n> Merging [#3118](https://codecov.io/gh/huggingface/transformers/pull/3118?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/34de670dbe70a9ead31d0692ad9dc726d3ea4edb?src=pr&el=desc) will **increase** coverage by `0.1%`.\n> The diff coverage is `84.51%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3118?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3118 +/- ##\n=========================================\n+ Coverage 77.84% 77.95% +0.1% \n=========================================\n Files 98 98 \n Lines 16422 16561 +139 \n=========================================\n+ Hits 12784 12910 +126 \n- Misses 3638 3651 +13\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3118?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3118/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `96.14% <100%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3118/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `99.57% <100%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3118/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.29% <84.1%> (-0.28%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3118?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3118?src=pr&el=footer). Last update [34de670...7a89a3e](https://codecov.io/gh/huggingface/transformers/pull/3118?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Good to merge for me! @LysandreJik @thomwolf ",
"This is really cool, thanks a lot @patrickvonplaten \r\n\r\nCc @minimaxir"
] | 1,583 | 1,583 | 1,583 | MEMBER | null | Add beam search to the TF generate function, as is already done for torch, using the same TF syntax that was used in PR #3063.
EDIT: Also included a quick fix ensuring that `TF GPT2 past.shape == PT GPT2 past.shape`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3118/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3118",
"html_url": "https://github.com/huggingface/transformers/pull/3118",
"diff_url": "https://github.com/huggingface/transformers/pull/3118.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3118.patch",
"merged_at": 1583360881000
} |
https://api.github.com/repos/huggingface/transformers/issues/3117 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3117/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3117/comments | https://api.github.com/repos/huggingface/transformers/issues/3117/events | https://github.com/huggingface/transformers/issues/3117 | 575,101,017 | MDU6SXNzdWU1NzUxMDEwMTc= | 3,117 | BART FP16 | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Not on my roadmap just yet, but I would definitely consider it if there were lots of demand. Since we only have inference code right now, the benefit seems marginal. ",
"@BramVanroy Should this issue be closed ?\r\n\r\nFP16 is not implemented yet. And the `wontfix` label is clear.\r\n\r\nKeeping the issue open may make it easier for people to find it and show their potential interest in FP16.",
"This should not be closed indeed.\r\n\r\n@sshleifer, we intend all the models to be compatible with FP16, this is the direction the field is going and with the Volta-level GPU being widespread now, there is less and less reason not to use mixed-precision fine-tuning (half memory and significantly faster).",
"This can probably be fixed by changing the `torch.float32` casting [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bart.py#L643) to a cast to the type of `attn_weights` like it's done in the original fairseq code [here](https://github.com/pytorch/fairseq/blob/cb2dc414c692d7de283bec4e4f9c923a66205792/fairseq/modules/multihead_attention.py#L335).\r\n\r\nDo you mind fixing this and testing the failing script posted in the issue @sshleifer?",
"Yep, on it!",
"Hi, @sshleifer. Thank you so much for your effort on BART. I encountered the same fp16 issues today. The current BART code can be trained (without fp16) using the run_glue script in: https://github.com/huggingface/transformers/blob/master/examples/run_glue.py \r\nSo, it will be really nice if the fp16 training can also work out.",
"My bad, I thought @sshleifer's labeling was a note that he isn't planning to change anything `wontfix`, so no future updates would be possible and then I closed it. Will keep that in mind for the future.",
"No bad\r\n\r\n@sshleifer for the moment, please ping me with DM before adding \"wontfix\" labels to issues, thanks."
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | # 🚀 Feature request
I would like to use BART in FP16 mode, but it seems impossible for now:
```py
from transformers import AutoModelWithLMHead, AutoTokenizer, BartConfig

config = BartConfig(vocab_size=50264, output_past=True)
# load BART and convert it to half precision
model = AutoModelWithLMHead.from_pretrained('bart-large-cnn', config=config).cuda().half()
tokenizer = AutoTokenizer.from_pretrained('bart-large-cnn')
ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs."
inputs = tokenizer.batch_encode_plus([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt')
# generation crashes inside the attention's torch.bmm (see traceback below)
generated_ids = model.generate(inputs['input_ids'].cuda(), attention_mask=inputs['attention_mask'].cuda(), num_beams=4, max_length=5)
```
> File "/data/user/.venv/bartqg/lib/python3.6/site-packages/transformers/modeling_bart.py", line 647, in forward
>     attn_output = torch.bmm(attn_probs, v)
> RuntimeError: Expected object of scalar type Float but got scalar type Half for argument #2 'mat2' in call to _th_bmm
@sshleifer Do you plan to implement an FP16-friendly version of BART? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3117/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3117/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3116 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3116/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3116/comments | https://api.github.com/repos/huggingface/transformers/issues/3116/events | https://github.com/huggingface/transformers/pull/3116 | 575,025,964 | MDExOlB1bGxSZXF1ZXN0MzgzMjQ4NDEw | 3,116 | Skipping outputs | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Nice.\r\n\r\nOne question: do we want to have a `skip_output` flag or to have a `keep_output` flag.\r\n\r\n`skip_output` seems to me as introducing a dependency to be maintained between all the models (if we add a model with additional output that are processed by encode_plus later, we would have to update all the models to avoid this output)\r\n\r\n`keep_output` is longer to write right now (we have to add it for all the models) but once it's added, all the models are independent from each others.",
"I'm ok with both solutions (by the way, in general terms, a lot of software can accept a combination of whitelist and/or blacklist. When both are present, it's usually \"include the whitelist, and remove the blacklist\")\r\n\r\nIf we do `keep_output`, maybe we name the attribute `return_outputs: List[str]` for consistency with `encode_xxx()` params?",
"I agree with both of you. Furthermore, this approach (deleting from the dict `encode_plus` generated) is not compatible with the `return_xxx` in the `encode_plus` arguments.\r\n\r\nI'm implementing both your proposed changes, looking into fixing the above and into fast tokenizers.\r\n\r\nI'll then move on to the tests.\r\n\r\n- [x] replace the blacklist by a whitelist\r\n- [x] rename to `return_outputs` for consistency with `encode_plus arguments`\r\n- [x] compatibility with all of `encode_plus`'s arguments\r\n- [x] fast tokenizers\r\n- [x] tests",
"I like the solution, 👍 .\r\n\r\nOne question: It requires the user to know / look at the names of the parameters handled by `__call__()` / `forward()`, should we expose a property on PreTrainedModel to give the list of parameter supported by the model ? This one will be overrided in Roberta and Distil.\r\n\r\n```python\r\nmodel = SomeModel(...)\r\ntokenizer = AutoTokenizer.from_pretrained(..., return_outputs=model.input_names)\r\n``` ",
"Indeed, such an attribute would be helpful! I'll add it and move on to the tests.",
"Regarding the suggestion of @mfuntowicz, in the end, this should be in a common configuration for model and tokenizers I guess, so maybe we could actually have this attribute as input to `PretrainedTokenizer.__init__()` already (instead of class attribute) to prepare for the future.",
"That value is currently managed by the `__init__` method, see the examples above\r\n\r\nIt still needs to be a class attribute in my opinion, as it should be overridden by children of `PreTrainedTokenizer` and it should be known by `encode_plus`/`encode`/`batch_encode_plus`.",
"Should be good for review. I reverted the `docs` commit because it made the review harder. I'll recommit the docs at merge time.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3116?src=pr&el=h1) Report\n> Merging [#3116](https://codecov.io/gh/huggingface/transformers/pull/3116?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/49debe62fdc96e161f866dd8914d5915477bb742?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3116?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3116 +/- ##\n==========================================\n+ Coverage 77.98% 77.99% +<.01% \n==========================================\n Files 98 98 \n Lines 16645 16660 +15 \n==========================================\n+ Hits 12981 12994 +13 \n- Misses 3664 3666 +2\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3116?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3116/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.85% <100%> (+0.12%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3116/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `100% <100%> (ø)` | :arrow_up: |\n| [src/transformers/tokenization\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/3116/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZGlzdGlsYmVydC5weQ==) | `100% <100%> (ø)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3116/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68% <0%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3116/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.4% <0%> (-0.16%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3116?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3116?src=pr&el=footer). Last update [49debe6...96b2fa1](https://codecov.io/gh/huggingface/transformers/pull/3116?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Merged after offline review from @thomwolf and @julien-c "
] | 1,583 | 1,585 | 1,583 | MEMBER | null | Currently, `encode_plus` and `batch_encode_plus` return the same outputs for different models.
This is sub-optimal as we can't do the following for each model:
```py
# given any matching pretrained (tokenizer, model) pair and an input string `sequence`
inputs = tokenizer.encode_plus(sequence, return_tensors="pt")
model(**inputs)
```
This will crash for DistilBERT, as the tokenizer would return `token_type_ids`, which can't be handled by the model.
In order to fix this, each tokenizer has to return model-specific arguments. Most models share the same default arguments, while some handle fewer of them (e.g. DistilBERT, RoBERTa).
This is a mock PR offering a solution using a ~`skip_outputs`~ `return_outputs` argument to tokenizers.
```py
from transformers import DistilBertTokenizer
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-cased")
print(tokenizer.encode_plus("Hey, how are you?"))
```
Returns a dictionary without the token type ids:
```py
{'input_ids': [101, 4403, 117, 1293, 1132, 1128, 136, 102], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1]}
```
Specifying a custom ~`skip_outputs`~ `return_outputs` at initialisation works as expected:
```py
from transformers import DistilBertTokenizer
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-cased", return_outputs=["attention_mask", "token_type_ids"])
print(tokenizer.encode_plus("Hey, how are you?"))
```
```py
{'input_ids': [101, 4403, 117, 1293, 1132, 1128, 136, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1]}
```
or with a custom ~skipped~ output:
```py
from transformers import DistilBertTokenizer
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-cased", return_outputs=["token_type_ids"])
print(tokenizer.encode_plus("Hey, how are you?"))
```
```py
{'input_ids': [101, 4403, 117, 1293, 1132, 1128, 136, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0]}
```
This also works with saving/reloading:
```py
from transformers import DistilBertTokenizer
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-cased", return_outputs=["token_type_ids"])
print(tokenizer.encode_plus("Hey, how are you?"))
tokenizer.save_pretrained("xxx")
tokenizer = DistilBertTokenizer.from_pretrained("xxx")
print(tokenizer.encode_plus("Hey, how are you?"))
```
Returns the following:
```py
{'input_ids': [101, 4403, 117, 1293, 1132, 1128, 136, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0]}
{'input_ids': [101, 4403, 117, 1293, 1132, 1128, 136, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0]}
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3116/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3116",
"html_url": "https://github.com/huggingface/transformers/pull/3116",
"diff_url": "https://github.com/huggingface/transformers/pull/3116.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3116.patch",
"merged_at": 1583776139000
} |
https://api.github.com/repos/huggingface/transformers/issues/3115 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3115/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3115/comments | https://api.github.com/repos/huggingface/transformers/issues/3115/events | https://github.com/huggingface/transformers/pull/3115 | 575,018,815 | MDExOlB1bGxSZXF1ZXN0MzgzMjQyNDgx | 3,115 | fix: passing config as Layer trainable param | {
"login": "gthb",
"id": 153580,
"node_id": "MDQ6VXNlcjE1MzU4MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/153580?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gthb",
"html_url": "https://github.com/gthb",
"followers_url": "https://api.github.com/users/gthb/followers",
"following_url": "https://api.github.com/users/gthb/following{/other_user}",
"gists_url": "https://api.github.com/users/gthb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gthb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gthb/subscriptions",
"organizations_url": "https://api.github.com/users/gthb/orgs",
"repos_url": "https://api.github.com/users/gthb/repos",
"events_url": "https://api.github.com/users/gthb/events{/privacy}",
"received_events_url": "https://api.github.com/users/gthb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"That's great, thanks a lot @gthb "
] | 1,583 | 1,584 | 1,583 | CONTRIBUTOR | null | Lurking bugs discovered while working on other stuff. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3115/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3115",
"html_url": "https://github.com/huggingface/transformers/pull/3115",
"diff_url": "https://github.com/huggingface/transformers/pull/3115.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3115.patch",
"merged_at": 1583355550000
} |
https://api.github.com/repos/huggingface/transformers/issues/3114 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3114/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3114/comments | https://api.github.com/repos/huggingface/transformers/issues/3114/events | https://github.com/huggingface/transformers/pull/3114 | 575,004,022 | MDExOlB1bGxSZXF1ZXN0MzgzMjI5NTUy | 3,114 | Rename BartForMaskedLM -> BartForConditionalGeneration | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3114?src=pr&el=h1) Report\n> Merging [#3114](https://codecov.io/gh/huggingface/transformers/pull/3114?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f631e01d2c78614416655a85955f326636f69825?src=pr&el=desc) will **decrease** coverage by `1%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3114?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3114 +/- ##\n==========================================\n- Coverage 77.82% 76.82% -1.01% \n==========================================\n Files 98 98 \n Lines 16422 16425 +3 \n==========================================\n- Hits 12781 12618 -163 \n- Misses 3641 3807 +166\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3114?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/3114/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.91% <100%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3114/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.09% <100%> (+0.03%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3114/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3114/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3114/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3114/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96% <0%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3114/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3114/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.56% <0%> (-0.31%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3114?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3114?src=pr&el=footer). Last update [f631e01...b3e0a1c](https://codecov.io/gh/huggingface/transformers/pull/3114?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Docs:\r\n\r\n"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3114/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3114",
"html_url": "https://github.com/huggingface/transformers/pull/3114",
"diff_url": "https://github.com/huggingface/transformers/pull/3114.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3114.patch",
"merged_at": 1583448079000
} |