Column types (from the dataset viewer): string: url, labels_url, comments_url, events_url, html_url, node_id, title, body, timeline_url; categorical string: repository_url, state, author_association, active_lock_reason, state_reason; int64: id, number, created_at, updated_at, closed_at (closed_at nullable); bool: locked, draft; dict: user, assignee, reactions, pull_request; list: labels, assignees; sequence: comments; body is nullable.

url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/4413 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4413/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4413/comments | https://api.github.com/repos/huggingface/transformers/issues/4413/events | https://github.com/huggingface/transformers/pull/4413 | 619,729,256 | MDExOlB1bGxSZXF1ZXN0NDE5MTI3ODI4 | 4,413 | Modify example of usage | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | I followed Google's usage example for its ELECTRA small model, but I saw that it is not meaningful, so I created a better example | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4413/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4413",
"html_url": "https://github.com/huggingface/transformers/pull/4413",
"diff_url": "https://github.com/huggingface/transformers/pull/4413.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4413.patch",
"merged_at": 1589840254000
} |
https://api.github.com/repos/huggingface/transformers/issues/4412 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4412/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4412/comments | https://api.github.com/repos/huggingface/transformers/issues/4412/events | https://github.com/huggingface/transformers/issues/4412 | 619,717,325 | MDU6SXNzdWU2MTk3MTczMjU= | 4,412 | Tensorflow NER Training script Not working | {
"login": "albertnanda",
"id": 20819507,
"node_id": "MDQ6VXNlcjIwODE5NTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/20819507?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertnanda",
"html_url": "https://github.com/albertnanda",
"followers_url": "https://api.github.com/users/albertnanda/followers",
"following_url": "https://api.github.com/users/albertnanda/following{/other_user}",
"gists_url": "https://api.github.com/users/albertnanda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertnanda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertnanda/subscriptions",
"organizations_url": "https://api.github.com/users/albertnanda/orgs",
"repos_url": "https://api.github.com/users/albertnanda/repos",
"events_url": "https://api.github.com/users/albertnanda/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertnanda/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"- `transformers` version: 2.9.1\r\n- Platform: Linux-3.10.0-1062.9.1.el7.x86_64-x86_64-with-centos-7.7.1908-Core\r\n- Python version: 3.6.10\r\n- PyTorch version (GPU?): not installed (NA)\r\n- Tensorflow version (GPU?): 2.2.0 (True)\r\n- Using GPU in script?: Yes both single gpu and multi-gpu training\r\n- Using distributed or parallel set-up in script?: Yes",
"Error log:\r\n ValueError: Variable <tf.Variable 'tf_bert_for_token_classification/bert/pooler/dense/kernel:0' shape=(768, 768) dtype=float32> has `None` for gradient. Please make sure that all of your ops have\r\n a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.",
"Tried with TF 2.0:\r\nError Message: AttributeError: module 'tensorflow_core._api.v2.config' has no attribute 'list_physical_devices'\r\nTF 2.1: Same as TF 2.2 i.e. ValueError: Variable <tf.Variable 'tf_bert_for_token_classification/bert/pooler/dense/kernel:0' shape=(768, 768) dtype=float32> has `None` for gradient. Please make sure that all of your ops have\r\n a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.",
"I had same issue.\r\nadd ```--mode token-classification``` to the command\r\nreference code https://github.com/huggingface/transformers/blob/18d233d52588b4e08dc785fbfecd77529e9effa6/src/transformers/trainer_tf.py#L380",
"Thanks @linhx13 ..Works fine.."
] | 1,589 | 1,589 | 1,589 | NONE | null | # 🐛 Bug
Tensorflow NER Training script Not working
## Information
I am following the exact guide at https://github.com/huggingface/transformers/tree/master/examples/token-classification
Model I am using (Bert, XLNet ...): BERT
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
python run_tf_ner.py --data_dir ./ \
--labels ./labels.txt \
--model_name_or_path $BERT_MODEL \
--output_dir $OUTPUT_DIR \
--max_seq_length $MAX_LENGTH \
--num_train_epochs $NUM_EPOCHS \
--per_device_train_batch_size $BATCH_SIZE \
--save_steps $SAVE_STEPS \
--seed $SEED \
--do_train \
--do_eval \
--do_predict
The task I am working on is:
Training NER on GermEval data
Steps to reproduce the behavior:
Follow the official guide at https://github.com/huggingface/transformers/tree/master/examples/token-classification
<!-- ValueError: Variable <tf.Variable 'tf_bert_for_token_classification/bert/pooler/dense/kernel:0' shape=(768, 768) dtype=float32> has `None` for gradient. Please make sure that all of your ops have
a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!--
- `transformers` version: 2.9.1
- Platform: Linux-3.10.0-1062.9.1.el7.x86_64-x86_64-with-centos-7.7.1908-Core
- Python version: 3.6.10
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: Yes, both Single and multi-GPU training
- Using distributed or parallel set-up in script?: Yes-->
- `transformers` version:
I guess the documentation is not updated for TensorFlow training; an additional parameter, `logging_dir`, is required in the TF case. (The workaround from the comments is sketched after this record.)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4412/timeline | completed | null | null |
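The workaround reported in this thread's comments is to pass the task mode explicitly. A sketch of the amended invocation, using the same `$VARS` as in the report above:

```bash
python run_tf_ner.py --data_dir ./ \
  --labels ./labels.txt \
  --model_name_or_path $BERT_MODEL \
  --output_dir $OUTPUT_DIR \
  --max_seq_length $MAX_LENGTH \
  --num_train_epochs $NUM_EPOCHS \
  --per_device_train_batch_size $BATCH_SIZE \
  --save_steps $SAVE_STEPS \
  --seed $SEED \
  --mode token-classification \
  --do_train --do_eval --do_predict
```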
https://api.github.com/repos/huggingface/transformers/issues/4411 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4411/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4411/comments | https://api.github.com/repos/huggingface/transformers/issues/4411/events | https://github.com/huggingface/transformers/issues/4411 | 619,686,678 | MDU6SXNzdWU2MTk2ODY2Nzg= | 4,411 | Pipeline for Conditional Generation (T5 type models) | {
"login": "enzoampil",
"id": 39557688,
"node_id": "MDQ6VXNlcjM5NTU3Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/39557688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enzoampil",
"html_url": "https://github.com/enzoampil",
"followers_url": "https://api.github.com/users/enzoampil/followers",
"following_url": "https://api.github.com/users/enzoampil/following{/other_user}",
"gists_url": "https://api.github.com/users/enzoampil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enzoampil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enzoampil/subscriptions",
"organizations_url": "https://api.github.com/users/enzoampil/orgs",
"repos_url": "https://api.github.com/users/enzoampil/repos",
"events_url": "https://api.github.com/users/enzoampil/events{/privacy}",
"received_events_url": "https://api.github.com/users/enzoampil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Yes having a \"Conditional Generation\" pipeline makes sense given that variety of tasks can be solved using it. We can use T5, BART for these tasks as well as the new Encoder-Decoder. I would like to call it `TextToTextPipeline` though, since we can solve non-generative tasks also as demonstrated in the T5 paper. I think this pipeline will be really useful.",
"Technically, any task using Text-To-Text is generative in nature right? But yeah, agree `TextToTextPipeline` will make the use case clearer :smile:\r\n\r\nHoping to get feedback from @patrickvonplaten before attempting this",
"Yeah. To be honest, I'm not sure whether this is a good idea. The pipelines are supposed to be directly related to a task such as `translation`, `summarization` which are specific cases of `text2text` applications. \r\n\r\nI think for every task we should introduce a new `pipeline` before starting to have different levels of abstractions in `pipelines`. A `TextToTextPipeline could become quite a mess regarding different possible input formats, different prefixes (for T5), etc...For general tasks such as these ones I'd prefer to just implement your own code using the `.generate()` function. \r\n\r\n@LysandreJik - what do you think? ",
"I think from a high level, more than just thinking about `text2text`, I'm foreseeing the future where multi-task learning becomes a standard way of deploying ML models. Having a pipeline to introduce this can be one step to accelerating that future.\r\n\r\nAlthough, I do understand that `text2text` is just one approach to doing this, but in my opinion, it's the most promising one at the moment, so it's a good interface to start with for a multi task model pipeline.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I'm not sure that T5 is the most promising place to do a multi-task pipeline, since their results in that paper suggested it was hard to significantly beat the baseline of just fine tuning on the target task. \r\n\r\nThe recent AdapterHub library built off of HuggingFace seems a better place for building out multitask systems/pipelines imo. But of course the library designers have more intuition on this.",
"I'm don't think anyone is arguing for the T5 model specifically, just that there is a trend towards `text2text` as a common method of doing multitask learning for NLP (GPT-3 frames tasks like this too for example).",
"> I'm don't think anyone is arguing for the T5 model specifically, just that there is a trend towards `text2text` as a common method of doing multitask learning for NLP (GPT-3 frames tasks like this too for example).\r\n\r\nFair enough. I'm not one to argue against a feature, even if I wouldn't use it much myself. I've been using `text2text` myself for multiple tasks. \r\n\r\nMostly I just meant the multitask part of `text2text` is going to be a little tricky to abstract away conveniently into a pipeline. The main complexity there is mixing the proportion of each task / batch correctly. The T5 paper suggests performance and weights are very specific to the multitask learning, and if its not tuned properly the performance will be hurt by using multitasks. Uniform mixing for example performs quite poorly. I suspect that problem would apply to most `text2text` paradigms.\r\n\r\n What I've been doing myself is using a custom DataLoader class that handles the mixing of batch proportions of each task. A pipeline that can integrate something like that would be terrific to have.",
"Hey everybody, after thinking a bit more about it, I think it does make sense to add a `ConditionalTextGeneration` pipeline which will be the equivalent of `TextGenerationPipeline` for all models in `AutoModelForSeq2Seq`. It should look very similar to the `TextGenerationPipeline` (probably we more or less the same at the moment), but it will give us more freedom in the future (for example when we add `decoder_input_ids` to the generation). \r\n@sshleifer , @yjernite , @LysandreJik - what are your thoughts on this?",
"@patrickvonplaten happy to work on a PR for this if team agrees it makes sense :smile:",
"I think we definitely need something like that.\r\n\r\nI'd probably go with a more explicit name though: e.g. `TextToTextPipeline` or `Text2TextGenerationPipeline`. `ConditionalTextGeneration` might cover other uses in the future (e.g. multiple input texts or multimodal inputs)",
"Such a pipeline would be very welcome, indeed!",
"Awesome, will send a PR in the next week or so :smile:",
"I also want to work on this, @enzoampil let me know if you want to collaborate on the PR :)",
"Sure thing, maybe we can collab on the same fork? :)"
] | 1,589 | 1,599 | 1,599 | CONTRIBUTOR | null | As text-to-text models (like T5) increase the accessibility of multi-task learning, it also makes sense to have a flexible "Conditional Generation" pipeline.
For example, I should be able to use this pipeline for a multitude of tasks depending on how I format the text input (examples in Appendix D of the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf)). As a baseline, this should be able to work on `T5ForConditionalGeneration` and allow for any of the tasks that are learned by the open sourced T5 model.
Since T5 isn't usable for `TextGenerationPipeline`, I propose we add a `ConditionalGenerationPipeline`.
Please do let me know if there is an existing way to perform the above via pipelines, or if adding a pipeline doesn't make sense for this; otherwise, I can submit a PR for the above `ConditionalGenerationPipeline` 🙂 (a minimal `.generate()` sketch follows this record) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4411/timeline | completed | null | null |
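Several commenters above point to `.generate()` as the existing way to do conditional generation without a dedicated pipeline. A minimal sketch of that route; the model name and the `summarize:` task prefix are illustrative choices from the T5 paper, not part of the proposal itself:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# T5 frames every task as text-to-text; the prefix selects the task.
text = "summarize: studies have shown that owning a dog is good for you because ..."
input_ids = tokenizer.encode(text, return_tensors="pt")

output_ids = model.generate(input_ids, max_length=40, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```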
https://api.github.com/repos/huggingface/transformers/issues/4410 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4410/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4410/comments | https://api.github.com/repos/huggingface/transformers/issues/4410/events | https://github.com/huggingface/transformers/pull/4410 | 619,685,747 | MDExOlB1bGxSZXF1ZXN0NDE5MTAzNTI3 | 4,410 | Remove pytorch codes in Class TFXLNetMainLayer | {
"login": "ZhuBaohe",
"id": 35796307,
"node_id": "MDQ6VXNlcjM1Nzk2MzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/35796307?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhuBaohe",
"html_url": "https://github.com/ZhuBaohe",
"followers_url": "https://api.github.com/users/ZhuBaohe/followers",
"following_url": "https://api.github.com/users/ZhuBaohe/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhuBaohe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhuBaohe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhuBaohe/subscriptions",
"organizations_url": "https://api.github.com/users/ZhuBaohe/orgs",
"repos_url": "https://api.github.com/users/ZhuBaohe/repos",
"events_url": "https://api.github.com/users/ZhuBaohe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhuBaohe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Sorry I don't really understand how this PR removes pytorch code in `TFXLNetMainLayer` - can you explain a bit more in-detail?",
"@patrickvonplaten \r\n\r\nThe codes in file modeling_tf_xlnet.py is the XLNet Model of tensorflow implementation. So the parameter **head_mask** should have a type of tf.Tensor or Numpy array (see Line 780 ).\r\n\r\nBut in Lines 643-650, the parameter **head_mask** uses the methods of the pytorch, such as \"expand\" or \"unsqueeze\" which can't apply to tf.Tensor or Numpy array. \r\n\r\nActually these codes are copied from modeling_xlnet.py by mistake and should be removed.",
"Perfect thanks a lot! \r\n\r\n@LysandreJik - the RUN_SLOW=1 tests all poss for TFXLNetMainLayer.\r\n\r\nGood to merge for me!"
] | 1,589 | 1,590 | 1,590 | CONTRIBUTOR | null | This PR removes PyTorch code from the `TFXLNetMainLayer` class. (The TensorFlow equivalents of the offending ops are sketched after this record.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4410/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4410",
"html_url": "https://github.com/huggingface/transformers/pull/4410",
"diff_url": "https://github.com/huggingface/transformers/pull/4410.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4410.patch",
"merged_at": 1590497488000
} |
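To make the bug concrete: `unsqueeze` and `expand` are `torch.Tensor` methods that do not exist on `tf.Tensor`, so the copied lines could never have run. A short illustration of the failure and the TensorFlow equivalents (shapes are illustrative):

```python
import tensorflow as tf

head_mask = tf.ones((12,))  # one entry per attention head
# head_mask.unsqueeze(0) or head_mask.expand(...) raises AttributeError on a tf.Tensor
head_mask = tf.expand_dims(head_mask, axis=0)     # TF analogue of torch's unsqueeze
head_mask = tf.broadcast_to(head_mask, (24, 12))  # TF analogue of torch's expand
```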
https://api.github.com/repos/huggingface/transformers/issues/4409 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4409/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4409/comments | https://api.github.com/repos/huggingface/transformers/issues/4409/events | https://github.com/huggingface/transformers/pull/4409 | 619,630,368 | MDExOlB1bGxSZXF1ZXN0NDE5MDcyNTE4 | 4,409 | add model card for t5-base-squad | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Nice! cc @patrickvonplaten "
] | 1,589 | 1,590 | 1,589 | MEMBER | null | Model card for https://huggingface.co/valhalla/t5-base-squad (a minimal usage sketch follows this record) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4409/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4409/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4409",
"html_url": "https://github.com/huggingface/transformers/pull/4409",
"diff_url": "https://github.com/huggingface/transformers/pull/4409.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4409.patch",
"merged_at": 1589840234000
} |
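A minimal usage sketch for the model this card covers. The `question: ... context: ...` input format is an assumption based on common T5 SQuAD fine-tunes; the card itself is authoritative for the exact format:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("valhalla/t5-base-squad")
model = T5ForConditionalGeneration.from_pretrained("valhalla/t5-base-squad")

# Assumed input format: "question: <q> context: <c>"
text = "question: What does the transformers library provide? context: The transformers library provides pretrained NLP models."
input_ids = tokenizer.encode(text, return_tensors="pt")
answer_ids = model.generate(input_ids)
print(tokenizer.decode(answer_ids[0], skip_special_tokens=True))
```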
https://api.github.com/repos/huggingface/transformers/issues/4408 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4408/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4408/comments | https://api.github.com/repos/huggingface/transformers/issues/4408/events | https://github.com/huggingface/transformers/issues/4408 | 619,623,571 | MDU6SXNzdWU2MTk2MjM1NzE= | 4,408 | Request to add MobileBert | {
"login": "msahamed",
"id": 8838524,
"node_id": "MDQ6VXNlcjg4Mzg1MjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8838524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/msahamed",
"html_url": "https://github.com/msahamed",
"followers_url": "https://api.github.com/users/msahamed/followers",
"following_url": "https://api.github.com/users/msahamed/following{/other_user}",
"gists_url": "https://api.github.com/users/msahamed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/msahamed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/msahamed/subscriptions",
"organizations_url": "https://api.github.com/users/msahamed/orgs",
"repos_url": "https://api.github.com/users/msahamed/repos",
"events_url": "https://api.github.com/users/msahamed/events{/privacy}",
"received_events_url": "https://api.github.com/users/msahamed/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Duplicate of #4185"
] | 1,589 | 1,589 | 1,589 | NONE | null | # 🌟 New model addition
MobileBERT
## Model description
MobileBERT is a thin version of BERT_LARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks.
## Open source status
* [ ] the model implementation is available: (give details)
https://github.com/google-research/google-research/tree/master/mobilebert
* [ ] the model weights are available: (give details)
https://storage.googleapis.com/cloud-tpu-checkpoints/mobilebert/uncased_L-24_H-128_B-512_A-4_F-4_OPT.tar.gz
* [ ] who are the authors: (mention them, if possible by @gh-username)
Google LLC
Xiaodan Song
Zhiqing Sun
Hongkun Yu
Denny Zhou
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4408/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4407 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4407/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4407/comments | https://api.github.com/repos/huggingface/transformers/issues/4407/events | https://github.com/huggingface/transformers/pull/4407 | 619,610,625 | MDExOlB1bGxSZXF1ZXN0NDE5MDYxMDY1 | 4,407 | fix(run_language_modeling): use arg overwrite_cache | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | In `run_language_modeling.py`, the `overwrite_cache` argument was parsed but never used. (A sketch of the fix follows this record.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4407/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4407",
"html_url": "https://github.com/huggingface/transformers/pull/4407",
"diff_url": "https://github.com/huggingface/transformers/pull/4407.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4407.patch",
"merged_at": 1589816256000
} |
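A hypothetical sketch of what consuming the flag looks like; `get_dataset` mirrors the shape of the example script's data-loading helper, and the names here are illustrative rather than a quote of the actual diff:

```python
from transformers import TextDataset

def get_dataset(args, tokenizer, evaluate=False):
    file_path = args.eval_data_file if evaluate else args.train_data_file
    return TextDataset(
        tokenizer=tokenizer,
        file_path=file_path,
        block_size=args.block_size,
        overwrite_cache=args.overwrite_cache,  # previously parsed but silently dropped
    )
```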
https://api.github.com/repos/huggingface/transformers/issues/4406 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4406/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4406/comments | https://api.github.com/repos/huggingface/transformers/issues/4406/events | https://github.com/huggingface/transformers/issues/4406 | 619,590,720 | MDU6SXNzdWU2MTk1OTA3MjA= | 4,406 | Summarization Fine Tuning | {
"login": "kevinlu1248",
"id": 26889185,
"node_id": "MDQ6VXNlcjI2ODg5MTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/26889185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kevinlu1248",
"html_url": "https://github.com/kevinlu1248",
"followers_url": "https://api.github.com/users/kevinlu1248/followers",
"following_url": "https://api.github.com/users/kevinlu1248/following{/other_user}",
"gists_url": "https://api.github.com/users/kevinlu1248/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kevinlu1248/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kevinlu1248/subscriptions",
"organizations_url": "https://api.github.com/users/kevinlu1248/orgs",
"repos_url": "https://api.github.com/users/kevinlu1248/repos",
"events_url": "https://api.github.com/users/kevinlu1248/events{/privacy}",
"received_events_url": "https://api.github.com/users/kevinlu1248/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"First thing you can try is fine-tune T5/BART for summarization on your corpus and see how it performs.",
"@patil-suraj where can I find a guide to this? I'm a bit confused by the documentation. ",
"[Here's](https://github.com/huggingface/transformers/tree/master/examples/summarization/bart) the official example which fine-tunes BART on CNN/DM, you can just replace the cnn/dm dataset with your own summerization dataset.",
"@patil-suraj Thanks for the example. I'm wondering if there is any simpler way to get started since I'm planning on training it in a Kaggle notebook due to GPU constraints, because otherwise I may need to copy paste entire folder into a Kaggle notebook.",
"@kevinlu1248 \r\nThis [colab](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) shows how to fine-tune T5 with lightening. This is just the self-contained version of official example. You should be able to use the same `Trainer`, just replace the model with BART and use you own dataset.",
"@patil-suraj Thanks, I'll look into it.",
"> [Here's](https://github.com/huggingface/transformers/tree/master/examples/summarization/bart) the official example which fine-tunes BART on CNN/DM, you can just replace the cnn/dm dataset with your own summerization dataset.\r\n\r\nHi @patil-suraj, I am following that example and have my data in that format, and I can see the process using GPU/CPU, but I can't get tensorboard working. Do you have any hints? I am happy to contribute to documentation once I get it working.",
"@sam-qordoba lightning handles logging itself and by default the tensorboard logs are saved in lightning_logs directory. So you should be able see the logs by passing lightning_logs as the logdir to tensorboard command.",
"Thanks @patil-suraj ",
"Hey @patil-suraj, I had OOM issues on Colab, so moved to a VM with 56GB RAM, and the behaviour is the same as on Colab: memory usage grows, until it uses up everything available (I even added 32GB of swap, so, it's a really impressive amount of memory usage), until I get locked out of the machine... and the only time it writes to `lightning_logs` is right when it starts. \r\n\r\n```sh\r\njupyter@pytorch-20200529-155153:~/lightning_logs$ tree\r\n.\r\n└── version_0\r\n ├── events.out.tfevents.1590794134.pytorch-20200529-155753.8733.0\r\n └── hparams.yaml\r\n\r\n1 directory, 2 files\r\n```\r\n\r\n`nvidia-smi` looks like this:\r\n\r\n```\r\njupyter@pytorch-20200529-155753:~$ nvidia-smi \r\nSat May 30 00:07:12 2020 \r\n+-----------------------------------------------------------------------------+\r\n| NVIDIA-SMI 418.87.01 Driver Version: 418.87.01 CUDA Version: 10.1 |\r\n|-------------------------------+----------------------+----------------------+\r\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\r\n|===============================+======================+======================|\r\n| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |\r\n| N/A 77C P0 35W / 70W | 2579MiB / 15079MiB | 0% Default |\r\n+-------------------------------+----------------------+----------------------+\r\n \r\n+-----------------------------------------------------------------------------+\r\n| Processes: GPU Memory |\r\n| GPU PID Type Process name Usage |\r\n|=============================================================================|\r\n| 0 8733 C /opt/conda/bin/python 2569MiB |\r\n+-----------------------------------------------------------------------------+\r\n```\r\n\r\nThe cell `trainer.fit(model)` outputs the model definition, but no progress bar on anything,\r\n\r\n``` \r\n\r\n | Name | Type | Params\r\n-----------------------------------------------------------------------------------------------------------------\r\n0 | model | T5ForConditionalGeneration | 222 M \r\n1 | model.shared | Embedding | 24 M \r\n2 | model.encoder | T5Stack | 109 M \r\n...\r\n514 | model.decoder.block.11.layer.2.dropout | Dropout | 0 \r\n515 | model.decoder.final_layer_norm | T5LayerNorm | 768 \r\n516 | model.decoder.dropout | Dropout | 0 \r\n517 | model.lm_head | Linear | 24 M \r\nSelected optimization level O1: Insert automatic casts around Pytorch functions and Tensor methods.\r\n\r\nDefaults for this optimization level are:\r\nenabled : True\r\nopt_level : O1\r\ncast_model_type : None\r\npatch_torch_functions : True\r\nkeep_batchnorm_fp32 : None\r\nmaster_weights : None\r\nloss_scale : dynamic\r\nProcessing user overrides (additional kwargs that are not None)...\r\nAfter processing overrides, optimization options are:\r\nenabled : True\r\nopt_level : O1\r\ncast_model_type : None\r\npatch_torch_functions : True\r\nkeep_batchnorm_fp32 : None\r\nmaster_weights : None\r\nloss_scale : dynamic\r\n```\r\n\r\nSorry to keep bothering you, but do you have any hints? It's hard to know what's going on because it doesn't seem to log",
"It shouldn't take that much memory, did you try reducing the batch size ?\r\n\r\nAlso seems that you are using fp16 here. I haven't tried it with fp16 yet.\r\n\r\ntagging @sshleifer ",
"Ok, I tried fp16 as a \"maybe this will use less memory\" experiment, I will try without. I tried batch size of 4, could go lower I guess. Should I just double the learning rate each time I halve the batch size, or are other changes needed?",
"Could somebody who has fine-tuned BART give me an estimate of how long it takes / how many epochs until convergence? Also any tricks to speed it up (weight freezing etc)?\r\n\r\n1 epoch takes c. 150 hrs for my dataset so wondering how many I need...",
"Sounds like you have a huge dataset?\r\nIt's tough to know exactly how many you will need, but for xsum and cnn most of the model's I've need have required 4-6 to converged.\r\nThe [original authors] https://github.com/pytorch/fairseq/blob/master/examples/bart/README.summarization.md#4-fine-tuning-on-cnn-dm-summarization-task say 15-20K Steps.\r\n\r\nI have had to go down to batch size=1 or 2 on some occasions. \r\nYou can use `--gradient_accumulation_steps` to keep the \"effective\" batch size (how many examples your model processes per backward pass) consistent.\r\n\r\n@sam-qordoba is your `Dataset/DataLoader` putting all the examples in memory before training? That could be an issue on a large dataset.\r\n\r\n",
"You can also freeze the `BartForConditionalGeneration.model.encoder` using the function below to reduce memory cost.\r\n```\r\ndef freeze_part(model: nn.Module):\r\n for par in model.parameters():\r\n par.requires_grad = False\r\n```\r\n\r\nYou can also use `val_check_interval` in lightning to check validation statistics more frequently, but unfortunately your checkpoints will still be saved at the end of every epoch.",
"@sshleifer thanks for coming back with this- all very helpful.\r\n\r\nYes- essentially I am just trying out using BART to for longer docs (arXiv/PubMed) as a baseline to compare more sophisticated methods against. This means training set has 300k samples and only 1 sample fits on the GPU at once (12Gb- using 1,024 input length).\r\n\r\nLots for me to play around with and see what works well. Thanks for your help.",
"> Yes- essentially I am just trying out using BART to for longer docs (arXiv/PubMed) as a baseline to compare more sophisticated methods against\r\n\r\n@alexgaskell10 If you are interested in using BART for long documents then keep an eye here.\r\nhttps://github.com/patil-suraj/longbart\r\n\r\nI'm trying to convert BART to it's long version using longformer's sliding-window attention.\r\n\r\nI've been able to replace BART encoder's `SelfAttention` with `LongformerSelfAttention` with 4096 max length. Now I'm working on adding gradient checkpointing to allow it to train on smaller GPU's. Hope to finish it soon. \r\n\r\ngradient checkpointing and fp16 with '02' opt level should allow to use larger batch size",
"@patil-suraj thanks for this- adapting BART for LongformerSelfAttention was actually something I was going to start looking into over the next couple of weeks. Thanks for sharing- I'll be sure to give it a go soon.",
"Hey @patil-suraj, any updates on your latest progress on LongBART? Thinking about diving into a similar block of work: expanding BART via Longformer",
"Hi @virattt , I've been able to replace bart encoder's self attention with sliding window attention. Also added gradient checkpoiting in the encoder. \r\n\r\nGradient checkpoiting in decoder is not working so going to remove it for now. Will update the repo this weekend and will put some instructions in the readme.",
"Sounds great, thanks @patil-suraj ",
"Would love to hear `LongBart` experimental results whenever they are available!",
"@sshleifer I have been playing around with `LongBart` recently and have some preliminary experimental results. This is using @patil-suraj 's longbart repo fine-tuned on the PubMed dataset using the hf summarization finetune.py script.\r\n\r\nThe best result so far is ROUGE-1 = 36.8 (for comparison, fine-tuning vanilla `BART` on PubMed and truncating articles at 1024 tokens I got 42.3 ROUGE-1). I have only run a few configs so far and will be running many more so I expect this to improve. Next steps:\r\n- Have been only using a 12Gb GPU so far so have frozen the embeddings and encoder otherwise too large. I have a much larger cluster I can move to so will start running trials on this soon which will give more freedom to try different configs\r\n- I am only fine-tuning at the moment. Might explore doing some pre-training although this may be too expensive.\r\n\r\nLet me know if there is anything you would like to see and I'll try to schedule it in.",
"Hi @alexgaskell10 , did you use the code as it is ? I think we'll need to train the embeddings for few epochs then we can freeze it.\r\nHowever without freezing the embeddings I ran into OOM halfway through the epoch even with bart-base with '02' fp16 on 16GB V100.\r\n\r\n@sshleifer do you have any ideas why this might be happening ? It went well till 60% of first epoch then OOM. Batch size was 1 and max_seq_len 4096 ?\r\n\r\n@alexgaskell10 can you share more details, how many epochs, batch size, fp16 or not ? ",
"Yes, I used the code as is (minor changes to integrate with hf finetune.py script). I agree that the embeddings and encoder should not be frozen from the beginning but I couldn't fit it on my 12Gb GPU. Once I get setup on the cluster I'll try this.\r\n\r\nMore details on all my runs so far can be found in my [wandb project](https://app.wandb.ai/alexgaskell/Covid01-scripts_models/overview?workspace=user-alexgaskell). To answer your question, max a couple epochs so far, batch size between 4 and 16 depending on what fits, not fp16 so far (haven't set up yet but will do soon).",
"Thanks @alexgaskell10 , I think you'll be able to use bart-base with fp16 and max 2048 seq len without frezzing embdddings on 12GB GPU ",
"@patil-suraj: \r\n- 4096 is a very large max_seq_len, but I know that doesn't answer your question. I would guess that the answer is that you got a really big batch. The batches are not all the same size. We trim them to save padding computation. If you are on one GPU you can use `--sortish_sampler` which ensures that the first batch is the largest, so you get OOM at the beginning of the epoch at least. You also get hopefully a further reduction in padding computation. \r\n- I would be interested to know how much `--sortish_sampler` reduces the training cost of 1 epoch with other parameters fixed. \r\n\r\n\r\n@alexgaskell10 : \r\nThanks for sharing your wandb, it makes understanding what you're doing way easier.\r\n\r\n- From [pegasus](https://arxiv.org/pdf/1912.08777.pdf) Table 2, it seems like SOTA for PubMed is around `45.49/19.90/27.69`. (Rouge 1, Rouge 2, Rouge L) So there is still some headroom! (Note we will add pegasus in the library sometime in July).\r\n- From looking at your wandb logs, your models are still getting better when training stops. When you move to a beefier setup you might consider training for longer. \r\n- I think there is general consensus that Rouge2 and Rouge-L are better metrics than Rouge-1. \r\n\r\nSome questions I would love to know the answer to (for any dataset):\r\n\r\n1. which `--model_name_or_path` is the best starting point for finetuning: bart-base vs. bart-large vs. bart-large-xsum vs distilbart-xsum-12-6, for example.\r\n2. How does `LongBart` compare in performance to `BartForConditionalGeneration`?\r\n3. Does increasing `--adam_eps` improve performance? Jeremy Howard at fastai recommended this once, and the default 1e-8 seems to be a fairly low setting.\r\n4. What is the impact of `--freeze-encoder` and `--freeze_embeds` on time per epoch, max batch size, and performance.\r\n\r\n",
"@sshleifer thanks for coming back to me. Several of your questions I can answer immediately, the others I will try to take a look at. If you're interested, I have a [separate wandb project](https://app.wandb.ai/alexgaskell/transformers-examples_summarization?workspace=user-alexgaskell) containing a bunch of fine-tuning runs for `BartForConditionalGeneration` on PubMed to act as a baseline for `Longformer`. All of these runs have frozen embs and enc because of size constraints- only batch size 1 or 2 fit on GPU and that didn't perform well. If I get a bigger setup I'll try with these unfrozen and a larger batch size.\r\n\r\nAddressing your questions:\r\n1. I have been using facebook/bart-large-cnn so far- will investigate if I get time\r\n2. This can be seen in the two wandb repos I've shared here and above. So far my best `BartForConditionalGeneration` is 0.426/0.177/0.263 and my best `Longformer` is 0.367/0.120/0.222 so BART is much better so far. However, both of these have frozen embs and enc (and presumably PEGASUS was fine-tuned without these frozen) so there are more experiments to run\r\n3. Haven't looked at this. Will give it a go\r\n4. Freezing both has a big impact (haven't looked at freezing each separately).\r\n- Time per epoch I think order of 3-4x quicker (8hrs vs 24+hrs per epoch using 12Gb GPU)\r\n- Batch size >8x improvement (2 vs 16)\r\n- Performance seemed much better when frozen. Probably due to small batch size training was unstable when using a small batch size. The img below shows a comparison between frozen (grey, bsz=16) and unfrozen (blue, bsz=2).\r\n\r\n<img width=\"1093\" alt=\"Screenshot 2020-07-01 at 22 06 57\" src=\"https://user-images.githubusercontent.com/51463426/86291868-96cbce80-bbe7-11ea-8427-52619710d2fa.png\">\r\n",
"> BartForConditionalGeneration is 0.426/0.177/0.263 and my best Longformer is 0.367/0.120/0.222 \r\n\r\nThere's a bug related to masking that could be the reason for the performance drop. I started working on `LongformerEncoderDecoder` and have a fix [here](https://github.com/allenai/longformer/blob/encoderdecoder/longformer/longformer_encoder_decoder.py#L66). \r\n",
"Thanks for flagging, I will take a look. \r\n\r\nIn any case, I have much better `LongBart` results now (0.433, 0.189, 0.273). I found that fine-tuning `LongBart` without freezing the embs or enc worked much better, whereas `Bart` performed better when embs and enc were frozen. This probably makes sense given that `LongBart` is using weight transfer so needs more comprehensive training to be effective. Hopefully the bug fix will improve these results even more. "
] | 1,589 | 1,608 | 1,608 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
I tried using T5 and BART, but abstractive summarization on scientific texts does not seem to give the results I want, since I think they are both trained on news corpora. I have scraped all of the free PMC articles and am thinking about fine-tuning a seq2seq model between the articles and their abstracts to make an abstractive summarizer for scientific texts. This Medium article (https://medium.com/huggingface/encoder-decoders-in-transformers-a-hybrid-pre-trained-architecture-for-seq2seq-af4d7bf14bb8) provides a bit of an introduction but does not quite go into detail.
I'm not stuck on a specific error; I just don't know how to approach this problem. (A minimal BART inference sketch follows this record.)
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
https://stackoverflow.com/questions/61826443/train-custom-seq2seq-transformers-model
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4406/timeline | completed | null | null |
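For anyone skimming this thread, a minimal inference sketch of the BART baseline under discussion, written against a recent transformers version; the checkpoint name is the one mentioned later in the thread, and the article text is a placeholder:

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

article = "Scientists scraped PubMed Central articles and their abstracts ..."  # placeholder
inputs = tokenizer(article, max_length=1024, truncation=True, return_tensors="pt")
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=142)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

Fine-tuning on a custom corpus then follows the linked CNN/DM example, swapping in the scraped article/abstract pairs as the dataset.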
https://api.github.com/repos/huggingface/transformers/issues/4405 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4405/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4405/comments | https://api.github.com/repos/huggingface/transformers/issues/4405/events | https://github.com/huggingface/transformers/pull/4405 | 619,567,654 | MDExOlB1bGxSZXF1ZXN0NDE5MDMxNDEw | 4,405 | add BERT trained from review corpus. | {
"login": "howardhsu",
"id": 10661375,
"node_id": "MDQ6VXNlcjEwNjYxMzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/10661375?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/howardhsu",
"html_url": "https://github.com/howardhsu",
"followers_url": "https://api.github.com/users/howardhsu/followers",
"following_url": "https://api.github.com/users/howardhsu/following{/other_user}",
"gists_url": "https://api.github.com/users/howardhsu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/howardhsu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/howardhsu/subscriptions",
"organizations_url": "https://api.github.com/users/howardhsu/orgs",
"repos_url": "https://api.github.com/users/howardhsu/repos",
"events_url": "https://api.github.com/users/howardhsu/events{/privacy}",
"received_events_url": "https://api.github.com/users/howardhsu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Hi @howardhsu, the file path is not correct, it should be something like `model_cards/activebus/BERT_Review/README.md` ",
"I updated paths as suggested, thanks.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4405?src=pr&el=h1) Report\n> Merging [#4405](https://codecov.io/gh/huggingface/transformers/pull/4405?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3e0f06210646a440509efa718b30d18322d6a830&el=desc) will **increase** coverage by `0.02%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4405?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4405 +/- ##\n==========================================\n+ Coverage 78.16% 78.19% +0.02% \n==========================================\n Files 120 120 \n Lines 20058 20058 \n==========================================\n+ Hits 15679 15684 +5 \n+ Misses 4379 4374 -5 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4405?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.93% <0.00%> (+0.82%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4405?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4405?src=pr&el=footer). Last update [3e0f062...466c62e](https://codecov.io/gh/huggingface/transformers/pull/4405?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great: [model pages](https://huggingface.co/activebus)"
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | Add a BERT model trained on a review corpus. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4405/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4405",
"html_url": "https://github.com/huggingface/transformers/pull/4405",
"diff_url": "https://github.com/huggingface/transformers/pull/4405.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4405.patch",
"merged_at": 1589982156000
} |
https://api.github.com/repos/huggingface/transformers/issues/4404 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4404/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4404/comments | https://api.github.com/repos/huggingface/transformers/issues/4404/events | https://github.com/huggingface/transformers/pull/4404 | 619,556,749 | MDExOlB1bGxSZXF1ZXN0NDE5MDIzNzg2 | 4,404 | feat(wandb): display logger | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It don't think it should really be a warning if you don't use wandb.\r\n\r\nThe root issue here is probably discoverability of wandb for users who don't know it? Then it would probably be better solved in documentation.\r\n\r\nWe will start some documentation on `Trainer`/`TFTrainer` in the coming weeks (cc @LysandreJik) we'll mention wandb there. (if you want to help with this let us know)",
"Makes sense, let me know when it's started and I can help writing the section related to wandb."
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | The log message shown when `wandb` is not installed used level `info`, which is not displayed by default.
It has been changed to `warning`. (A minimal sketch follows this record.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4404/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4404",
"html_url": "https://github.com/huggingface/transformers/pull/4404",
"diff_url": "https://github.com/huggingface/transformers/pull/4404.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4404.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4403 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4403/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4403/comments | https://api.github.com/repos/huggingface/transformers/issues/4403/events | https://github.com/huggingface/transformers/pull/4403 | 619,533,934 | MDExOlB1bGxSZXF1ZXN0NDE5MDA3Nzk5 | 4,403 | Map optimizer to correct device after loading from checkpoint. | {
"login": "shaoyent",
"id": 8154586,
"node_id": "MDQ6VXNlcjgxNTQ1ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8154586?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shaoyent",
"html_url": "https://github.com/shaoyent",
"followers_url": "https://api.github.com/users/shaoyent/followers",
"following_url": "https://api.github.com/users/shaoyent/following{/other_user}",
"gists_url": "https://api.github.com/users/shaoyent/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shaoyent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shaoyent/subscriptions",
"organizations_url": "https://api.github.com/users/shaoyent/orgs",
"repos_url": "https://api.github.com/users/shaoyent/repos",
"events_url": "https://api.github.com/users/shaoyent/events{/privacy}",
"received_events_url": "https://api.github.com/users/shaoyent/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you!"
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | Loading from `optimizer.pt` causes the `optimizer` to be mapped to the same device the `optimizer.pt` was saved from. In most cases that is `cuda:0` (saved by the local master), which puts all optimizers on
GPU 0, causing OOM more easily in multi-GPU training.
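A minimal sketch of the idea, assuming `checkpoint_dir`, `local_rank`, and `optimizer` already exist (illustrative, not the exact patch):

```python
import os
import torch

# Map the saved optimizer state onto this process's own device instead of
# the device it was saved from (typically cuda:0 on the saving process).
device = torch.device("cuda", local_rank)
state = torch.load(os.path.join(checkpoint_dir, "optimizer.pt"), map_location=device)
optimizer.load_state_dict(state)
```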
Might fix issues like [#3730](https://github.com/huggingface/transformers/issues/3730). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4403/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4403",
"html_url": "https://github.com/huggingface/transformers/pull/4403",
"diff_url": "https://github.com/huggingface/transformers/pull/4403.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4403.patch",
"merged_at": 1589858166000
} |
https://api.github.com/repos/huggingface/transformers/issues/4402 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4402/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4402/comments | https://api.github.com/repos/huggingface/transformers/issues/4402/events | https://github.com/huggingface/transformers/issues/4402 | 619,457,013 | MDU6SXNzdWU2MTk0NTcwMTM= | 4,402 | Run Language Modeling on 8 TPU cores doesn't seem to terminate | {
"login": "jcblaisecruz02",
"id": 24757547,
"node_id": "MDQ6VXNlcjI0NzU3NTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/24757547?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jcblaisecruz02",
"html_url": "https://github.com/jcblaisecruz02",
"followers_url": "https://api.github.com/users/jcblaisecruz02/followers",
"following_url": "https://api.github.com/users/jcblaisecruz02/following{/other_user}",
"gists_url": "https://api.github.com/users/jcblaisecruz02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jcblaisecruz02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcblaisecruz02/subscriptions",
"organizations_url": "https://api.github.com/users/jcblaisecruz02/orgs",
"repos_url": "https://api.github.com/users/jcblaisecruz02/repos",
"events_url": "https://api.github.com/users/jcblaisecruz02/events{/privacy}",
"received_events_url": "https://api.github.com/users/jcblaisecruz02/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@jcblaisecruz02 \r\nYes, there's a bug in version 2.91 which hangs the trainer. It's been fixed in master branch. Install from master branch for TPU training.\r\n\r\nSee this pull request #4339",
"Is it possible to run `run_language_modeling.py` on more than 8 cores when using pytorch and `xls_spawn`?\r\nAnd what about tensorflow?"
] | 1,589 | 1,592 | 1,589 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): DistilGPT2 & GPT2
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I'm trying to test `run_language_modeling.py` on DistilGPT2 using all 8 TPU cores. Running on 1 core executes fine, but when I attempt to run on all 8 cores, it finishes finetuning then gets stuck on "Training completed. Do not forget to share your model on huggingface.co/models =)" and doesn't terminate.
When I check the output directory, I only see two files: config.json and pytorch_model.bin. There should be seven files in the output directory.
I'm running this on a Colab TPU Notebook.
## To reproduce
Steps to reproduce the behavior:
```
VERSION = "nightly" #@param ["1.5" , "20200325", "nightly"]
!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
!python pytorch-xla-env-setup.py --version $VERSION
!wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-v1.zip
!unzip wikitext-2-v1.zip && rm wikitext-2-v1.zip
!pip install transformers
!git clone https://github.com/huggingface/transformers.git
!python transformers/examples/xla_spawn.py --num_cores 8 \
transformers/examples/language-modeling/run_language_modeling.py \
--output_dir=output \
--model_type=distilgpt2 \
--model_name_or_path=distilgpt2 \
--train_data_file=wikitext-2/wiki.train.tokens \
--do_train \
--overwrite_output_dir
```
## Expected behavior
Script terminates and 7 files are in the output folder:
* config.json
* pytorch_model.bin
* tokenizer_config.json
* vocab.json
* merges.txt
* special_tokens_map.json
* training_args.bin
## Environment info
- `transformers` version: 2.9.1
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0a0+83df3be (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: yes
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4402/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4402/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4401 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4401/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4401/comments | https://api.github.com/repos/huggingface/transformers/issues/4401/events | https://github.com/huggingface/transformers/pull/4401 | 619,442,786 | MDExOlB1bGxSZXF1ZXN0NDE4OTUwODgz | 4,401 | [TF T5] More coherent naming for inputs | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4401?src=pr&el=h1) Report\n> Merging [#4401](https://codecov.io/gh/huggingface/transformers/pull/4401?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2d184cb553ee20943b03b253f44300e466357871&el=desc) will **increase** coverage by `0.85%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4401?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4401 +/- ##\n==========================================\n+ Coverage 77.30% 78.15% +0.85% \n==========================================\n Files 120 120 \n Lines 20027 20027 \n==========================================\n+ Hits 15481 15652 +171 \n+ Misses 4546 4375 -171 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4401?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `95.16% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `43.25% <0.00%> (+1.10%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.78% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.73% <0.00%> (+2.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.81% <0.00%> (+2.62%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4401?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4401?src=pr&el=footer). Last update [2d184cb...fa80cf3](https://codecov.io/gh/huggingface/transformers/pull/4401?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,589 | 1,589 | 1,589 | MEMBER | null | In TF we have to name the first argument of the `call` function "inputs", due to some inner keras logic (I think), see: https://github.com/huggingface/transformers/pull/3547 . Having both names `inputs` and `input_ids` can thus lead to confusion, see #3626 .
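For context, a toy sketch of the Keras convention in question (illustrative only, not the actual T5 code):

```python
import tensorflow as tf

class Toy(tf.keras.Model):
    # Whatever is passed to `model(x)` arrives in the first positional
    # argument of `call`, conventionally named `inputs` in Keras.
    def call(self, inputs, training=False):
        return inputs

print(Toy()(tf.constant([1, 2, 3])))
```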
This PR adopts the consistent naming `inputs` in the whole file. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4401/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4401",
"html_url": "https://github.com/huggingface/transformers/pull/4401",
"diff_url": "https://github.com/huggingface/transformers/pull/4401.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4401.patch",
"merged_at": 1589816041000
} |
https://api.github.com/repos/huggingface/transformers/issues/4400 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4400/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4400/comments | https://api.github.com/repos/huggingface/transformers/issues/4400/events | https://github.com/huggingface/transformers/issues/4400 | 619,384,301 | MDU6SXNzdWU2MTkzODQzMDE= | 4,400 | BertWordPieceTokenizer cannot be pickled | {
"login": "Sriharsha-hatwar",
"id": 14826535,
"node_id": "MDQ6VXNlcjE0ODI2NTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/14826535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sriharsha-hatwar",
"html_url": "https://github.com/Sriharsha-hatwar",
"followers_url": "https://api.github.com/users/Sriharsha-hatwar/followers",
"following_url": "https://api.github.com/users/Sriharsha-hatwar/following{/other_user}",
"gists_url": "https://api.github.com/users/Sriharsha-hatwar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sriharsha-hatwar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sriharsha-hatwar/subscriptions",
"organizations_url": "https://api.github.com/users/Sriharsha-hatwar/orgs",
"repos_url": "https://api.github.com/users/Sriharsha-hatwar/repos",
"events_url": "https://api.github.com/users/Sriharsha-hatwar/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sriharsha-hatwar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"If needed I can even provide the dataset (Did not want to clutter): \r\nthe error stacktrace : \r\n```\r\nTraceback (most recent call last):\r\n File \"error.py\", line 129, in <module>\r\n a = enumerate(train_data_loader)\r\n File \"C:\\Users\\admin\\miniconda3\\envs\\machine_learning\\lib\\site-packages\\torch\\utils\\data\\dataloader.py\", line 279, in __iter__\r\n return _MultiProcessingDataLoaderIter(self)\r\n File \"C:\\Users\\admin\\miniconda3\\envs\\machine_learning\\lib\\site-packages\\torch\\utils\\data\\dataloader.py\", line 719, in __init__\r\n w.start()\r\n File \"C:\\Users\\admin\\miniconda3\\envs\\machine_learning\\lib\\multiprocessing\\process.py\", line 121, in start\r\n self._popen = self._Popen(self)\r\n File \"C:\\Users\\admin\\miniconda3\\envs\\machine_learning\\lib\\multiprocessing\\context.py\", line 224, in _Popen\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n return _default_context.get_context().Process._Popen(process_obj)\r\n File \"C:\\Users\\admin\\miniconda3\\envs\\machine_learning\\lib\\multiprocessing\\context.py\", line 326, in _Popen\r\n File \"C:\\Users\\admin\\miniconda3\\envs\\machine_learning\\lib\\multiprocessing\\spawn.py\", line 116, in spawn_main\r\n return Popen(process_obj)\r\nexitcode = _main(fd, parent_sentinel) File \"C:\\Users\\admin\\miniconda3\\envs\\machine_learning\\lib\\multiprocessing\\popen_spawn_win32.py\", line 93, in __init__\r\n\r\n File \"C:\\Users\\admin\\miniconda3\\envs\\machine_learning\\lib\\multiprocessing\\spawn.py\", line 126, in _main\r\n reduction.dump(process_obj, to_child)\r\nself = reduction.pickle.load(from_parent) File \"C:\\Users\\admin\\miniconda3\\envs\\machine_learning\\lib\\multiprocessing\\reduction.py\", line 60, in dump\r\n\r\nEOFError: Ran out of input\r\n ForkingPickler(file, protocol).dump(obj)\r\nTypeError: cannot pickle 'Tokenizer' object\r\n```\r\nBut when run in kaggle notebook this works perfectly well. (The same script having same tokenizer and transformers version)\r\n@julien-c @sshleifer any help here?",
"~Yes, this is fixed by PR #4389 , so you could `pip install -e .` off of that branch.~",
"Hi @sshleifer I did these steps : \r\n1. git fetch origin pull/4389/head:temp_fix\r\n2. git checkout temp_fix\r\n3. pip install -e .\r\nstill the above fix doesn't seem to work. \r\n\r\nBy looking into the PR , I am guessing that it is fixed for `MarianTokenizer ` and not for `BertWordPieceTokenizer` that I am using in the above script.\r\n ",
"Tested with another environment with python = 3.7.7, same issue is observed.",
"Just wanted to mention that providing `num_workers = 0` bypasses the problem. So it only fails when multiprocessing is involved. This issue is not only in `BertWordPieceTokenizer`, It also fails with `ByteLevelBPETokenizer` .",
"And this probably should be moved to the tokenizer repo @sshleifer to confirm.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
**Bert**
Language I am using the model on (English, Chinese ...):
**English**
The problem arises when using:
* [X] my own modified scripts:
The task I am working on is:
* [X] my own task or dataset:
## To reproduce
``` python
import torch
import tokenizers
import pandas as pd
from torch.utils import data
class config:
MAX_LEN = 128
TRAIN_BATCH_SIZE = 64
VALID_BATCH_SIZE = 16
EPOCHS = 5
BERT_PATH = "../input/bert-base-uncased/"
MODEL_PATH = "model.bin"
TRAINING_FILE = "../input/tweet-sentiment-extraction/train_folds.csv"
TOKENIZER = tokenizers.BertWordPieceTokenizer(
f"{BERT_PATH}/vocab.txt",
lowercase=True
)
def process_data(tweet, selected_text, sentiment, tokenizer, max_len):
len_st = len(selected_text)
idx0 = -1
idx1 = -1
for ind in (i for i, e in enumerate(tweet) if e == selected_text[0]):
if tweet[ind: ind+len_st] == selected_text:
idx0 = ind
idx1 = ind + len_st - 1
break
char_targets = [0] * len(tweet)
if idx0 != -1 and idx1 != -1 :
for ct in range(idx0, idx1 + 1):
char_targets[ct] = 1
tok_tweet = tokenizer.encode(tweet)
input_ids_orig = tok_tweet.ids[1:-1]
tweet_offsets = tok_tweet.offsets[1:-1]
target_idx = []
for j, (offset1, offset2) in enumerate(tweet_offsets):
if sum(char_targets[offset1: offset2]) > 0:
target_idx.append(j)
targets_start = target_idx[0]
targets_end = target_idx[-1]
sentiment_id = {
'positive': 3893,
'negative': 4997,
'neutral': 8699
}
input_ids = [101] + [sentiment_id[sentiment]] + [102] + input_ids_orig + [102]
token_type_ids = [0, 0, 0] + [1] * (len(input_ids_orig) + 1)
mask = [1] * len(token_type_ids)
tweet_offsets = [(0, 0)] * 3 + tweet_offsets + [(0, 0)]
targets_start += 3
targets_end += 3
padding_length = max_len - len(input_ids)
if padding_length > 0:
input_ids = input_ids + ([0] * padding_length)
mask = mask + ([0] * padding_length)
token_type_ids = token_type_ids + ([0] * padding_length)
tweet_offsets = tweet_offsets + ([(0, 0)] * padding_length)
return {
'ids': input_ids,
'mask': mask,
'token_type_ids': token_type_ids,
'targets_start': targets_start,
'targets_end': targets_end,
'orig_tweet': tweet,
'orig_selected': selected_text,
'sentiment': sentiment,
'offsets': tweet_offsets
}
class TweetDataset(data.Dataset):
def __init__(self, tweet, sentiment, selected_text):
self.tweet = tweet
self.sentiment = sentiment
self.selected_text = selected_text
self.tokenizer = config.TOKENIZER
self.max_len = config.MAX_LEN
def __len__(self):
return len(self.tweet)
def __getitem__(self, item):
data = process_data(
self.tweet[item],
self.selected_text[item],
self.sentiment[item],
self.tokenizer,
self.max_len
)
return {
'ids': torch.tensor(data["ids"], dtype=torch.long),
'mask': torch.tensor(data["mask"], dtype=torch.long),
'token_type_ids': torch.tensor(data["token_type_ids"], dtype=torch.long),
'targets_start': torch.tensor(data["targets_start"], dtype=torch.long),
'targets_end': torch.tensor(data["targets_end"], dtype=torch.long),
'orig_tweet': data["orig_tweet"],
'orig_selected': data["orig_selected"],
'sentiment': data["sentiment"],
'offsets': torch.tensor(data["offsets"], dtype=torch.long)
}
dfx = pd.read_csv(config.TRAINING_FILE)
fold = 4
df_train = dfx[dfx.kfold != fold].reset_index(drop=True)
df_valid = dfx[dfx.kfold == fold].reset_index(drop=True)
train_dataset = TweetDataset(
tweet=dfx.text.values,
sentiment=dfx.sentiment.values,
selected_text=dfx.selected_text.values
)
train_data_loader = torch.utils.data.DataLoader(
train_dataset,
batch_size=config.TRAIN_BATCH_SIZE,
num_workers=1
)
if __name__ == '__main__':
a = enumerate(train_data_loader)
```
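The failure can be reproduced without a DataLoader at all; pickling the tokenizer directly fails (a sketch, assuming a `vocab.txt` file is present):

```python
import pickle
import tokenizers

tok = tokenizers.BertWordPieceTokenizer("vocab.txt", lowercase=True)
pickle.dumps(tok)  # raises TypeError: cannot pickle 'Tokenizer' object
```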
## Expected behavior
The `enumerate` call should return an iterator over the DataLoader without crashing.
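As noted in the comments above, setting `num_workers = 0` bypasses the failure, since no worker process ever has to pickle the tokenizer; a sketch of that workaround:

```python
# Workaround: no worker subprocesses, so the Tokenizer object is never pickled.
train_data_loader = torch.utils.data.DataLoader(
    train_dataset,
    batch_size=config.TRAIN_BATCH_SIZE,
    num_workers=0
)
```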
## Environment info
Output of `transformers-cli env`
transformers version: 2.9.1
Platform: Windows-10-10.0.18362-SP0
Python version: 3.8.2
PyTorch version (GPU?): 1.5.0 (True)
Tensorflow version (GPU?): not installed (NA)
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4400/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4400/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4399 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4399/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4399/comments | https://api.github.com/repos/huggingface/transformers/issues/4399/events | https://github.com/huggingface/transformers/issues/4399 | 619,379,507 | MDU6SXNzdWU2MTkzNzk1MDc= | 4,399 | Pipeline for question generation | {
"login": "danielduckworth",
"id": 18698360,
"node_id": "MDQ6VXNlcjE4Njk4MzYw",
"avatar_url": "https://avatars.githubusercontent.com/u/18698360?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danielduckworth",
"html_url": "https://github.com/danielduckworth",
"followers_url": "https://api.github.com/users/danielduckworth/followers",
"following_url": "https://api.github.com/users/danielduckworth/following{/other_user}",
"gists_url": "https://api.github.com/users/danielduckworth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danielduckworth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danielduckworth/subscriptions",
"organizations_url": "https://api.github.com/users/danielduckworth/orgs",
"repos_url": "https://api.github.com/users/danielduckworth/repos",
"events_url": "https://api.github.com/users/danielduckworth/events{/privacy}",
"received_events_url": "https://api.github.com/users/danielduckworth/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"That would be an interesting project.",
"I have worked on question generation using T5. I've trained answer aware question generator on SQuAD 2.2 which achieves 41.4641 BLUE1 score and 41.0823 ROUGE_L on the dev set given gold answers.\r\n\r\nI've also trained T5 for extracting answers from the text, and written a simple pipeline where the answer generator generates answers and then the answer-aware que generator generates questions with those answers. You can check the demo [here](https://colab.research.google.com/drive/1_2_mS5l29QHI1pXaqa4YLzAO5xm-HmH9?usp=sharing)\r\n\r\nI've also trained T5 for direct question generation on Yahoo questions dataset. It generates single question given a context.\r\n\r\nI would be happy to contribute to this project.",
"@julien-c \r\nAny update on this ?",
"We don't have any immediate plan to work on this ourselves but feel free to take a stab",
"@julien-c \r\nOkay, I just need your feedback on one thing. I'm not sure if adding a pipeline here will make sense since there are multiple ways to generate questions\r\n1. ans aware\r\n2. generate 1 question directly\r\n3. generate multiple questions simultaneously\r\n\r\nAlso ans aware model will need an ans extractor/generator or the user will need to supply answers. And different models could process inputs differently. \r\nSo does adding pipeline makes sense here ? If not I can just upload the models and provide a inference script and decide next steps with community feedback. \r\n\r\nThank you!",
"@patil-suraj I think it makes sense to generate questions that are answer aware as this has more use cases. I also think that questions should not be so narrow that a single word from the context is the answer. Artit's [Text2Text ](https://github.com/artitw/text2text) did a pretty good job. I used this to generate 1,000 random questions from a random context and plan to have them judged by human raters. One of the main issues I saw in these questions is that sometimes the answer is in the question text. \r\n\r\nI'll take a look at your demo.\r\n\r\nI also think it should generate multiple questions that can be so that some evaluation metric (BLUE1, ROUGE_L and METEOR) can be produced and have them ranked. Then the user can decide what to do with the highest metric questions. There's some research that shows the METEOR correlates better with human judgement, but this was for evaluating machine translation tasks and might not apply here.",
"@danielduckworth \r\nYes answer aware seems to be the best chose right now. And we can generate multiple question if we have multiple answers. If you find the above demo interesting then I'll share the models so you can play with it and then we can decide how to proceed from there. \r\n\r\nAlso the METEOR score on dev set for demo model is 26.0676",
"@patil-suraj Yes that would be great. How do you want to share the models?",
"@danielduckworth \r\nI've setup everything in the same [colab](https://colab.research.google.com/drive/1_2_mS5l29QHI1pXaqa4YLzAO5xm-HmH9?usp=sharing). Please have a look.",
"Great, thanks. I'll have a play over the next few days and get back to you.\r\n\r\nSo just to confirm, where do the models come from? Is the base model T5 and it has been tuned using the SQUAD data reference questions, contexts and answers?",
"@patil-suraj I've had a quick look, it's very impressive! I've only tried two passages, but the questions are sensible and the answers are more than one word which is better than the Text2Text pipeline. I definitely want to do some more work with this.\r\n\r\nFirst, I'll generate a large set of questions I can get human raters to score to validate whether the quantitative metrics (METEOR etc) correlate with human judgement.\r\n\r\nThen I would like to work on some architecture experiments to investigate the following:\r\n\r\n1. Can questions be generated where the corresponding generated answer is not explicitly stated in the text? This would require the reader to make connections between information in the text (implied information) and make inferences. I think this could be achieved with some NLTK work but is not ideal as I think the semantics of these questions are best learned from text data rather than expert-system rules.\r\n\r\n2. Can additional tuning of the que_gen model be done with other question/answer/context datasets that are of a different text type. For example, Wikipedia is primary factual information texts. But what about discursive texts? Or narrative texts?\r\n\r\nAnyway, I'll continue to explore what you have with a developer I work with and maybe we can form repository we can work in with the goal of creating a pipeline for inclusion in the Transformer package.\r\n\r\nWhat do you think?",
"> Great, thanks. I'll have a play over the next few days and get back to you.\r\n> \r\n> So just to confirm, where do the models come from? Is the base model T5 and it has been tuned using the SQUAD data reference questions, contexts and answers?\r\n\r\nYes both of the models are t5-base trained on SQuAD",
"> @patil-suraj I've had a quick look, it's very impressive! I've only tried two passages, but the questions are sensible and the answers are more than one word which is better than the Text2Text pipeline. I definitely want to do some more work with this.\r\n> \r\n> First, I'll generate a large set of questions I can get human raters to score to validate whether the quantitative metrics (METEOR etc) correlate with human judgement.\r\n> \r\n> Then I would like to work on some architecture experiments to investigate the following:\r\n> \r\n> 1. Can questions be generated where the corresponding generated answer is not explicitly stated in the text? This would require the reader to make connections between information in the text (implied information) and make inferences. I think this could be achieved with some NLTK work but is not ideal as I think the semantics of these questions are best learned from text data rather than expert-system rules.\r\n> 2. Can additional tuning of the que_gen model be done with other question/answer/context datasets that are of a different text type. For example, Wikipedia is primary factual information texts. But what about discursive texts? Or narrative texts?\r\n> \r\n> Anyway, I'll continue to explore what you have with a developer I work with and maybe we can form repository we can work in with the goal of creating a pipeline for inclusion in the Transformer package.\r\n> \r\n> What do you think?\r\n\r\n@danielduckworth \r\nI'm not sure about the first, we will need to run small experiments and see if it can be achieved.\r\n\r\n2) Yes, I do think additional fine-tuning on more diverse datasets should improve the results. My goal is to first get factual questions correct and then move to narrative texts.\r\n\r\nAnd sure we can create different repo and take this forward.",
"@patil-suraj Thanks for the examples. Would you mind sharing your fine-tuning code used to train the model as well? ",
"Sure. I'm planning to release model as well as the fine-tuning code. I'll comment here once I do that.",
"@danai-antoniou Thanks for the wonderful suggestion. \r\n\r\n@patil-suraj, I also played your colab code, and it looks super great. I look forward to the release.",
"@danielduckworth, I am looking into recent works in Question Generation especially using Transformer based architecture leveraging fine-tuning for small data sets. This thread is very interesting to me.\r\n\r\n> Can questions be generated where the corresponding generated answer is not explicitly stated in the text? This would require the reader to make connections between information in the text (implied information) and make inferences. I think this could be achieved with some NLTK work but is not ideal as I think the semantics of these questions are best learned from text data rather than expert-system rules.\r\n\r\nI have some previous experience in generating non-factoid questions from a text, with the goal of having descriptive answers rather than quiz-like. In my project, the data was not sufficient for DL models, even for fine-tuning.\r\n\r\nWe had done some user studies on generated questions as well, and found out METEOR is better correlated with human judgment on how reasonable or well-formed the questions are rather than BLEU or ROUGE scores.\r\n\r\nFor extracting relations in text, to capture more complex answers, Semantic Role Labeling (SRL) + Dependency parse tree of the text might be useful for extracting some descriptive answers. I used a tool called ClearNLP to do that.\r\n\r\n@patil-suraj, I also checked out your Collab demo and the questions generated are looking good, much better than other models I worked with so far. Great job. Definitely looking forward to the release of the models and knowing more about the fine-tuning process.\r\n\r\nFor non-factoid questions, there is room for improvement.",
"\r\nHi @emadg, @hunkim Thank you for your interest! :)\r\n\r\n@emadg\r\n\r\n>For extracting relations in text, to capture more complex answers, Semantic Role Labeling (SRL) + Dependency parse tree of the text might be useful for extracting some descriptive answers. I used a tool called ClearNLP to do that.\r\n \r\nThis sure sounds like a good idea. My current goal is to have a end-to-end model for generating questions and answers.",
"> For non-factoid questions, there is room for improvement.\r\n\r\nFor those looking for less factual questions, I was actually able to get some reasonable results with a T5 pre-trained for query prediction, but there's definitely room for improvement. \r\n\r\nhttps://github.com/castorini/docTTTTTquery",
"Hey people, I've setup few experiments for question generation. Let me know if anyone wants to collaborate on this, I would really appreciate some help and maybe some multi-gpu compute. Everything will be open sourced after the experiments are finished.\r\nThank you! ",
"This is very interesting to me. I'm writing a master's thesis over the summer, working on transfer learning for question generation. I don't have much experience with contributing to large pre-existing frameworks like this but would definitely be happy to contribute wherever I can. \r\n\r\n@patil-suraj \r\nI had a look at your notebook. This is very impressive! Looking forward to seeing the fine-tuning process to get an idea of how this can be done using the transformer framework. ",
"@patil-suraj \r\nI would like to help. Let me know if we can collaborate. Although I have somewhat limited time to contribute.",
"Hi @emadg and @vegarab, thank you for your interest,\r\nMy goal is to do open source study on que generation. Here's what I have planned \r\nFor ans aware que generation we usually need 3 models \r\nfirst which will extract ans like spans\r\nsecond model will generate question on that answer\r\nand third will be a QA model which will take the question and produce an answer, \r\nthen we can compare the two answers to see if the generated question is correct or not.\r\n\r\nHaving 3 models for single task is lot of complexity, so goal is to create a multi-task model which can do all of these 3 tasks\r\n1. extract ans like spans\r\n2. generate question based on the answer \r\n3. QA\r\n\r\nAlso I want to see if we can generate multiple questions end-to-end without answers.\r\n\r\nAnother experiment is generating non-factoid questions. First we need to find a right dataset for this.\r\n\r\nI've trained t5-small model in multi-task way and its giving really good results, so now I want to train more models (t5-base, t5-large, bart-base, bart-large, bert-2-bert) and see if they improve the results.\r\n\r\nI've also trained t5-small and t5-base for end-2-end QG and that too is giving interesting results. \r\n\r\nSo regarding help I'm looking for some compute to train large models, multitask t5-small took 10hrs on single V100 GPU. I also want someone to provide rigorous feedback on the work (find out mistakes, asses quality of questions etc) and help with creating a write-up for study.",
"Hi all! @patil-suraj really great work! I am using your approach and seems to be working very well.\r\n\r\nI have one question though, when fine-tuning the models, did you use a case or an uncased model? Because when giving too much uppercase text as context, it's generating questions partially or totally in uppercase and are much worse. It seems to be a cased model because when lowercasing the text, the questions are much better.\r\n\r\nThanks in advance!",
"@patil-suraj,\r\n\r\n> and third will be a QA model which will take the question and produce an answer\r\n\r\nI think using a QA model to evaluate the generated Question is an interesting approach. Why should do this instead of evaluating with BLEU or METEOR score?\r\n\r\n> So regarding help I'm looking for some compute to train large models, multitask t5-small took 10hrs on single V100 GPU.\r\n\r\nCan we use GCP for the compute? if it is not going to cost a lot. I don't have GPUs myself.\r\n\r\n> I also want someone to provide rigorous feedback on the work (find out mistakes, asses quality of questions etc) and help with creating a write-up for study.\r\n\r\nI think I can spend some time and provide feedback about the work. Although, I need to catch up with the details related to T5 model and the fine-tuning method used here.",
"Interesting thread @patil-suraj I had the similar thoughts on multi task training \r\n\r\n- Finetune the model combining the data for both question generation & answering(one example is **context:c1 answer: a1 ---> question : q1** & another example context:c1 question : q1 ----> answer:a1)\r\n- Way to generate multiple questions is either using topk and topp sampling or using multiple beams.\r\n\r\n\r\n",
"Hey everyone,\r\nhere's a sneak peek of whats coming, everything will be available by the end of this week. stay tuned !\r\n",
"@santhoshkolloju \r\n> * Way to generate multiple questions is either using topk and topp sampling or using multiple beams.\r\n\r\nYes, this is what I have tried in another model.\r\n\r\n\r\nHi @emadg \r\n\r\n> Why should do this instead of evaluating with BLEU or METEOR score?\r\n\r\nBLEU and METEOR can be used to evaluate the model when you have the original reference questions, but at inference time how can we decide if the generated question is correct (makes sense or not, has answer or not) or not without original question ? Which is why the QA model.\r\n\r\n>I think I can spend some time and provide feedback about the work. Although, I need to catch up with the details related to T5 model and the fine-tuning method used here.\r\n\r\nYou can start your analysis once I make it available. Human feedbacks will be most valuable. Thanks ! \r\n\r\n",
"Hi all, tagging everyone for notification \r\n@emadg , @vegarab , @gabisurita , @hunkim , @ugmSorcero , @santhoshkolloju .\r\n\r\nHappy to finally release the project. You can find everything in [this repo](https://github.com/patil-suraj/question_generation).\r\n\r\n[All models](https://huggingface.co/models?filter=question-generation) are available on hub with configured inference API. You can search using question-generation tag.\r\n\r\nHere’s a [colab](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb) if anyone wants to play more with it.",
"@patil-suraj Thanks!!"
] | 1,589 | 1,607 | 1,607 | NONE | null | # 🚀 Feature request
I can see there are pipelines for question answering, text summarisation and text generation. In my field I'm researching how question generation can be used in education research. I would love to see this pipeline added. I imagine it's a variation of question answering and text summarisation.
The paper 'Question Generation by Transformers' by Kettip Kriangchaivech and Artit Wangperawong provides a good overview of the approach on the SQuAD dataset, treating questions as the output sequence given reference questions and contexts.
Artit also has an implementation called text2text https://github.com/artitw/text2text
## Motivation
It would be useful to have an official pipeline as part of the huggingface library for this use case.
## Your contribution
I'm happy to contribute some funds to pay some developers if need be, but I don't have enough Python technical expertise to contribute any PRs myself.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4399/reactions",
"total_count": 11,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4399/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4398 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4398/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4398/comments | https://api.github.com/repos/huggingface/transformers/issues/4398/events | https://github.com/huggingface/transformers/issues/4398 | 619,346,742 | MDU6SXNzdWU2MTkzNDY3NDI= | 4,398 | Trainer is missing sampler.set_epoch for distributed mode | {
"login": "shaoyent",
"id": 8154586,
"node_id": "MDQ6VXNlcjgxNTQ1ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8154586?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shaoyent",
"html_url": "https://github.com/shaoyent",
"followers_url": "https://api.github.com/users/shaoyent/followers",
"following_url": "https://api.github.com/users/shaoyent/following{/other_user}",
"gists_url": "https://api.github.com/users/shaoyent/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shaoyent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shaoyent/subscriptions",
"organizations_url": "https://api.github.com/users/shaoyent/orgs",
"repos_url": "https://api.github.com/users/shaoyent/repos",
"events_url": "https://api.github.com/users/shaoyent/events{/privacy}",
"received_events_url": "https://api.github.com/users/shaoyent/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Good spot! I think @julien-c's latest PR (https://github.com/huggingface/transformers/pull/4243) for distributed eval will also take care of this! "
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | # 🐛 Bug
## Information
`train_dataloader.sampler.set_epoch(epoch)` is missing before the start of each epoch in [trainer.py](https://github.com/huggingface/transformers/blob/3e0f06210646a440509efa718b30d18322d6a830/src/transformers/trainer.py#L406).
According to [here](https://pytorch.org/docs/stable/data.html#torch.utils.data.distributed.DistributedSampler) :
> In distributed mode, calling the `set_epoch` method is needed to make shuffling work; each process will use the same random seed otherwise.
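A minimal sketch of the missing call (names assumed to match the trainer's loop):

```python
from torch.utils.data.distributed import DistributedSampler

for epoch in range(num_train_epochs):
    if isinstance(train_dataloader.sampler, DistributedSampler):
        train_dataloader.sampler.set_epoch(epoch)  # re-seeds the per-epoch shuffle
    for batch in train_dataloader:
        ...
```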
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4398/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4397 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4397/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4397/comments | https://api.github.com/repos/huggingface/transformers/issues/4397/events | https://github.com/huggingface/transformers/issues/4397 | 619,344,069 | MDU6SXNzdWU2MTkzNDQwNjk= | 4,397 | Training TFBertForQuestionAnswering on custom SquadV1 data | {
"login": "yonatanbitton",
"id": 26148975,
"node_id": "MDQ6VXNlcjI2MTQ4OTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/26148975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonatanbitton",
"html_url": "https://github.com/yonatanbitton",
"followers_url": "https://api.github.com/users/yonatanbitton/followers",
"following_url": "https://api.github.com/users/yonatanbitton/following{/other_user}",
"gists_url": "https://api.github.com/users/yonatanbitton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonatanbitton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonatanbitton/subscriptions",
"organizations_url": "https://api.github.com/users/yonatanbitton/orgs",
"repos_url": "https://api.github.com/users/yonatanbitton/repos",
"events_url": "https://api.github.com/users/yonatanbitton/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonatanbitton/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I succeeded to do it somehow, but i'm sure it's not the way it should work, and it won't scale well for large datasets. I would be happy to know if there is a better way. \r\n\r\nWhat worked: \r\n1. squad_convert_examples_to_features ( return_dataset = False) - getting the features\r\n2. Creating a dictionary of features and labels, where each item is list of tensorflow vectors obtained by `tf.convert_to_tensor`\r\n3. Constructing the dataset with `tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)`\r\n4. Training with `fit_generator` method (`fit` fails)\r\n\r\nFull code: \r\n```python\r\n tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n\r\n processor = SquadV1Processor()\r\n # processor = SquadV2Processor()\r\n examples = processor.get_train_examples(args.data_dir, filename=args.train_file)\r\n train_dataset = squad_convert_examples_to_features(\r\n examples=examples,\r\n tokenizer=tokenizer,\r\n max_seq_length=args.max_seq_length,\r\n doc_stride=args.doc_stride,\r\n max_query_length=args.max_query_length,\r\n is_training=True\r\n )\r\n\r\n def create_features_and_labels_tf_tensors_from_dataset(train_dataset):\r\n all_input_ids = []\r\n all_token_type_ids = []\r\n all_attention_mask = []\r\n all_start_pos = []\r\n all_end_pos = []\r\n ex: SquadFeatures\r\n for ex in train_dataset:\r\n all_input_ids.append(ex.input_ids)\r\n all_token_type_ids.append(ex.token_type_ids)\r\n all_attention_mask.append(ex.attention_mask)\r\n all_start_pos.append(ex.start_position)\r\n all_end_pos.append(ex.end_position)\r\n all_input_ids_tensor = tf.convert_to_tensor(all_input_ids)\r\n all_token_type_ids_tensor = tf.convert_to_tensor(all_token_type_ids)\r\n all_attention_mask_tensor = tf.convert_to_tensor(all_attention_mask)\r\n all_start_pos_tensor = tf.convert_to_tensor(all_start_pos)\r\n all_end_pos_tensor = tf.convert_to_tensor(all_end_pos)\r\n features = {'input_ids': all_input_ids_tensor, 'token_type_ids': all_token_type_ids_tensor,\r\n 'attention_mask': all_attention_mask_tensor}\r\n labels = {\"output_1\": all_start_pos_tensor, 'output_2': all_end_pos_tensor}\r\n return features, labels\r\n\r\n features, labels = create_features_and_labels_tf_tensors_from_dataset(train_dataset)\r\n tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)\r\n\r\n model = TFBertForQuestionAnswering.from_pretrained(\"bert-base-cased\")\r\n loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=True)\r\n opt = tf.keras.optimizers.Adam(learning_rate=3e-5)\r\n\r\n model.compile(optimizer=opt,\r\n loss={'output_1': loss_fn, 'output_2': loss_fn},\r\n loss_weights={'output_1': 1., 'output_2': 1.},\r\n metrics=['accuracy'])\r\n\r\n # Now let's train our model\r\n try:\r\n history = model.fit(tfdataset, epochs=1, steps_per_epoch=3)\r\n print(f'Success with fit')\r\n except Exception as ex:\r\n traceback.print_exc()\r\n print(f\"Failed using fit, {ex}\")\r\n history = model.fit_generator(tfdataset, epochs=1, steps_per_epoch=3)\r\n print(f'Success with fit_generator')\r\n print(\"Done\")\r\n```\r\nError message for `fit`:\r\n ```python\r\nFile \"/home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/minimal_example_for_git.py\", line 73, in main\r\n history = model.fit(tfdataset, epochs=1, steps_per_epoch=3)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py\", line 819, in fit\r\n use_multiprocessing=use_multiprocessing)\r\n 
File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py\", line 235, in fit\r\n use_multiprocessing=use_multiprocessing)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py\", line 593, in _process_training_inputs\r\n use_multiprocessing=use_multiprocessing)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py\", line 706, in _process_inputs\r\n use_multiprocessing=use_multiprocessing)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/data_adapter.py\", line 702, in __init__\r\n x = standardize_function(x)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py\", line 660, in standardize_function\r\n standardize(dataset, extract_tensors_from_dataset=False)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py\", line 2360, in _standardize_user_data\r\n self._compile_from_inputs(all_inputs, y_input, x, y)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py\", line 2580, in _compile_from_inputs\r\n target, self.outputs)\r\n File \"/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_utils.py\", line 1341, in cast_if_floating_dtype_and_mismatch\r\n if target.dtype != out.dtype:\r\nAttributeError: 'str' object has no attribute 'dtype'\r\nFailed using fit, 'str' object has no attribute 'dtype'\r\nWARNING:tensorflow:From /home/ec2-user/yonatab/ZeroShot/transformers_experiments/src/minimal_example_for_git.py:78: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.\r\nInstructions for updating:\r\nPlease use Model.fit, which supports generators.\r\n```\r\n\r\nIt also fails when trying to add `validation_data` to the `fit` function",
"I think it's a bug, i'm closing & opening another bug issue. "
] | 1,589 | 1,589 | 1,589 | NONE | null | Hello.
TL;DR: is there any minimal code that trains a TFBertForQuestionAnswering on custom SQuAD-v1 data (not from `nlp.load_dataset`)?
I've tried in several ways and encountered some problems.
This is the minimal code I'm trying to run:
```python
args = argparse.Namespace(**bert_config)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
processor = SquadV1Processor()
# processor = SquadV2Processor()
examples = processor.get_train_examples(args.data_dir, filename=args.train_file)
train_dataset = squad_convert_examples_to_features(
examples=examples,
tokenizer=tokenizer,
max_seq_length=args.max_seq_length,
doc_stride=args.doc_stride,
max_query_length=args.max_query_length,
is_training=True,
return_dataset="tf"
)
model = TFBertForQuestionAnswering.from_pretrained("bert-base-cased")
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=True)
opt = tf.keras.optimizers.Adam(learning_rate=3e-5)
model.compile(optimizer=opt,
loss={'start_position': loss_fn, 'end_position': loss_fn},
loss_weights={'start_position': 1., 'end_position': 1.},
metrics=['accuracy'])
# Now let's train our model
try:
history = model.fit(train_dataset, epochs=1, steps_per_epoch=3)
except Exception as ex:
print(f"Failed using fit, {ex}")
history = model.fit_generator(train_dataset, epochs=1, steps_per_epoch=3)
```
The current errors are:
with fit:
```python
x = standardize_function(x)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 660, in standardize_function
standardize(dataset, extract_tensors_from_dataset=False)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2360, in _standardize_user_data
self._compile_from_inputs(all_inputs, y_input, x, y)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2580, in _compile_from_inputs
target, self.outputs)
File "/home/ec2-user/anaconda3/envs/yonatan_env_tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_utils.py", line 1341, in cast_if_floating_dtype_and_mismatch
if target.dtype != out.dtype:
AttributeError: 'str' object has no attribute 'dtype'
```
with fit_generator:
```python
ValueError: Unknown entries in loss dictionary: ['start_position', 'end_position']. Only expected following keys: ['output_1', 'output_2']
```
The dataset returned from squad_convert_examples_to_features is of type `tensorflow.python.data.ops.dataset_ops.FlatMapDataset`, and I'm not sure how to change its columns from start_position to output_1 and end_position to output_2. I've also asked this on Stack Overflow: https://stackoverflow.com/questions/61830361/how-the-change-column-name-in-tensorflow-flatmapdataset
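One way to do the renaming; a sketch assuming the dataset yields `(features, labels)` dict pairs, as the `fit_generator` error above suggests:

```python
def rename_labels(features, labels):
    return features, {"output_1": labels["start_position"],
                      "output_2": labels["end_position"]}

train_dataset = train_dataset.map(rename_labels)
```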
I've seen the colab tutorial of the nlp package. It has simple code:
```python
train_tf_dataset = nlp.load_dataset('squad', split="train")
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
def convert_to_tf_features(example_batch):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = list(zip(example_batch['context'], example_batch['question']))
encodings = tokenizer.batch_encode_plus(input_pairs, pad_to_max_length=True)
# Compute start and end tokens for labels using Transformers's fast tokenizers alignement methods.
start_positions, end_positions = [], []
for i, (context, answer) in enumerate(zip(example_batch['context'], example_batch['answers'])):
start_idx, end_idx = get_correct_alignement(context, answer)
start_positions.append([encodings.char_to_token(i, start_idx)])
end_positions.append([encodings.char_to_token(i, end_idx-1)])
if start_positions and end_positions:
encodings.update({'start_positions': start_positions,
'end_positions': end_positions})
return encodings
train_tf_dataset = train_tf_dataset.map(convert_to_tf_features, batched=True)
columns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions']
train_tf_dataset.set_format(type='tensorflow', columns=columns)
features = {x: train_tf_dataset[x] for x in columns[:3]}
labels = {"output_1": train_tf_dataset["start_positions"]}
labels["output_2"] = train_tf_dataset["end_positions"]
tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)
# Let's load a pretrained TF2 Bert model and a simple optimizer
from transformers import TFBertForQuestionAnswering
model = TFBertForQuestionAnswering.from_pretrained("bert-base-cased")
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=True)
opt = tf.keras.optimizers.Adam(learning_rate=3e-5)
model.compile(optimizer=opt,
loss={'output_1': loss_fn, 'output_2': loss_fn},
loss_weights={'output_1': 1., 'output_2': 1.},
metrics=['accuracy'])
# Now let's train our model
model.fit(tfdataset, epochs=1, steps_per_epoch=3)
```
I can't do the same as this code, because the dataset there is of type `nlp.arrow_dataset.Dataset`.
I've tried to convert my `tensorflow.python.data.ops.dataset_ops.FlatMapDataset` to `nlp.arrow_dataset.Dataset` (and then mimic the last code here) but didn't find a suitable way.
Edit:
I've succeeded in changing the output names in the `FlatMapDataset` to output_1 and output_2, and now I receive the following error:
```
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: logits and labels must have the same first dimension, got logits shape [384,1] and labels shape [1]
[[node loss/output_1_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits (defined at /yonatab/ZeroShot/transformers_experiments/src/minimal_example_for_git.py:53) ]]
[[Reshape_820/_546]]
(1) Invalid argument: logits and labels must have the same first dimension, got logits shape [384,1] and labels shape [1]
[[node loss/output_1_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits (defined at /yonatab/ZeroShot/transformers_experiments/src/minimal_example_for_git.py:53) ]]
```
How can I create a tf dataset with `squad_convert_examples_to_features` (and return type `tf`) and train a TF model on it?
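For reference, a rough end-to-end sketch of what I'm aiming for (untested; the file name, hyperparameters, and label-key names are assumptions based on the errors above):
```python
# Untested sketch: build a TF dataset with squad_convert_examples_to_features,
# rename the label keys to match the model's output names, and train with Keras.
import tensorflow as tf
from transformers import (
    BertTokenizer,
    SquadV1Processor,
    TFBertForQuestionAnswering,
    squad_convert_examples_to_features,
)

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
examples = SquadV1Processor().get_train_examples(None, filename="train-v1.1.json")

train_dataset = squad_convert_examples_to_features(
    examples=examples,
    tokenizer=tokenizer,
    max_seq_length=384,
    doc_stride=128,
    max_query_length=64,
    is_training=True,
    return_dataset="tf",
)

def rename_labels(features, labels):
    # Keras matches loss-dict keys against the model's output names.
    return features, {"output_1": labels["start_position"],
                      "output_2": labels["end_position"]}

train_dataset = train_dataset.map(rename_labels).batch(8)

model = TFBertForQuestionAnswering.from_pretrained("bert-base-cased")
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
    loss={"output_1": loss_fn, "output_2": loss_fn},
)
model.fit(train_dataset, epochs=1, steps_per_epoch=10)
```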
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4397/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4396 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4396/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4396/comments | https://api.github.com/repos/huggingface/transformers/issues/4396/events | https://github.com/huggingface/transformers/issues/4396 | 619,343,364 | MDU6SXNzdWU2MTkzNDMzNjQ= | 4,396 | Wrong model or tokenizer for MarianMT | {
"login": "NonaryR",
"id": 8309465,
"node_id": "MDQ6VXNlcjgzMDk0NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8309465?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NonaryR",
"html_url": "https://github.com/NonaryR",
"followers_url": "https://api.github.com/users/NonaryR/followers",
"following_url": "https://api.github.com/users/NonaryR/following{/other_user}",
"gists_url": "https://api.github.com/users/NonaryR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NonaryR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NonaryR/subscriptions",
"organizations_url": "https://api.github.com/users/NonaryR/orgs",
"repos_url": "https://api.github.com/users/NonaryR/repos",
"events_url": "https://api.github.com/users/NonaryR/events{/privacy}",
"received_events_url": "https://api.github.com/users/NonaryR/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Never mind, my mistake",
"What was your mistake?\r\n\r\nIs your system Windows?\r\nI am trying to reproduce the colab tutorial from https://blogs.helsinki.fi/language-technology/2020/05/14/helsinkinlp-in-huggingface/ on Windows but I get errors.\r\n",
"@R4ZZ3 Hello! I just appended `>>tag<<` for text in cycle, and my text quickly became a mess of tags. And no Windows, sorry, can't help.",
"Can anyone help with this issue: #5040 ?"
] | 1,589 | 1,592 | 1,589 | NONE | null | While we [can't save](https://github.com/huggingface/transformers/issues/4371) `MarianTokenizer` to a local directory, I found the model weights and configs on [this page](https://huggingface.co/Helsinki-NLP/opus-mt-en-ROMANCE), via the `List all files in model` link.
I downloaded these files, but I think these are the wrong configs, because I can't reproduce even the simplest example from [here](https://huggingface.co/transformers/model_doc/marian.html).
This text
```
1) '>>fr<< this is a sentence in english that we want to translate to french',
2) '>>pt<< This should go to portuguese',
3) '>>es<< And this to Spanish'
should become
1) "c'est une phrase en anglais que nous voulons traduire en français",
2) 'Isto deve ir para o português.',
3) 'Y esto al español'
```
With the model and configs downloaded from the link above, it produces these results instead:
```
1) "c'est une phrase en anglais que nous voulons traduire en français" (as expected)
2) "Questo deve ir in portughese"
3) "E questo a spagnol"
```
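For reference, a minimal sanity check that loads everything directly from the hub identifier (network access assumed) would be:
```python
# Sanity-check sketch: load straight from the hub id instead of local files.
from transformers import MarianMTModel, MarianTokenizer

name = 'Helsinki-NLP/opus-mt-en-ROMANCE'
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)
batch = tokenizer.prepare_translation_batch(['>>pt<< This should go to portuguese'])
print(tokenizer.decode(model.generate(**batch)[0], skip_special_tokens=True))
```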
This is definitely wrong, can you help? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4396/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4395 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4395/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4395/comments | https://api.github.com/repos/huggingface/transformers/issues/4395/events | https://github.com/huggingface/transformers/issues/4395 | 619,321,561 | MDU6SXNzdWU2MTkzMjE1NjE= | 4,395 | MarianMT = How to return 5 best candidates for a translation. | {
"login": "Oxi84",
"id": 25420033,
"node_id": "MDQ6VXNlcjI1NDIwMDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/25420033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Oxi84",
"html_url": "https://github.com/Oxi84",
"followers_url": "https://api.github.com/users/Oxi84/followers",
"following_url": "https://api.github.com/users/Oxi84/following{/other_user}",
"gists_url": "https://api.github.com/users/Oxi84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Oxi84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oxi84/subscriptions",
"organizations_url": "https://api.github.com/users/Oxi84/orgs",
"repos_url": "https://api.github.com/users/Oxi84/repos",
"events_url": "https://api.github.com/users/Oxi84/events{/privacy}",
"received_events_url": "https://api.github.com/users/Oxi84/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834052129,
"node_id": "MDU6TGFiZWwxODM0MDUyMTI5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/High-Level%20feature",
"name": "High-Level feature",
"color": "f7c9a3",
"default": false,
"description": ""
},
{
"id": 2039044877,
"node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/marian",
"name": "marian",
"color": "30cc95",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"That is available on this page - https://marian-nmt.github.io/faq:\r\n\r\nCan I generate n-best lists?\r\n Yes. Just use --n-best and the set --beam-size 6 for an n-best list size of 6.\r\n\r\nI do not know how to apply it here.",
"That feature is not supported in our implementation, unfortunately.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"You can try passing `num_return_sequences=5` to generate, but that behavior is untested, and I have never used it.",
"> You can try passing `num_return_sequences=5` to generate, but that behavior is untested, and I have never used it.\r\n\r\nHowever, do not forget to set the beams accordingly. The beams must be defined at least as large as the desired number of alternative results."
] | 1,589 | 1,608 | 1,595 | NONE | null | The code below does normal translation and returns only the most probable translation. How can I return, let's say, the 5 best candidates for every single word (beam size would be 1)?
This model returns just the best word, which gives us a better translation, but I want to use it as a language model.
Is this even possible? I was looking at the classes and the code, but I am not sure how I would do it.
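One untested possibility (suggested in the replies below) is to ask `generate` for several beams and return sequences; a hedged sketch, exact behavior unverified:
```python
# Hedged sketch: request an n-best list via beam search (untested for Marian).
from transformers import MarianMTModel, MarianTokenizer

model_name = 'Helsinki-NLP/opus-mt-en-ROMANCE'
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
batch = tokenizer.prepare_translation_batch(['>>fr<< this is a sentence in english'])
candidates = model.generate(**batch, num_beams=5, num_return_sequences=5)
for c in candidates:
    print(tokenizer.decode(c, skip_special_tokens=True))
```
My plain-translation code: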
```python
from transformers import MarianMTModel, MarianTokenizer

src_text = [
    '>>fr<< this is a sentence in english that we want to translate to french',
]
model_name = 'Helsinki-NLP/opus-mt-en-ROMANCE'
tokenizer = MarianTokenizer.from_pretrained(model_name)
print(tokenizer.supported_language_codes)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer.prepare_translation_batch(src_text))
tgt_text = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4395/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4394 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4394/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4394/comments | https://api.github.com/repos/huggingface/transformers/issues/4394/events | https://github.com/huggingface/transformers/issues/4394 | 619,281,020 | MDU6SXNzdWU2MTkyODEwMjA= | 4,394 | the special token of XLNet | {
"login": "lytum",
"id": 38668257,
"node_id": "MDQ6VXNlcjM4NjY4MjU3",
"avatar_url": "https://avatars.githubusercontent.com/u/38668257?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lytum",
"html_url": "https://github.com/lytum",
"followers_url": "https://api.github.com/users/lytum/followers",
"following_url": "https://api.github.com/users/lytum/following{/other_user}",
"gists_url": "https://api.github.com/users/lytum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lytum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lytum/subscriptions",
"organizations_url": "https://api.github.com/users/lytum/orgs",
"repos_url": "https://api.github.com/users/lytum/repos",
"events_url": "https://api.github.com/users/lytum/events{/privacy}",
"received_events_url": "https://api.github.com/users/lytum/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | NONE | null | Hello,
May I ask whether the special tokens of XLNet are the same as BERT's, i.e. '[CLS]' and '[SEP]'? I found that the special tokens of XLNet are '<cls>' and '<sep>' in the original code; however, many public introductions to XLNet still use the same '[CLS]' and '[SEP]' tokens as BERT. Is that OK? Are they the same, and does it not matter? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4394/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4393 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4393/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4393/comments | https://api.github.com/repos/huggingface/transformers/issues/4393/events | https://github.com/huggingface/transformers/issues/4393 | 619,258,852 | MDU6SXNzdWU2MTkyNTg4NTI= | 4,393 | BertTokenizerFast does not load custom vocab created from Tokenizer library | {
"login": "questpavan",
"id": 63842917,
"node_id": "MDQ6VXNlcjYzODQyOTE3",
"avatar_url": "https://avatars.githubusercontent.com/u/63842917?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/questpavan",
"html_url": "https://github.com/questpavan",
"followers_url": "https://api.github.com/users/questpavan/followers",
"following_url": "https://api.github.com/users/questpavan/following{/other_user}",
"gists_url": "https://api.github.com/users/questpavan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/questpavan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/questpavan/subscriptions",
"organizations_url": "https://api.github.com/users/questpavan/orgs",
"repos_url": "https://api.github.com/users/questpavan/repos",
"events_url": "https://api.github.com/users/questpavan/events{/privacy}",
"received_events_url": "https://api.github.com/users/questpavan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi,\r\nCan anyone from Huggingface team look into this?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Similar to #6025 ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,602 | 1,602 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
Bert ["iuliaturc/bert_uncased_L-2_H-128_A-2"]
https://huggingface.co/iuliaturc/bert_uncased_L-2_H-128_A-2
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
my own modified scripts: (give details below)
The tasks I am working on is:
my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
see comments in the code
1. Train model and tokenizer on new tokenizers library
2. try to add new tokens to tokenizer
3. Resize token embeddings for model
### Problem 1 :
If loading with AutoTokenizer, I cannot use BertTokenizerFast, but I am able to use the custom vocab.
4. Save the tokenizer using `save_pretrained`
5. Load using AutoTokenizer
### Problem 2:
If loading with BertTokenizerFast, I can use the functionality of BertTokenizerFast, but I lose the custom vocab.
4. Save the tokenizer using `save_pretrained`
5. Load using BertTokenizerFast
```python
import re
from pathlib import Path

from tokenizers import BertWordPieceTokenizer
from transformers import AutoModel, AutoTokenizer, BertTokenizerFast

# Loaded Model and Tokenizer Default
tokenizer = BertTokenizerFast.from_pretrained("""iuliaturc/bert_uncased_L-2_H-128_A-2""")
model = AutoModel.from_pretrained("""iuliaturc/bert_uncased_L-2_H-128_A-2""")
# Added Custom vocab to the tokenizer
def add_vocab_to_model(df, model, tokenizer, old_vocab, vocab_size=30000):
"""Adds new vocab to tokenizer and randomly initialises rows for new vocab in the model"""
PATH = Path('./lm_data')
PATH.mkdir(exist_ok=True)
df.dropna(inplace=True)
lm_text = ' '.join(df['text'])
lm_text = lm_text.replace("■","")
lm_text = re.sub(r'[^\x00-\x7F]+',' ', lm_text)
lm_text = re.sub(r'[^\u0000-\u007F]+',' ', lm_text)
with open('./lm_data/data.txt', mode='w') as f:
f.write(lm_text)
paths = [str(x) for x in Path("./lm_data/").glob("**/*.txt")]
# Initialize a tokenizer
tokenizer_new = BertWordPieceTokenizer(old_vocab, lowercase=True)
# Customize training
tokenizer_new.train(files=paths, vocab_size=vocab_size, min_frequency=20)
tokenizer_new.save(".", "./lm_data/new")
new_vocab = open('./lm_data/new-vocab.txt', 'r').read().split('\n')
new_vocab.remove('')
print('Adding new tokens to vocab')
n_orig_tokens = len(tokenizer)
tokenizer.add_tokens(new_vocab)
print('Original no. of tokens: %s'%n_orig_tokens)
print('Final no. of tokens: %s'%len(tokenizer))
print('Initialised random emb for new tokens')
model.resize_token_embeddings(len(tokenizer))
return model, tokenizer
# Just needs to pass a dataframe which is having a column with name "text"
model1, tokenizer1 = add_vocab_to_model(dataframe, model, tokenizer, 'bert-base-uncased-vocab.txt', vocab_size=10000)
#Adding new tokens to vocab
#Original no. of tokens: 30522
#Final no. of tokens: 34340
#Initialised random emb for new tokens
print(len(tokenizer1))
# Got the result : 34340
#Finalized Result
print(type(tokenizer1))
# transformers.tokenization_bert.BertTokenizerFast
# Final Tokenizer
# Problem 1 :
# If loading with AutoTokenizer, I cannot use BertTokenizerFast, but I am able to use the custom vocab
# Saved and Loaded Model Again
tokenizer1.save_pretrained("./tokenizer")
tokenizer2 = AutoTokenizer.from_pretrained("./tokenizer")
print(len(tokenizer2))
# Got the result: 34340
# Which is correct.
print(type(tokenizer2))
# transformers.tokenization_bert.BertTokenizer
# Not able to load BertTokenizerFast defaulting back to BertTokenizer
# Problem 2:
# If loading with BertTokenizerFast, I can use the functionality of BertTokenizerFast, but I lose the custom vocab
# Saved and Loaded Model Again
tokenizer1.save_pretrained("./tokenizer")
tokenizer3 = BertTokenizerFast.from_pretrained("./tokenizer")
print(len(tokenizer3))
# Got 30522
# The added custom vocab is lost
print(type(tokenizer3))
# transformers.tokenization_bert.BertTokenizerFast
#which is correct
```
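A possible interim workaround (untested sketch) is to re-apply the added tokens after reloading:
```python
# Untested workaround sketch: re-add the custom vocab after loading, since
# BertTokenizerFast currently seems to drop the added tokens on load.
from transformers import BertTokenizerFast

tokenizer3 = BertTokenizerFast.from_pretrained("./tokenizer")
new_vocab = [t for t in open("./lm_data/new-vocab.txt").read().split("\n") if t]
tokenizer3.add_tokens(new_vocab)
print(len(tokenizer3))  # should include the custom tokens again
```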
## Expected behavior
1. If loading the tokenizer using AutoTokenizer, it should load BertTokenizerFast.
2. If loading the tokenizer using BertTokenizerFast, it should keep the custom vocab.
## Environment info
- `transformers` version: 2.9.1
- Platform: Windows
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.1
- Tensorflow version (GPU?): NA
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4393/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4393/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4392 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4392/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4392/comments | https://api.github.com/repos/huggingface/transformers/issues/4392/events | https://github.com/huggingface/transformers/issues/4392 | 619,167,903 | MDU6SXNzdWU2MTkxNjc5MDM= | 4,392 | MarianMTModel translate {tgt}-{src} | {
"login": "NonaryR",
"id": 8309465,
"node_id": "MDQ6VXNlcjgzMDk0NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8309465?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NonaryR",
"html_url": "https://github.com/NonaryR",
"followers_url": "https://api.github.com/users/NonaryR/followers",
"following_url": "https://api.github.com/users/NonaryR/following{/other_user}",
"gists_url": "https://api.github.com/users/NonaryR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NonaryR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NonaryR/subscriptions",
"organizations_url": "https://api.github.com/users/NonaryR/orgs",
"repos_url": "https://api.github.com/users/NonaryR/repos",
"events_url": "https://api.github.com/users/NonaryR/events{/privacy}",
"received_events_url": "https://api.github.com/users/NonaryR/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2039044877,
"node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/marian",
"name": "marian",
"color": "30cc95",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Nope. Often the reverse model exists, however, e.g. `en-fr` and `fr-en`.",
"> Nope. Often the reverse model exists, however, e.g. `en-fr` and `fr-en`.\r\n\r\nWhat if reverse model does not exist for my langauge? ",
"What's your language? \r\nMaybe it's part of a multilingual group (below) or has been [trained](http://opus.nlpl.eu/Opus-MT/) since we did the conversion?\r\nOtherwise, you're out of luck unfortunately.\r\n\r\n```\r\nGROUP_MEMBERS = {\r\n 'ZH': ['cmn', 'cn', 'yue', 'ze_zh', 'zh_cn', 'zh_CN', 'zh_HK', 'zh_tw', 'zh_TW', 'zh_yue', 'zhs', 'zht', 'zh'],\r\n 'ROMANCE': ['fr', 'fr_BE', 'fr_CA', 'fr_FR', 'wa', 'frp', 'oc', 'ca', 'rm', 'lld', 'fur', 'lij', 'lmo', 'es', 'es_AR', 'es_CL', 'es_CO', 'es_CR', 'es_DO', 'es_EC', 'es_ES', 'es_GT', 'es_HN', 'es_MX', 'es_NI', 'es_PA', 'es_PE', 'es_PR', 'es_SV', 'es_UY', 'es_VE', 'pt', 'pt_br', 'pt_BR', 'pt_PT', 'gl', 'lad', 'an', 'mwl', 'it', 'it_IT', 'co', 'nap', 'scn', 'vec', 'sc', 'ro', 'la'],\r\n 'NORTH_EU': ['de', 'nl', 'fy', 'af', 'da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'],\r\n 'SCANDINAVIA': ['da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'],\r\n 'SAMI': ['se', 'sma', 'smj', 'smn', 'sms'],\r\n 'NORWAY': ['nb_NO', 'nb', 'nn_NO', 'nn', 'nog', 'no_nb', 'no'],\r\n 'CELTIC': ['ga', 'cy', 'br', 'gd', 'kw', 'gv']\r\n}\r\n```",
"> What's your language?\r\n> Maybe it's part of a multilingual group (below) or has been [trained](http://opus.nlpl.eu/Opus-MT/) since we did the conversion?\r\n> Otherwise, you're out of luck unfortunately.\r\n> \r\n> ```\r\n> GROUP_MEMBERS = {\r\n> 'ZH': ['cmn', 'cn', 'yue', 'ze_zh', 'zh_cn', 'zh_CN', 'zh_HK', 'zh_tw', 'zh_TW', 'zh_yue', 'zhs', 'zht', 'zh'],\r\n> 'ROMANCE': ['fr', 'fr_BE', 'fr_CA', 'fr_FR', 'wa', 'frp', 'oc', 'ca', 'rm', 'lld', 'fur', 'lij', 'lmo', 'es', 'es_AR', 'es_CL', 'es_CO', 'es_CR', 'es_DO', 'es_EC', 'es_ES', 'es_GT', 'es_HN', 'es_MX', 'es_NI', 'es_PA', 'es_PE', 'es_PR', 'es_SV', 'es_UY', 'es_VE', 'pt', 'pt_br', 'pt_BR', 'pt_PT', 'gl', 'lad', 'an', 'mwl', 'it', 'it_IT', 'co', 'nap', 'scn', 'vec', 'sc', 'ro', 'la'],\r\n> 'NORTH_EU': ['de', 'nl', 'fy', 'af', 'da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'],\r\n> 'SCANDINAVIA': ['da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'],\r\n> 'SAMI': ['se', 'sma', 'smj', 'smn', 'sms'],\r\n> 'NORWAY': ['nb_NO', 'nb', 'nn_NO', 'nn', 'nog', 'no_nb', 'no'],\r\n> 'CELTIC': ['ga', 'cy', 'br', 'gd', 'kw', 'gv']\r\n> }\r\n> ```\r\n\r\nSwahili, oooh!"
] | 1,589 | 1,589 | 1,589 | NONE | null | Hello!
Is it possible to use `MarianMTModels` for translation from the target language to the source, i.e. backwards? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4392/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4391 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4391/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4391/comments | https://api.github.com/repos/huggingface/transformers/issues/4391/events | https://github.com/huggingface/transformers/issues/4391 | 619,159,631 | MDU6SXNzdWU2MTkxNTk2MzE= | 4,391 | [PretrainedTokenizer] is <unk> a special token? | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"@mfuntowicz @LysandreJik might understand the best!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | CONTRIBUTOR | null | This confused me a lot as I was trying to add common tests for Marian.
I believe special_tokens_mask [must](https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_common.py#L445) return 0 for positions with unk_token_id, but `<unk>` is in `SpecialTokensMixin.all_special_ids`.
Relatedly, this [RobertaTokenizer](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_roberta.py#L204) code suggests there should be a relationship between `all_special_ids` and `special_token_mask` logic.
Maybe `<unk>` shouldn't be in `SpecialTokensMixin.all_special_ids`?
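A minimal illustration of the mismatch (assuming `bert-base-uncased`; the expected outputs reflect my reading of the current code):
```python
from transformers import BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
print(tok.unk_token_id in tok.all_special_ids)  # True: [UNK] counts as "special"
mask = tok.get_special_tokens_mask(
    [tok.cls_token_id, tok.unk_token_id, tok.sep_token_id],
    already_has_special_tokens=True,
)
print(mask)  # expected [1, 0, 1]: the mask logic only checks cls/sep
```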
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4391/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4390 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4390/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4390/comments | https://api.github.com/repos/huggingface/transformers/issues/4390/events | https://github.com/huggingface/transformers/pull/4390 | 619,149,437 | MDExOlB1bGxSZXF1ZXN0NDE4NzMwMDM4 | 4,390 | [cleanup] test_tokenization_common.py | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null |
- TesterMixin implements some oft-repeated functions
- split long tests into multiple test cases for better tracebacks
- misc typos
- type hints | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4390/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4390",
"html_url": "https://github.com/huggingface/transformers/pull/4390",
"diff_url": "https://github.com/huggingface/transformers/pull/4390.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4390.patch",
"merged_at": 1589899616000
} |
https://api.github.com/repos/huggingface/transformers/issues/4389 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4389/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4389/comments | https://api.github.com/repos/huggingface/transformers/issues/4389/events | https://github.com/huggingface/transformers/pull/4389 | 619,142,846 | MDExOlB1bGxSZXF1ZXN0NDE4NzI0NjM0 | 4,389 | [MarianTokenizer] implement save_vocabulary and other common methods | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2039044877,
"node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/marian",
"name": "marian",
"color": "30cc95",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4389?src=pr&el=h1) Report\n> Merging [#4389](https://codecov.io/gh/huggingface/transformers/pull/4389?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/48c3a70b4eaedab1dd9ad49990cfaa4d6cb8f6a0&el=desc) will **decrease** coverage by `0.37%`.\n> The diff coverage is `96.22%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4389?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4389 +/- ##\n==========================================\n- Coverage 78.41% 78.03% -0.38% \n==========================================\n Files 123 123 \n Lines 20432 20477 +45 \n==========================================\n- Hits 16021 15980 -41 \n- Misses 4411 4497 +86 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4389?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/4389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `93.10% <96.22%> (+5.77%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4389?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4389?src=pr&el=footer). Last update [48c3a70...1bd2df5](https://codecov.io/gh/huggingface/transformers/pull/4389?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Can anyone help with this issue: #5040 ?"
] | 1,589 | 1,592 | 1,589 | CONTRIBUTOR | null | - adds `test_tokenization_marian.py` which runs the common tokenizer tests.
- adds `save_vocabulary` and other methods to `MarianTokenizer` to make the common tests pass.
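For example, the save/reload round-trip this should enable (path illustrative, a sketch rather than a test):
```python
from transformers import MarianTokenizer

tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
tok.save_pretrained("./marian-en-de")  # now writes the vocab files
reloaded = MarianTokenizer.from_pretrained("./marian-en-de")
```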
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4389/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4389",
"html_url": "https://github.com/huggingface/transformers/pull/4389",
"diff_url": "https://github.com/huggingface/transformers/pull/4389.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4389.patch",
"merged_at": 1589931949000
} |
https://api.github.com/repos/huggingface/transformers/issues/4388 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4388/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4388/comments | https://api.github.com/repos/huggingface/transformers/issues/4388/events | https://github.com/huggingface/transformers/issues/4388 | 619,065,422 | MDU6SXNzdWU2MTkwNjU0MjI= | 4,388 | [docs] AutoModelWithLMHead(model_name, **kwargs) | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"unstale?",
"This was fixed with https://github.com/huggingface/transformers/pull/5665!"
] | 1,589 | 1,595 | 1,595 | CONTRIBUTOR | null | `AutoModelWithLMHead.from_pretrained` cannot accept `output_attentions=True`
This [docstring](https://huggingface.co/transformers/model_doc/auto.html?highlight=transformers%20automodelwithlmhead#transformers.AutoModelWithLMHead) suggests that it can.
Thanks @jrvc for discovering!
```python
import transformers
modelname='Helsinki-NLP/opus-mt-en-de'
config_overrider={'output_attentions':True, 'output_hidden_states':True}
self.model = transformers.AutoModelWithLMHead.from_pretrained(modelname, **config_overrider)
=>
*** TypeError: __init__() got an unexpected keyword argument 'output_attentions'
```
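A workaround that may sidestep this (untested against Marian specifically, but standard config usage) is to set the flags on the config first:
```python
# Possible workaround sketch: override the flags via the config object instead.
from transformers import AutoConfig, AutoModelWithLMHead

modelname = 'Helsinki-NLP/opus-mt-en-de'
config = AutoConfig.from_pretrained(modelname, output_attentions=True, output_hidden_states=True)
model = AutoModelWithLMHead.from_pretrained(modelname, config=config)
```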
I will dig deeper! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4388/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4387 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4387/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4387/comments | https://api.github.com/repos/huggingface/transformers/issues/4387/events | https://github.com/huggingface/transformers/issues/4387 | 619,052,256 | MDU6SXNzdWU2MTkwNTIyNTY= | 4,387 | [Bart/Marian] ignore output_attentions when invoked through AutoModel | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2039044877,
"node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/marian",
"name": "marian",
"color": "30cc95",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"this works for me:\r\n```python\r\nmodelname='Helsinki-NLP/opus-mt-en-de'\r\nmodel = MarianMTModel.from_pretrained(modelname,output_attentions=True, output_hidden_states=True)\r\noutput_tuple = model(**model.dummy_inputs)\r\nassert len(output_tuple) == 6\r\n```",
"#4388 is the real issue."
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | - [ ] should mimic behavior of other models
- [ ] backwards compat? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4387/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4386 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4386/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4386/comments | https://api.github.com/repos/huggingface/transformers/issues/4386/events | https://github.com/huggingface/transformers/issues/4386 | 618,994,051 | MDU6SXNzdWU2MTg5OTQwNTE= | 4,386 | Finetuning BERT classifier on a non-GLUE dataset in GLUE format | {
"login": "shauli-ravfogel",
"id": 14981791,
"node_id": "MDQ6VXNlcjE0OTgxNzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/14981791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shauli-ravfogel",
"html_url": "https://github.com/shauli-ravfogel",
"followers_url": "https://api.github.com/users/shauli-ravfogel/followers",
"following_url": "https://api.github.com/users/shauli-ravfogel/following{/other_user}",
"gists_url": "https://api.github.com/users/shauli-ravfogel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shauli-ravfogel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shauli-ravfogel/subscriptions",
"organizations_url": "https://api.github.com/users/shauli-ravfogel/orgs",
"repos_url": "https://api.github.com/users/shauli-ravfogel/repos",
"events_url": "https://api.github.com/users/shauli-ravfogel/events{/privacy}",
"received_events_url": "https://api.github.com/users/shauli-ravfogel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The (new) run_glue.py is just a short template of code that you can copy/paste/customize (e.g. to plug your own Dataset), thanks to the [Trainer](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py).",
"Thanks! which class would replace `GlueDataset` in that case? and what argument should I pass instead of `task_name`? and how to set the number of labels, which in `run_glue.py` is defined by `glue_tasks_num_labels`? ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | NONE | null | # ❓ Questions & Help
## Details
Hello,
I have a dataset of my own, formatted in GLUE format. Is it possible to use the script `run_glue.py` to finetune BERT on this dataset? If that's not possible, is there an alternative script for finetuning a classifier on an arbitrary GLUE-formatted dataset?
Thanks!
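Edit: for anyone with the same question, a rough sketch of the pattern the replies point to (a custom `Dataset` plugged into `Trainer`; all names, paths, and data are illustrative and untested):
```python
import torch
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

class MyGlueFormatDataset(torch.utils.data.Dataset):
    def __init__(self, texts, labels, tokenizer):
        # era-appropriate batch encoding (pad/truncate to a fixed length)
        self.encodings = tokenizer.batch_encode_plus(
            texts, max_length=128, pad_to_max_length=True
        )
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
train_dataset = MyGlueFormatDataset(["first sentence", "second sentence"], [0, 1], tokenizer)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./out", num_train_epochs=1),
    train_dataset=train_dataset,
)
trainer.train()
```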
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4386/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4385 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4385/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4385/comments | https://api.github.com/repos/huggingface/transformers/issues/4385/events | https://github.com/huggingface/transformers/pull/4385 | 618,979,665 | MDExOlB1bGxSZXF1ZXN0NDE4NjAyNjYx | 4,385 | Should return overflowing information for the log | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | closes #4380 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4385/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4385",
"html_url": "https://github.com/huggingface/transformers/pull/4385",
"diff_url": "https://github.com/huggingface/transformers/pull/4385.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4385.patch",
"merged_at": 1589550552000
} |
https://api.github.com/repos/huggingface/transformers/issues/4384 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4384/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4384/comments | https://api.github.com/repos/huggingface/transformers/issues/4384/events | https://github.com/huggingface/transformers/pull/4384 | 618,975,178 | MDExOlB1bGxSZXF1ZXN0NDE4NTk5MDA2 | 4,384 | Attempt to unpin torch version for Github Action. | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4384/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4384",
"html_url": "https://github.com/huggingface/transformers/pull/4384",
"diff_url": "https://github.com/huggingface/transformers/pull/4384.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4384.patch",
"merged_at": 1589550436000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4383 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4383/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4383/comments | https://api.github.com/repos/huggingface/transformers/issues/4383/events | https://github.com/huggingface/transformers/issues/4383 | 618,931,472 | MDU6SXNzdWU2MTg5MzE0NzI= | 4,383 | Issue with lr during training from scratch | {
"login": "bokertof",
"id": 43165697,
"node_id": "MDQ6VXNlcjQzMTY1Njk3",
"avatar_url": "https://avatars.githubusercontent.com/u/43165697?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bokertof",
"html_url": "https://github.com/bokertof",
"followers_url": "https://api.github.com/users/bokertof/followers",
"following_url": "https://api.github.com/users/bokertof/following{/other_user}",
"gists_url": "https://api.github.com/users/bokertof/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bokertof/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bokertof/subscriptions",
"organizations_url": "https://api.github.com/users/bokertof/orgs",
"repos_url": "https://api.github.com/users/bokertof/repos",
"events_url": "https://api.github.com/users/bokertof/events{/privacy}",
"received_events_url": "https://api.github.com/users/bokertof/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Are you restarting from your checkpoint with the exact same set of script parameters?",
"@julien-c Actually, no. And I guess I undestood what the reason of that behavior is. Originally I used to train with number of epochs = 5. After that I selected 10 epochs and hoped that it'll continue pretraining from 5'th epoch up to 10'th. However as far as I understand lr-value is calculated based on the number of epochs. The current value of lr is the same as I got at 15k iteration (30k/2). So I have should selected 10 (or whatever number) epochs originally",
"Yes. You can always implement this behaviour yourself if you need it."
] | 1,589 | 1,589 | 1,589 | NONE | null | I'm trying to resume pretraining of a RoBERTa model, and I noticed that everything is OK except the learning rate. I thought it should continue to decrease from the last checkpoint (lr = 7e-7), but the lr is much bigger (lr = 5e-5) than it should be.
These are my parameters for running the script:
- --model_name_or_path ./ROBERTA-small-v1/checkpoint-30000
- --train_data_file TEXT.txt
- --output_dir ./ROBERTA-small-v2
- --mlm
- --config_name ./ROBERTA_SMILES-v1/checkpoint-30000
- --tokenizer_name ./ROBERTA
- --do_train
- --line_by_line
- --num_train_epochs 10
- --save_total_limit 1
- --save_steps 2000
- --per_gpu_train_batch_size 16
- --seed 42
- --logging_steps 250
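For reference, a minimal sketch of why this happens, assuming the default linear schedule with zero warmup steps (the step counts are illustrative, matching the 30k-step checkpoint above):
```python
# Linear decay, as in transformers' get_linear_schedule_with_warmup with
# num_warmup_steps=0: lr(step) = base_lr * (total_steps - step) / total_steps.
def linear_lr(step: int, total_steps: int, base_lr: float = 5e-5) -> float:
    return base_lr * max(0.0, (total_steps - step) / total_steps)

print(linear_lr(30_000, total_steps=30_000))  # 0.0     -> end of the 5-epoch run
print(linear_lr(30_000, total_steps=60_000))  # 2.5e-05 -> resumed 10-epoch run
print(linear_lr(15_000, total_steps=30_000))  # 2.5e-05 -> same lr as step 15k before
```
Because the scheduler is rebuilt from the new total step count, resuming with a larger `--num_train_epochs` restarts the decay from a higher point instead of continuing where the checkpoint left off.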
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4383/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4382 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4382/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4382/comments | https://api.github.com/repos/huggingface/transformers/issues/4382/events | https://github.com/huggingface/transformers/issues/4382 | 618,921,897 | MDU6SXNzdWU2MTg5MjE4OTc= | 4,382 | Need clarity on training Albert from scratch | {
"login": "008karan",
"id": 18630864,
"node_id": "MDQ6VXNlcjE4NjMwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/18630864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/008karan",
"html_url": "https://github.com/008karan",
"followers_url": "https://api.github.com/users/008karan/followers",
"following_url": "https://api.github.com/users/008karan/following{/other_user}",
"gists_url": "https://api.github.com/users/008karan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/008karan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/008karan/subscriptions",
"organizations_url": "https://api.github.com/users/008karan/orgs",
"repos_url": "https://api.github.com/users/008karan/repos",
"events_url": "https://api.github.com/users/008karan/events{/privacy}",
"received_events_url": "https://api.github.com/users/008karan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The Albert original model has a SentencePiece tokenizer, so the `AlbertTokenizer` can only handle SentencePiece vocabularies right now. The `tokenizers` library doesn't handle these vocabularies as of now.\r\n\r\nWe're still thinking of how we should proceed when training a model from scratch with a tokenizer different to the original, as right now the model-tokenizer pairs have a fixed tokenizer backend. cc @julien-c @thomwolf ",
"@LysandreJik But when I added spiece.model it started training. So is it taking vocab created by tokenizer library along with spiece.model?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Does this issue have been solved? When can we train the lm from scratch?",
"You can already train an LM from scratch, there are examples in `examples/language-modeling`.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Are there any updates on training Albert from scratch? Perhaps via the Trainer?",
"Hi, a `DataCollatorForSOP` has been added to the library, which you can indeed use with the `Trainer` object. You can see it's implementation here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/6303b5a7185fba43830db0cbb06c61861f57ddff/src/transformers/data/data_collator.py#L201-L206",
"> Hi, a `DataCollatorForSOP` has been added to the library, which you can indeed use with the `Trainer` object. You can see it's implementation here:\r\n> \r\n> https://github.com/huggingface/transformers/blob/6303b5a7185fba43830db0cbb06c61861f57ddff/src/transformers/data/data_collator.py#L201-L206\r\n\r\nWhen i try to use this `DataCollatorForSOP` it says that it is deprecated and that i should use `DataCollatorForLanguageModeling`. It is however not clear whether this data collator also produces the objective sentence order prediction as in the original ALBERT paper. Can anyone verify whether it does this or not?"
] | 1,589 | 1,637 | 1,601 | NONE | null | # ❓ Questions & Help
Using transformers 2.9.1
I am following the EsperBERTo tutorial to train an ALBERT LM from scratch. I am able to start training for RoBERTa, but for ALBERT I get a tokenizer issue:
```
05/15/2020 16:23:29 - INFO - transformers.training_args - PyTorch: setting up devices
05/15/2020 16:23:29 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 2, distributed training: False, 16-bits training: False
05/15/2020 16:23:29 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='./EsperBERTo-small-v1', overwrite_output_dir=False, do_train=True, do_eval=False, do_predict=False, evaluate_during_training=False, per_gpu_train_batch_size=16, per_gpu_eval_batch_size=8, gradient_accumulation_steps=1, learning_rate=0.0001, weight_decay=0.0, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=1.0, max_steps=-1, warmup_steps=0, logging_dir=None, logging_first_step=False, logging_steps=500, save_steps=2000, save_total_limit=2, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False)
05/15/2020 16:23:29 - INFO - transformers.configuration_utils - loading configuration file ./EsperBERTo/config.json
05/15/2020 16:23:29 - INFO - transformers.configuration_utils - Model config AlbertConfig {
"architectures": [
"AlbertForMaskedLM"
],
"attention_probs_dropout_prob": 0,
"bos_token_id": 2,
"classifier_dropout_prob": 0.1,
"down_scale_factor": 1,
"embedding_size": 128,
"eos_token_id": 3,
"gap_size": 0,
"hidden_act": "gelu",
"hidden_dropout_prob": 0,
"hidden_size": 768,
"initializer_range": 0.02,
"inner_group_num": 1,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "albert",
"net_structure_type": 0,
"num_attention_heads": 12,
"num_hidden_groups": 1,
"num_hidden_layers": 12,
"num_memory_blocks": 0,
"pad_token_id": 0,
"type_vocab_size": 2,
"vocab_size": 52000
}
05/15/2020 16:23:29 - INFO - transformers.configuration_utils - loading configuration file ./EsperBERTo/config.json
05/15/2020 16:23:29 - INFO - transformers.configuration_utils - Model config AlbertConfig {
"architectures": [
"AlbertForMaskedLM"
],
"attention_probs_dropout_prob": 0,
"bos_token_id": 2,
"classifier_dropout_prob": 0.1,
"down_scale_factor": 1,
"embedding_size": 128,
"eos_token_id": 3,
"gap_size": 0,
"hidden_act": "gelu",
"hidden_dropout_prob": 0,
"hidden_size": 768,
"initializer_range": 0.02,
"inner_group_num": 1,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "albert",
"net_structure_type": 0,
"num_attention_heads": 12,
"num_hidden_groups": 1,
"num_hidden_layers": 12,
"num_memory_blocks": 0,
"pad_token_id": 0,
"type_vocab_size": 2,
"vocab_size": 52000
}
05/15/2020 16:23:29 - INFO - transformers.tokenization_utils - Model name './EsperBERTo' not found in model shortcut name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). Assuming './EsperBERTo' is a path, a model identifier, or url to a directory containing tokenizer files.
05/15/2020 16:23:29 - INFO - transformers.tokenization_utils - Didn't find file ./EsperBERTo/spiece.model. We won't load it.
05/15/2020 16:23:29 - INFO - transformers.tokenization_utils - Didn't find file ./EsperBERTo/added_tokens.json. We won't load it.
05/15/2020 16:23:29 - INFO - transformers.tokenization_utils - Didn't find file ./EsperBERTo/special_tokens_map.json. We won't load it.
05/15/2020 16:23:29 - INFO - transformers.tokenization_utils - loading file None
05/15/2020 16:23:29 - INFO - transformers.tokenization_utils - loading file None
05/15/2020 16:23:29 - INFO - transformers.tokenization_utils - loading file None
05/15/2020 16:23:29 - INFO - transformers.tokenization_utils - loading file ./EsperBERTo/tokenizer_config.json
Traceback (most recent call last):
File "/home/gamut/Downloads/transformers_source/examples/language-modeling/run_language_modeling.py", line 289, in <module>
main()
File "/home/gamut/Downloads/transformers_source/examples/language-modeling/run_language_modeling.py", line 188, in main
tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir)
File "/home/gamut/anaconda2/envs/xyz/hugging_source/lib/python3.6/site-packages/transformers/tokenization_auto.py", line 203, in from_pretrained
return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/home/gamut/anaconda2/envs/xyz/hugging_source/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 902, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "/home/gamut/anaconda2/envs/xyz/hugging_source/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 1055, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/gamut/anaconda2/envs/xyz/hugging_source/lib/python3.6/site-packages/transformers/tokenization_albert.py", line 155, in __init__
self.sp_model.Load(vocab_file)
File "/home/gamut/anaconda2/envs/xyz/hugging_source/lib/python3.6/site-packages/sentencepiece.py", line 275, in _sentencepiece_processor_load
return self._Load_native(model_file)
File "/home/gamut/anaconda2/envs/xyz/hugging_source/lib/python3.6/site-packages/sentencepiece.py", line 75, in Load
return _sentencepiece.SentencePieceProcessor_Load(self, filename)
TypeError: not a string
CPU times: user 13.6 ms, sys: 27.4 ms, total: 41.1 ms
Wall time: 1.12 s
```
I guess it was looking for the SentencePiece model, so I added spiece.model to the EsperBERTo folder and it started training.
Now I am confused: the vocab it uses was generated with the Hugging Face tokenizers library, yet it is also taking spiece.model... can anyone clarify what is going on here?
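For context, a minimal sketch of what `AlbertTokenizer` actually expects: it wraps a SentencePiece model, so the tokenizer directory needs a `spiece.model` trained with the `sentencepiece` library. The corpus filename below is a placeholder:
```
import sentencepiece as spm
from transformers import AlbertTokenizer

# Produces spiece.model and spiece.vocab; "corpus.txt" stands in for your data.
spm.SentencePieceTrainer.Train(
    "--input=corpus.txt --model_prefix=spiece --vocab_size=52000 --model_type=unigram"
)

# AlbertTokenizer reads the SentencePiece file directly; a tokenizers-library
# vocab.json/merges.txt pair is not a substitute for it.
tokenizer = AlbertTokenizer("spiece.model")
print(tokenizer.tokenize("training albert from scratch"))
```
If both files sit in the folder, the slow `AlbertTokenizer` only consults spiece.model, which would explain the confusion above: the vocab produced by the tokenizers library is simply ignored.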
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4382/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4382/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4381 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4381/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4381/comments | https://api.github.com/repos/huggingface/transformers/issues/4381/events | https://github.com/huggingface/transformers/issues/4381 | 618,836,983 | MDU6SXNzdWU2MTg4MzY5ODM= | 4,381 | Unknown task fill-mask | {
"login": "DesiKeki",
"id": 48218899,
"node_id": "MDQ6VXNlcjQ4MjE4ODk5",
"avatar_url": "https://avatars.githubusercontent.com/u/48218899?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DesiKeki",
"html_url": "https://github.com/DesiKeki",
"followers_url": "https://api.github.com/users/DesiKeki/followers",
"following_url": "https://api.github.com/users/DesiKeki/following{/other_user}",
"gists_url": "https://api.github.com/users/DesiKeki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DesiKeki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DesiKeki/subscriptions",
"organizations_url": "https://api.github.com/users/DesiKeki/orgs",
"repos_url": "https://api.github.com/users/DesiKeki/repos",
"events_url": "https://api.github.com/users/DesiKeki/events{/privacy}",
"received_events_url": "https://api.github.com/users/DesiKeki/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can you upgrade your version of transformers?",
"Same issue I am not able to run the command `transformer-cli env` even after installing from source @julien-c .\r\n\r\n`transformers-cli is not recognized as an internal or external command operable program or batch file` \r\n\r\nJust wondering if this is due to the fact of using `conda` as in latest versions it is not preferring to modify the PYTHONPATH. hence the non visibility of the command?\r\n\r\n",
"Upgrading the transformers version to 2.9.1 works!"
] | 1,589 | 1,589 | 1,589 | NONE | null | # 🐛 Bug
## Information
I am trying to create a `pipeline` for the fill-mask task.
Language I am using the model on is English
The problem arises when using:
My own modified scripts: (give details below)
```
import transformers
transformers.__version__
```
>>'2.3.0'
```
from transformers import pipeline
nlp_fill = pipeline('fill-mask')
nlp_fill('Hugging Face is a French company based in <mask>')
```
```
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-22-d415d13f7528> in <module>
----> 1 nlp_fill = pipeline('fill-mask')
      2 nlp_fill('Hugging Face is a French company based in <mask>')

~\Anaconda3\lib\site-packages\transformers\pipelines.py in pipeline(task, model, config, tokenizer, modelcard, **kwargs)
    848     # Retrieve the task
    849     if task not in SUPPORTED_TASKS:
--> 850         raise KeyError("Unknown task {}, available tasks are {}".format(task, list(SUPPORTED_TASKS.keys())))
    851
    852     framework = get_framework(model)

KeyError: "Unknown task fill-mask, available tasks are ['feature-extraction', 'sentiment-analysis', 'ner', 'question-answering']"
```
The task I am working on is:
My own task or dataset: trying out the Pipeline API.
## To reproduce
Steps to reproduce the behavior:
```
from transformers import pipeline
nlp_fill = pipeline('fill-mask')
nlp_fill('Hugging Face is a French company based in <mask>')
```
## Expected behavior
It should have returned the filled-in sequences with their scores.
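For anyone hitting this: the fill-mask task simply did not exist in v2.3.0, so (as the comments above confirm) upgrading fixes it. A minimal sketch, assuming transformers >= 2.9.1:
```
# pip install --upgrade transformers   # 2.9.1 is confirmed to work above
from transformers import pipeline

nlp_fill = pipeline("fill-mask")
masked = f"Hugging Face is a French company based in {nlp_fill.tokenizer.mask_token}"
print(nlp_fill(masked))  # a list of dicts with 'sequence', 'score', 'token'
```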
## Environment info
'transformers-cli' is not recognized as an internal or external command, operable program or batch file.
- `transformers` version: 2.3.0
- Platform: Windows Jupyter notebook
- Python version: 3.7.4
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4381/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4380 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4380/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4380/comments | https://api.github.com/repos/huggingface/transformers/issues/4380/events | https://github.com/huggingface/transformers/issues/4380 | 618,824,821 | MDU6SXNzdWU2MTg4MjQ4MjE= | 4,380 | max_qa_length is needed for funetune on multiple-choice problems | {
"login": "MagicFrogSJTU",
"id": 8948386,
"node_id": "MDQ6VXNlcjg5NDgzODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8948386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MagicFrogSJTU",
"html_url": "https://github.com/MagicFrogSJTU",
"followers_url": "https://api.github.com/users/MagicFrogSJTU/followers",
"following_url": "https://api.github.com/users/MagicFrogSJTU/following{/other_user}",
"gists_url": "https://api.github.com/users/MagicFrogSJTU/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MagicFrogSJTU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MagicFrogSJTU/subscriptions",
"organizations_url": "https://api.github.com/users/MagicFrogSJTU/orgs",
"repos_url": "https://api.github.com/users/MagicFrogSJTU/repos",
"events_url": "https://api.github.com/users/MagicFrogSJTU/events{/privacy}",
"received_events_url": "https://api.github.com/users/MagicFrogSJTU/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, you're totally correct. Thank you for reporting the issue, I'm fixing it in #4385."
] | 1,589 | 1,589 | 1,589 | NONE | null | # 🚀 Feature request
In examples/multiple-choice/run_multiple_choice.py, max_qa_length is not implemented the way it is in google/ALBERT, which causes trouble when finetuning on RACE.
This should be easy because there is similar code in run_squad.py.
### Your contribution
I have written the script and get RACE performance similar to google/ALBERT's.
# Current Bug
In examples/multiple-choice/utils_multiple_choice.py
```python
# From line 537: the call should pass return_overflowing_tokens=True, i.e.
#
# inputs = tokenizer.encode_plus(
#     text_a, text_b, add_special_tokens=True, max_length=max_length,
#     pad_to_max_length=True, return_overflowing_tokens=True,
# )
#
# otherwise "num_truncated_tokens" is never set and the log info below can
# never be activated.
inputs = tokenizer.encode_plus(
    text_a, text_b, add_special_tokens=True, max_length=max_length, pad_to_max_length=True,
)
if "num_truncated_tokens" in inputs and inputs["num_truncated_tokens"] > 0:
    logger.info(
        "Attention! you are cropping tokens (swag task is ok). "
        "If you are training ARC and RACE and you are popping question + options, "
        "you need to try to use a bigger max seq length!"
    )
```
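A small self-contained check of the difference (this relies on the slow-tokenizer behavior in transformers 2.x, where truncation plus `return_overflowing_tokens=True` populates `num_truncated_tokens`; the filler text is arbitrary):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
enc = tokenizer.encode_plus(
    "word " * 300, "option", max_length=128, pad_to_max_length=True,
    return_overflowing_tokens=True,  # without this flag the key below never appears
)
print(enc.get("num_truncated_tokens"))  # > 0, so the warning path can fire
```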
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4380/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4380/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4379 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4379/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4379/comments | https://api.github.com/repos/huggingface/transformers/issues/4379/events | https://github.com/huggingface/transformers/issues/4379 | 618,747,514 | MDU6SXNzdWU2MTg3NDc1MTQ= | 4,379 | MNLI finetuning results affected by values of max_steps | {
"login": "leo-liuzy",
"id": 11146950,
"node_id": "MDQ6VXNlcjExMTQ2OTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/11146950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leo-liuzy",
"html_url": "https://github.com/leo-liuzy",
"followers_url": "https://api.github.com/users/leo-liuzy/followers",
"following_url": "https://api.github.com/users/leo-liuzy/following{/other_user}",
"gists_url": "https://api.github.com/users/leo-liuzy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leo-liuzy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leo-liuzy/subscriptions",
"organizations_url": "https://api.github.com/users/leo-liuzy/orgs",
"repos_url": "https://api.github.com/users/leo-liuzy/repos",
"events_url": "https://api.github.com/users/leo-liuzy/events{/privacy}",
"received_events_url": "https://api.github.com/users/leo-liuzy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | NONE | null | # 🐛 Bug
If you follow my bash script and change MAX from 40000 to 50000, the performance is dramatically affected. However, 50000 may not be the smallest number that triggers this behavior, and I am not sure whether this is a bug or expected; since the performance is severely affected, I decided to report it.
## Information
Model I am using (Bert, XLNet ...): bert-base-uncased
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Set MAX=40000 and run the provided script.
2. Set MAX=50000 and run it again.
```
export GLUE_DIR=downstream_datasets/glue
export TASK_NAME=QQP
SAVING_PERIOD=1000
MAX=40000
BS=32
SEED=42
#EPOCH=1
export CUDA_VISIBLE_DEVICES=0
for TASK_NAME in MNLI # QQP SST-2
do
echo $TASK_NAME
python run_glue.py \
--model_type bert \
--model_name_or_path bert-base-uncased \
--task_name $TASK_NAME \
--do_train \
--save_steps $SAVING_PERIOD \
--logging_steps $SAVING_PERIOD \
--max_steps $MAX \
--do_eval \
--data_dir $GLUE_DIR/$TASK_NAME \
--max_seq_length 128 \
--per_gpu_train_batch_size $BS \
--per_gpu_eval_batch_size $BS \
--learning_rate 2e-5 \
--seed $SEED \
--output_dir model_output/"$TASK_NAME"_maxlen128_seed"$SEED"_savesteps"$SAVING_PERIOD"_maxsteps"$MAX"/ \
--logging_dir runs/"$TASK_NAME"_maxlen128_seed"$SEED"_savesteps"$SAVING_PERIOD"_maxsteps"$MAX"/ \
--logging_first_step \
--evaluate_during_training
done
```
## Expected behavior
From my limited experiments, in the tensorboard dashboard:
MAX=40000: the accuracy keeps climbing.
MAX=50000: the accuracy fluctuates throughout training and cannot exceed 40%.
I suspect MNLI's performance is affected by the learning rate set by the optimizer and scheduler, which in turn depends on max_steps. However, this behaviour **does not** hold for other similar-sized datasets like QQP and SST-2. This inconsistency is confusing.
## Environment info
- `transformers` version: 2.8.0
- Platform: Linux-5.3.0-46-generic-x86_64-with-debian-buster-sid
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: single gpu
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4379/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4378 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4378/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4378/comments | https://api.github.com/repos/huggingface/transformers/issues/4378/events | https://github.com/huggingface/transformers/pull/4378 | 618,690,315 | MDExOlB1bGxSZXF1ZXN0NDE4MzcwMTQ1 | 4,378 | Implement SuperGLUE tasks and baselines | {
"login": "W4ngatang",
"id": 5520155,
"node_id": "MDQ6VXNlcjU1MjAxNTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5520155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/W4ngatang",
"html_url": "https://github.com/W4ngatang",
"followers_url": "https://api.github.com/users/W4ngatang/followers",
"following_url": "https://api.github.com/users/W4ngatang/following{/other_user}",
"gists_url": "https://api.github.com/users/W4ngatang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/W4ngatang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/W4ngatang/subscriptions",
"organizations_url": "https://api.github.com/users/W4ngatang/orgs",
"repos_url": "https://api.github.com/users/W4ngatang/repos",
"events_url": "https://api.github.com/users/W4ngatang/events{/privacy}",
"received_events_url": "https://api.github.com/users/W4ngatang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @W4ngatang did you see that we have a new `Trainer` class in the library? It's basically a refactor of the previous version of `run_glue.py` so should be pretty easy to adopt.\r\n\r\nLet us know if we can help, and very excited to have an easy path to SuperGLUE tasks in the repo :)",
"Hi Julien,\r\n\r\nSaw it! I started writing this a while ago, but you all move very fast. We're preparing this as some reference code for a competition, so we may keep the training code as is for a bit and then switch to the trainer when we're ready to actually merge into the main repo.",
"@thomwolf ",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4378?src=pr&el=h1) Report\n> Merging [#4378](https://codecov.io/gh/huggingface/transformers/pull/4378?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/90f4b2452077ac3bac9453bdc63e0359aa4fe4d2&el=desc) will **decrease** coverage by `1.47%`.\n> The diff coverage is `22.83%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4378?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4378 +/- ##\n==========================================\n- Coverage 78.00% 76.52% -1.48% \n==========================================\n Files 138 139 +1 \n Lines 23766 24408 +642 \n==========================================\n+ Hits 18539 18679 +140 \n- Misses 5227 5729 +502 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4378?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `83.36% <12.12%> (-4.14%)` | :arrow_down: |\n| [src/transformers/data/metrics/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL21ldHJpY3MvX19pbml0X18ucHk=) | `16.66% <12.35%> (-10.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `83.80% <19.44%> (-10.99%)` | :arrow_down: |\n| [src/transformers/data/processors/superglue.py](https://codecov.io/gh/huggingface/transformers/pull/4378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3VwZXJnbHVlLnB5) | `21.73% <21.73%> (ø)` | |\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/4378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `35.59% <84.61%> (+7.96%)` | :arrow_up: |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.21% <100.00%> (ø)` | |\n| [src/transformers/data/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL19faW5pdF9fLnB5) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/data/processors/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvX19pbml0X18ucHk=) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/4378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.44% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: |\n| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/4378/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4378?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4378?src=pr&el=footer). Last update [90f4b24...576ecaf](https://codecov.io/gh/huggingface/transformers/pull/4378?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Looks super-cool, especially with SustaiNLP just around the corner! Is there any help that could be given that would accelerate this PR?",
"hi @W4ngatang @thomwolf \r\n\r\nany help you need for this PR? I think pushing this PR through can help many researchers with their experiments. ",
"Hi Tianyi,\n\nYes, help would be great! The code is correct from my end, but uses an old\nversion of Transformers. At this point, it's just a matter of merging the\nmost recent version of Transformers in and adhering to the Transformers\ncontributor guidelines.\n\nBest,\nAlex\n\nOn Thu, Oct 8, 2020 at 5:17 PM Tianyi <[email protected]> wrote:\n\n> hi @W4ngatang <https://github.com/W4ngatang> @thomwolf\n> <https://github.com/thomwolf>\n>\n> any help you need for this PR? I think pushing this PR through can help\n> many researchers with their experiments.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/4378#issuecomment-705828560>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABKDWG7FSDKTCVL4MI6TXEDSJYT7DANCNFSM4NBH5EJA>\n> .\n>\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,589 | 1,619 | 1,619 | NONE | null | WIP implementation of the SuperGLUE tasks, following the style of the GLUE data processors. This PR includes:
- classification with marked spans: most tasks are straightforward classification, but WiC and WSC carry special spans. This PR implements `*SpanClassification` models (currently just for BERT and RoBERTa) and the relevant data classes (see the sketch after this list)
- scripts for using and extracting results from the [experiment impact tracker](https://github.com/Breakend/experiment-impact-tracker) library, which tracks energy usage and is being used for the SustaiNLP 2020 competition
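For reviewers skimming the diff, here is a minimal illustration of the span-classification idea — pool the encoder's hidden states over each marked span, then classify the concatenation. This sketch is an assumption about the general shape of such a head, not the PR's actual `*SpanClassification` code:
```python
import torch
import torch.nn as nn

class SpanClassificationHead(nn.Module):
    """Pool hidden states over each marked span and classify (illustrative only)."""

    def __init__(self, hidden_size: int, num_spans: int, num_labels: int):
        super().__init__()
        self.classifier = nn.Linear(hidden_size * num_spans, num_labels)

    def forward(self, hidden_states: torch.Tensor, span_masks: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden); span_masks: (batch, num_spans, seq_len)
        pooled = torch.einsum("bsh,bks->bkh", hidden_states, span_masks.float())
        pooled = pooled / span_masks.float().sum(-1, keepdim=True).clamp(min=1.0)
        return self.classifier(pooled.flatten(1))  # (batch, num_labels)

head = SpanClassificationHead(hidden_size=8, num_spans=2, num_labels=2)
logits = head(torch.randn(3, 10, 8), torch.randint(0, 2, (3, 2, 10)))
print(logits.shape)  # torch.Size([3, 2])
```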
TODO
- clean up some of my development scripts and cruft | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4378/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 4
} | https://api.github.com/repos/huggingface/transformers/issues/4378/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4378",
"html_url": "https://github.com/huggingface/transformers/pull/4378",
"diff_url": "https://github.com/huggingface/transformers/pull/4378.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4378.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4377 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4377/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4377/comments | https://api.github.com/repos/huggingface/transformers/issues/4377/events | https://github.com/huggingface/transformers/pull/4377 | 618,686,752 | MDExOlB1bGxSZXF1ZXN0NDE4MzY3MjI3 | 4,377 | Added README huseinzol05/t5-base-bahasa-cased | {
"login": "huseinzol05",
"id": 19810909,
"node_id": "MDQ6VXNlcjE5ODEwOTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/19810909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/huseinzol05",
"html_url": "https://github.com/huseinzol05",
"followers_url": "https://api.github.com/users/huseinzol05/followers",
"following_url": "https://api.github.com/users/huseinzol05/following{/other_user}",
"gists_url": "https://api.github.com/users/huseinzol05/gists{/gist_id}",
"starred_url": "https://api.github.com/users/huseinzol05/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/huseinzol05/subscriptions",
"organizations_url": "https://api.github.com/users/huseinzol05/orgs",
"repos_url": "https://api.github.com/users/huseinzol05/repos",
"events_url": "https://api.github.com/users/huseinzol05/events{/privacy}",
"received_events_url": "https://api.github.com/users/huseinzol05/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4377/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4377",
"html_url": "https://github.com/huggingface/transformers/pull/4377",
"diff_url": "https://github.com/huggingface/transformers/pull/4377.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4377.patch",
"merged_at": 1589839824000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4376 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4376/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4376/comments | https://api.github.com/repos/huggingface/transformers/issues/4376/events | https://github.com/huggingface/transformers/issues/4376 | 618,653,563 | MDU6SXNzdWU2MTg2NTM1NjM= | 4,376 | Error on instantiating pipeline.summarizer | {
"login": "amgsharma",
"id": 8289118,
"node_id": "MDQ6VXNlcjgyODkxMTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8289118?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amgsharma",
"html_url": "https://github.com/amgsharma",
"followers_url": "https://api.github.com/users/amgsharma/followers",
"following_url": "https://api.github.com/users/amgsharma/following{/other_user}",
"gists_url": "https://api.github.com/users/amgsharma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amgsharma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amgsharma/subscriptions",
"organizations_url": "https://api.github.com/users/amgsharma/orgs",
"repos_url": "https://api.github.com/users/amgsharma/repos",
"events_url": "https://api.github.com/users/amgsharma/events{/privacy}",
"received_events_url": "https://api.github.com/users/amgsharma/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I am also facing the same problem",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | NONE | null | # 🐛 Bug
## Information
The problem arises when using:
I'm just testing out the different modules in the package, following the examples on the [usage page](https://huggingface.co/transformers/usage.html).
## To reproduce
Run the following snippet with the environment set up as described in the relevant section below.
`from transformers import pipeline`
`summarizer = pipeline("summarization")`
This is the traceback:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-...> in <module>
----> 1 summarizer = pipeline("summarization")
~/Projects/amlg/playground/venv/lib/python3.7/site-packages/transformers/pipelines.py in pipeline(task, model, config, tokenizer, framework, **kwargs)
1759 model = model_class.from_pretrained(model, config=config, **model_kwargs)
1760
-> 1761 return task_class(model=model, tokenizer=tokenizer, modelcard=modelcard, framework=framework, task=task, **kwargs)
~/Projects/amlg/playground/venv/lib/python3.7/site-packages/transformers/pipelines.py in __init__(self, model, tokenizer, modelcard, framework, task, args_parser, device, binary_output)
392
393 # Update config with task specific parameters
--> 394 task_specific_params = self.model.config.task_specific_params
395 if task_specific_params is not None and task in task_specific_params:
396 self.model.config.update(task_specific_params.get(task))
AttributeError: 'NoneType' object has no attribute 'config'
```
## Environment info
- `transformers` version: 2.9.1
- Platform: Darwin-19.3.0-x86_64-i386-64bit
- Python version: 3.7.3
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?: Nope
- Using distributed or parallel set-up in script?: Nope
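A note for readers: the environment above is TensorFlow-only (PyTorch is not installed), and the default summarization checkpoint ships PyTorch weights, which plausibly leaves `model` as `None` by the time the pipeline constructor dereferences `model.config`. A hedged workaround sketch — explicitly selecting a model with TF weights instead of relying on the default:
```
from transformers import pipeline

# t5-small publishes TensorFlow weights, unlike the default BART summarizer.
summarizer = pipeline("summarization", model="t5-small", tokenizer="t5-small", framework="tf")
text = "Hugging Face maintains the transformers library for NLP. " * 5
print(summarizer(text, max_length=20))
```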
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4376/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4376/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4375 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4375/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4375/comments | https://api.github.com/repos/huggingface/transformers/issues/4375/events | https://github.com/huggingface/transformers/pull/4375 | 618,619,827 | MDExOlB1bGxSZXF1ZXN0NDE4MzE1Njcx | 4,375 | [docs] Restore examples.md symlink | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Doesn't work right now as links are relative links. We should update them to global links before, I can take care of it."
] | 1,589 | 1,590 | 1,590 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4375/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4375",
"html_url": "https://github.com/huggingface/transformers/pull/4375",
"diff_url": "https://github.com/huggingface/transformers/pull/4375.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4375.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4374 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4374/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4374/comments | https://api.github.com/repos/huggingface/transformers/issues/4374/events | https://github.com/huggingface/transformers/issues/4374 | 618,565,845 | MDU6SXNzdWU2MTg1NjU4NDU= | 4,374 | Speed on various cards | {
"login": "Oxi84",
"id": 25420033,
"node_id": "MDQ6VXNlcjI1NDIwMDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/25420033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Oxi84",
"html_url": "https://github.com/Oxi84",
"followers_url": "https://api.github.com/users/Oxi84/followers",
"following_url": "https://api.github.com/users/Oxi84/following{/other_user}",
"gists_url": "https://api.github.com/users/Oxi84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Oxi84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oxi84/subscriptions",
"organizations_url": "https://api.github.com/users/Oxi84/orgs",
"repos_url": "https://api.github.com/users/Oxi84/repos",
"events_url": "https://api.github.com/users/Oxi84/events{/privacy}",
"received_events_url": "https://api.github.com/users/Oxi84/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | NONE | null | This is not a bug, but I wonder if someone knows what the speed difference between various cards is.
I currently run BERT base on a 1080.
How much speedup will I get using a 1080 Ti or a 2080? Which one is faster for Transformers?
I know a 2080 Ti would be best, but realistically the best card I can get is a 1080 Ti or a 2080. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4374/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4373 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4373/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4373/comments | https://api.github.com/repos/huggingface/transformers/issues/4373/events | https://github.com/huggingface/transformers/pull/4373 | 618,560,942 | MDExOlB1bGxSZXF1ZXN0NDE4MjY3NDg5 | 4,373 | Specify tensorboard logging in BART finetuning | {
"login": "isabelcachola",
"id": 15042219,
"node_id": "MDQ6VXNlcjE1MDQyMjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/15042219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isabelcachola",
"html_url": "https://github.com/isabelcachola",
"followers_url": "https://api.github.com/users/isabelcachola/followers",
"following_url": "https://api.github.com/users/isabelcachola/following{/other_user}",
"gists_url": "https://api.github.com/users/isabelcachola/gists{/gist_id}",
"starred_url": "https://api.github.com/users/isabelcachola/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/isabelcachola/subscriptions",
"organizations_url": "https://api.github.com/users/isabelcachola/orgs",
"repos_url": "https://api.github.com/users/isabelcachola/repos",
"events_url": "https://api.github.com/users/isabelcachola/events{/privacy}",
"received_events_url": "https://api.github.com/users/isabelcachola/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"You may also need to rebase to fix `check_code_quality`.\r\nAlso `run_examples_torch` will only pass if it doesn't add new dependencies.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,599 | 1,599 | NONE | null | Allows you to specify the directory in which to store tensorboard logs when running the BART finetuning script.
Issue: #4349
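As a rough illustration only — the flag name and wiring below are hypothetical, not the PR's actual diff — this is the general shape of exposing a tensorboard log directory through a script argument:
```python
import argparse
from torch.utils.tensorboard import SummaryWriter

parser = argparse.ArgumentParser()
# Hypothetical flag name; see the PR diff for the real argument.
parser.add_argument("--tensorboard_dir", type=str, default=None,
                    help="Directory for tensorboard event files.")
args = parser.parse_args([])  # empty argv so this sketch runs as-is
writer = SummaryWriter(log_dir=args.tensorboard_dir)  # None -> ./runs/<timestamp>
writer.add_scalar("loss", 0.5, global_step=0)
writer.close()
```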
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4373/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4373",
"html_url": "https://github.com/huggingface/transformers/pull/4373",
"diff_url": "https://github.com/huggingface/transformers/pull/4373.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4373.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4372 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4372/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4372/comments | https://api.github.com/repos/huggingface/transformers/issues/4372/events | https://github.com/huggingface/transformers/pull/4372 | 618,559,446 | MDExOlB1bGxSZXF1ZXN0NDE4MjY2MjU2 | 4,372 | Allow for None gradients in GradientAccumulator. | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | The [TFTrainer commit](https://github.com/huggingface/transformers/commit/aad50151f35b934039c455242c433a24f011cc93#diff-2a765415189cf1edda4414e1758b0e79) seems to have reverted the ability for gradients to be None in GradientAccumulator. This behavior is necessary when pretraining a model, such as AlbertForPreTraining, and using only one loss (only SOP or only MLM).
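For readers, a minimal sketch of the None-tolerant behavior being restored — an illustration of the intent, not the PR's exact diff. A `None` gradient (e.g. the unused head's parameters when only one loss is active) is skipped instead of being passed to `assign_add`:
```python
import tensorflow as tf

variables = [tf.Variable([1.0, 2.0]), tf.Variable([3.0])]
accumulated = [tf.Variable(tf.zeros_like(v), trainable=False) for v in variables]

def accumulate(gradients):
    for acc, grad in zip(accumulated, gradients):
        if grad is not None:  # skip parameters untouched by the active loss
            acc.assign_add(grad)

accumulate([tf.constant([0.1, 0.1]), None])  # the second slot is left untouched
print([acc.numpy() for acc in accumulated])
```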
The [OpenNMT repository](https://github.com/OpenNMT/OpenNMT-tf/blob/master/opennmt/optimizers/utils.py), which the code was taken from, lacks this ability, but it's important. This commit restores the ability for gradients to be None. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4372/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4372",
"html_url": "https://github.com/huggingface/transformers/pull/4372",
"diff_url": "https://github.com/huggingface/transformers/pull/4372.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4372.patch",
"merged_at": 1589550721000
} |
https://api.github.com/repos/huggingface/transformers/issues/4371 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4371/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4371/comments | https://api.github.com/repos/huggingface/transformers/issues/4371/events | https://github.com/huggingface/transformers/issues/4371 | 618,480,561 | MDU6SXNzdWU2MTg0ODA1NjE= | 4,371 | Save pretrained MarianTokenizer | {
"login": "NonaryR",
"id": 8309465,
"node_id": "MDQ6VXNlcjgzMDk0NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8309465?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NonaryR",
"html_url": "https://github.com/NonaryR",
"followers_url": "https://api.github.com/users/NonaryR/followers",
"following_url": "https://api.github.com/users/NonaryR/following{/other_user}",
"gists_url": "https://api.github.com/users/NonaryR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NonaryR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NonaryR/subscriptions",
"organizations_url": "https://api.github.com/users/NonaryR/orgs",
"repos_url": "https://api.github.com/users/NonaryR/repos",
"events_url": "https://api.github.com/users/NonaryR/events{/privacy}",
"received_events_url": "https://api.github.com/users/NonaryR/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2039044877,
"node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/marian",
"name": "marian",
"color": "30cc95",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Also, how can I transfer to cuda `BatchEncoding` object which return after call `prepare_translation_batch` method on `MarianTokenizer`",
"This could be fixed by specifiying a `save_vocabulary` function, like it is used in:\r\n\r\nhttps://github.com/huggingface/transformers/blob/7defc6670fa76e857109e1b99f3e919da8d11f42/src/transformers/tokenization_xlm_roberta.py#L292-L311\r\n\r\n(both source and target vocab needs to be written),\r\n\r\nAdditionally, it would be awesome if the `MarianTokenizer` class would be `pickle`-able using `__getstate__` and `__setstate__` methods :)\r\n\r\n/cc @sshleifer ",
"@NonaryR `BatchEncoding.to(device:str)` should work.\r\nWill fix vocab saving, great catch!\r\n",
"Thank you, `.to(device)` helped!\r\nI can close an issue if you want it. \r\nAlso, many thanks for this new release with all these new awesome translation models, they will bring significant changes to the community!!",
"But how can you load the saved MarianTokenizer from your local directory? \r\nCan anyone give a complete example about how to save and load MarianTokenizer?",
"Can anyone help with this issue: #5040 ?"
] | 1,589 | 1,592 | 1,589 | NONE | null | Hello!
I can't save a pretrained MarianTokenizer: calling `save_pretrained` raises `NotImplementedError`, since `MarianTokenizer` (a subclass of `PreTrainedTokenizer`) doesn't implement `save_vocabulary`.
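For context (and per the fixes discussed in the comments above), a short sketch of the surrounding workflow under transformers 2.9.x — `prepare_translation_batch` plus `BatchEncoding.to(device)`; the checkpoint name is just one of the released Marian models:
```python
import torch
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
batch = tokenizer.prepare_translation_batch(["Hello, world!"]).to(device)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
# tokenizer.save_pretrained("marian-en-de/")  # this is the call that raised
```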
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4371/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4370 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4370/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4370/comments | https://api.github.com/repos/huggingface/transformers/issues/4370/events | https://github.com/huggingface/transformers/issues/4370 | 618,470,967 | MDU6SXNzdWU2MTg0NzA5Njc= | 4,370 | run_squad with early stopping on a validation set | {
"login": "yonatanbitton",
"id": 26148975,
"node_id": "MDQ6VXNlcjI2MTQ4OTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/26148975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonatanbitton",
"html_url": "https://github.com/yonatanbitton",
"followers_url": "https://api.github.com/users/yonatanbitton/followers",
"following_url": "https://api.github.com/users/yonatanbitton/following{/other_user}",
"gists_url": "https://api.github.com/users/yonatanbitton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonatanbitton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonatanbitton/subscriptions",
"organizations_url": "https://api.github.com/users/yonatanbitton/orgs",
"repos_url": "https://api.github.com/users/yonatanbitton/repos",
"events_url": "https://api.github.com/users/yonatanbitton/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonatanbitton/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, this would be something you would need to build yourself.\r\n\r\nAlternatively you could use our Layers as Keras Layers?"
] | 1,589 | 1,589 | 1,589 | NONE | null | Hello.
Is there a way to use run_squad with early stopping on a validation set?
I have 3 files: train-v1.1.json, dev-v1.1.json, and test-v1.1.json.
I want to train on the train file, stop the training when the loss on the dev file starts to increase, and then produce the final predictions and answer output on the test set.
In Keras it's pretty straightforward:
`history = model.fit(X_train, y_train, validation_data=(X_dev, y_dev), epochs=4000, verbose=0, callbacks=[EarlyStopping(monitor='val_loss', mode='min')])`
How can I get similar functionality in huggingface's run_squad? I'm asking whether there is a built-in way before implementing it from scratch.
I thought of changing the code at this point:
`--evaluate_during_training` - it activates evaluation during training - I can point the evaluation file to dev-v1.1.json here, and stop the training if the validation loss increases.
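As the comment above notes, this needs to be built yourself; below is a minimal, self-contained sketch of the patience logic one could wire into that `--evaluate_during_training` branch (the loss values are dummy stand-ins for per-evaluation dev losses):
```python
class EarlyStopping:
    """Stop when the monitored validation loss hasn't improved for `patience` evals."""

    def __init__(self, patience: int = 3):
        self.patience, self.best, self.bad_evals = patience, float("inf"), 0

    def step(self, val_loss: float) -> bool:
        if val_loss < self.best:
            self.best, self.bad_evals = val_loss, 0  # improvement: reset counter
        else:
            self.bad_evals += 1
        return self.bad_evals >= self.patience

stopper = EarlyStopping(patience=2)
for val_loss in [0.9, 0.7, 0.72, 0.74, 0.6]:  # dummy dev losses, one per evaluation
    if stopper.step(val_loss):
        print(f"stopping early at dev loss {val_loss}")  # fires at 0.74
        break
```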
I would like to know if there is a better way.
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4370/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4369 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4369/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4369/comments | https://api.github.com/repos/huggingface/transformers/issues/4369/events | https://github.com/huggingface/transformers/issues/4369 | 618,444,753 | MDU6SXNzdWU2MTg0NDQ3NTM= | 4,369 | demo website: i info icon should link to resource about parameters | {
"login": "rugk",
"id": 11966684,
"node_id": "MDQ6VXNlcjExOTY2Njg0",
"avatar_url": "https://avatars.githubusercontent.com/u/11966684?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rugk",
"html_url": "https://github.com/rugk",
"followers_url": "https://api.github.com/users/rugk/followers",
"following_url": "https://api.github.com/users/rugk/following{/other_user}",
"gists_url": "https://api.github.com/users/rugk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rugk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rugk/subscriptions",
"organizations_url": "https://api.github.com/users/rugk/orgs",
"repos_url": "https://api.github.com/users/rugk/repos",
"events_url": "https://api.github.com/users/rugk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rugk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1565794707,
"node_id": "MDU6TGFiZWwxNTY1Nzk0NzA3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Write%20With%20Transformer",
"name": "Write With Transformer",
"color": "a84bf4",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi @rugk, open to any suggestion for a better documentation link – e.g. should we link to https://huggingface.co/blog/how-to-generate ?\r\n\r\nMaybe https://medium.com/huggingface/how-to-write-with-transformer-5ee58d6f51fa ?\r\n\r\nOr to https://medium.com/huggingface/how-to-build-a-state-of-the-art-conversational-ai-with-transfer-learning-2d818ac26313#79c5 like on https://convai.huggingface.co/\r\n\r\ncc @patrickvonplaten @sgugger ",
"I dunno. My complaint was just an UX complaint from a user with less scientific background, so linking to very in-deep document is likely bad.\r\n\r\nHow you solve this hmm?\r\nMaybe just write a new doc/small wiki entry here on GitHub or so?\r\n\r\nThough if I check your links the \"Advanced settings\" section in https://medium.com/huggingface/how-to-write-with-transformer-5ee58d6f51fa It does what I want: explains the parameter.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"shh #badbot :no_bell::robot::no_bell: This is still a TODO.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,606 | 1,606 | NONE | null | The website/demo at https://transformer.huggingface.co does have a "Model & decoder settings" box at the bottom left.
Next to it, at the right, there is an "(i)" sign.
Steps to reproduce (STR): click on it.
**What happens:** It links to this repo.
**What should happen:** It should link to somewhere, where the parameters are actually explained.
Reason:
The generic link to this repo is way to – well – generic. I've searched the Readme for the parameter names and could not find anything useful/an explanation of what they actually do.
The reason I click on that icon is to know what I can adjust there/what happens if I change an option, I don't just want to see the general project.
There should be a page that just explains, what each option means (in simple terms):
* Model size: what model to use (from small to big), the bigger the more useful/accurate(?) it may be
* Top p:
* Temperature:
* Max time: how long the model should – at most – compute while searching for results
"url": "https://api.github.com/repos/huggingface/transformers/issues/4369/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4369/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4368 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4368/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4368/comments | https://api.github.com/repos/huggingface/transformers/issues/4368/events | https://github.com/huggingface/transformers/issues/4368 | 618,444,157 | MDU6SXNzdWU2MTg0NDQxNTc= | 4,368 | past functionality broken in release 2.9.0 and 2.9.1 | {
"login": "Damiox",
"id": 599804,
"node_id": "MDQ6VXNlcjU5OTgwNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/599804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Damiox",
"html_url": "https://github.com/Damiox",
"followers_url": "https://api.github.com/users/Damiox/followers",
"following_url": "https://api.github.com/users/Damiox/following{/other_user}",
"gists_url": "https://api.github.com/users/Damiox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Damiox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Damiox/subscriptions",
"organizations_url": "https://api.github.com/users/Damiox/orgs",
"repos_url": "https://api.github.com/users/Damiox/repos",
"events_url": "https://api.github.com/users/Damiox/events{/privacy}",
"received_events_url": "https://api.github.com/users/Damiox/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @Damiox,\r\n\r\nI think I see the problem. That's quite an edge case here. Since `2.8.0` (here the PR: https://github.com/huggingface/transformers/pull/3734), the `input_ids` will always be cut to just the last token when using `past` to have a consistent API with other models and allow to keep putting in the whole `input_ids` when speeding up decoding. Also `past` was never supposed to be used together with `input_ids` longer than 1, so I did not think of this edge case :-/. I'm quite surprised that this even worked before, but I'm also not sure whether the `causal mask` in GPT2 was correct for this case before.\r\n\r\nI think a simple fix in your case would be to generate the output incrementally and only use `input_ids` of length 1 or not use the `past` variable at all here.\r\n\r\n@LysandreJik, looking at this script, I think I did introduce a breaking change in the PR: https://github.com/huggingface/transformers/pull/3734 for this edge case when:\r\n`past` is given to the model **and** `input_ids` is longer than 1. It's quite an edge case here and I'm not sure whether it's worth reversing the PR. What do you think? \r\n\r\n\r\n",
"Hi @patrickvonplaten - I'm a bit worried about this... I cannot apply the simple fix because I'm unable to get rid of `past` variable or run more inferences at this point because the performance would be affected for my use case which is over a realtime bigdata dataset. The current results are accurate and are based on discussions in https://github.com/huggingface/transformers/issues/3095 - this is already deployed.\r\n\r\nJust to be clear... Will this not be fixed?",
"That's an interesting use-case indeed. I don't believe we ever intended the `past` to be used with `input_ids` larger than one, and I'm quite surprised as well that it did work.\r\n\r\n@thomwolf, do you have any opinion on this?",
"I'm really looking forward to knowing if this can be fixed so to continue working as it was working up to v2.8.0. Any new thoughts on this? Thanks",
"Hi @Damiox,\r\n\r\nSorry to answer so late. We had some discussion internally about it. And you are correct, we should revert the PR I merged earlier. This is done in #4581 and should be included in the next version. @thomwolf @LysandreJik "
] | 1,589 | 1,590 | 1,590 | NONE | null | # 🐛 Bug
## Information
Model: gpt2
## To reproduce
Steps to reproduce the behavior:
```
from transformers.tokenization_gpt2 import GPT2Tokenizer
from transformers.modeling_gpt2 import GPT2LMHeadModel
import torch
# Remember to run transformers with latest master (not release 2.5.1)
tokenizer = GPT2Tokenizer.from_pretrained('gpt2', pad_token='<|endoftext|>')
model = GPT2LMHeadModel.from_pretrained('gpt2')
# Complete phrases are: "I like to drink soda without sugar" and "Go watch TV alone, I am not going"
doc = "I like to"
# note: comment the above line and uncomment the following line to make it work with 1 document
docs_tensors = tokenizer.batch_encode_plus([doc], pad_to_max_length=True, return_tensors='pt')
docs_next = [" soda and ", " with this"]
# note: comment the above line and uncomment the following line to make it work with 1 document
docs_next_tensors = tokenizer.batch_encode_plus(
[d for d in docs_next], pad_to_max_length=True, return_tensors='pt')
# predicting the first part of each phrase
_, past = model(docs_tensors['input_ids'], attention_mask=docs_tensors['attention_mask'])
# manipulating the past
past_expanded = [torch.repeat_interleave(layer, torch.LongTensor([2]), dim=1) for layer in past]
past_attention_mask = torch.ones(docs_next_tensors['attention_mask'].shape[0], len(docs_tensors['input_ids'][0]), dtype=torch.int64)
attn_mask = torch.cat([past_attention_mask, docs_next_tensors['attention_mask']], dim=-1)
# predicting the rest of the phrase with past
logits, _ = model(docs_next_tensors['input_ids'], attention_mask=attn_mask, past=past_expanded)
logits = logits[:, -1]
_, top_indices_results = logits.topk(50)
words = [tokenizer.decode([idx.item()]) for tir in top_indices_results for idx in tir]
print("Predictions for:", [doc + n for n in docs_next])
print("Results with past:", words)
#####################
docs_full_tensors = tokenizer.batch_encode_plus(
[doc + n for n in docs_next], pad_to_max_length=True, return_tensors='pt')
logits, _ = model(docs_full_tensors['input_ids'], attention_mask=docs_full_tensors['attention_mask'])
logits = logits[:, -1]
_, top_indices_results = logits.topk(50)
words = [tokenizer.decode([idx.item()]) for tir in top_indices_results for idx in tir]
print("Predictions for:", [doc + n for n in docs_next])
print("Results without past:", words)
```
## Expected behavior
I expect the `past` functionality to work, but I'm getting the following error:
```
RuntimeError: The size of tensor a (4) must match the size of tensor b (5) at non-singleton dimension 3
```
This was working correctly up to release 2.8.0.
## Environment info
- `transformers` version: 2.9.0 / 2.9.1
- Platform: macOS
- Python version:
- PyTorch version (GPU?): 1.4.0 / 1.5.0 (CPU)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4368/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4367 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4367/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4367/comments | https://api.github.com/repos/huggingface/transformers/issues/4367/events | https://github.com/huggingface/transformers/pull/4367 | 618,372,141 | MDExOlB1bGxSZXF1ZXN0NDE4MTEyMDMx | 4,367 | Fix: unpin flake8 and fix cs errors | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4367/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4367",
"html_url": "https://github.com/huggingface/transformers/pull/4367",
"diff_url": "https://github.com/huggingface/transformers/pull/4367.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4367.patch",
"merged_at": 1589476467000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4366 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4366/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4366/comments | https://api.github.com/repos/huggingface/transformers/issues/4366/events | https://github.com/huggingface/transformers/issues/4366 | 618,368,082 | MDU6SXNzdWU2MTgzNjgwODI= | 4,366 | [pipelines] Failing @slow test for TF Summarization | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | Command:
```bash
RUN_SLOW=1 pytest tests/test_pipelines.py --durations 0 -s -k tf_defaults
```
Issue caused by trying to use Bart in TF.
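A quick way to reproduce outside the test suite, as a sketch (the failure presumably comes from the default summarization model being BART, which has no TensorFlow implementation at the time of writing):
```python
from transformers import pipeline

# Requesting the TF framework for summarization should fail while the
# default model (BART) is PyTorch-only.
summarizer = pipeline("summarization", framework="tf")
```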
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4366/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4365 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4365/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4365/comments | https://api.github.com/repos/huggingface/transformers/issues/4365/events | https://github.com/huggingface/transformers/issues/4365 | 618,354,312 | MDU6SXNzdWU2MTgzNTQzMTI= | 4,365 | Has anyone successfully used TF2.0 to load pre-trained transformer-XL (wt103) weights and reproduce their sota results | {
"login": "menghuanlater",
"id": 19285180,
"node_id": "MDQ6VXNlcjE5Mjg1MTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/19285180?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/menghuanlater",
"html_url": "https://github.com/menghuanlater",
"followers_url": "https://api.github.com/users/menghuanlater/followers",
"following_url": "https://api.github.com/users/menghuanlater/following{/other_user}",
"gists_url": "https://api.github.com/users/menghuanlater/gists{/gist_id}",
"starred_url": "https://api.github.com/users/menghuanlater/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/menghuanlater/subscriptions",
"organizations_url": "https://api.github.com/users/menghuanlater/orgs",
"repos_url": "https://api.github.com/users/menghuanlater/repos",
"events_url": "https://api.github.com/users/menghuanlater/events{/privacy}",
"received_events_url": "https://api.github.com/users/menghuanlater/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,594 | 1,594 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
I tried to reproduce the results, but my final perplexity was 35.6 (vs. 18.3 in the paper). On careful inspection I found that matrix multiplication results differ slightly (by about 0.0001) between TF 2.x and TF 1.12 (which the paper used); after many layers of linear transformations and multiplications, the difference in the last layer's output representation reaches about 0.1, which is enough to affect the results. Curiously, if the mem_len and tgt_len values in the experiment are swapped, ppl actually drops to 19.5. I can ensure that all loaded parameters are correct, and the code fully follows the author's source code, only ported to TF 2.0. So I want to ask whether anyone has reproduced the results with TF 2.0.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4365/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4364 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4364/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4364/comments | https://api.github.com/repos/huggingface/transformers/issues/4364/events | https://github.com/huggingface/transformers/issues/4364 | 618,327,197 | MDU6SXNzdWU2MTgzMjcxOTc= | 4,364 | Can not reproduce article numbers | {
"login": "vladgets",
"id": 8079846,
"node_id": "MDQ6VXNlcjgwNzk4NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8079846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vladgets",
"html_url": "https://github.com/vladgets",
"followers_url": "https://api.github.com/users/vladgets/followers",
"following_url": "https://api.github.com/users/vladgets/following{/other_user}",
"gists_url": "https://api.github.com/users/vladgets/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vladgets/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vladgets/subscriptions",
"organizations_url": "https://api.github.com/users/vladgets/orgs",
"repos_url": "https://api.github.com/users/vladgets/repos",
"events_url": "https://api.github.com/users/vladgets/events{/privacy}",
"received_events_url": "https://api.github.com/users/vladgets/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"These are numbers that I am getting when running the examples (in the article or here: https://github.com/nlpyang/PreSumm) they mention much higher numbers: \r\n\r\n****** ROUGE SCORES ******\r\n\r\n** ROUGE 1\r\nF1 >> 0.195\r\nPrecision >> 0.209\r\nRecall >> 0.186\r\n\r\n** ROUGE 2\r\nF1 >> 0.094\r\nPrecision >> 0.104\r\nRecall >> 0.089\r\n\r\n** ROUGE L\r\nF1 >> 0.221\r\nPrecision >> 0.233\r\nRecall >> 0.212",
"same here. Also getting ROUGE scores in the same ball-park",
"Well, I already switched to Bart model. I do not know what is the reasons for low numbers here, possibly some bugs in the porting, but in general BertAbs model does not seem the best approach for it.",
"Alex, by the way I see that you are currently working on Summarization topic too. If you like you can write me directly to [email protected] (I am currently conversational AI researcher at Nvidia) and we can have more detailed discussion on this topic.",
"Will do, thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,598 | 1,598 | NONE | null | # 🐛 Bug
https://github.com/huggingface/transformers/tree/master/examples/summarization/bertabs
## Information
The BertAbs ROUGE-1/2 F1 evaluation numbers I am getting are much lower than those in the article: roughly half.
Model I am using (Bert, XLNet ...): BertAbs
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
Summarization
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4364/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4363 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4363/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4363/comments | https://api.github.com/repos/huggingface/transformers/issues/4363/events | https://github.com/huggingface/transformers/pull/4363 | 618,286,263 | MDExOlB1bGxSZXF1ZXN0NDE4MDQxNTM0 | 4,363 | Fix trainer evaluation | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for catching this, @patil-suraj!"
] | 1,589 | 1,589 | 1,589 | MEMBER | null | Fix eval loss calculation in Trainer for models with `lm_labels` parameter, related to issue #4361
Fix trainer crash at eval time on TPU, related to issue #4362
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4363/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4363",
"html_url": "https://github.com/huggingface/transformers/pull/4363",
"diff_url": "https://github.com/huggingface/transformers/pull/4363.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4363.patch",
"merged_at": 1589481585000
} |
https://api.github.com/repos/huggingface/transformers/issues/4362 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4362/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4362/comments | https://api.github.com/repos/huggingface/transformers/issues/4362/events | https://github.com/huggingface/transformers/issues/4362 | 618,279,677 | MDU6SXNzdWU2MTgyNzk2Nzc= | 4,362 | Trainer crashes on TPU at eval time when prediction_loss_only is True | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"By the way we're hoping to soon have TPU-specific CI in place, cc @LysandreJik "
] | 1,589 | 1,589 | 1,589 | MEMBER | null | # 🐛 Bug
The trainer crashes on TPU at eval time when `prediction_loss_only` is `True`. Here's the stack trace
```
ValueError: zero-dimensional arrays cannot be concatenated
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 630, in evaluate
output = self._prediction_loop(eval_dataloader, description="Evaluation")
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 630, in evaluate
output = self._prediction_loop(eval_dataloader, description="Evaluation")
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 707, in _prediction_loop
preds = xm.mesh_reduce("eval_preds", preds, np.concatenate)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 707, in _prediction_loop
preds = xm.mesh_reduce("eval_preds", preds, np.concatenate)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/core/xla_model.py", line 670, in mesh_reduce
return reduce_fn(xldata) if xldata else cpu_data
File "/usr/local/lib/python3.6/dist-packages/torch_xla/core/xla_model.py", line 670, in mesh_reduce
return reduce_fn(xldata) if xldata else cpu_data
File "<__array_function__ internals>", line 6, in concatenate
File "<__array_function__ internals>", line 6, in concatenate
```
This is because `preds` and `label_ids` are `None` when `prediction_loss_only` is `True`, but this is not checked here: https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L706
```
if is_tpu_available():
# tpu-comment: Get all predictions and labels from all worker shards of eval dataset
preds = xm.mesh_reduce("eval_preds", preds, np.concatenate)
label_ids = xm.mesh_reduce("eval_out_label_ids", label_ids, np.concatenate)
```
This only checks whether a TPU is available; it doesn't check whether `preds` and `label_ids` are `None`.
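A minimal sketch of the missing guard, mirroring the quoted trainer code (my suggestion only, not necessarily how the maintainers will fix it):
```python
if is_tpu_available():
    # tpu-comment: only gather across worker shards when the arrays exist;
    # with prediction_loss_only=True, both preds and label_ids stay None
    if preds is not None:
        preds = xm.mesh_reduce("eval_preds", preds, np.concatenate)
    if label_ids is not None:
        label_ids = xm.mesh_reduce("eval_out_label_ids", label_ids, np.concatenate)
```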
## Information
Model I am using (Bert, XLNet ...): T5
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Set `prediction_loss_only` only to `True` while initialising `Trainer` or in `.evaluate` method
2. call `trainer.evaluate`
## Expected behavior
The trainer should not crash at eval time on TPU when `prediction_loss_only` is `True`; `preds` and `label_ids` should be checked for `None`.
## Environment info
- `transformers` version: 2.9.1, master branch
- Platform: Colab TPU
- Python version: 3.6.9
- PyTorch version (GPU?): '1.6.0a0+96885f7' (false)
- Tensorflow version (GPU?): Not applicable
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes, colab TPU | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4362/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4361 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4361/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4361/comments | https://api.github.com/repos/huggingface/transformers/issues/4361/events | https://github.com/huggingface/transformers/issues/4361 | 618,279,603 | MDU6SXNzdWU2MTgyNzk2MDM= | 4,361 | Trainer doesn't calculate eval loss for models with lm_labels parameter | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | # 🐛 Bug
The trainer doesn't calculate the evaluation loss for models with an `lm_labels` parameter in the `forward` method. This line https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L682 is the source of the bug:
`has_labels = any(inputs.get(k) is not None for k in ["labels", "masked_lm_labels"]) `
The loss is only calculated when `has_labels` is `True`, and it's only `True` for models with `labels` and `masked_lm_labels` parameters, not for `lm_labels`.
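A possible one-line fix, sketched here as a suggestion rather than a confirmed upstream change, is to include `lm_labels` in the check:
```python
has_labels = any(
    inputs.get(k) is not None for k in ["labels", "lm_labels", "masked_lm_labels"]
)
```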
## Information
Model I am using (Bert, XLNet ...): T5
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Use any model with `lm_labels` parameter in the `forward` method, ex T5 or BART and evaluate using trainer
## Expected behavior
Evaluation loss should be calculated for models with `lm_labels` parameter as well.
## Environment info
- `transformers` version: 2.9.1
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: Python 3.6.9
- PyTorch version (GPU?): 1.5.0+cu101
- Tensorflow version (GPU?): Not applicable
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4361/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4360 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4360/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4360/comments | https://api.github.com/repos/huggingface/transformers/issues/4360/events | https://github.com/huggingface/transformers/issues/4360 | 618,229,840 | MDU6SXNzdWU2MTgyMjk4NDA= | 4,360 | LayerNorm not excluded from weight decay in TF | {
"login": "oliverastrand",
"id": 24825393,
"node_id": "MDQ6VXNlcjI0ODI1Mzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/24825393?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oliverastrand",
"html_url": "https://github.com/oliverastrand",
"followers_url": "https://api.github.com/users/oliverastrand/followers",
"following_url": "https://api.github.com/users/oliverastrand/following{/other_user}",
"gists_url": "https://api.github.com/users/oliverastrand/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oliverastrand/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oliverastrand/subscriptions",
"organizations_url": "https://api.github.com/users/oliverastrand/orgs",
"repos_url": "https://api.github.com/users/oliverastrand/repos",
"events_url": "https://api.github.com/users/oliverastrand/events{/privacy}",
"received_events_url": "https://api.github.com/users/oliverastrand/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks for the issue dude! Feel free to open a PR if you have a branch that fixed that lying around haha ;-) "
] | 1,589 | 1,590 | 1,590 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
bert-base-cased
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Add a print statement to `_do_use_weight_decay` in [AdamWeightDecay](https://github.com/huggingface/transformers/blob/master/src/transformers/optimization_tf.py) to see which parameters are actually excluded:
```python
def _do_use_weight_decay(self, param_name):
"""Whether to use L2 weight decay for `param_name`."""
if self.weight_decay_rate == 0:
return False
if self._include_in_weight_decay:
for r in self._include_in_weight_decay:
if re.search(r, param_name) is not None:
return True
if self._exclude_from_weight_decay:
for r in self._exclude_from_weight_decay:
if re.search(r, param_name) is not None:
print(f"Found: {param_name}")
return False
return True
```
2. run `python examples/text-classification/run_tf_glue.py --model_name_or_path bert-base-cased --task_name mrpc --output_dir temp --logging_dir temp --do_train --overwrite_output_dir --optimizer_name adamw`.
3. Observe that no weights related to layer norms are printed.
## Expected behavior
The weights of the layer norms (and the biases) should be printed.
See for example: https://github.com/google-research/bert/blob/f39e881b169b9d53bea03d2d341b31707a6c052b/optimization.py.
Since no layer-norm weights are matched by the pattern "layer_norm" (none are printed), simply switching "layer_norm" to "LayerNorm" in the exclusion list seems like the easiest change.
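Until that is fixed, a workaround sketch is to build the optimizer with patterns that actually match the variable names (the learning-rate value here is illustrative only):
```python
from transformers.optimization_tf import AdamWeightDecay

# "LayerNorm" matches the real variable names; keeping "layer_norm" as well is
# harmless, and "bias" excludes the biases as in the original BERT code.
optimizer = AdamWeightDecay(
    learning_rate=3e-5,
    weight_decay_rate=0.01,
    exclude_from_weight_decay=["LayerNorm", "layer_norm", "bias"],
)
```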
## Environment info
- `transformers` version: 2.9.0
- Platform: Darwin-19.4.0-x86_64-i386-64bit
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.0 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4360/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4359 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4359/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4359/comments | https://api.github.com/repos/huggingface/transformers/issues/4359/events | https://github.com/huggingface/transformers/pull/4359 | 618,200,827 | MDExOlB1bGxSZXF1ZXN0NDE3OTcyMjY1 | 4,359 | Create README.md | {
"login": "savasy",
"id": 6584825,
"node_id": "MDQ6VXNlcjY1ODQ4MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6584825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/savasy",
"html_url": "https://github.com/savasy",
"followers_url": "https://api.github.com/users/savasy/followers",
"following_url": "https://api.github.com/users/savasy/following{/other_user}",
"gists_url": "https://api.github.com/users/savasy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/savasy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/savasy/subscriptions",
"organizations_url": "https://api.github.com/users/savasy/orgs",
"repos_url": "https://api.github.com/users/savasy/repos",
"events_url": "https://api.github.com/users/savasy/events{/privacy}",
"received_events_url": "https://api.github.com/users/savasy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Added language metadata for the model to appear on https://huggingface.co/models?filter=turkish"
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4359/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4359",
"html_url": "https://github.com/huggingface/transformers/pull/4359",
"diff_url": "https://github.com/huggingface/transformers/pull/4359.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4359.patch",
"merged_at": 1589479653000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4358 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4358/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4358/comments | https://api.github.com/repos/huggingface/transformers/issues/4358/events | https://github.com/huggingface/transformers/issues/4358 | 618,191,961 | MDU6SXNzdWU2MTgxOTE5NjE= | 4,358 | Trainer and Colab TPU: Training loss isn't declining | {
"login": "gustavscholin",
"id": 35476152,
"node_id": "MDQ6VXNlcjM1NDc2MTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/35476152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gustavscholin",
"html_url": "https://github.com/gustavscholin",
"followers_url": "https://api.github.com/users/gustavscholin/followers",
"following_url": "https://api.github.com/users/gustavscholin/following{/other_user}",
"gists_url": "https://api.github.com/users/gustavscholin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gustavscholin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gustavscholin/subscriptions",
"organizations_url": "https://api.github.com/users/gustavscholin/orgs",
"repos_url": "https://api.github.com/users/gustavscholin/repos",
"events_url": "https://api.github.com/users/gustavscholin/events{/privacy}",
"received_events_url": "https://api.github.com/users/gustavscholin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I meeting the same issue with a custom training script (built using GLUE script as model), using ELECTRA for classification task.",
"Hi! This should have been solved by https://github.com/huggingface/transformers/pull/4450.\r\nCould you install from source and let me know if it fixes your issue?",
"The code is running, but I receive this warning :\r\n\r\n```\r\nUserWarning: This overload of addcdiv_ is deprecated:\r\n\taddcdiv_(Number value, Tensor tensor1, Tensor tensor2)\r\nConsider using one of the following signatures instead:\r\n\taddcdiv_(Tensor tensor1, Tensor tensor2, *, Number value) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:760.)\r\n p.data.addcdiv_(-step_size, exp_avg, denom)\r\n```\r\n\r\nand it's **very slow** (2h for 1 epoch). \r\n\r\nOn a K80 GPU, it takes 30 minutes...\r\n\r\n---\r\n\r\nRunning on 8 cores is not working but I think this is due to the RAM limitations.",
"Can confirm that the training loss is now declining as expected in colab notebook and that one epoch takes around 2h per epoch with 1 TPU-core. I also tested with a K80 but unlike for @Colanim it took around 5h per epoch, which I find reasonable in the sense that the TPU is faster. The batch size was the default Trainer value for both tests (which is 5 I believe).\r\n\r\nI get @Colanim's UserWarning regardless of using GPU or TPU.\r\n\r\nSo imo the bug is fixed!"
] | 1,589 | 1,590 | 1,590 | NONE | null | # 🐛 Bug
When using the Trainer with a Colab TPU, the training loss remains more or less constant around 1.11 during the whole training process. With a Colab GPU/CPU, however, you get the expected loss decline.
## Information
Model I am using (Bert, XLNet ...):Bert Base Cased
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
When you run `run_glue.py` with `xla_spawn.py` as shown in the "Running on TPUs" section of the examples README.
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
The problem is reproduced in this Colab notebook:
https://colab.research.google.com/drive/1PLPReml5UOHC3iQdx8QvfCxuhDekTD2l?usp=sharing
## Expected behavior
The training loss declines the same way it does when using GPU/CPU.
## Environment info
- `transformers` version: 2.9.1
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0a0+96885f7 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4358/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/4358/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4357 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4357/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4357/comments | https://api.github.com/repos/huggingface/transformers/issues/4357/events | https://github.com/huggingface/transformers/pull/4357 | 618,126,497 | MDExOlB1bGxSZXF1ZXN0NDE3OTEyMDE3 | 4,357 | Create README.md | {
"login": "sy-wada",
"id": 62933006,
"node_id": "MDQ6VXNlcjYyOTMzMDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/62933006?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sy-wada",
"html_url": "https://github.com/sy-wada",
"followers_url": "https://api.github.com/users/sy-wada/followers",
"following_url": "https://api.github.com/users/sy-wada/following{/other_user}",
"gists_url": "https://api.github.com/users/sy-wada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sy-wada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sy-wada/subscriptions",
"organizations_url": "https://api.github.com/users/sy-wada/orgs",
"repos_url": "https://api.github.com/users/sy-wada/repos",
"events_url": "https://api.github.com/users/sy-wada/events{/privacy}",
"received_events_url": "https://api.github.com/users/sy-wada/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Great! I suspect Exbert won't work out of the box but we are looking into enabling it automatically"
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4357/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4357",
"html_url": "https://github.com/huggingface/transformers/pull/4357",
"diff_url": "https://github.com/huggingface/transformers/pull/4357.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4357.patch",
"merged_at": 1589479571000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4356 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4356/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4356/comments | https://api.github.com/repos/huggingface/transformers/issues/4356/events | https://github.com/huggingface/transformers/issues/4356 | 618,111,434 | MDU6SXNzdWU2MTgxMTE0MzQ= | 4,356 | GPT2 checkpoint breaks on new transformers version (2.9.1) | {
"login": "Laksh1997",
"id": 59830552,
"node_id": "MDQ6VXNlcjU5ODMwNTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/59830552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Laksh1997",
"html_url": "https://github.com/Laksh1997",
"followers_url": "https://api.github.com/users/Laksh1997/followers",
"following_url": "https://api.github.com/users/Laksh1997/following{/other_user}",
"gists_url": "https://api.github.com/users/Laksh1997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Laksh1997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Laksh1997/subscriptions",
"organizations_url": "https://api.github.com/users/Laksh1997/orgs",
"repos_url": "https://api.github.com/users/Laksh1997/repos",
"events_url": "https://api.github.com/users/Laksh1997/events{/privacy}",
"received_events_url": "https://api.github.com/users/Laksh1997/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Do you mind giving us a reproducible code example so that we may check on our end?",
"I have the same issue, my trained models work on 2.8.0 but I get the same error when loading saved model on 2.9.1",
"Hi, sorry for not getting back to this @LysandreJik .\r\n\r\nIt seems that in Transformers 2.8.0 and below, there is no parameter called `masked_bias`.\r\n\r\nHowever in 2.9+, there is now a parameter called `masked_bias` on line 109 in `modeling_gpt2.py`: https://github.com/huggingface/transformers/blob/v2.9.0/src/transformers/modeling_gpt2.py#L109\r\n\r\nThus, the error above is seen.\r\n\r\nCould anything be done about this? Otherwise it renders all GPT2 checkpoints before 2.8.0 unusable, unless manually tinkering with the checkpoint to add this parameter?",
"Hi! I'm trying to reproduce, but I don't get an error like you do. These values indeed do not get loaded, but this does not raise a `RuntimeError`.\r\n\r\nDo you mind giving me a short code sample so that I may debug on my side? Thanks a lot.",
"The weights are all proprietary unfortunately but let me try and do something.",
"@LysandreJik \r\n\r\nFirst, `pip install transformers==2.8.0`\r\n\r\nThen do:\r\n```\r\nimport torch\r\nfrom transformers import GPT2Config, GPT2Model\r\ntorch.save(GPT2Model(GPT2Config(n_layer=2)), \"gpt2_state_dict.pt\")\r\n```\r\n\r\nThen, `pip install transformers==2.9.0`\r\nand do\r\n```\r\nimport torch\r\nfrom transformers import GPT2Config, GPT2Model\r\ngpt2 = GPT2Model(GPT2Config(n_layer=2))\r\nstate_dict = torch.load(\"gpt2_state_dict.pt\")\r\ngpt2.load_state_dict(state_dict)\r\n```\r\n\r\nAnd you get an error: \r\n```\r\nRuntimeError: Error(s) in loading state_dict for GPT2Model:\r\n\tMissing key(s) in state_dict: \"h.0.attn.masked_bias\", \"h.1.attn.masked_bias\".\r\n```",
"I see! Indeed, there's not much we can do about that, unfortunately. We have our own loading/saving method for that purpose.\r\n\r\nOut of curiosity, why don't you use `from_pretrained`/`save_pretrained`?",
"I'm wrapping the model in a Pytorch Lightning module, and the most convenient way to save that is to save the state dictionary.",
"In that case, you could have a workaround like:\r\n\r\n```py\r\nimport torch\r\nfrom transformers import GPT2Config, GPT2Model\r\n\r\n# Use the from_pretrained method\r\ntemp = GPT2Model.from_pretrained(directory)\r\ntemp.save_pretrained(directory)\r\n\r\ngpt2 = GPT2Model(GPT2Config(n_layer=2))\r\nstate_dict = torch.load(f\"{directory}/pytorch_model.pt\")\r\ngpt2.load_state_dict(state_dict)\r\n```\r\n\r\nLoading/saving using `from_pretrained` shouldn't raise an error, and once it's saved you can then load the state dict like you did. Once you have saved it using this, you can always just load the state dict.\r\n",
"@LysandreJik yes, that seems like a good workaround. Thanks!",
"Glad I could help, sorry for taking so long to get back to you. Feel free to reopen if you face any other issues down the road."
] | 1,589 | 1,591 | 1,591 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): GPT2
Language I am using the model on (English, Chinese ...): SMILES
## Description
A GPT-2 checkpoint saved before 2.9.1 loads fine; I verified this on 2.8.0.
On 2.9.1, however, the same model fails to load with this error:
```
RuntimeError: Error(s) in loading state_dict for LMWrapper:
Missing key(s) in state_dict: "model.transformer.h.0.attn.masked_bias", "model.transformer.h.1.attn.masked_bias", "model.transformer.h.2.attn.masked_bias", "model.transformer.h.3.attn.masked_bias", "model.transformer.h.4.attn.masked_bias", "model.transformer.h.5.attn.masked_bias".
```
`LMWrapper` is just a wrapper around the GPT-2 model, which is stored as the `model` attribute on `self`.
- `transformers` version: 2.9.1
- Platform: Linux and Mac
- Python version: 3.7
- PyTorch version (GPU?): 1.4.0 GPU and CPU
- Tensorflow version (GPU?): None
- Using GPU in script?: Yes and No
- Using distributed or parallel set-up in script?: No
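For completeness, one possible workaround at loading time (a minimal sketch, not an official fix; the checkpoint path is hypothetical): `masked_bias` is a constant buffer that 2.9.x registers in `__init__`, so loading the old state dict non-strictly and keeping the freshly initialized buffers should be safe.
```python
import torch
from transformers import GPT2Config, GPT2Model

model = GPT2Model(GPT2Config(n_layer=6))
state_dict = torch.load("old_gpt2_state_dict.pt", map_location="cpu")  # hypothetical path
# strict=False keeps the new masked_bias buffers that are created at init time
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print(missing)  # expected: only the h.*.attn.masked_bias keys
```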
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4356/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4355 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4355/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4355/comments | https://api.github.com/repos/huggingface/transformers/issues/4355/events | https://github.com/huggingface/transformers/issues/4355 | 618,071,073 | MDU6SXNzdWU2MTgwNzEwNzM= | 4,355 | DistilGPT2 Finetuning with GPU | {
"login": "orientino",
"id": 36336751,
"node_id": "MDQ6VXNlcjM2MzM2NzUx",
"avatar_url": "https://avatars.githubusercontent.com/u/36336751?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orientino",
"html_url": "https://github.com/orientino",
"followers_url": "https://api.github.com/users/orientino/followers",
"following_url": "https://api.github.com/users/orientino/following{/other_user}",
"gists_url": "https://api.github.com/users/orientino/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orientino/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orientino/subscriptions",
"organizations_url": "https://api.github.com/users/orientino/orgs",
"repos_url": "https://api.github.com/users/orientino/repos",
"events_url": "https://api.github.com/users/orientino/events{/privacy}",
"received_events_url": "https://api.github.com/users/orientino/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It should be automatic.\r\n\r\nWhat does torch.cuda.is_available say?",
"torch.cuda.is_available return False.\r\nThis is what my setup is by running nvidia-smi\r\n```\r\nThu May 14 15:04:38 2020\r\n+-----------------------------------------------------------------------------+\r\n| NVIDIA-SMI 431.87 Driver Version: 431.87 CUDA Version: 10.1 |\r\n|-------------------------------+----------------------+----------------------+\r\n| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\r\n|===============================+======================+======================|\r\n| 0 GeForce GTX 166... WDDM | 00000000:01:00.0 Off | N/A |\r\n| N/A 57C P8 3W / N/A | 153MiB / 6144MiB | 0% Default |\r\n+-------------------------------+----------------------+----------------------+\r\n\r\n+-----------------------------------------------------------------------------+\r\n| Processes: GPU Memory |\r\n| GPU PID Type Process name Usage |\r\n|=============================================================================|\r\n| No running processes found |\r\n+-----------------------------------------------------------------------------+\r\n```",
"Yes just checked. torch is not able to find cuda",
"You should upgrade your nvidia driver.\r\n\r\nYou probably installed pytorch 1.5.0 bumped up the required version of nvidia-driver.",
"yes.\r\n`pip install torch==1.4.0`\r\nwill solve the issue."
] | 1,589 | 1,589 | 1,589 | NONE | null | Hi, I'm a beginner in this field and would like to fine-tune DistilGPT2 on my laptop. I have an Nvidia 1660 Ti and followed the example from the guide:
```
export TRAIN_FILE=/path/to/dataset/wiki.train.raw
export TEST_FILE=/path/to/dataset/wiki.test.raw
python run_language_modeling.py \
--output_dir=output \
--model_type=gpt2 \
--model_name_or_path=gpt2 \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE
```
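As a first sanity check (a minimal sketch, assuming a standard PyTorch install), it is worth verifying that PyTorch can see the GPU at all, since the script silently falls back to the CPU when it cannot:
```python
import torch

print(torch.cuda.is_available())   # must be True for the script to train on the GPU
print(torch.version.cuda)          # CUDA version this PyTorch wheel was built against
print(torch.cuda.device_count())   # number of GPUs PyTorch can see
```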
But during fine-tuning my GPU stays at 0% and all the work is done by the CPU. Is there a way to force fine-tuning to run on the GPU? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4355/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4354 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4354/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4354/comments | https://api.github.com/repos/huggingface/transformers/issues/4354/events | https://github.com/huggingface/transformers/issues/4354 | 618,055,609 | MDU6SXNzdWU2MTgwNTU2MDk= | 4,354 | Not getting expected results for freshly-pretrained BERT | {
"login": "danyaljj",
"id": 2441454,
"node_id": "MDQ6VXNlcjI0NDE0NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danyaljj",
"html_url": "https://github.com/danyaljj",
"followers_url": "https://api.github.com/users/danyaljj/followers",
"following_url": "https://api.github.com/users/danyaljj/following{/other_user}",
"gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions",
"organizations_url": "https://api.github.com/users/danyaljj/orgs",
"repos_url": "https://api.github.com/users/danyaljj/repos",
"events_url": "https://api.github.com/users/danyaljj/events{/privacy}",
"received_events_url": "https://api.github.com/users/danyaljj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | CONTRIBUTOR | null | # ❓ Questions & Help
I have a freshly trained BERT model (trained with the original BERT codebase) that doesn't seem to behave as expected after moving it to PyTorch/Transformers. No errors are raised anywhere, so I am not sure how to go about tracking down the bug. I'm wondering if anyone has suggestions.
Below I provide details on every step of my training process.
Here is a snippet from my training log:
```
2020-05-14 08:36:41,144 : Saving checkpoints for 1875000 into gs:// .... /perbert_L-12_H-768_A-12/model.ckpt.
2020-05-14 08:37:07,703 : loss = 1.7460283, step = 1875000 (1664.300 sec)
2020-05-14 08:37:07,704 : global_step/sec: 15.0213
2020-05-14 08:37:07,704 : examples/sec: 1922.73
2020-05-14 08:37:07,705 : Enqueue next (25000) batch(es) of data to infeed.
2020-05-14 08:37:07,705 : Dequeue next (25000) batch(es) of data from outfeed.
```
As you can see, the loss is fairly low after training for about 1.87M steps (per the log above).
I then converted the checkpoint with `convert_tf_checkpoint_to_pytorch`:
```python
from pytorch_pretrained_bert.convert_tf_checkpoint_to_pytorch import convert_tf_checkpoint_to_pytorch
model = "perbert_L-12_H-768_A-12"
checkpoint = "1875000"
convert_tf_checkpoint_to_pytorch(
f'/models/{model}/model.ckpt-{checkpoint}.index',
bert_config_file=f'models/{model}/bert_config.json',
pytorch_dump_path=f"models/{model}/saved_model_{checkpoint}")
```
Then I load the result into a masked language model to check whether it does something reasonable:
```python
import torch
from transformers import BertConfig, BertTokenizer, BertForMaskedLM
dir = "/Users/danielk/ideaProjects/farsi-language-models/src/models/perbert_L-12_H-768_A-12/"
checkpoint = 1875000
tokenizer = BertTokenizer.from_pretrained(dir)
config = BertConfig.from_json_file(dir + '/bert_config.json')
model = BertForMaskedLM.from_pretrained(dir + '/saved_model_' + str(checkpoint), config=config)
string = "بزرگترین اقتصاد دنیا در حال فروپاشی موقت است. این امر مساله ای کاملا طبیعی است"
tokens = tokenizer.tokenize(string)
print("tokens: " + tokens)
print("indices: " + tokenizer.encode(string))
input_ids = torch.tensor([tokenizer.encode(string)])
outputs = model(input_ids)
predictions = outputs[0]
for masked_index, _ in enumerate(tokens):
print(" * input string: " + tokens[masked_index])
predictions_numpy = predictions[0, masked_index].cpu().data.numpy()
predictions_with_index = list(enumerate(predictions_numpy))
predictions_with_index = sorted(predictions_with_index, key=lambda x: -x[1])
top_indices = [x for x in predictions_with_index[:10]]
predicted_tokens = tokenizer.convert_ids_to_tokens([x[0] for x in top_indices])
print(predicted_tokens)
print("...")
```
The output of the tokenization is correct; the tokens are reasonable and they're mapped to the right indices:
```
tokens: ['بزرگترین', 'اقتصاد', 'دنیا', 'در', 'حال', 'فروپاشی', 'موقت', 'است', '[UNK]', 'این', 'امر', 'مساله', 'ای', 'کاملا', 'طبیعی', 'است']
indices: [2, 989, 499, 655, 6, 166, 16459, 3756, 13, 1, 11, 1312, 3703, 36, 1089, 1070, 13, 3]
```
However, the output of the masking is not as expected:
```
* input string: بزرگترین
['Plugin', '##AN', '##AZ', 'ورزشکاران', 'خانوارها', 'items', 'Moha', 'بکنید', '##Style', 'سهیلی']
...
* input string: اقتصاد
['استان', 'آنها', 'نماد', 'বন', 'ویتنام', 'محكوم', '##جاری', '##sc', 'public', 'آلبرت']
...
* input string: دنیا
['##Lang', 'public', 'آی', 'یافتهاند', 'check', 'یادتون', 'ضریب', 'رصدخانه', 'تزئینات', 'رف']
...
* input string: در
['##Lang', 'Plugin', 'বন', '##BX', 'گرن', 'مغز', 'منجمد', 'فضل', 'سرمایشی', 'public']
...
* input string: حال
['##Lang', 's', 'تمهیدات', 'خخخ', 'Product', 'حلزون', '##ten', '##تیپ', '##sc', 'آی']
...
* input string: فروپاشی
['##Pack', 'বন', 'باشد', 'عجایب', '##File', '##Lang', '##column', 's', '##BG', '334']
...
* input string: موقت
['بن', 'عاشورایی', 'تایباد', 'Vivo', 'بلوغ', '##dani', 'اشتیاق', '1351', '[UNUSED_194]', '##أفادت']
...
* input string: است
['85', 'مشار', '##گرایان', '##جاری', 'while', '##cause', 'ژیل', 'مولد', 'نقلیه', 'رحم']
...
* input string: [UNK]
['آنها', 'چسبندگی', 'سرمایشی', '##یس', 'کلی', 'آی', 'آزمایشی', '##گ', 'مولد', 'سینماهای']
...
* input string: این
['اسپانیایی', '##زآپ', 'سهیلی', 'رسیدند', 'استان', 'بن', 'کریر', '##پد', 'تایباد', '##ث']
...
* input string: امر
['ش', 'বন', '##آيند', 'اعتباری', 'نامرئی', '##ملی', 'public', 'اومد', 'یافتهاند', 'موعظه']
...
* input string: مساله
['إيران', 'বন', 'public', '##سمح', 'wp', 'نماد', 'شئون', 'website', '##تیپ', '##اعتراض']
...
* input string: ای
['مردمان', '[UNUSED_203]', 'خورده', 'public', 'لهم', 'آنها', 'کعبه', 'الهی', 'رصدخانه', 'سرمایشی']
...
* input string: کاملا
['public', '77', 'مروج', 'تایباد', '##aghi', 'منصب', '##Lang', '[UNUSED_203]', '##198', 'مغز']
...
* input string: طبیعی
['##Lang', 'Plugin', '##den', '##مفید', 'وزنه', 'دلهره', 'check', 'Vivo', 'Biz', 'خواهان']
...
* input string: است
['ju', '##Lang', 'عقربه', 'استان', 'اثرانگشت', '##aghi', 'انزلی', 'چسبندگی', '##tis', 'فرماید']
...
```
The output the masked language model returns is essentially random. Clearly, something is off.
I briefly inspected one of my tfrecord files, and it also seems reasonable; here is one example block:
```json
{
"features": {
"feature": {
"masked_lm_positions": {
"int64List": {
"value": [
"2",
"3",
"5",
"15",
"23",
"27",
"33",
"41",
"55",
"60",
"72",
"74",
"87",
"97",
"103",
"104",
"106",
"112",
"0",
"0"
]
}
},
"masked_lm_ids": {
"int64List": {
"value": [
"5601",
"33",
"4125",
"12",
"6",
"11635",
"9808",
"178",
"8",
"287",
"6",
"3213",
"6",
"23",
"8",
"814",
"613",
"12466",
"0",
"0"
]
}
},
"masked_lm_weights": {
"floatList": {
"value": [
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
0.0,
0.0
]
}
},
"segment_ids": {
"int64List": {
"value": [
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"0",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"0",
"0",
"0",
"0",
"0",
"0"
]
}
},
"input_ids": {
"int64List": {
"value": [
"2",
"2824",
"4",
"4",
"6",
"4",
"599",
"5388",
"16",
"16598",
"4513",
"5",
"2824",
"2219",
"217",
"5891",
"7",
"3117",
"4710",
"24069",
"16",
"69",
"5",
"4",
"127",
"166",
"3386",
"4",
"178",
"13",
"1",
"278",
"26085",
"5791",
"5",
"1401",
"1801",
"5",
"2348",
"127",
"905",
"4",
"13",
"1",
"2824",
"1127",
"8447",
"7144",
"18646",
"5",
"168",
"14763",
"9",
"6",
"7103",
"4",
"3878",
"1978",
"5",
"430",
"287",
"1",
"3",
"12466",
"23",
"814",
"6",
"14259",
"6",
"4426",
"46",
"453",
"4",
"707",
"4",
"46",
"6",
"4426",
"365",
"338",
"7935",
"9",
"6",
"748",
"6",
"4426",
"44",
"4",
"476",
"1",
"1",
"7",
"12466",
"10",
"12053",
"1070",
"6",
"4",
"1832",
"44",
"8012",
"7968",
"46",
"4",
"4",
"49",
"4",
"287",
"14",
"6",
"14259",
"707",
"4",
"1",
"1348",
"44",
"6849",
"62",
"6931",
"31",
"1",
"3",
"0",
"0",
"0",
"0",
"0",
"0"
]
}
},
"input_mask": {
"int64List": {
"value": [
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"1",
"0",
"0",
"0",
"0",
"0",
"0"
]
}
},
"next_sentence_labels": {
"int64List": {
"value": [
"1"
]
}
}
}
}
}
```
Here are some thoughts:
- Could the model conversion somehow go wrong without raising any errors? (See the sanity-check sketch below.)
- Persian is a right-to-left language; could that be a factor? In theory, it should not be an issue, since from the perspective of either codebase (TF and PyTorch) these are just sequences of strings/bytes.
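On the first point, here is a minimal conversion sanity check (a sketch; it reuses the `dir`, `checkpoint`, and `model` variables from the script above, and the TF variable name is an assumption based on standard BERT checkpoints). If the two tensors diverge, the conversion step itself is the problem:
```python
import numpy as np
import tensorflow as tf

tf_path = dir + "model.ckpt-" + str(checkpoint)
tf_emb = tf.train.load_variable(tf_path, "bert/embeddings/word_embeddings")
pt_emb = model.bert.embeddings.word_embeddings.weight.detach().numpy()
print(np.allclose(tf_emb, pt_emb, atol=1e-5))  # False points at the conversion
```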
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4354/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4353 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4353/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4353/comments | https://api.github.com/repos/huggingface/transformers/issues/4353/events | https://github.com/huggingface/transformers/pull/4353 | 617,867,298 | MDExOlB1bGxSZXF1ZXN0NDE3NzA1MTQy | 4,353 | Fixing tokenization of extra_id symbols in T5Tokenizer | {
"login": "mansimov",
"id": 1727860,
"node_id": "MDQ6VXNlcjE3Mjc4NjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1727860?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mansimov",
"html_url": "https://github.com/mansimov",
"followers_url": "https://api.github.com/users/mansimov/followers",
"following_url": "https://api.github.com/users/mansimov/following{/other_user}",
"gists_url": "https://api.github.com/users/mansimov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mansimov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mansimov/subscriptions",
"organizations_url": "https://api.github.com/users/mansimov/orgs",
"repos_url": "https://api.github.com/users/mansimov/repos",
"events_url": "https://api.github.com/users/mansimov/events{/privacy}",
"received_events_url": "https://api.github.com/users/mansimov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Awesome this looks good to me! \r\n\r\n@LysandreJik - I tried this fix and it does fix the issue https://github.com/huggingface/transformers/issues/4021 . To me it seems like this line was just forgotten when implementing T5 tokenizer.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4353?src=pr&el=h1) Report\n> Merging [#4353](https://codecov.io/gh/huggingface/transformers/pull/4353?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/94cb73c2d2efeb188b522ff352f98b15124ba9f8&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4353?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4353 +/- ##\n=======================================\n Coverage 78.21% 78.21% \n=======================================\n Files 120 120 \n Lines 20038 20039 +1 \n=======================================\n+ Hits 15673 15674 +1 \n Misses 4365 4365 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4353?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.49% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.08% <0.00%> (+0.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4353?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4353?src=pr&el=footer). Last update [94cb73c...c9824f4](https://codecov.io/gh/huggingface/transformers/pull/4353?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi,\r\n\r\nIs the below expected behavior? I am initializing the tokenizer with `tokenizer = T5Tokenizer.from_pretrained('t5-large', extra_ids=0, additional_special_tokens=special_tokens)` where special_tokens is a list like ['_self_say', '_partner_say', '_self_persona', '_partner_persona']. Despite specifying extra_ids=0 I see a lot of sentinel tokens in the outputs with negative ids. For example:\r\n\r\n```\r\nInput: _context _setting_name Tower _setting_desc The inside tower is made from a combination of wood and brick. It also has some metal to hold all the doors in place. _object a door _object a door _self_name pet dog _self_persona I am mans best friend and I wouldn't have it any other way. I tend to my master and never leave his side. I sleep at his feet and guard the room at night from things that go bump in the night. _option hug knight _option hit knight _history \r\n\r\nResponse: <extra_id_-100>,<extra_id_-99> erreichen<extra_id_-98> hit knightăminterească a door<extra_id_-97>îî<extra_id_-96> a doorăm!<extra_id_-95> timpul<extra_id_-94> pentru<extra_id_-93>itățile Tower<extra_id_-92> I am Mans best friend.<extra_id_-91> doud<extra_id_-90> casă facem<extra_id_-89> a doorșteștehlen<extra_id_-88><extra_id_-87> tower is made from different materials. The outside\r\n```\r\n\r\n"
] | 1,589 | 1,599 | 1,590 | CONTRIBUTOR | null | Fixing tokenization of extra_id symbols in T5Tokenizer. Related to issue #4021 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4353/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4353",
"html_url": "https://github.com/huggingface/transformers/pull/4353",
"diff_url": "https://github.com/huggingface/transformers/pull/4353.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4353.patch",
"merged_at": 1590437070000
} |
https://api.github.com/repos/huggingface/transformers/issues/4352 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4352/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4352/comments | https://api.github.com/repos/huggingface/transformers/issues/4352/events | https://github.com/huggingface/transformers/pull/4352 | 617,844,409 | MDExOlB1bGxSZXF1ZXN0NDE3Njg2OTQ3 | 4,352 | Longformer | {
"login": "ibeltagy",
"id": 2287797,
"node_id": "MDQ6VXNlcjIyODc3OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2287797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ibeltagy",
"html_url": "https://github.com/ibeltagy",
"followers_url": "https://api.github.com/users/ibeltagy/followers",
"following_url": "https://api.github.com/users/ibeltagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ibeltagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ibeltagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ibeltagy/subscriptions",
"organizations_url": "https://api.github.com/users/ibeltagy/orgs",
"repos_url": "https://api.github.com/users/ibeltagy/repos",
"events_url": "https://api.github.com/users/ibeltagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ibeltagy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is awesome @ibeltagy . I´m very excited about longformer.\r\n\r\nIn the original implementation a good speedup was thanks to the personalized kernel in tvm. In this pull, this speedup is lost, or it was adapted somehow?",
"Thanx @ibeltagy! \r\n\r\nIs it possible for you or someone else who is familiar with TF2 to submit also a PR for TF2? I tried to migrate your code in the past but I couldn’t migrate the new attention mechanism written for TVM, so I moved on. Your work alongside Reformer has special interest for people like me and my colleagues, who are usually doing research with long documents (>=1000words). \r\n\r\nMaybe @thomwolf and @julien-c know the right person...\r\n",
"@ibeltagy - Looks awesome the PR :-) I added some minor comments. \r\n\r\nI think we can merge this very soon :-) \r\n\r\nTo pass the check_code_qualtiy test, you can run `make style` from the root folder. \r\n\r\nRegarding the unit test, I think you can copy-paste a lot of the code from `test_roberta_modeling.py`. In order to skip tests for `torch_script, pruning, ..` (features we can add in another PR) you can do the same as is done in the reformer tests here: https://github.com/huggingface/transformers/blob/94cb73c2d2efeb188b522ff352f98b15124ba9f8/tests/test_modeling_reformer.py#L510 . \r\n\r\nIf possible it would also be great to add some integration tests comparing the outputs (and maybe even gradients) to those of your original model code. I think you could use the same integration test structure as was done for Roberta or Reformer.",
"@bratao , @iliaschalkidis, about the custom CUDA kernel. We [recently](https://github.com/allenai/longformer/pull/27/files#diff-04c6e90faac2675aa89e2176d2eec7d8R4) added an efficient PyTorch implementation of our sliding window attention (without dilation), which is faster and uses less memory (with fp16) and should be easy to port to TF. The TVM code is still needed for the dilated sliding window attention, but dilation is more important for char-lm than fine-tuning on downstream tasks (QA, classification .. etc). This PR only uses the PyTorch code.",
"Thanks, @patrickvonplaten. I will address your comments and let you know. \r\n\r\nI also want to add `LongformerForSequenceClassification`, `LongformerForMultipleChoice`, `LongformerForTokenClassification`, `LongformerForQuestionAnswering` with automatic setting of global attention based on the task, so that the user doesn't need to worry about it. I will keep this to a different PR unless you think it is better to add it here. ",
"> Thanks, @patrickvonplaten. I will address your comments and let you know.\r\n> \r\n> I also want to add `LongformerForSequenceClassification`, `LongformerForMultipleChoice`, `LongformerForTokenClassification`, `LongformerForQuestionAnswering` with automatic setting of global attention based on the task, so that the user doesn't need to worry about it. I will keep this to a different PR unless you think it is better to add it here.\r\n\r\nIt'd be great to add those as well :-) I think doing it in a second PR would be better as well",
"@patrickvonplaten, @sshleifer, I am done addressing your reviews. I still need to upload model weights but waiting to get access to the AllenAI organization account. Also, can you check the failed tests? they seem like issues with CircleCI rather than the code. \r\n",
"@sshleifer, addressed your comments. Now `ci/circleci: run_tests_torch_and_tf` is timing out.",
"I'm sorry.\r\n\r\nRebase may help the timeout, but I agree that it does not appear to be caused by this code.\r\n\r\nI'd just verify that there aren't new tests that should be decorated `@slow` (append `--durations 0` to your local pytest command) and it should be fine.",
"The slowest one is 0.10s. \r\n```\r\n0.10s call tests/test_modeling_longformer.py::LongformerModelTest::test_save_load\r\n0.10s call tests/test_modeling_longformer.py::LongformerModelTest::test_attention_outputs\r\n0.07s call tests/test_modeling_longformer.py::LongformerModelTest::test_resize_tokens_embeddings\r\n0.06s call tests/test_modeling_longformer.py::LongformerModelTest::test_determinism\r\n0.05s call tests/test_modeling_longformer.py::LongformerModelTest::test_inputs_embeds\r\n0.04s call tests/test_modeling_longformer.py::LongformerModelTest::test_correct_missing_keys\r\n0.04s call tests/test_modeling_longformer.py::LongformerModelTest::test_hidden_states_output\r\n0.04s call tests/test_modeling_longformer.py::LongformerModelTest::test_longformer_model\r\n0.03s call tests/test_modeling_longformer.py::LongformerModelTest::test_initialization\r\n0.03s call tests/test_modeling_longformer.py::LongformerModelTest::test_longformer_for_masked_lm\r\n0.02s call tests/test_modeling_longformer.py::LongformerModelTest::test_model_common_attributes\r\n```\r\n\r\nMaybe the integration tests are slower than average because of the long document? ",
"@patrickvonplaten, @sshleifer, looks like rebasing fixed the problem. Thanks.",
"> ## Adding the Longformer model\r\n> Paper: https://arxiv.org/abs/2004.05150\r\n> The code is mostly copied from https://github.com/allenai/longformer\r\n> Addressing issue: #3783\r\n> \r\n> **TODOs**\r\n> \r\n> * [x] Add unit tests\r\n> * [x] Address PR reviews\r\n> * [ ] Upload model to HF instead of AllenAI s3 buckets.\r\n> \r\n> **TODOs for other PRs**\r\n> \r\n> * Add `LongformerForSequenceClassification`, `LongformerForMultipleChoice`, `LongformerForTokenClassification`, `LongformerForQuestionAnswering` with automatic setting of global attention based on the task.\r\n> * Add `LongformerEncoderDecoder`\r\n> * A few small TODOs in the code\r\n> * Add an example to show how to convert existing pretrained models into their long version.\r\n\r\nIs there any particular timeframe by when we can have LongformerForQuestionAnswering? Would love to use it in a small project of mine.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4352?src=pr&el=h1) Report\n> :exclamation: No coverage uploaded for pull request base (`master@18d233d`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).\n> The diff coverage is `84.48%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4352?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4352 +/- ##\n=========================================\n Coverage ? 78.30% \n=========================================\n Files ? 123 \n Lines ? 20373 \n Branches ? 0 \n=========================================\n Hits ? 15953 \n Misses ? 4420 \n Partials ? 0 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4352?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `82.75% <82.75%> (ø)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <100.00%> (ø)` | |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `92.68% <100.00%> (ø)` | |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.57% <100.00%> (ø)` | |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.61% <100.00%> (ø)` | |\n| [src/transformers/tokenization\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbG9uZ2Zvcm1lci5weQ==) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.72% <0.00%> (ø)` | |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.47% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.54% <0.00%> (ø)` | |\n| ... and [120 more](https://codecov.io/gh/huggingface/transformers/pull/4352/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4352?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4352?src=pr&el=footer). 
Last update [18d233d...d1c4bbc](https://codecov.io/gh/huggingface/transformers/pull/4352?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | ## Adding the Longformer model
Paper: https://arxiv.org/abs/2004.05150
The code is mostly copied from https://github.com/allenai/longformer
Addressing issue: https://github.com/huggingface/transformers/issues/3783
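For reference, a minimal usage sketch of the intended API (a sketch, so details such as the final checkpoint name may still change; in this implementation a value of `2` in the attention mask requests global attention, `1` is local attention, and `0` is padding):
```python
import torch
from transformers import LongformerModel, LongformerTokenizer

model = LongformerModel.from_pretrained("allenai/longformer-base-4096")
tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")

text = "Hello world! " * 500  # a long document
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)
attention_mask = torch.ones(input_ids.shape, dtype=torch.long)  # local attention everywhere
attention_mask[:, 0] = 2  # global attention on the first (<s>) token
outputs = model(input_ids, attention_mask=attention_mask)
```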
**TODOs**
- [x] Add unit tests
- [x] Address PR reviews
- [x] Upload model to HF instead of AllenAI s3 buckets.
**TODOs for other PRs**
- Add `LongformerForSequenceClassification`, `LongformerForMultipleChoice`, `LongformerForTokenClassification`, `LongformerForQuestionAnswering` with automatic setting of global attention based on the task.
- Add `LongformerEncoderDecoder`
- Add an example to show how to convert existing pretrained models into their long version. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4352/reactions",
"total_count": 20,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 13,
"confused": 0,
"heart": 7,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4352/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4352",
"html_url": "https://github.com/huggingface/transformers/pull/4352",
"diff_url": "https://github.com/huggingface/transformers/pull/4352.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4352.patch",
"merged_at": 1589897084000
} |
https://api.github.com/repos/huggingface/transformers/issues/4351 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4351/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4351/comments | https://api.github.com/repos/huggingface/transformers/issues/4351/events | https://github.com/huggingface/transformers/pull/4351 | 617,839,529 | MDExOlB1bGxSZXF1ZXN0NDE3NjgzMDQy | 4,351 | tf add resize_token_embeddings method | {
"login": "dzorlu",
"id": 3424293,
"node_id": "MDQ6VXNlcjM0MjQyOTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3424293?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dzorlu",
"html_url": "https://github.com/dzorlu",
"followers_url": "https://api.github.com/users/dzorlu/followers",
"following_url": "https://api.github.com/users/dzorlu/following{/other_user}",
"gists_url": "https://api.github.com/users/dzorlu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dzorlu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dzorlu/subscriptions",
"organizations_url": "https://api.github.com/users/dzorlu/orgs",
"repos_url": "https://api.github.com/users/dzorlu/repos",
"events_url": "https://api.github.com/users/dzorlu/events{/privacy}",
"received_events_url": "https://api.github.com/users/dzorlu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4351?src=pr&el=h1) Report\n> Merging [#4351](https://codecov.io/gh/huggingface/transformers/pull/4351?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e8db8b845a971b0cf63a0896b9deb5b316028a8b&el=desc) will **increase** coverage by `0.05%`.\n> The diff coverage is `42.50%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4351?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4351 +/- ##\n==========================================\n+ Coverage 76.80% 76.86% +0.05% \n==========================================\n Files 128 128 \n Lines 21602 21669 +67 \n==========================================\n+ Hits 16591 16655 +64 \n- Misses 5011 5014 +3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4351?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `74.48% <ø> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `91.26% <13.33%> (-2.46%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `97.61% <33.33%> (-0.78%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `82.43% <33.33%> (-0.45%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `90.33% <33.33%> (-0.96%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.68% <33.33%> (-0.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.87% <33.33%> (-0.71%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/4351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `75.54% <33.33%> (-0.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `80.03% <33.33%> (-0.27%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `75.11% <50.00%> (-0.23%)` | :arrow_down: |\n| ... 
and [7 more](https://codecov.io/gh/huggingface/transformers/pull/4351/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4351?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4351?src=pr&el=footer). Last update [e8db8b8...8dc8f96](https://codecov.io/gh/huggingface/transformers/pull/4351?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"hi team- i know you guys are busy. do you mind taking a look when you have a chance & let me know w/ any comments? thanks!",
"Hi- Thanks for the feedback. Sure, I can.\r\n\r\nSorry context switching is difficult at times :) I remember that `TFBertMainLayer` implementation was slightly different because it does not inherit from `TFPreTrainedModel` as T5 does. So it requires its own `_get_resized_embeddings` and test modules. But to the extent other architectures are similar to `T5`, it shouldn't be a problem. \r\n\r\nIs there a source of truth for all the TF architectures implemented? Is https://github.com/huggingface/transformers#model-architectures kept up to date?\r\n\r\nDeniz\r\n",
"Yes, that list is kept up to date!",
"@LysandreJik please take a look. thanks!",
"This looks good! Pinging @jplu for review as he's the TF master around here.",
"Thank you @jplu @LysandreJik ! Looking forward to contributing more. "
] | 1,589 | 1,592 | 1,592 | NONE | null | Introduces the `resize_token_embeddings` method for TF models (with the exception of `Transformer-XL`).
My first commit! Thank you for the awesome library.
I have often found myself needing to extend the embedding vectors to accommodate auxiliary data, but the TF models don't have this functionality. The size of the embedding layer can be changed as follows:
```python
import numpy as np
from transformers import TFBertForSequenceClassification, TFT5ForConditionalGeneration

SIZE = 50000

# T5 generator
MODEL_NAME = 't5-small'
model2 = TFT5ForConditionalGeneration.from_pretrained(MODEL_NAME)
emb1 = model2.get_input_embeddings()
emb2 = model2.resize_token_embeddings(SIZE)  # returns the resized embedding layer
assert emb2.weight.shape[0] == SIZE
# the original weights are preserved at the front of the new matrix
np.allclose(emb1.weight[:10].numpy(), emb2.weight[:10].numpy())

# BERT
model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')
emb1 = model.get_input_embeddings()
emb2 = model.resize_token_embeddings(SIZE)
assert emb2.word_embeddings.shape[0] == SIZE
np.allclose(emb1.word_embeddings[:10].numpy(), emb2.word_embeddings[:10].numpy())
```
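For reference, a typical end-to-end use is resizing after adding new tokens to the tokenizer (a sketch, assuming the TF method mirrors the existing PyTorch `resize_token_embeddings`; the added tokens are illustrative):
```python
from transformers import BertTokenizer, TFBertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')

tokenizer.add_tokens(['[AUX1]', '[AUX2]'])     # auxiliary markers (illustrative)
model.resize_token_embeddings(len(tokenizer))  # grow the embeddings to match
```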
- [x] BERT
- [x] GPT
- [x] GPT-2
- [ ] Transformer-XL
- [x] XLNet
- [x] XLM
- [x] RoBERTa
- [x] DistilBERT
- [x] CTRL
- [x] CamemBERT
- [x] ALBERT
- [x] T5
- [x] XLM-RoBERTa
- [x] FlauBERT
- [x] ELECTRA
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4351/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4351",
"html_url": "https://github.com/huggingface/transformers/pull/4351",
"diff_url": "https://github.com/huggingface/transformers/pull/4351.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4351.patch",
"merged_at": 1592520087000
} |
https://api.github.com/repos/huggingface/transformers/issues/4350 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4350/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4350/comments | https://api.github.com/repos/huggingface/transformers/issues/4350/events | https://github.com/huggingface/transformers/issues/4350 | 617,829,144 | MDU6SXNzdWU2MTc4MjkxNDQ= | 4,350 | RobertaForSequenceClassification for BERT | {
"login": "pashok3d",
"id": 35535358,
"node_id": "MDQ6VXNlcjM1NTM1MzU4",
"avatar_url": "https://avatars.githubusercontent.com/u/35535358?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pashok3d",
"html_url": "https://github.com/pashok3d",
"followers_url": "https://api.github.com/users/pashok3d/followers",
"following_url": "https://api.github.com/users/pashok3d/following{/other_user}",
"gists_url": "https://api.github.com/users/pashok3d/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pashok3d/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pashok3d/subscriptions",
"organizations_url": "https://api.github.com/users/pashok3d/orgs",
"repos_url": "https://api.github.com/users/pashok3d/repos",
"events_url": "https://api.github.com/users/pashok3d/events{/privacy}",
"received_events_url": "https://api.github.com/users/pashok3d/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | CONTRIBUTOR | null | # ❓ Questions & Help
Could you please explain what the consequences are of using the `RobertaForSequenceClassification` head with a pre-trained BERT model? Will it have the same effect as using `BertForSequenceClassification`? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4350/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4349 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4349/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4349/comments | https://api.github.com/repos/huggingface/transformers/issues/4349/events | https://github.com/huggingface/transformers/issues/4349 | 617,828,746 | MDU6SXNzdWU2MTc4Mjg3NDY= | 4,349 | Ability to specify directory of tensorboard logs in BART finetuning example | {
"login": "isabelcachola",
"id": 15042219,
"node_id": "MDQ6VXNlcjE1MDQyMjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/15042219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isabelcachola",
"html_url": "https://github.com/isabelcachola",
"followers_url": "https://api.github.com/users/isabelcachola/followers",
"following_url": "https://api.github.com/users/isabelcachola/following{/other_user}",
"gists_url": "https://api.github.com/users/isabelcachola/gists{/gist_id}",
"starred_url": "https://api.github.com/users/isabelcachola/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/isabelcachola/subscriptions",
"organizations_url": "https://api.github.com/users/isabelcachola/orgs",
"repos_url": "https://api.github.com/users/isabelcachola/repos",
"events_url": "https://api.github.com/users/isabelcachola/events{/privacy}",
"received_events_url": "https://api.github.com/users/isabelcachola/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | NONE | null | # 🚀 Feature request
Using [this script](https://github.com/huggingface/transformers/blob/839bfaedb21e42edee093b9e21e2c2f1ea7514f0/examples/summarization/bart/finetune.py) to fine-tune BART, it would be useful to be able to specify the directory where the TensorBoard logs are stored. Right now, the logs are saved to `lightning_logs/version_X`.
Ideally, this is the command I would run:
```{bash}
python finetune.py \
--data_dir=./cnn-dailymail/cnn_dm \
--model_name_or_path=bart-large \
--learning_rate=3e-5 \
--train_batch_size=4 \
--eval_batch_size=4 \
--output_dir=$OUTPUT_DIR \
--do_train \
--logging_dir logs $@
```
## Motivation
Being able to specify the directory of your tensorboard logs makes it easier to compare multiple experiments.
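Until such a flag exists, one possible workaround (a sketch, assuming pytorch-lightning's `TensorBoardLogger`; it requires a small edit where `finetune.py` constructs its trainer, and the names here are illustrative) is to pass an explicit logger:
```python
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger

logger = TensorBoardLogger(save_dir="logs", name="bart_cnn_dm")
trainer = Trainer(logger=logger)  # plus the usual training arguments
```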
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4349/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4349/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4348 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4348/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4348/comments | https://api.github.com/repos/huggingface/transformers/issues/4348/events | https://github.com/huggingface/transformers/pull/4348 | 617,807,120 | MDExOlB1bGxSZXF1ZXN0NDE3NjU2MjAz | 4,348 | Add link to W&B to see whole training logs | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"love it, thanks Manuel! 👍 ",
"You are welcome. From now on I will add it to all my models."
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4348/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4348",
"html_url": "https://github.com/huggingface/transformers/pull/4348",
"diff_url": "https://github.com/huggingface/transformers/pull/4348.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4348.patch",
"merged_at": 1589414698000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4347 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4347/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4347/comments | https://api.github.com/repos/huggingface/transformers/issues/4347/events | https://github.com/huggingface/transformers/issues/4347 | 617,773,659 | MDU6SXNzdWU2MTc3NzM2NTk= | 4,347 | TracerWarning on modeling_gpt2.py:147 when using Torchscript | {
"login": "Damiox",
"id": 599804,
"node_id": "MDQ6VXNlcjU5OTgwNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/599804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Damiox",
"html_url": "https://github.com/Damiox",
"followers_url": "https://api.github.com/users/Damiox/followers",
"following_url": "https://api.github.com/users/Damiox/following{/other_user}",
"gists_url": "https://api.github.com/users/Damiox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Damiox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Damiox/subscriptions",
"organizations_url": "https://api.github.com/users/Damiox/orgs",
"repos_url": "https://api.github.com/users/Damiox/repos",
"events_url": "https://api.github.com/users/Damiox/events{/privacy}",
"received_events_url": "https://api.github.com/users/Damiox/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | NONE | null | The following script:
```
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained('gpt2-medium')
model.eval()
example = torch.randn(100, 50).abs().mul(1000).type(torch.int64)  # random token ids, shape (100, 50)
traced_model = torch.jit.trace(model, example)
```
It's returning the following warning:
```
python3.7/site-packages/transformers/modeling_gpt2.py:147: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
mask = self.bias[:, :, ns - nd : ns, :ns]
```
Is this something to worry about? I see there was a similar issue with sqrt in https://github.com/huggingface/transformers/issues/3954
Additionally, if I wanted to use Torchscript with `past`, I should still trace the model in such a way that the `past` inputs are recorded in the trace. Is that correct?
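For reference, a rough sketch of what tracing with `past` might look like (shapes assume gpt2-medium: 24 layers, 16 heads, head size 64; this is an assumption on my part, not something I have verified):
```
# Hypothetical sketch: pass an example `past` so the cache path is traced as well.
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained('gpt2-medium', torchscript=True)
model.eval()

input_ids = torch.randint(0, 50257, (1, 1))
# One (key, value) tensor per layer: [2, batch, heads, past_len, head_dim]
past = tuple(torch.zeros(2, 1, 16, 10, 64) for _ in range(24))
traced_model = torch.jit.trace(model, (input_ids, past))
```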
- `transformers` version: 2.9.0
- Platform: macos
- Python version: 3.7
- PyTorch version: 1.5.0 cpu
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4347/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4347/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4346 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4346/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4346/comments | https://api.github.com/repos/huggingface/transformers/issues/4346/events | https://github.com/huggingface/transformers/issues/4346 | 617,695,286 | MDU6SXNzdWU2MTc2OTUyODY= | 4,346 | Discrepancy in the generation of T5 here vs the original code | {
"login": "danyaljj",
"id": 2441454,
"node_id": "MDQ6VXNlcjI0NDE0NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danyaljj",
"html_url": "https://github.com/danyaljj",
"followers_url": "https://api.github.com/users/danyaljj/followers",
"following_url": "https://api.github.com/users/danyaljj/following{/other_user}",
"gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions",
"organizations_url": "https://api.github.com/users/danyaljj/orgs",
"repos_url": "https://api.github.com/users/danyaljj/repos",
"events_url": "https://api.github.com/users/danyaljj/events{/privacy}",
"received_events_url": "https://api.github.com/users/danyaljj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"I think you should add a prefix, such as `\"question: Which is the best conductor\"` for hugging face code. Can you try that and see whether the results improve? "
] | 1,589 | 1,591 | 1,591 | CONTRIBUTOR | null | # ❓ Questions & Help
We have [several models](https://github.com/allenai/unifiedqa) trained with the original T5 code. Now, trying the models in your code, we see some discrepancies. For example, on this input: "Which is best conductor? \n (A) iron (B) feather (C) wood (D) plastic", while [the original code](https://unifiedqa.apps.allenai.org/) produces "iron", the Transformers code produces: "feather (C) feather (C) feather (C) feather (C) feather (C"
## Details
Here's what I've done:
First I downloaded a subset of the model directory, as I only need some files:
```
$ ls -l
total 316048
-rw-r--r-- 1 tafjord staff 387 May 12 18:51 checkpoint
-rw-r--r-- 1 tafjord staff 8 May 12 18:01 model.ckpt-1100500.data-00000-of-00002
-rw-r--r-- 1 tafjord staff 121752064 May 12 18:01 model.ckpt-1100500.data-00001-of-00002
-rw-r--r-- 1 tafjord staff 5677 May 12 18:01 model.ckpt-1100500.index
-rw-r--r-- 1 tafjord staff 26892327 May 12 18:02 model.ckpt-1100500.meta
```
Then I edited the checkpoint file so that its first line refers to the right checkpoint:
```
$ cat checkpoint
model_checkpoint_path: "model.ckpt-1100500"
all_model_checkpoint_paths: "model.ckpt-1000000"
all_model_checkpoint_paths: "model.ckpt-1020100"
...
```
Now I can run this with Transformers 2.9:
```python
from transformers import T5Config, T5Tokenizer, T5ForConditionalGeneration
from transformers.modeling_t5 import load_tf_weights_in_t5
base_model = "t5-small"
tokenizer = T5Tokenizer.from_pretrained(base_model)
model = T5ForConditionalGeneration(T5Config.from_pretrained(base_model))
load_tf_weights_in_t5(model, None, "/Users/tafjord/models/t5/unifiedqa-small/")  # load the UnifiedQA TF checkpoint into the PyTorch model
model.eval()
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
return [tokenizer.decode(x) for x in res]
```
This agrees with the output of the T5 code:
```python
run_model("Which is best conductor? \n (A) iron (B) feather")
```
which gives: `['iron']`
```python
run_model("Scott filled a tray with juice and put it in a freezer. The next day, Scott opened the freezer. How did the juice most likely change? \n (A) It condensed. (B) It evaporated. (C) It became a gas. (D) It became a solid.")
```
which produces: `['it condensed.']`. The original T5 code produces "It condensed." (with a capital letter).
This doesn't work at all:
```python
run_model("Which is best conductor? \n (A) iron (B) feather (C) wood (D) plastic")
```
which produces: `['feather (C) feather (C) feather (C) feather (C) feather (C']` (the T5 code says "iron")
So maybe I'm missing some weights or some preprocessing; I haven't dug into that yet. Also, some simple beam search settings are not helping here:
```python
run_model("Which is best conductor? \n (A) iron (B) feather (C) wood (D) plastic",
temperature=0.7, num_return_sequences=4, num_beams=20)
```
which produces:
```
['feather (C) feather (C) feather (C) feather (C) feather (C',
'feather (C) feather (C) feather (C) feather (C) feather',
'feather (C) feather (C) feather (C) feather',
'feather (C) feather (C) feather']
```
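For completeness, the prefix suggestion from the comment above would be a one-line retry (a sketch; I haven't tested whether it changes the output):
```python
run_model("question: Which is best conductor? \n (A) iron (B) feather (C) wood (D) plastic")
```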
FYI @OyvindTafjord | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4346/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4345 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4345/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4345/comments | https://api.github.com/repos/huggingface/transformers/issues/4345/events | https://github.com/huggingface/transformers/pull/4345 | 617,645,786 | MDExOlB1bGxSZXF1ZXN0NDE3NTIyODcx | 4,345 | Add image and metadata | {
"login": "ViktorAlm",
"id": 1090762,
"node_id": "MDQ6VXNlcjEwOTA3NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1090762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ViktorAlm",
"html_url": "https://github.com/ViktorAlm",
"followers_url": "https://api.github.com/users/ViktorAlm/followers",
"following_url": "https://api.github.com/users/ViktorAlm/following{/other_user}",
"gists_url": "https://api.github.com/users/ViktorAlm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ViktorAlm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ViktorAlm/subscriptions",
"organizations_url": "https://api.github.com/users/ViktorAlm/orgs",
"repos_url": "https://api.github.com/users/ViktorAlm/repos",
"events_url": "https://api.github.com/users/ViktorAlm/events{/privacy}",
"received_events_url": "https://api.github.com/users/ViktorAlm/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"[model page](https://huggingface.co/ViktorAlm/electra-base-norwegian-uncased-discriminator)"
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | Unfortunately I accidentally orphaned my other PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4345/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4345",
"html_url": "https://github.com/huggingface/transformers/pull/4345",
"diff_url": "https://github.com/huggingface/transformers/pull/4345.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4345.patch",
"merged_at": 1589414716000
} |
https://api.github.com/repos/huggingface/transformers/issues/4344 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4344/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4344/comments | https://api.github.com/repos/huggingface/transformers/issues/4344/events | https://github.com/huggingface/transformers/pull/4344 | 617,598,992 | MDExOlB1bGxSZXF1ZXN0NDE3NDg0OTAw | 4,344 | Added the feature to provide a generation prompt for encoder-decoder models. | {
"login": "lukovnikov",
"id": 1732910,
"node_id": "MDQ6VXNlcjE3MzI5MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1732910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lukovnikov",
"html_url": "https://github.com/lukovnikov",
"followers_url": "https://api.github.com/users/lukovnikov/followers",
"following_url": "https://api.github.com/users/lukovnikov/following{/other_user}",
"gists_url": "https://api.github.com/users/lukovnikov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lukovnikov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lukovnikov/subscriptions",
"organizations_url": "https://api.github.com/users/lukovnikov/orgs",
"repos_url": "https://api.github.com/users/lukovnikov/repos",
"events_url": "https://api.github.com/users/lukovnikov/events{/privacy}",
"received_events_url": "https://api.github.com/users/lukovnikov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"The code quality check complains about lines being too long in modeling_utils.py.\r\nThere are several unchanged places where this is already the case. Should I ignore this?",
"Also adding @yjernite since we talked about something similar",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"will re-open after sampling re-factor"
] | 1,589 | 1,595 | 1,595 | CONTRIBUTOR | null | Added the `decoder_input_ids` argument to `.generate(...)`, which should only be used with encoder-decoder models, to start generation from a given sequence. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4344/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4344",
"html_url": "https://github.com/huggingface/transformers/pull/4344",
"diff_url": "https://github.com/huggingface/transformers/pull/4344.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4344.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4343 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4343/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4343/comments | https://api.github.com/repos/huggingface/transformers/issues/4343/events | https://github.com/huggingface/transformers/issues/4343 | 617,598,899 | MDU6SXNzdWU2MTc1OTg4OTk= | 4,343 | [docs] XLNetLMHeadModel example in documentation does not produce the right probabilities | {
"login": "Futrell",
"id": 1498405,
"node_id": "MDQ6VXNlcjE0OTg0MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1498405?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Futrell",
"html_url": "https://github.com/Futrell",
"followers_url": "https://api.github.com/users/Futrell/followers",
"following_url": "https://api.github.com/users/Futrell/following{/other_user}",
"gists_url": "https://api.github.com/users/Futrell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Futrell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Futrell/subscriptions",
"organizations_url": "https://api.github.com/users/Futrell/orgs",
"repos_url": "https://api.github.com/users/Futrell/repos",
"events_url": "https://api.github.com/users/Futrell/events{/privacy}",
"received_events_url": "https://api.github.com/users/Futrell/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"`XLNet` usually requires some padding to work well, see https://huggingface.co/transformers/usage.html#text-generation. \r\n\r\nYou could try to use padding as shown here: https://huggingface.co/transformers/usage.html#text-generation and see whether the problem persists?",
"This resolves the problem."
] | 1,589 | 1,589 | 1,589 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): XLNet
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Pasting in the example given at https://huggingface.co/transformers/model_doc/xlnet.html#transformers.XLNetLMHeadModel.forward but changing the words, I get nonsensical probabilities:
```python
from transformers import XLNetTokenizer, XLNetLMHeadModel
import torch
tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')
model = XLNetLMHeadModel.from_pretrained('xlnet-large-cased')
# In the same way, the XLNetLMHeadModel can be trained by standard auto-regressive language modeling.
input_ids = torch.tensor(tokenizer.encode("Hit the nail on the <mask>", add_special_tokens=False)).unsqueeze(0) # We will predict the masked token
labels = torch.tensor(tokenizer.encode("head", add_special_tokens=False)).unsqueeze(0)
assert labels.shape[0] == 1, 'only one word will be predicted'
perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float)
perm_mask[:, :, -1] = 1.0 # Previous tokens don't see last token as is done in standard auto-regressive lm training
target_mapping = torch.zeros((1, 1, input_ids.shape[1]), dtype=torch.float) # Shape [1, 1, seq_length] => let's predict one token
target_mapping[0, 0, -1] = 1.0 # Our first (and only) prediction will be the last token of the sequence (the masked token)
outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping, labels=labels)
loss, next_token_logits = outputs[:2] # Output has shape [target_mapping.size(0), target_mapping.size(1), config.vocab_size]
```
I get a loss equal to `5.9626`, which means the probability P(head | Hit the nail on the __) ≈ 0.003, much too low. Also, looking at `next_token_logits`, the maximum probability word appears to be nonsensical:
```
In [137]: tokenizer.decode([next_token_logits.argmax()])
Out[137]: 'the'
```
So using this code, the most likely word after "Hit the nail on the" would appear to be "the". I don't think these are the real model probabilities, because if they were, text generation would be extremely off. More likely the code is not accessing the right probabilities.
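For what it's worth, a padded variant of the prompt (per the usage docs linked in the comments) would look roughly like this, continuing the session above (the padding text is an arbitrary assumption, and `perm_mask`/`target_mapping` would need to be rebuilt for the new length):
```python
# Hypothetical sketch: prepend padding text before the short prompt.
PADDING_TEXT = "In 1991, the remains of Russian Tsar Nicholas II were discovered. <eod>"
input_ids = torch.tensor(
    tokenizer.encode(PADDING_TEXT + " Hit the nail on the <mask>", add_special_tokens=False)
).unsqueeze(0)
```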
## Expected behavior
True model probabilities for words in context, and in particular the test case:
P(head | Hit the nail on the ___) = a high number
P(the | Hit the nail on the ___) = a lower number
## Environment info
- `transformers` version: 2.5.1
- Platform: Linux-4.15.0-88-generic-x86_64-with-debian-buster-sid
- Python version: 3.6.7
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4343/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4342 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4342/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4342/comments | https://api.github.com/repos/huggingface/transformers/issues/4342/events | https://github.com/huggingface/transformers/pull/4342 | 617,580,075 | MDExOlB1bGxSZXF1ZXN0NDE3NDY5NTQ3 | 4,342 | Added the option to seed generation in encoder-decoder models | {
"login": "lukovnikov",
"id": 1732910,
"node_id": "MDQ6VXNlcjE3MzI5MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1732910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lukovnikov",
"html_url": "https://github.com/lukovnikov",
"followers_url": "https://api.github.com/users/lukovnikov/followers",
"following_url": "https://api.github.com/users/lukovnikov/following{/other_user}",
"gists_url": "https://api.github.com/users/lukovnikov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lukovnikov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lukovnikov/subscriptions",
"organizations_url": "https://api.github.com/users/lukovnikov/orgs",
"repos_url": "https://api.github.com/users/lukovnikov/repos",
"events_url": "https://api.github.com/users/lukovnikov/events{/privacy}",
"received_events_url": "https://api.github.com/users/lukovnikov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | Added the `decoder_input_ids` argument to `.generate()`, which acts as `input_ids` for the decoder instead of the fixed BOS tokens used now. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4342/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4342",
"html_url": "https://github.com/huggingface/transformers/pull/4342",
"diff_url": "https://github.com/huggingface/transformers/pull/4342.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4342.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4341 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4341/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4341/comments | https://api.github.com/repos/huggingface/transformers/issues/4341/events | https://github.com/huggingface/transformers/pull/4341 | 617,570,147 | MDExOlB1bGxSZXF1ZXN0NDE3NDYxMTE3 | 4,341 | rerun notebook 02-transformers | {
"login": "nikitajz",
"id": 12535180,
"node_id": "MDQ6VXNlcjEyNTM1MTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/12535180?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikitajz",
"html_url": "https://github.com/nikitajz",
"followers_url": "https://api.github.com/users/nikitajz/followers",
"following_url": "https://api.github.com/users/nikitajz/following{/other_user}",
"gists_url": "https://api.github.com/users/nikitajz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikitajz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikitajz/subscriptions",
"organizations_url": "https://api.github.com/users/nikitajz/orgs",
"repos_url": "https://api.github.com/users/nikitajz/repos",
"events_url": "https://api.github.com/users/nikitajz/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikitajz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks @nikitajz, \r\n\r\nLGTM ! 👍 "
] | 1,589 | 1,705 | 1,589 | CONTRIBUTOR | null | Just a notebook rerun (some cells were not executed in the original one).
A few minor changes:
- removed installation cell output for readability (it can't be collapsed in GitHub notebook viewer)
- added prints with tensor details in the last cell (DE BERT model)
I'm unable to attach the nbdiff as HTML (due to GitHub limitations), so I'm adding it both as a zip,
[nbdiff_02-transformers.zip](https://github.com/huggingface/transformers/files/4623078/nbdiff_02-transformers.zip), and as a link on [Gdrive](https://drive.google.com/open?id=1VlucbmczZjUOI8nuCJCAxFoKgjFsJKw0) for your convenience.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4341/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4341",
"html_url": "https://github.com/huggingface/transformers/pull/4341",
"diff_url": "https://github.com/huggingface/transformers/pull/4341.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4341.patch",
"merged_at": 1589553189000
} |
https://api.github.com/repos/huggingface/transformers/issues/4340 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4340/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4340/comments | https://api.github.com/repos/huggingface/transformers/issues/4340/events | https://github.com/huggingface/transformers/issues/4340 | 617,564,249 | MDU6SXNzdWU2MTc1NjQyNDk= | 4,340 | Support multitask learning | {
"login": "ghomasHudson",
"id": 13795113,
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghomasHudson",
"html_url": "https://github.com/ghomasHudson",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Having thought about this more, it would probably make most sense as an example of a PyTorch `Sampler`which implements temperature-scaled mixing on `ConcatDataset([dataset1, dataset2,...])`.\r\n\r\nI'll attempt it using the t5 model firstly, and contribute it back if successful.",
"Hi, @ghomasHudson,\r\nThis seems like a great addition to hugging face, I have been looking around as well for multi-task learning. Most libraries out there don't support an easy way to achieve this and a lot of people seem to request this feature. \r\n\r\nI believe there are two way to achieve multitask learning:\r\n1) As you mentioned, treating i/p & o/p as \"text to text\" and most problems can fit in this , so the only change I guess is at the data sampler level. It should be easy to support\r\n2) Another common way is ,having multiple \"heads\" for different tasks, and each task has a shared bert. So, essentially bert is learning on different tasks. There is no easy way to abstract things out for this in hugging face for this yet. \r\n\r\nAs you are trying out 1). Do you think it's worth investing time in the second approach too?\r\nI have this weekend free, might as well raise a PR for the same .",
"@ghomasHudson I think these forms of *multi-task dataset mixing* can also be integrated straight into the new `nlp` library.",
"@avhirupc I've been focusing on 1. as it seems to be the simplest (and currently the best performing on GLUE). I think 2. would still rely on this and extend it further (choosing which head to train based on the task, and a bit of modelling work). \r\n\r\nMultitask learning seems such an obvious part of current NLP approaches so I'm surprised more people aren't requesting it (Maybe it's more of a research-y aim than a production one?)\r\n\r\nMy current approach I'm working on is simply using `ConcatDataset` with weights decided using Temperature-scaled mixing. Something like this:\r\n\r\n``` python\r\ndef temperature_to_weights(dataset_lengths, temperature=2.0, maximum=None, scale=1.0):\r\n '''Calculate mixing rates'''\r\n mixing_rates = []\r\n for length in dataset_lengths:\r\n rate = length * scale\r\n if maximum:\r\n rate = min(rate,maximum)\r\n if temperature != 1.0\r\n rate = rate ** (1.0/temperature)\r\n mixing_rates.append(rate)\r\n return mixing_rates\r\n\r\ndatasets = [Dataset1(), Dataset2()]\r\ndataset_lengths = [len(d) for d in datasets]\r\ndataset_weights = temperature_to_weights(dataset_lengths)\r\n\r\n# Calculate weights per sample\r\nweights = []\r\nfor i in range(len(datasets)):\r\n weights += [dataset_weights[i]] * len(datasets[i])\r\n\r\ndataloader = Dataloader(ConcatDataset(datasets),\r\n sampler=WeightedRandomSampler(\r\n num_samples=min(dataset_lengths),\r\n weight=weights,\r\n replacement=False) \r\n )\r\n```\r\n\r\nThere's still a few things I'm unclear about (e.g. what should the `num_samples` be? clearly if we sample everything it's just the same as not doing any balancing at all). Would be nice to have a `MultitaskSampler` if I can work it out.\r\n\r\n@enzoampil I'm open to ideas about the best way to do this in terms of interfacing with the library. Should properly open an issue over at [nlp](https://github.com/huggingface/nlp).",
"Hi @patrickvonplaten & @thomwolf , \r\n\r\nDo you think working on 2. is a good enhancement to hugging face. Multitask learning is common in industry as well as research. Similar to MT-DNN (https://arxiv.org/pdf/1901.11504.pdf)\r\n\r\nIf you feel it is a good enhancement to hugging face, I could work on it and raise a PR for the same.\r\n\r\nSummarising:\r\nHaving task-specific heads eg classification, token classification, and an ability to train multiple tasks at once? ",
"Hi everybody, \r\n\r\nI think the `nlp` library is the best place for this as @enzoampil said. "
] | 1,589 | 1,591 | 1,591 | NONE | null | # 🚀 Feature request
There should be an easy way to support multitask learning.
## Motivation
It seems like many of the best performing models on the GLUE benchmark make some use of multitask learning (simultaneous training on multiple tasks).
The [T5 paper](https://arxiv.org/pdf/1910.10683.pdf) highlights multiple ways of mixing the tasks together during finetuning:
- **Examples-proportional mixing** - sample from tasks proportionally to their dataset size
- **Equal mixing** - sample uniformly from each task
- **Temperature-scaled mixing** - The generalized approach used by multilingual BERT which uses a temperature T, where the mixing rate of each task is raised to the power 1/T and renormalized. When T=1 this is equivalent to examples-proportional mixing, and it moves closer to equal mixing as T increases (a tiny sketch of this computation follows below).
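A tiny sketch of that computation (the task sizes here are illustrative numbers of my own, not from the paper):
```python
# Temperature-scaled mixing rates for three hypothetical task sizes.
sizes = [100_000, 10_000, 1_000]
T = 2.0
rates = [s ** (1.0 / T) for s in sizes]
total = sum(rates)
mixing = [r / total for r in rates]  # renormalized sampling probabilities
```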
This definitely seems like a reusable component that would allow replication of the headline results used by many models.
The `run_glue.py` example only trains using a single GLUE task at a time, so I'm assuming people have made their own modifications to allow multitask learning. It seems especially sensible that there should be a way of training the T5 model in the multitask setting as was originally intended by the authors.
Maybe this could be an extension of the Trainer or a wrapper around multiple Datasets? Or even just an example.
## Your contribution
I can certainly help with implementation, though would need guidance on the best place to add this functionality.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4340/reactions",
"total_count": 13,
"+1": 13,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4340/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4339 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4339/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4339/comments | https://api.github.com/repos/huggingface/transformers/issues/4339/events | https://github.com/huggingface/transformers/pull/4339 | 617,532,649 | MDExOlB1bGxSZXF1ZXN0NDE3NDMwNTEy | 4,339 | TPU needs a rendezvous | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | When running on TPU, each ordinal needs to call `save_pretrained`. See [original implementation by @jysohn23](https://github.com/huggingface/transformers/pull/3702/files#diff-28e1baa470caa8e6f23b78a356b6bbdfR162).
If we don't do it this way, the trainer hangs when saving the model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4339/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4339",
"html_url": "https://github.com/huggingface/transformers/pull/4339",
"diff_url": "https://github.com/huggingface/transformers/pull/4339.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4339.patch",
"merged_at": 1589461193000
} |
https://api.github.com/repos/huggingface/transformers/issues/4338 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4338/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4338/comments | https://api.github.com/repos/huggingface/transformers/issues/4338/events | https://github.com/huggingface/transformers/issues/4338 | 617,410,440 | MDU6SXNzdWU2MTc0MTA0NDA= | 4,338 | can't load checkpoint file from examples/run_language_modeling.py | {
"login": "rfernand2",
"id": 4296158,
"node_id": "MDQ6VXNlcjQyOTYxNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4296158?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rfernand2",
"html_url": "https://github.com/rfernand2",
"followers_url": "https://api.github.com/users/rfernand2/followers",
"following_url": "https://api.github.com/users/rfernand2/following{/other_user}",
"gists_url": "https://api.github.com/users/rfernand2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rfernand2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rfernand2/subscriptions",
"organizations_url": "https://api.github.com/users/rfernand2/orgs",
"repos_url": "https://api.github.com/users/rfernand2/repos",
"events_url": "https://api.github.com/users/rfernand2/events{/privacy}",
"received_events_url": "https://api.github.com/users/rfernand2/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"`--model_name_or_path` should be a folder, so you should use just `./output` instead.",
"Thanks. Verified - that fixed it. Please add a note n the README.md to explain this. Thanks.",
"Hi, may I ask how did you get these checkpoint files? I tried to specify the path to the checkpoint that is generated by the script during training (containing _config.json_, _optimizer.pt_, _pytorch_model.bin_, _scheduler.pt_, _training_args.bin_), but I met with a Traceback like this\r\n```\r\nTraceback (most recent call last):\r\n File \"run_language_modeling.py\", line 277, in <module>\r\n main()\r\n File \"run_language_modeling.py\", line 186, in main\r\n tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, cache_dir=model_args.cache_dir)\r\n File \"H:\\Anaconda3\\envs\\env_name\\lib\\site-packages\\transformers\\tokenization_auto.py\", line 203, in from_pretrained\r\n return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)\r\n File \"H:\\Anaconda3\\envs\\env_name\\lib\\site-packages\\transformers\\tokenization_utils.py\", line 902, in from_pretrained\r\n return cls._from_pretrained(*inputs, **kwargs)\r\n File \"H:\\Anaconda3\\envs\\env_name\\lib\\site-packages\\transformers\\tokenization_utils.py\", line 1007, in _from_pretrained\r\n list(cls.vocab_files_names.values()),\r\nOSError: Model name 'C:\\\\path-to-ckpt\\\\checkpoint-17500' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased). We assumed 'C:\\\\path-to-ckpt\\\\checkpoint-17500' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\r\n```\r\nwhich technically says that the checkpoint folder misses some other files. I wonder where this mismatch comes from if I used the same script to train.",
"Those who are new to this issue I just figured it out and save your time 😜😀\r\n\r\nWhat is this error about?\r\n==> When you run the model for the first time it downloads some files { pytorch_model.bin } and if your internet is broken accidentally between processes it will continue running the pipeline file without completely downloading that pytorch_model.bin file so it will raise this issue.\r\n\r\nSteps : \r\n1 ] Go to C:// Users / UserName / .cache\r\n2 ] Delete .cache folder\r\n3 ] And Done Just Run The Model Once Again......\r\n\r\nYou can connect me through @prashantmore999 { Twitter }\r\n"
] | 1,589 | 1,658 | 1,589 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
**GPT2**
Language I am using the model on (English, Chinese ...):
**English**
The problem arises when using:
* [x] the official example scripts: (give details below)
There seems to be no supported way of continuing training or evaluating a previously saved model checkpoint.
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
just trying to train/eval on the wikitext-2 dataset
## To reproduce
Steps to reproduce the behavior:
1. python ../examples/language-modeling/run_language_modeling.py ^
--output_dir=output ^
--overwrite_output_dir ^
--tokenizer=gpt2 ^
--model_type=gpt2 ^
--model_name_or_path=output/pytorch.pytorch_model.bin ^
--do_eval ^
--per_gpu_eval_batch_size=1 ^
--eval_data_file=%userprofile%/.data/wikitext-2/wikitext-2/wiki.test.tokens
This gives an error because "model_name_or_path" is assumed to point at pretrained model info (a config JSON plus weights), not at a saved checkpoint file. The error here occurs when trying to load the CONFIG file associated with a pretrained model.
I also tried creating a new "model_checkpoint" argument that I then pass into AutoModelWithLMHead.from_pretrained(), but that ends up with a model/checkpoint mismatch (it looks like the hidden size in the checkpoint file is 256, but in the current model it is 768). In my usage here, I have never changed the hidden size - I just used the "do_train" option and it saved my checkpoints to the output directory. Now I am just trying to verify that I can eval on a checkpoint, and then also continue training from a checkpoint.
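For reference, per the first comment above, the working invocation points `model_name_or_path` at the checkpoint directory rather than at the .bin file (a sketch; `--tokenizer_name` is my assumption here for supplying the tokenizer, since a checkpoint folder may not contain vocab files):
```
python ../examples/language-modeling/run_language_modeling.py ^
--output_dir=output ^
--overwrite_output_dir ^
--tokenizer_name=gpt2 ^
--model_type=gpt2 ^
--model_name_or_path=output ^
--do_eval ^
--per_gpu_eval_batch_size=1 ^
--eval_data_file=%userprofile%/.data/wikitext-2/wikitext-2/wiki.test.tokens
```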
## Expected behavior
I expected to be able to specify a checkpoint_path argument in run_language_modeling.py that would load the checkpoint file and let me continue training on it and/or evaluate it.
## Environment info
- `transformers` version: 2.9.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4338/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4337 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4337/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4337/comments | https://api.github.com/repos/huggingface/transformers/issues/4337/events | https://github.com/huggingface/transformers/issues/4337 | 617,371,489 | MDU6SXNzdWU2MTczNzE0ODk= | 4,337 | Wrong dimensions of input to loss function, multilabel classification | {
"login": "simhallq",
"id": 35776028,
"node_id": "MDQ6VXNlcjM1Nzc2MDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/35776028?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simhallq",
"html_url": "https://github.com/simhallq",
"followers_url": "https://api.github.com/users/simhallq/followers",
"following_url": "https://api.github.com/users/simhallq/following{/other_user}",
"gists_url": "https://api.github.com/users/simhallq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simhallq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simhallq/subscriptions",
"organizations_url": "https://api.github.com/users/simhallq/orgs",
"repos_url": "https://api.github.com/users/simhallq/repos",
"events_url": "https://api.github.com/users/simhallq/events{/privacy}",
"received_events_url": "https://api.github.com/users/simhallq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"If you are doing anything custom, then you should write your own model class, right? Plus your own loss function suiting your needs if default ones raises errors?\r\n\r\n[This](https://discuss.pytorch.org/t/what-kind-of-loss-is-better-to-use-in-multilabel-classification/32203/) might help;",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | NONE | null | # 🐛 Bug
## Information
**Model I am using (Bert, XLNet ...):**
bert-base-uncased
**Language I am using the model on (English, Chinese ...):**
English
**The problem arises when using:**
* [x] the official example scripts: (give details below)
Problem originally arose when running my own scripts but I was able to reproduce with example code from [https://huggingface.co/transformers/model_doc/bert.html#bertforsequenceclassification](https://huggingface.co/transformers/model_doc/bert.html#bertforsequenceclassification)
**The task I am working on is:**
Multi-label classification
## To reproduce
Add `.from_pretrained("bert-base-uncased",num_labels=3)` and 2 additional labels (`torch.tensor([1,0,0])`) to the example code from the docs, like so:
```
from transformers import BertTokenizer, BertForSequenceClassification
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",num_labels=3)
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
labels = torch.tensor([1,0,0]).unsqueeze(0) # Batch size 1
outputs = model(input_ids, labels=labels)
loss, logits = outputs[:2]
```
> Traceback (most recent call last):
File "/Users/Simpan/agge/agust/playground/sve/test/loss_bug.py", line 8, in <module>
outputs = model(input_ids, labels=labels)
File "/Users/Simpan/agge/agust/playground/sve/env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/Users/Simpan/agge/agust/playground/sve/env/lib/python3.7/site-packages/transformers/modeling_bert.py", line 1193, in forward
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
File "/Users/Simpan/agge/agust/playground/sve/env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/Users/Simpan/agge/agust/playground/sve/env/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 916, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/Users/Simpan/agge/agust/playground/sve/env/lib/python3.7/site-packages/torch/nn/functional.py", line 2009, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/Users/Simpan/agge/agust/playground/sve/env/lib/python3.7/site-packages/torch/nn/functional.py", line 1836, in nll_loss
.format(input.size(0), target.size(0)))
ValueError: Expected input batch_size (1) to match target batch_size (3).
## Expected behavior
I was expecting to get the loss computed over the output nodes vs. the labels (3 in this case).
Furthermore, removing `labels` from the model's input returns logits with the right dimensions:
```
#outputs = model(input_ids, labels=labels)
outputs = model(input_ids)
print(outputs)
```
> (tensor([[ 0.1703, -0.0389, 0.0923]], grad_fn=<AddmmBackward>),)
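For context, `BertForSequenceClassification` applies `CrossEntropyLoss`, which expects class indices of shape `(batch,)` rather than one-hot vectors, hence the batch-size mismatch above. Below is a minimal sketch of a genuinely multi-label setup that computes the loss outside the model instead; it assumes `BCEWithLogitsLoss` is an acceptable multi-label objective, which the library itself does not mandate:
```python
import torch
from torch.nn import BCEWithLogitsLoss
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)

logits = model(input_ids)[0]              # shape (1, 3); no labels passed to the model
labels = torch.tensor([[1.0, 0.0, 0.0]])  # float targets with the same shape as the logits
loss = BCEWithLogitsLoss()(logits, labels)
```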
## Environment info
- `transformers` version: 2.9.0
- Platform: Darwin-18.5.0-x86_64-i386-64bit
- Python version: 3.7.7
- PyTorch version (GPU?): 1.3.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No.
- Using distributed or parallel set-up in script?: No.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4337/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4336 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4336/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4336/comments | https://api.github.com/repos/huggingface/transformers/issues/4336/events | https://github.com/huggingface/transformers/issues/4336 | 617,325,595 | MDU6SXNzdWU2MTczMjU1OTU= | 4,336 | Unable to load weights from pytorch checkpoint file | {
"login": "manueltonneau",
"id": 29440170,
"node_id": "MDQ6VXNlcjI5NDQwMTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/29440170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueltonneau",
"html_url": "https://github.com/manueltonneau",
"followers_url": "https://api.github.com/users/manueltonneau/followers",
"following_url": "https://api.github.com/users/manueltonneau/following{/other_user}",
"gists_url": "https://api.github.com/users/manueltonneau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueltonneau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueltonneau/subscriptions",
"organizations_url": "https://api.github.com/users/manueltonneau/orgs",
"repos_url": "https://api.github.com/users/manueltonneau/repos",
"events_url": "https://api.github.com/users/manueltonneau/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueltonneau/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I had this problem today,too. I create a new container as my new gpu environment, but cannot load any pretrained due to this error, but the same load pretrained codes are normal run on my old enviroment to download the pretrained",
"Hi @mananeau, when I look on the website and click on \"show all files\" for your model, it only lists the configuration and vocabulary. Have you uploaded the model file?",
"I believe I did. It can be found under [this link](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/clinicalcovid-bert-base-cased/clinicalcovid_bert_base_cased.bin). Also, when doing `transformers-cli s3 ls`, I get this output: \r\n\r\n",
"I have this problem too, and I do have all my files\r\n\r\n",
"For this problem\r\nI switched to pip install with the repository of tranformers=2.8 which had been download in my old environment. \r\n\r\nIt normal works to download and load any pretrained weight\r\n\r\nI don't know why, but it's work",
"> I switched to pip install with the repository of tranformers=2.8 which had been download in my old environment.\r\n\r\nI cannot confirm this on my end. Tried with transformers==2.8.0 and still getting the same error. ",
"@mananeau We could make it clearer/more validated, but the upload CLI is meant to use only for models/tokenizers saved using the .save_pretrained() method.\r\n\r\nIn particular here, your model file should be named `pytorch_model.bin`\r\n\r\n",
"> For this problem\r\n> I switched to pip install with the repository of tranformers=2.8 which had been download in my old environment.\r\n> \r\n> It normal works to download and load any pretrained weight\r\n> \r\n> I don't know why, but it's work\r\n\r\nThis worked for me too",
"hi !!\r\nWhen i try this code explore_model.ipynb from https://github.com/sebkim/lda2vec-pytorch, the following error occurs .\r\n\r\nhow to resolve it ?? someone help me plz",
"@fathia-ghribi this is unrelated to the issue here. Please open a new issue and fill out the issue template so that we may help you. On a second note, your error does not seem to be related to this library.",
"I had this problem when I trained the model with `torch==1.6.0` and tried to load the model with `1.3.1`. The issue was fixed by upgrading to `1.6.0` in my environment where I'm loading the model.",
"Had same error on `torch==1.8.1` and `simpletransfomers==0.61.4`\r\ndowngrading torch or simpletransfomers doesn't work for me, because the issue caused by the file - not properly downloaded.\r\n\r\nI solved this issue with git clone my model on local, or upload model files on google drive and change directory.\r\n```\r\nmodel = T5Model(\"mt5\", \"/content/drive/MyDrive/dataset/outputs\", \r\n args=model_args, use_cuda=False, from_tf=False, force_download=True)\r\n```",
"> \r\n> \r\n> For this problem\r\n> I switched to pip install with the repository of tranformers=2.8 which had been download in my old environment.\r\n> \r\n> It normal works to download and load any pretrained weight\r\n> \r\n> I don't know why, but it's work\r\n\r\nthank youuuuuu ",
"I had the same problem with:\r\n```\r\nsentence-transformers 2.2.0\r\ntransformers 4.17.0\r\ntorch 1.8.1\r\ntorchvision 0.4.2\r\n\r\nPython 3.7.6\r\n```\r\n\r\nI solved it by upgrading torch with `pip install --upgrade torch torchvision`. Now working with\r\n\r\n```\r\nsentence-transformers 2.2.0\r\ntransformers 4.17.0\r\ntorch 1.10.2\r\ntorchvision 0.11.3\r\n\r\nPython 3.7.6\r\n```",
"In my case, there was something problem during moving the files, so `pytorch_model.bin` file existed but the size was 0 byte. After replacing it with correct file, the error removed.",
"Just delete the corrupted cached files and rerun your code; it will work.",
"> Just delete the corrupted cached files and rerun your code; it will work.\r\n\r\nYes, it works for me",
"I had to downgrade from torch 2.0.1 to 1.13.1.",
"> \r\ndoes it work? I had the same problem\r\n",
"> \r\n\r\nyes, it works,\r\ni delete all the .cache files,then redownload ,Error gone"
] | 1,589 | 1,692 | 1,589 | NONE | null | # 🐛 Bug
## Information
I uploaded two models this morning using the `transformers-cli`. The models can be found on my [huggingface page](https://huggingface.co/mananeau). The folder I uploaded for both models contained a PyTorch model in bin format, a zip file containing the three TF model files, the `config.json` and the `vocab.txt`. [The PT model](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/clinicalcovid-bert-base-cased/clinicalcovid_bert_base_cased.bin) was created from TF checkpoints [using this code](https://huggingface.co/transformers/converting_tensorflow_models.html). I'm able to download the tokenizer using:
``tokenizer = AutoTokenizer.from_pretrained("mananeau/clinicalcovid-bert-base-cased")``.
Yet, when trying to download the model using:
``model = AutoModel.from_pretrained("mananeau/clinicalcovid-bert-base-cased")``
I am getting the following error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~/anaconda3/lib/python3.7/site-packages/torch/serialization.py in _check_seekable(f)
226 try:
--> 227 f.seek(f.tell())
228 return True
AttributeError: 'NoneType' object has no attribute 'seek'
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
~/anaconda3/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
625 try:
--> 626 state_dict = torch.load(resolved_archive_file, map_location="cpu")
627 except Exception:
~/anaconda3/lib/python3.7/site-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
425 pickle_load_args['encoding'] = 'utf-8'
--> 426 return _load(f, map_location, pickle_module, **pickle_load_args)
427 finally:
~/anaconda3/lib/python3.7/site-packages/torch/serialization.py in _load(f, map_location, pickle_module, **pickle_load_args)
587
--> 588 _check_seekable(f)
589 f_should_read_directly = _should_read_directly(f)
~/anaconda3/lib/python3.7/site-packages/torch/serialization.py in _check_seekable(f)
229 except (io.UnsupportedOperation, AttributeError) as e:
--> 230 raise_err_msg(["seek", "tell"], e)
231
~/anaconda3/lib/python3.7/site-packages/torch/serialization.py in raise_err_msg(patterns, e)
222 " try to load from it instead.")
--> 223 raise type(e)(msg)
224 raise e
AttributeError: 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-5-5451ffd9a3b5> in <module>
----> 1 model = AutoModel.from_pretrained("mananeau/clinicalcovid-bert-base-cased")
~/anaconda3/lib/python3.7/site-packages/transformers/modeling_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
425 for config_class, model_class in MODEL_MAPPING.items():
426 if isinstance(config, config_class):
--> 427 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
428 raise ValueError(
429 "Unrecognized configuration class {} for this kind of AutoModel: {}.\n"
~/anaconda3/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
627 except Exception:
628 raise OSError(
--> 629 "Unable to load weights from pytorch checkpoint file. "
630 "If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. "
631 )
OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
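As noted in the comments, the weights file must be named `pytorch_model.bin`, which `.save_pretrained()` produces automatically. A minimal sketch of that flow, with hypothetical local paths:
```python
import os
from transformers import BertModel, BertTokenizer

# rename the converted checkpoint so from_pretrained() can find it (paths are hypothetical)
os.rename("./model_dir/clinicalcovid_bert_base_cased.bin", "./model_dir/pytorch_model.bin")

model = BertModel.from_pretrained("./model_dir")
tokenizer = BertTokenizer.from_pretrained("./model_dir")

# re-save so every file follows the expected naming before uploading
model.save_pretrained("./clinicalcovid-bert-base-cased")
tokenizer.save_pretrained("./clinicalcovid-bert-base-cased")
```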
## Environment info
- `transformers` version: 2.9.0
- Platform: Ubuntu 18.04
- Python version: 3.7.4
- PyTorch version (GPU?): 1.3.1
- Tensorflow version (GPU?): 1.14.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4336/reactions",
"total_count": 11,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4336/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4335 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4335/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4335/comments | https://api.github.com/repos/huggingface/transformers/issues/4335/events | https://github.com/huggingface/transformers/pull/4335 | 617,266,289 | MDExOlB1bGxSZXF1ZXN0NDE3MjE1MDQ0 | 4,335 | [MbartTokenizer] save to sentencepiece.bpe.model | {
"login": "Mehrad0711",
"id": 28717374,
"node_id": "MDQ6VXNlcjI4NzE3Mzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/28717374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mehrad0711",
"html_url": "https://github.com/Mehrad0711",
"followers_url": "https://api.github.com/users/Mehrad0711/followers",
"following_url": "https://api.github.com/users/Mehrad0711/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehrad0711/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mehrad0711/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehrad0711/subscriptions",
"organizations_url": "https://api.github.com/users/Mehrad0711/orgs",
"repos_url": "https://api.github.com/users/Mehrad0711/repos",
"events_url": "https://api.github.com/users/Mehrad0711/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mehrad0711/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks! \r\nCan you verify that\r\n```bash\r\nRUN_SLOW=1 pytest --tb=short -p no:warnings tests/test_modeling_bart.py -sv\r\n```\r\nworks with this change?",
"Hi Sam!\r\nYes, the test passes on my machine.",
"I think you need to run `make style` for `check_code_quality` to pass.",
"Thanks, I just did but now black is complaining about other files being reformated. ",
"sorry about that, there were some changes in the interim.\r\n\r\nTry\r\n```bash\r\npip install flake8 --upgrade\r\nmake style\r\n```\r\n\r\nyou might also need to `git merge master` or `git rebase master`\r\nAt the end, this PR should only change `tokenization_bart.py`.\r\n\r\n",
"Hi Sam!\r\nThe PR is now passing the tests. Is there anything else you would like to update?",
"Nope, thanks for contributing! Merging!"
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | There is a mismatch between VOCAB_FILES_NAMES for XLMRobertaTokenizer and MBartTokenizer.
When saving tokenizer files with save_pretrained(), XLMRobertaTokenizer's vocab_file_name is used, while when loading with from_pretrained(), MBartTokenizer's vocab_file_name is used.
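A sketch of the shape of the fix, assuming MBartTokenizer extends XLMRobertaTokenizer and only needs its vocab file name aligned (illustrative, not the literal diff):
```python
# illustrative sketch of tokenization_bart.py
from transformers.tokenization_xlm_roberta import XLMRobertaTokenizer

VOCAB_FILES_NAMES = {"vocab_file": "sentencepiece.bpe.model"}

class MBartTokenizer(XLMRobertaTokenizer):
    vocab_files_names = VOCAB_FILES_NAMES  # save_pretrained() and from_pretrained() now agree
```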
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4335/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4335",
"html_url": "https://github.com/huggingface/transformers/pull/4335",
"diff_url": "https://github.com/huggingface/transformers/pull/4335.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4335.patch",
"merged_at": 1589806444000
} |
https://api.github.com/repos/huggingface/transformers/issues/4334 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4334/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4334/comments | https://api.github.com/repos/huggingface/transformers/issues/4334/events | https://github.com/huggingface/transformers/issues/4334 | 617,245,982 | MDU6SXNzdWU2MTcyNDU5ODI= | 4,334 | [bug in run_glue.py] GlueDataset with no local_rank when init | {
"login": "jia-zhuang",
"id": 32734827,
"node_id": "MDQ6VXNlcjMyNzM0ODI3",
"avatar_url": "https://avatars.githubusercontent.com/u/32734827?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jia-zhuang",
"html_url": "https://github.com/jia-zhuang",
"followers_url": "https://api.github.com/users/jia-zhuang/followers",
"following_url": "https://api.github.com/users/jia-zhuang/following{/other_user}",
"gists_url": "https://api.github.com/users/jia-zhuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jia-zhuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jia-zhuang/subscriptions",
"organizations_url": "https://api.github.com/users/jia-zhuang/orgs",
"repos_url": "https://api.github.com/users/jia-zhuang/repos",
"events_url": "https://api.github.com/users/jia-zhuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/jia-zhuang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Not on master.\r\n\r\nYou should install from source when running the examples, as specified in the [README](https://github.com/huggingface/transformers#run-the-examples)."
] | 1,589 | 1,589 | 1,589 | NONE | null | https://github.com/huggingface/transformers/blob/241759101e7104192d01a07fc70432fa02ae8cb7/examples/text-classification/run_glue.py#L137
GlueDataset needs the local_rank parameter passed at init:
```python
train_dataset = GlueDataset(data_args, tokenizer=tokenizer, local_rank=training_args.local_rank) if training_args.do_train else None
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4334/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4334/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4333 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4333/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4333/comments | https://api.github.com/repos/huggingface/transformers/issues/4333/events | https://github.com/huggingface/transformers/pull/4333 | 617,217,541 | MDExOlB1bGxSZXF1ZXN0NDE3MTc2MTE0 | 4,333 | Clarification of model upload instructions | {
"login": "manueltonneau",
"id": 29440170,
"node_id": "MDQ6VXNlcjI5NDQwMTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/29440170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueltonneau",
"html_url": "https://github.com/manueltonneau",
"followers_url": "https://api.github.com/users/manueltonneau/followers",
"following_url": "https://api.github.com/users/manueltonneau/following{/other_user}",
"gists_url": "https://api.github.com/users/manueltonneau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueltonneau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueltonneau/subscriptions",
"organizations_url": "https://api.github.com/users/manueltonneau/orgs",
"repos_url": "https://api.github.com/users/manueltonneau/repos",
"events_url": "https://api.github.com/users/manueltonneau/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueltonneau/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4333?src=pr&el=h1) Report\n> Merging [#4333](https://codecov.io/gh/huggingface/transformers/pull/4333?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/241759101e7104192d01a07fc70432fa02ae8cb7&el=desc) will **decrease** coverage by `1.63%`.\n> The diff coverage is `77.07%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4333?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4333 +/- ##\n==========================================\n- Coverage 78.18% 76.55% -1.64% \n==========================================\n Files 120 128 +8 \n Lines 20020 21502 +1482 \n==========================================\n+ Hits 15652 16460 +808 \n- Misses 4368 5042 +674 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4333?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/commands/transformers\\_cli.py](https://codecov.io/gh/huggingface/transformers/pull/4333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy90cmFuc2Zvcm1lcnNfY2xpLnB5) | `0.00% <0.00%> (ø)` | |\n| [src/transformers/configuration\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.33% <ø> (-0.15%)` | :arrow_down: |\n| [src/transformers/configuration\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JlcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/4333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2NhbWVtYmVydC5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VsZWN0cmEucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/4333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `35.71% <0.00%> (-64.29%)` | :arrow_down: |\n| [src/transformers/configuration\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/4333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2ZsYXViZXJ0LnB5) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4333/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.22% <ø> (-0.08%)` | :arrow_down: |\n| ... and [152 more](https://codecov.io/gh/huggingface/transformers/pull/4333/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4333?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4333?src=pr&el=footer). Last update [2417591...cca51b7](https://codecov.io/gh/huggingface/transformers/pull/4333?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,589 | 1,591 | 1,591 | NONE | null | ## What changed
In the `README.md`, I specified two details in the model upload part:
- added `folder` to the `./path/to/pretrained_model/` just to make sure it is straightforward
- added a comment line specifying that the configuration file should be entitled `config.json` for the model to appear on the website. Linked with [this issue](https://github.com/huggingface/transformers/issues/4322) I raised.
This [website page](https://huggingface.co/transformers/model_sharing.html) should also be modified accordingly, but I wasn't sure where to include the changes.
## Additional potential improvements
- One extra thing I thought of was to specify what kinds of model files are accepted when mentioning the folder, but I didn't have that information. My upload with PyTorch weights went smoothly. What about TF weights? Does uploading only the three TF model files work? Is that the case for both TF1 and TF2 weights?
- One minor improvement could also be to mention earlier that the model name on Hugging Face will be the same as the name of the uploaded folder. In my case, I read this after uploading the folder, and I had to delete the files on S3 and re-upload with a new folder name that suited me better.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4333/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4333/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4333",
"html_url": "https://github.com/huggingface/transformers/pull/4333",
"diff_url": "https://github.com/huggingface/transformers/pull/4333.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4333.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4332 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4332/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4332/comments | https://api.github.com/repos/huggingface/transformers/issues/4332/events | https://github.com/huggingface/transformers/issues/4332 | 617,192,460 | MDU6SXNzdWU2MTcxOTI0NjA= | 4,332 | Extractive Text Summarization | {
"login": "timsuchanek",
"id": 1094804,
"node_id": "MDQ6VXNlcjEwOTQ4MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1094804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timsuchanek",
"html_url": "https://github.com/timsuchanek",
"followers_url": "https://api.github.com/users/timsuchanek/followers",
"following_url": "https://api.github.com/users/timsuchanek/following{/other_user}",
"gists_url": "https://api.github.com/users/timsuchanek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timsuchanek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timsuchanek/subscriptions",
"organizations_url": "https://api.github.com/users/timsuchanek/orgs",
"repos_url": "https://api.github.com/users/timsuchanek/repos",
"events_url": "https://api.github.com/users/timsuchanek/events{/privacy}",
"received_events_url": "https://api.github.com/users/timsuchanek/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Are you sampling tokens? If so, turn it off and maybe also turn up the beam size. That will give more extractive outputs",
"Thanks @Laksh1997 for the idea, unfortunately, the results don't get better.\r\nThis is the article I'm summarizing: https://www.gatesnotes.com/Health/Pandemic-Innovation\r\n\r\n### Result with `min_length=500, max_length=1000` and default settings of the summarizer pipeline:\r\n\r\nThis is the first part of a two-part series on the impact of global warming. The second part of the series will look at ways to reduce the effects of climate change. The third part will focus on ways to prevent the spread of the disease. The fourth part will be on how we can make sure we don't see a repeat of what happened in the 1980s and 1990s. It will also look at how to make sure that we don’t see an increase in the number of people who need to be treated for the disease every time it rears its head. The final part of this series looks at ways we can reduce the impact on the economy of the global warming crisis by reducing the amount of money spent on health care. It is also a look at some of the ways in which we can prevent the disease from getting worse, such as by making sure we have better access to the right equipment and training for doctors and nurses. The last part will look back at how we were able to stop the disease’s spread in the first place, and how we’ve been able to do so since then. It’ll be interesting to see how we respond to the current crisis, which has caused a lot of people to lose their jobs and homes, as well as the loss of health care and the cost of living that has gone up by a third since the beginning of the year. We’re in the midst of a global pandemic, but we have a long way to go before we see the full extent of the damage caused by climate change, which is likely to be much worse in the coming years. We also need to look at what we can do to prevent it from happening in the future, including ways to make it easier for people to get the care they need. We need to make the most of the time we have left before it gets worse, and we need to do it in a way that makes it easier to get to the bottom of the problem. We can do this by focusing on what we are doing now, rather than focusing on the causes of the illness, which can be hard to come by in a small number of cases. We should also be looking for ways to keep the disease at a low level so that it doesn't spread as far and as fast as possible. We are seeing more and more people get sick and dying, and this is a good thing, but it also means that we have less time to prepare for the future.\r\n\r\n### Result with `num_beams=8` and `do_sample=False`:\r\nThis is the first part of a two-part series on the impact of climate change on the U.S. and the world. The second part of the series will look at ways to reduce the effects of global warming. The third part will focus on how we can reduce the number of people affected by climate change. The fourth and final part will be a look at some of the ways we can make sure we don't suffer the same fate as those who have been affected by the climate change pandemics of the past few years. It will be the first of a series of articles on the topic, and will be followed by a series on climate change in the next few months. For more information, go to: http://www.cnn.com/2013/01/29/climate-change/index.html#storylink=cpy, and for more information on the Global Warming Program, visit: http://www.climatechange.org/2013-01-29/global-warming-program/. For more on the World Health Organization (WHO), go to www.welcome.org/. 
For information on how to get involved in the fight against climate change, visit the WHO’s website. For information about how to help people in need of financial assistance, visit www.worldhealth.org. For confidential support, call the Samaritans on 08457 90 90 90 or visit a local Samaritans branch, see www.samaritans.org for details. For support on suicide matters call the National Suicide Prevention Lifeline on 1-800-273-TALK (8255). For support in the UK, visit The Samaritans’ local branch, or click here. For help in the United States, see the National Institutes of Health (NHS), which has a range of programs that can help people cope with the changing nature of the threat to their health, such as the threat of pneumococcal meningitis, sepsis, stroke, and other ailments. For all the information you need, visit http:www.nhs.uk/news/publications/cnn/2014/07/09/world-health-paediatric-pneumonia-and-sickness-in-the-middle-of-a-drought.html. For the full series, see:http:/ / www.nhc.gov/newspeak/stories/2014-09-09/the-world-succeeding-against-climate-changes.html?title=World-warming-disease-infiltrating-crisis-initiative.\r\n\r\nI have no idea where it gets the idea of climate change from :D \r\n",
"@timsuchanek Note that transformer models can only consider context of up to N number of **subtokens** (in the case of BART I think N = 1024). \r\n\r\nSo, if the input context (the long document) is greater than this, it will be truncated to 1024 subtokens.\r\n\r\nThis means if you ask the decoder to generate more than what it can consider in context, it will at best copy the context, and at worse start to make up stuff. \r\n\r\nI'm not sure if min_length and max_length refer to subtokens or tokens in the huggingface implementation.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | CONTRIBUTOR | null | # 🚀 Feature request
While abstractive text summarization with T5 and Bart already achieves impressive results, it would be great to add support for state-of-the-art **extractive** text summarization, such as the recent [MatchSum](https://github.com/maszhongming/MatchSum), which outperforms [PreSumm](https://github.com/nlpyang/PreSumm) by a significant margin.
## Motivation
The Bart-based summarization is already pretty awesome.
However, I recently got this summary with Bart from a Bill Gates article:
> "We are seeing more and more people get sick and dying, and this is a good thing, but it also means that we have less time to prepare for the future."
It seems to me that the extractive methods are still "less risky" while they can also achieve great results.
So adding an easy way to access one of the extractive methods, for example the new [MatchSum](https://github.com/maszhongming/MatchSum) algorithm, which has now also released pre-trained models for CNN/DM, would be really awesome!
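For reference, a minimal sketch of the abstractive baseline being compared against, using the same generation flags discussed in the comments above; `article_text` is a placeholder for the long document:
```python
from transformers import pipeline

summarizer = pipeline("summarization")  # Bart-based by default

article_text = "..."  # placeholder: the full article to summarize
summary = summarizer(article_text, min_length=60, max_length=200, num_beams=8, do_sample=False)
print(summary[0]["summary_text"])
```
 | {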
"url": "https://api.github.com/repos/huggingface/transformers/issues/4332/reactions",
"total_count": 14,
"+1": 14,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4332/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4331 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4331/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4331/comments | https://api.github.com/repos/huggingface/transformers/issues/4331/events | https://github.com/huggingface/transformers/pull/4331 | 617,157,796 | MDExOlB1bGxSZXF1ZXN0NDE3MTI5NjAx | 4,331 | Add image and link to model paper | {
"login": "ViktorAlm",
"id": 1090762,
"node_id": "MDQ6VXNlcjEwOTA3NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1090762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ViktorAlm",
"html_url": "https://github.com/ViktorAlm",
"followers_url": "https://api.github.com/users/ViktorAlm/followers",
"following_url": "https://api.github.com/users/ViktorAlm/following{/other_user}",
"gists_url": "https://api.github.com/users/ViktorAlm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ViktorAlm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ViktorAlm/subscriptions",
"organizations_url": "https://api.github.com/users/ViktorAlm/orgs",
"repos_url": "https://api.github.com/users/ViktorAlm/repos",
"events_url": "https://api.github.com/users/ViktorAlm/events{/privacy}",
"received_events_url": "https://api.github.com/users/ViktorAlm/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4331?src=pr&el=h1) Report\n> Merging [#4331](https://codecov.io/gh/huggingface/transformers/pull/4331?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/241759101e7104192d01a07fc70432fa02ae8cb7&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4331?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4331 +/- ##\n=======================================\n Coverage 78.18% 78.18% \n=======================================\n Files 120 120 \n Lines 20020 20020 \n=======================================\n Hits 15652 15652 \n Misses 4368 4368 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4331?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.93% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4331?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4331?src=pr&el=footer). Last update [2417591...65588fe](https://codecov.io/gh/huggingface/transformers/pull/4331?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"lol, it's great",
"You can also add it as a metadata for it to display on social media:\r\n\r\n```\r\n---\r\nlanguage: norwegian\r\nthumbnail: https://i.imgur.com/yxnE5GC.png\r\n---\r\n```"
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | Now I've had my fun. I thought I could do better, but it will do :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4331/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4331",
"html_url": "https://github.com/huggingface/transformers/pull/4331",
"diff_url": "https://github.com/huggingface/transformers/pull/4331.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4331.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4330 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4330/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4330/comments | https://api.github.com/repos/huggingface/transformers/issues/4330/events | https://github.com/huggingface/transformers/issues/4330 | 617,146,855 | MDU6SXNzdWU2MTcxNDY4NTU= | 4,330 | Can't set attribute 'device' | {
"login": "cuongnm71",
"id": 26346343,
"node_id": "MDQ6VXNlcjI2MzQ2MzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/26346343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cuongnm71",
"html_url": "https://github.com/cuongnm71",
"followers_url": "https://api.github.com/users/cuongnm71/followers",
"following_url": "https://api.github.com/users/cuongnm71/following{/other_user}",
"gists_url": "https://api.github.com/users/cuongnm71/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cuongnm71/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cuongnm71/subscriptions",
"organizations_url": "https://api.github.com/users/cuongnm71/orgs",
"repos_url": "https://api.github.com/users/cuongnm71/repos",
"events_url": "https://api.github.com/users/cuongnm71/events{/privacy}",
"received_events_url": "https://api.github.com/users/cuongnm71/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I met the same issue, have you fixed? Thanks."
] | 1,589 | 1,658 | 1,595 | NONE | null | I'm fine-tuning BERT and came across this problem.
Here is my config; I'm running on the 3rd GPU:
```json
BertConfig {
"attention_probs_dropout_prob": 0.1,
"device": "cuda:3",
"directionality": "bidi",
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"output_hidden_states": true,
"output_past": true,
"pad_token_id": 0,
"pooler_fc_size": 768,
"pooler_num_attention_heads": 12,
"pooler_num_fc_layers": 3,
"pooler_size_per_head": 128,
"pooler_type": "first_token_transform",
"type_vocab_size": 2,
"use_pooler": true,
"vocab_size": 105879,
"weight_class": [
1,
1
]
}
```
and here is the error output in the terminal:
```
Traceback (most recent call last):
File "main.py", line 50, in <module>
model = BERTQa.from_pretrained(args.folder_model, config=config)
File "/home/dle/anaconda3/envs/phobert/lib/python3.7/site-packages/transformers/modeling_utils.py", line 622, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/dstore/home/dle/phobert/QA_PhoBERT/ZaloBert.py", line 10, in __init__
self.device = 'cuda:3'
File "/home/dle/anaconda3/envs/phobert/lib/python3.7/site-packages/torch/nn/modules/module.py", line 638, in __setattr__
object.__setattr__(self, name, value)
AttributeError: can't set attribute
```
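A plausible cause and workaround sketch: on recent transformers versions, `device` is exposed as a read-only property on pretrained models, so assigning `self.device` in `__init__` fails. Storing the target under a different attribute name and moving the model with `.to()` avoids the clash (class, attribute, and checkpoint names below are illustrative stand-ins):
```python
import torch
from transformers import BertConfig, BertModel, BertPreTrainedModel

class BERTQa(BertPreTrainedModel):       # illustrative stand-in for the custom class
    def __init__(self, config):
        super().__init__(config)
        self.bert = BertModel(config)
        self.target_device = "cuda:3"    # any attribute name other than the reserved `device`
        self.init_weights()

config = BertConfig.from_pretrained("bert-base-multilingual-cased")  # stand-in checkpoint
model = BERTQa.from_pretrained("bert-base-multilingual-cased", config=config)
model.to(torch.device(model.target_device))  # move the weights instead of assigning .device
```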
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4330/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4330/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4329 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4329/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4329/comments | https://api.github.com/repos/huggingface/transformers/issues/4329/events | https://github.com/huggingface/transformers/issues/4329 | 617,134,790 | MDU6SXNzdWU2MTcxMzQ3OTA= | 4,329 | Cache directory | {
"login": "abhigenie92",
"id": 12474640,
"node_id": "MDQ6VXNlcjEyNDc0NjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/12474640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhigenie92",
"html_url": "https://github.com/abhigenie92",
"followers_url": "https://api.github.com/users/abhigenie92/followers",
"following_url": "https://api.github.com/users/abhigenie92/following{/other_user}",
"gists_url": "https://api.github.com/users/abhigenie92/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhigenie92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhigenie92/subscriptions",
"organizations_url": "https://api.github.com/users/abhigenie92/orgs",
"repos_url": "https://api.github.com/users/abhigenie92/repos",
"events_url": "https://api.github.com/users/abhigenie92/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhigenie92/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | NONE | null | Where is the cache directory?
```
BertTokenizer.from_pretrained('bert-base-uncased')
```
I want to download the files manually and place them there.
```
OSError: Couldn't reach server at '{}' to download vocabulary files.
```
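In transformers 2.x the default cache typically lives under `~/.cache/torch/transformers` and can be overridden via the `TRANSFORMERS_CACHE` environment variable. A minimal sketch of sidestepping the download instead, with hypothetical local paths:
```python
from transformers import BertTokenizer

# point the cache at a directory you control
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', cache_dir='./my_cache')

# or load from a local folder that already contains a manually downloaded vocab.txt
tokenizer = BertTokenizer.from_pretrained('./local_bert_files/')
```
 | {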
"url": "https://api.github.com/repos/huggingface/transformers/issues/4329/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4329/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4328 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4328/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4328/comments | https://api.github.com/repos/huggingface/transformers/issues/4328/events | https://github.com/huggingface/transformers/pull/4328 | 617,115,039 | MDExOlB1bGxSZXF1ZXN0NDE3MDk0NDUw | 4,328 | wrong variable name used | {
"login": "elyesmanai",
"id": 21007166,
"node_id": "MDQ6VXNlcjIxMDA3MTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/21007166?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elyesmanai",
"html_url": "https://github.com/elyesmanai",
"followers_url": "https://api.github.com/users/elyesmanai/followers",
"following_url": "https://api.github.com/users/elyesmanai/following{/other_user}",
"gists_url": "https://api.github.com/users/elyesmanai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elyesmanai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elyesmanai/subscriptions",
"organizations_url": "https://api.github.com/users/elyesmanai/orgs",
"repos_url": "https://api.github.com/users/elyesmanai/repos",
"events_url": "https://api.github.com/users/elyesmanai/events{/privacy}",
"received_events_url": "https://api.github.com/users/elyesmanai/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4328?src=pr&el=h1) Report\n> Merging [#4328](https://codecov.io/gh/huggingface/transformers/pull/4328?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/241759101e7104192d01a07fc70432fa02ae8cb7&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4328?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4328 +/- ##\n=======================================\n Coverage 78.18% 78.18% \n=======================================\n Files 120 120 \n Lines 20020 20020 \n=======================================\n+ Hits 15652 15653 +1 \n+ Misses 4368 4367 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4328?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.08% <0.00%> (+0.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4328?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4328?src=pr&el=footer). Last update [2417591...f0bb172](https://codecov.io/gh/huggingface/transformers/pull/4328?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | Turned
`nlp.tokenizer.mask_token` to `fill_mask.tokenizer.mask_token` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4328/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4328",
"html_url": "https://github.com/huggingface/transformers/pull/4328",
"diff_url": "https://github.com/huggingface/transformers/pull/4328.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4328.patch",
"merged_at": 1589379723000
} |
https://api.github.com/repos/huggingface/transformers/issues/4327 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4327/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4327/comments | https://api.github.com/repos/huggingface/transformers/issues/4327/events | https://github.com/huggingface/transformers/issues/4327 | 617,100,460 | MDU6SXNzdWU2MTcxMDA0NjA= | 4,327 | 🐛 Trainer on TPU : KeyError '__getstate__' | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"As a temporary work-around, I made the `BatchEncoding` object pickable :\r\n\r\n```python\r\nfrom transformers.tokenization_utils import BatchEncoding\r\n\r\ndef red(self):\r\n return BatchEncoding, (self.data, )\r\n\r\nBatchEncoding.__reduce__ = red\r\n```\r\n\r\nNot closing yet, as this seems to be just a work-around and not a real solution.",
"Cc @mfuntowicz ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi,I got same error with transformers 2.9.0, and 2.9.1 following error:\r\n\r\n------------\r\nTraceback (most recent call last):\r\nTraceback (most recent call last):\r\n File \"/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/multiprocessing/queues.py\", line 234, in _feed\r\n obj = _ForkingPickler.dumps(obj)\r\n File \"/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/multiprocessing/reduction.py\", line 51, in dumps\r\n cls(buf, protocol).dump(obj)\r\n File \"/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/site-packages/transformers/tokenization_utils.py\", line 203, in __getattr__\r\n return self.data[item]\r\n File \"/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/multiprocessing/queues.py\", line 234, in _feed\r\n obj = _ForkingPickler.dumps(obj)\r\n File \"/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/multiprocessing/reduction.py\", line 51, in dumps\r\n cls(buf, protocol).dump(obj)\r\n File \"/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/site-packages/transformers/tokenization_utils.py\", line 203, in __getattr__\r\n return self.data[item]\r\nKeyError: '__getstate__'\r\nKeyError: '__getstate__'\r\nTraceback (most recent call last):\r\nTraceback (most recent call last):\r\n File \"/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/multiprocessing/queues.py\", line 234, in _feed\r\n obj = _ForkingPickler.dumps(obj)\r\n File \"/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/multiprocessing/reduction.py\", line 51, in dumps\r\n cls(buf, protocol).dump(obj)\r\n File \"/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/site-packages/transformers/tokenization_utils.py\", line 203, in __getattr__\r\n return self.data[item]\r\n File \"/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/multiprocessing/queues.py\", line 234, in _feed\r\n obj = _ForkingPickler.dumps(obj)\r\n File \"/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/multiprocessing/reduction.py\", line 51, in dumps\r\n cls(buf, protocol).dump(obj)\r\nKeyError: '__getstate__'\r\n File \"/Users/kiwi/anaconda/python.app/Contents/lib/python3.6/site-packages/transformers/tokenization_utils.py\", line 203, in __getattr__\r\n return self.data[item]\r\nKeyError: '__getstate__'\r\n------------------------------------------------------------------\r\nMy code pieces:\r\n\r\ndl = DataLoader(data, batch_size=self.batch_size, shuffle=self.shuffle, collate_fn=partial(self.tok_collate),\r\n num_workers=2)\r\n return dl\r\n\r\n def tok_collate(self, batch_data):\r\n\r\n encoded = self.tokenizer.batch_encode_plus(\r\n [x[0] for x in batch_data],\r\n add_special_tokens=True,\r\n #return_tensors='pt',\r\n pad_to_max_length=True)\r\n\r\n for i in range(len(encoded['input_ids'])):\r\n print(\"tokens : {}\".format([self.tokenizer.convert_ids_to_tokens(s) for s in encoded['input_ids'][i]]))\r\n print(\"input_ids : {}\".format(encoded['input_ids'][i]))\r\n print(\"token_type_ids : {}\".format(encoded['token_type_ids'][i]))\r\n print(\"attention_mask : {}\".format(encoded['attention_mask'][i]))\r\n print('------------')\r\n\r\n if self.predict:\r\n return encoded\r\n else:\r\n labels = torch.tensor([x[1] for x in batch_data])\r\n # print('labels: ', labels)\r\n return encoded, labels",
"@Colanim How did you solve it?",
"I think it was fixed in the latest version of `transformers`.\r\n\r\nIf you need to work with an older version of `transformers`, for me the work-around I mentioned earlier was working :\r\n\r\n> As a temporary work-around, I made the `BatchEncoding` object pickable :\r\n> \r\n> ```python\r\n> from transformers.tokenization_utils import BatchEncoding\r\n> \r\n> def red(self):\r\n> return BatchEncoding, (self.data, )\r\n> \r\n> BatchEncoding.__reduce__ = red\r\n> ```\r\n\r\n",
"Indeed, this should have been fixed in the versions `v3+`. Thanks for opening an issue @Colanim."
] | 1,589 | 1,595 | 1,595 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using : **ELECTRA base**
Language I am using the model on : **English**
The problem arises when using:
* [ ] the official example scripts
* [x] **my own modified scripts**
The task I am working on is:
* [ ] an official GLUE/SQUaD task
* [x] **my own task or dataset**
## To reproduce
I'm trying to fine-tune a model on Colab TPU, using the new Trainer API. But I'm struggling.
Here is a self-contained [Colab notebook](https://colab.research.google.com/drive/1J0m_ULnSHgtXs1FXni9O3VHPdZCMu9SZ?usp=sharing) to reproduce the error (it's a dummy example).
When running the notebook, I get the following error :
```
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 199, in __getattr__
return self.data[item]
KeyError: '__getstate__'
```
---
Full stack trace :
```
Exception in thread Thread-3:
Traceback (most recent call last):
File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/parallel_loader.py", line 172, in _worker
batch = xm.send_cpu_data_to_device(batch, device)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/core/xla_model.py", line 624, in send_cpu_data_to_device
return ToXlaTensorArena(convert_fn, select_fn).transform(data)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/core/xla_model.py", line 307, in transform
return self._replace_tensors(inputs)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/core/xla_model.py", line 301, in _replace_tensors
convert_fn)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/utils/utils.py", line 199, in for_each_instance_rewrite
return _for_each_instance_rewrite(value, select_fn, fn, rwmap)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/utils/utils.py", line 179, in _for_each_instance_rewrite
result.append(_for_each_instance_rewrite(x, select_fn, fn, rwmap))
File "/usr/local/lib/python3.6/dist-packages/torch_xla/utils/utils.py", line 187, in _for_each_instance_rewrite
result = copy.copy(value)
File "/usr/lib/python3.6/copy.py", line 96, in copy
rv = reductor(4)
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 199, in __getattr__
return self.data[item]
KeyError: '__getstate__'
```
**Any hint on how to make this dummy example work is welcome.**
## Environment info
- `transformers` version: **2.9.0**
- Platform: **Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic**
- Python version: **3.6.9**
- PyTorch version (GPU?): **1.6.0a0+cf82011 (False)**
- Tensorflow version (GPU?): **2.2.0 (False)**
- Using GPU in script?: **No**
- Using distributed or parallel set-up in script?: **No**
@jysohn23 @julien-c | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4327/timeline | completed | null | null |
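A self-contained sketch of the `__reduce__` work-around quoted in the comments of the issue above. It assumes transformers 2.9.x, where `BatchEncoding.__getattr__` delegates to `self.data` and therefore raises `KeyError: '__getstate__'` when pickle probes the object; the patch is unnecessary on v3+, where this was fixed upstream.

```python
import pickle

from transformers import BertTokenizer
from transformers.tokenization_utils import BatchEncoding


def _reduce(self):
    # Tell pickle to rebuild the BatchEncoding from its underlying dict,
    # bypassing the __getattr__ override that breaks the default protocol.
    return BatchEncoding, (self.data,)


BatchEncoding.__reduce__ = _reduce

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoded = tokenizer.batch_encode_plus(["a dummy sentence"], pad_to_max_length=True)

restored = pickle.loads(pickle.dumps(encoded))  # no KeyError: '__getstate__'
assert restored["input_ids"] == encoded["input_ids"]
```

With the patch applied, `BatchEncoding` objects can cross process boundaries, e.g. `DataLoader` workers with `num_workers > 0` or `torch_xla`'s parallel loader.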
https://api.github.com/repos/huggingface/transformers/issues/4326 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4326/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4326/comments | https://api.github.com/repos/huggingface/transformers/issues/4326/events | https://github.com/huggingface/transformers/issues/4326 | 617,057,896 | MDU6SXNzdWU2MTcwNTc4OTY= | 4,326 | Train model from scratch with Tensorflow | {
"login": "tqdo",
"id": 53948469,
"node_id": "MDQ6VXNlcjUzOTQ4NDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/53948469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tqdo",
"html_url": "https://github.com/tqdo",
"followers_url": "https://api.github.com/users/tqdo/followers",
"following_url": "https://api.github.com/users/tqdo/following{/other_user}",
"gists_url": "https://api.github.com/users/tqdo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tqdo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tqdo/subscriptions",
"organizations_url": "https://api.github.com/users/tqdo/orgs",
"repos_url": "https://api.github.com/users/tqdo/repos",
"events_url": "https://api.github.com/users/tqdo/events{/privacy}",
"received_events_url": "https://api.github.com/users/tqdo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"> 1/ It seems like that tutorial is for training using PyTorch. Is there any tutorial for Tensorflow?\r\n\r\n[Here](https://github.com/stefan-it/turkish-bert/blob/master/CHEATSHEET.md) is a cheatsheet for pretraining using TF code. It is written for BERT but the training for ALBERT is similar. \r\n\r\nRegarding pretraining code for ALBERT in TensorFlow, you will find relevant resources in [the official ALBERT repository](https://github.com/google-research/ALBERT). You will need a corpus in txt format with one sentence per line. Ideally after splitting the corpus in smaller shards, you can use the `create_pretraining_data.py` file to transform the txt files in tfrecords. After that, you can run `run_pretraining.py` to pretrain your model. \r\n\r\n> 2/ I am thinking about training an ALBERT model with just 6 layers. If possible I want to use the weights of the pre-trained ALBERT (since all layers share parameters) and also the structure but just change the number of layers from 12 to 6. Is it possible or I need to train entirely from scratch?\r\n\r\nThis makes me think of what @VictorSanh et al. did in [DistilBERT](https://arxiv.org/abs/1910.01108), namely initializing with every other layer from a vanilla BERT base. The difference is they added a distillation loss for the student to learn from the teacher. I'm not sure whether you can modify the number of layers in the configuration (that could be) but pretraining with less layers might severely impact your model performance. One thing you could do to maximize performance is to pretrain a normal 12-layer ALBERT base and then distil it (more on how to do this [here](https://github.com/huggingface/transformers/tree/master/examples/distillation)). ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hello, Has there been anything done on this issue, if yes can you please share?"
] | 1,589 | 1,652 | 1,595 | NONE | null | First of all thank you for a great library. I am quite new to the library so sorry if my questions sound dumb. I see a tutorial to train a model from scratch (https://huggingface.co/blog/how-to-train) which is very useful. My questions are:
1/ It seems like that tutorial is for training using PyTorch. Is there any tutorial for Tensorflow?
2/ I am thinking about training an ALBERT model with just 6 layers. If possible I want to use the weights of the pre-trained ALBERT (since all layers share parameters) and also the structure but just change the number of layers from 12 to 6. Is it possible or I need to train entirely from scratch? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4326/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4326/timeline | completed | null | null |
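On question 2/ above: because ALBERT applies one shared set of layer weights repeatedly, shrinking `num_hidden_layers` should still let the pretrained weights load; only the number of repetitions changes. A hedged sketch (the checkpoint name is illustrative, and the effect on accuracy is untested here):

```python
from transformers import AlbertConfig, TFAlbertModel

# Override the layer count while keeping every other pretrained setting.
config = AlbertConfig.from_pretrained("albert-base-v2", num_hidden_layers=6)

# The shared transformer layer and the embeddings load from the checkpoint;
# the model simply applies the shared layer six times instead of twelve.
model = TFAlbertModel.from_pretrained("albert-base-v2", config=config)
print(model.config.num_hidden_layers)  # 6
```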
https://api.github.com/repos/huggingface/transformers/issues/4324 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4324/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4324/comments | https://api.github.com/repos/huggingface/transformers/issues/4324/events | https://github.com/huggingface/transformers/pull/4324 | 617,018,931 | MDExOlB1bGxSZXF1ZXN0NDE3MDE4NTEw | 4,324 | (v2) Improvements to the wandb integration | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | See #4220 and #4221 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4324/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4324/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4324",
"html_url": "https://github.com/huggingface/transformers/pull/4324",
"diff_url": "https://github.com/huggingface/transformers/pull/4324.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4324.patch",
"merged_at": 1589334722000
} |
https://api.github.com/repos/huggingface/transformers/issues/4323 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4323/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4323/comments | https://api.github.com/repos/huggingface/transformers/issues/4323/events | https://github.com/huggingface/transformers/pull/4323 | 616,949,966 | MDExOlB1bGxSZXF1ZXN0NDE2OTYxMTIy | 4,323 | Fix FFN dropout in TFAlbertLayer, and split dropout in TFAlbertAttent… | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@LysandreJik Any thoughts?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Not stale",
"Will take care of the conflict and merge after https://github.com/huggingface/transformers/pull/6247 is merged.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4323?src=pr&el=h1) Report\n> Merging [#4323](https://codecov.io/gh/huggingface/transformers/pull/4323?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/155288f04ba9a5d0a0e4d5be4f6d4e808ad8cfff&el=desc) will **decrease** coverage by `2.57%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4323?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4323 +/- ##\n==========================================\n- Coverage 79.94% 77.37% -2.58% \n==========================================\n Files 153 153 \n Lines 27902 27907 +5 \n==========================================\n- Hits 22307 21593 -714 \n- Misses 5595 6314 +719 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4323?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `82.15% <100.00%> (+0.10%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `88.39% <100.00%> (+0.04%)` | :arrow_up: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/4323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/4323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.98% <0.00%> (-52.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `34.11% <0.00%> (-30.36%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.16% <0.00%> (-14.46%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-6.16%)` | :arrow_down: |\n| ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/4323/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4323?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4323?src=pr&el=footer). Last update [155288f...a6da32a](https://codecov.io/gh/huggingface/transformers/pull/4323?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great, just merged with master and fixed conflicts.",
"Thanks a lot for your contribution @jarednielsen !"
] | 1,589 | 1,597 | 1,597 | CONTRIBUTOR | null | …ion into two separate dropout layers.
1) The dropout in TFAlbertLayer's FFN submodule was never used, since it was applied to hidden_states, which was immediately overwritten. Fixed to apply to ffn_output.
2) Likewise, the dropout in TFAlbertAttention defaulted to the dropout defined in TFBertAttention. The ALBERT paper handles this a bit differently, instead having two separate parameters controlling dropout probabilities. See https://github.com/google-research/albert/blob/master/modeling.py#L971-L993 for an example of how it is coded in the original repo. The `attention_1` scope uses `attention_probs_dropout_prob`, while the `output` scope uses `hidden_dropout_prob`.
Since the default dropout probabilities are 0 for ALBERT, these changes shouldn't affect the model accuracies when using default config. It will help practitioners and researchers who are hyperparameter tuning to have proper dropout implementation. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4323/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4323",
"html_url": "https://github.com/huggingface/transformers/pull/4323",
"diff_url": "https://github.com/huggingface/transformers/pull/4323.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4323.patch",
"merged_at": 1597233163000
} |
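A short sketch of the two dropout knobs this PR separates: after the fix, `attention_probs_dropout_prob` governs the dropout on the attention weights while `hidden_dropout_prob` governs the attention-output and FFN dropout, mirroring the scopes in the original ALBERT code. The 0.1 values are illustrative; ALBERT defaults both to 0, so default behavior is unchanged.

```python
from transformers import AlbertConfig, TFAlbertModel

config = AlbertConfig(
    attention_probs_dropout_prob=0.1,  # applied to attention probabilities
    hidden_dropout_prob=0.1,           # applied to attention output and FFN
)
model = TFAlbertModel(config)  # randomly initialized, for tuning experiments
```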
https://api.github.com/repos/huggingface/transformers/issues/4322 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4322/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4322/comments | https://api.github.com/repos/huggingface/transformers/issues/4322/events | https://github.com/huggingface/transformers/issues/4322 | 616,923,162 | MDU6SXNzdWU2MTY5MjMxNjI= | 4,322 | Uploaded models not appearing on website | {
"login": "manueltonneau",
"id": 29440170,
"node_id": "MDQ6VXNlcjI5NDQwMTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/29440170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueltonneau",
"html_url": "https://github.com/manueltonneau",
"followers_url": "https://api.github.com/users/manueltonneau/followers",
"following_url": "https://api.github.com/users/manueltonneau/following{/other_user}",
"gists_url": "https://api.github.com/users/manueltonneau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueltonneau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueltonneau/subscriptions",
"organizations_url": "https://api.github.com/users/manueltonneau/orgs",
"repos_url": "https://api.github.com/users/manueltonneau/repos",
"events_url": "https://api.github.com/users/manueltonneau/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueltonneau/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @mananeau we should improve validation and error messages soon, but the config.json file should be exactly named `config.json`\r\n\r\nIf you rename those files your models will pop up. \r\n\r\nAre they TF2 models?",
"Got it, will try now. \r\n\r\n> Are they TF2 models?\r\n\r\nThey are TF1.11 and TF1.15 models.\r\n",
"It worked, thanks a lot for your swift reply :) I created a PR mentioned above to clarify the upload instructions in the `README.md`. \r\n\r\nKeep on rocking!",
"> They are TF1.11 and TF1.15 models.\r\n\r\nFYI you can host those models if you want, but the transformers library doesn't support them",
"(I'd be interested in hearing more about your use case here though)"
] | 1,589 | 1,589 | 1,589 | NONE | null | # 🐛 Bug
## Information
I uploaded two models (named `clinicalcovid-bert-base-cased` and `biocovid-bert-large-cased`) following the instructions on the website. The folder I uploaded for each model contains the configuration json file, the vocabulary in txt format, the three tensorflow models files and the pytorch model bin. They are now downloadable from S3 at the following links:
| Model | Downloads
| -------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `clinicalcovid-bert-base-cased` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/clinicalcovid-bert-base-cased/bert_config.json) • [`tensorflow weights`](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/clinicalcovid-bert-base-cased/clinicalcovid_bert_base_cased.ckpt.data-00000-of-00001) • [`tensorflow.meta`](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/clinicalcovid-bert-base-cased/clinicalcovid_bert_base_cased.ckpt.meta) • [`tensorflow.index`](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/clinicalcovid-bert-base-cased/clinicalcovid_bert_base_cased.ckpt.index) • [`pytorch_model.bin`](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/clinicalcovid-bert-base-cased/clinicalcovid_bert_base_cased.bin) • [`vocab.txt`](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/clinicalcovid-bert-base-cased/vocab.txt)
| `biocovid-bert-large-cased` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/biocovid-bert-large-cased/bert_config_bio_58k_large.json) • [`tensorflow weights`](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/biocovid-bert-large-cased/biocovid_bert_large_cased.ckpt.data-00000-of-00001) • [`tensorflow.meta`](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/biocovid-bert-large-cased/biocovid_bert_large_cased.ckpt.meta) • [`tensorflow.index`](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/biocovid-bert-large-cased/biocovid_bert_large_cased.ckpt.index) • [`pytorch_model.bin`](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/biocovid-bert-large-cased/biocovid_bert.bin) • [`vocab.txt`](https://s3.amazonaws.com/models.huggingface.co/bert/mananeau/biocovid-bert-large-cased/vocab_cased_pubmed_pmc_30k.txt)
Yet, they are not appearing on the [website](https://huggingface.co/mananeau). Any idea why that could be?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4322/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4321 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4321/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4321/comments | https://api.github.com/repos/huggingface/transformers/issues/4321/events | https://github.com/huggingface/transformers/pull/4321 | 616,859,693 | MDExOlB1bGxSZXF1ZXN0NDE2ODg5NzU5 | 4,321 | Add modelcard with acknowledgements | {
"login": "ViktorAlm",
"id": 1090762,
"node_id": "MDQ6VXNlcjEwOTA3NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1090762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ViktorAlm",
"html_url": "https://github.com/ViktorAlm",
"followers_url": "https://api.github.com/users/ViktorAlm/followers",
"following_url": "https://api.github.com/users/ViktorAlm/following{/other_user}",
"gists_url": "https://api.github.com/users/ViktorAlm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ViktorAlm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ViktorAlm/subscriptions",
"organizations_url": "https://api.github.com/users/ViktorAlm/orgs",
"repos_url": "https://api.github.com/users/ViktorAlm/repos",
"events_url": "https://api.github.com/users/ViktorAlm/events{/privacy}",
"received_events_url": "https://api.github.com/users/ViktorAlm/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | Add Acknowledgements | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4321/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4321",
"html_url": "https://github.com/huggingface/transformers/pull/4321",
"diff_url": "https://github.com/huggingface/transformers/pull/4321.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4321.patch",
"merged_at": 1589310057000
} |
https://api.github.com/repos/huggingface/transformers/issues/4320 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4320/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4320/comments | https://api.github.com/repos/huggingface/transformers/issues/4320/events | https://github.com/huggingface/transformers/pull/4320 | 616,832,852 | MDExOlB1bGxSZXF1ZXN0NDE2ODY4MDgy | 4,320 | Question Answering for TF trainer | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4320?src=pr&el=h1) Report\n> Merging [#4320](https://codecov.io/gh/huggingface/transformers/pull/4320?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4bf5042240d33286460b83f3dbf9be77500faab3&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4320?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4320 +/- ##\n==========================================\n- Coverage 78.16% 78.15% -0.01% \n==========================================\n Files 120 120 \n Lines 20005 20009 +4 \n==========================================\n+ Hits 15636 15638 +2 \n- Misses 4369 4371 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4320?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4320/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `17.75% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/training\\_args\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4320/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `55.31% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4320/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (ø)` | |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4320/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `41.09% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4320/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4320?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4320?src=pr&el=footer). Last update [4bf5042...2a6162b](https://codecov.io/gh/huggingface/transformers/pull/4320?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"LMK when it's ready to merge",
"Good to merge!"
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | This PR add the Question Answering task in the Tensorflow trainer. For the moment the evaluation is not possible due to the complexity of the SQuAD metrics but the training is working. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4320/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4320/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4320",
"html_url": "https://github.com/huggingface/transformers/pull/4320",
"diff_url": "https://github.com/huggingface/transformers/pull/4320.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4320.patch",
"merged_at": 1589376152000
} |
https://api.github.com/repos/huggingface/transformers/issues/4319 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4319/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4319/comments | https://api.github.com/repos/huggingface/transformers/issues/4319/events | https://github.com/huggingface/transformers/issues/4319 | 616,774,674 | MDU6SXNzdWU2MTY3NzQ2NzQ= | 4,319 | AutoModel.from_pretrained with torchscript flag raises a TypeError: __init__() got an unexpected keyword argument 'torchscript' | {
"login": "jonsnowseven",
"id": 25992803,
"node_id": "MDQ6VXNlcjI1OTkyODAz",
"avatar_url": "https://avatars.githubusercontent.com/u/25992803?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonsnowseven",
"html_url": "https://github.com/jonsnowseven",
"followers_url": "https://api.github.com/users/jonsnowseven/followers",
"following_url": "https://api.github.com/users/jonsnowseven/following{/other_user}",
"gists_url": "https://api.github.com/users/jonsnowseven/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonsnowseven/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonsnowseven/subscriptions",
"organizations_url": "https://api.github.com/users/jonsnowseven/orgs",
"repos_url": "https://api.github.com/users/jonsnowseven/repos",
"events_url": "https://api.github.com/users/jonsnowseven/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonsnowseven/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue still exists and is relevant to me. Can we get an official response?",
"Yes, I would also like an official answer, please.",
"It appears that `AutoConfig` accepts a `torchscript` keyword parameter. The `AutoConfig` object can then be passed as the `config` keyword parameter to `AutoModel`. Hope this workaround helps @jonsnowseven ",
"Hi! This was fixed by https://github.com/huggingface/transformers/pull/5665. Could you try to install from source and try again?"
] | 1,589 | 1,595 | 1,595 | NONE | null | # 🐛 Bug
## Information
Model I am using: BertModel and AutoModel
Language I am using the model on: English
## To reproduce
Steps to reproduce the behavior:
```python
from transformers.modeling_auto import AutoModel
from transformers.modeling_bert import BertModel
bert_model = BertModel.from_pretrained('bert-base-uncased', torchscript=True)
bert_model = AutoModel.from_pretrained('bert-base-uncased', torchscript=True)
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Behaviour
`bert_model = AutoModel.from_pretrained('bert-base-uncased', torchscript=True)` raises a
`TypeError: __init__() got an unexpected keyword argument 'torchscript'`
## Expected behaviour
Successfully create a BertModel object using AutoModel class.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.9.0
- Platform: `Darwin Kernel Version 19.4.0: Wed Mar 4 22:28:40 PST 2020; root:xnu-6153.101.6~15/RELEASE_X86_64`
- Python version: 3.6.10
- PyTorch version: 1.3.1
- Tensorflow version: Not applicable
- Using GPU in script?: Not applicable
- Using distributed or parallel set-up in script?: Not applicable
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4319/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4319/timeline | completed | null | null |
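A minimal sketch of the work-around from the comments above: route `torchscript` through `AutoConfig` and hand the config to `AutoModel`. On versions that include PR #5665, passing `torchscript=True` directly to `AutoModel.from_pretrained` works as well.

```python
import torch
from transformers import AutoConfig, AutoModel, AutoTokenizer

config = AutoConfig.from_pretrained("bert-base-uncased", torchscript=True)
model = AutoModel.from_pretrained("bert-base-uncased", config=config).eval()

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
input_ids = tokenizer.encode("a test sentence", return_tensors="pt")

# torchscript=True makes the model return tuples, which tracing requires.
traced = torch.jit.trace(model, input_ids)
```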
https://api.github.com/repos/huggingface/transformers/issues/4318 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4318/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4318/comments | https://api.github.com/repos/huggingface/transformers/issues/4318/events | https://github.com/huggingface/transformers/pull/4318 | 616,749,712 | MDExOlB1bGxSZXF1ZXN0NDE2ODAxMzY5 | 4,318 | [model_cards]: 🇹🇷 Add new ELECTRA small and base models for Turkish | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4318?src=pr&el=h1) Report\n> Merging [#4318](https://codecov.io/gh/huggingface/transformers/pull/4318?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4bf5042240d33286460b83f3dbf9be77500faab3&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4318?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4318 +/- ##\n==========================================\n+ Coverage 78.16% 78.17% +0.01% \n==========================================\n Files 120 120 \n Lines 20005 20005 \n==========================================\n+ Hits 15636 15638 +2 \n+ Misses 4369 4367 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4318?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4318/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.93% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4318/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `41.09% <0.00%> (+0.28%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4318?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4318?src=pr&el=footer). Last update [4bf5042...dc686bb](https://codecov.io/gh/huggingface/transformers/pull/4318?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,589 | 1,589 | 1,589 | COLLABORATOR | null | Hi,
this PR introduces model cards for new ELECTRA small and base models for Turkish 🇹🇷.
More information (checkpoint evaluation, downstream evaluation on Pos Tagging and NER, loss curves, TensorBoards) see [this repository](https://github.com/stefan-it/turkish-bert/tree/electra/electra). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4318/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4318/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4318",
"html_url": "https://github.com/huggingface/transformers/pull/4318",
"diff_url": "https://github.com/huggingface/transformers/pull/4318.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4318.patch",
"merged_at": 1589310078000
} |
https://api.github.com/repos/huggingface/transformers/issues/4317 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4317/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4317/comments | https://api.github.com/repos/huggingface/transformers/issues/4317/events | https://github.com/huggingface/transformers/issues/4317 | 616,706,486 | MDU6SXNzdWU2MTY3MDY0ODY= | 4,317 | How to use DistilBertTokenizer in C++ | {
"login": "alxmamaev",
"id": 17113191,
"node_id": "MDQ6VXNlcjE3MTEzMTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/17113191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alxmamaev",
"html_url": "https://github.com/alxmamaev",
"followers_url": "https://api.github.com/users/alxmamaev/followers",
"following_url": "https://api.github.com/users/alxmamaev/following{/other_user}",
"gists_url": "https://api.github.com/users/alxmamaev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alxmamaev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alxmamaev/subscriptions",
"organizations_url": "https://api.github.com/users/alxmamaev/orgs",
"repos_url": "https://api.github.com/users/alxmamaev/repos",
"events_url": "https://api.github.com/users/alxmamaev/events{/privacy}",
"received_events_url": "https://api.github.com/users/alxmamaev/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | NONE | null | I saw that the fast tokenizers were implemented as a Rust library, but I have not found bindings for C++, and I also cannot use a pretrained distilled tokenizer in fast mode. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4317/timeline | completed | null | null |
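On the Python side of the question above, the Rust-backed ("fast") DistilBERT tokenizer is reachable without writing any Rust; a sketch (as far as I know, the `tokenizers` core ships Rust, Python and Node bindings, but no C++ bindings):

```python
from transformers import AutoTokenizer, DistilBertTokenizerFast

# Construct the fast class directly...
fast_tok = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")

# ...or let AutoTokenizer pick the Rust-backed implementation.
auto_fast = AutoTokenizer.from_pretrained("distilbert-base-uncased", use_fast=True)

print(fast_tok.encode("fast tokenization, backed by Rust"))
```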
https://api.github.com/repos/huggingface/transformers/issues/4316 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4316/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4316/comments | https://api.github.com/repos/huggingface/transformers/issues/4316/events | https://github.com/huggingface/transformers/pull/4316 | 616,698,169 | MDExOlB1bGxSZXF1ZXN0NDE2NzU5NzE5 | 4,316 | Allow BatchEncoding to be initialized empty. | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Good job finding this!"
] | 1,589 | 1,589 | 1,589 | MEMBER | null | This is required by recent changes introduced in TF 2.2. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4316/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4316/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4316",
"html_url": "https://github.com/huggingface/transformers/pull/4316",
"diff_url": "https://github.com/huggingface/transformers/pull/4316.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4316.patch",
"merged_at": 1589310166000
} |
https://api.github.com/repos/huggingface/transformers/issues/4315 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4315/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4315/comments | https://api.github.com/repos/huggingface/transformers/issues/4315/events | https://github.com/huggingface/transformers/pull/4315 | 616,654,488 | MDExOlB1bGxSZXF1ZXN0NDE2NzIzNzc2 | 4,315 | Update README.md | {
"login": "savasy",
"id": 6584825,
"node_id": "MDQ6VXNlcjY1ODQ4MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6584825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/savasy",
"html_url": "https://github.com/savasy",
"followers_url": "https://api.github.com/users/savasy/followers",
"following_url": "https://api.github.com/users/savasy/following{/other_user}",
"gists_url": "https://api.github.com/users/savasy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/savasy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/savasy/subscriptions",
"organizations_url": "https://api.github.com/users/savasy/orgs",
"repos_url": "https://api.github.com/users/savasy/repos",
"events_url": "https://api.github.com/users/savasy/events{/privacy}",
"received_events_url": "https://api.github.com/users/savasy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4315?src=pr&el=h1) Report\n> Merging [#4315](https://codecov.io/gh/huggingface/transformers/pull/4315?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4bf5042240d33286460b83f3dbf9be77500faab3&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4315?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4315 +/- ##\n=======================================\n Coverage 78.16% 78.16% \n=======================================\n Files 120 120 \n Lines 20005 20005 \n=======================================\n+ Hits 15636 15637 +1 \n+ Misses 4369 4368 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4315?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4315/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (ø)` | |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4315/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `41.09% <0.00%> (+0.28%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4315?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4315?src=pr&el=footer). Last update [4bf5042...ef8a2ed](https://codecov.io/gh/huggingface/transformers/pull/4315?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4315/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4315",
"html_url": "https://github.com/huggingface/transformers/pull/4315",
"diff_url": "https://github.com/huggingface/transformers/pull/4315.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4315.patch",
"merged_at": 1589310095000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4314 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4314/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4314/comments | https://api.github.com/repos/huggingface/transformers/issues/4314/events | https://github.com/huggingface/transformers/issues/4314 | 616,630,248 | MDU6SXNzdWU2MTY2MzAyNDg= | 4,314 | run_generation.py GPT2 model only using 1st GPU, OOM error | {
"login": "randywreed",
"id": 5059871,
"node_id": "MDQ6VXNlcjUwNTk4NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5059871?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/randywreed",
"html_url": "https://github.com/randywreed",
"followers_url": "https://api.github.com/users/randywreed/followers",
"following_url": "https://api.github.com/users/randywreed/following{/other_user}",
"gists_url": "https://api.github.com/users/randywreed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/randywreed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/randywreed/subscriptions",
"organizations_url": "https://api.github.com/users/randywreed/orgs",
"repos_url": "https://api.github.com/users/randywreed/repos",
"events_url": "https://api.github.com/users/randywreed/events{/privacy}",
"received_events_url": "https://api.github.com/users/randywreed/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I don't think we currently support multi-gpu in this script.",
"16 GB VRAM might not be enough to allow batching of generated texts.",
"Actually, if you install apex and are able to cast the model as FP16 using `model.half()`, you can get a batch size of up to 30 for the 1.5B. (nvidia-smi is reporting 14GB VRAM used on a single T4)\r\n\r\nMy tool is slightly different than the script, but I'll be sure to include bulk 1.5B generation as a documentation example.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,589 | 1,595 | 1,595 | NONE | null | # 🐛 Bug
## Information
GPT2-xl (pre-trained)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x ] my own modified scripts: (give details below)
Run in a Jupyter notebook (script below) on GCP with 2× T4 GPUs.
## To reproduce
Steps to reproduce the behavior:
```
prompt_text="If God is defined as something that is all powerful and all knowing, a strong artificial intelligence might be an actual God. If this happens the implications for religion are"
num_of_responses=15
!python transformers/examples/text-generation/run_generation.py --model_type GPT2 --model_name_or_path /spell/GPT2Model/GPT2Model/ --length 1000 --stop_token '<|endoftext|>' --temperature .7 --repetition_penalty 85 --prompt "$prompt_text" --num_return_sequences $num_of_responses
```
nvidia-smi during the run shows only the first GPU using memory; the second GPU sits at 11 MiB and never moves.
Ultimately the program errors out with an OOM error:
Traceback (most recent call last):
File "transformers/examples/text-generation/run_generation.py", line 268, in <module>
main()
File "transformers/examples/text-generation/run_generation.py", line 237, in main
num_return_sequences=args.num_return_sequences,
File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py", line 1148, in generate
model_specific_kwargs=model_specific_kwargs,
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py", line 1190, in _generate_no_beam_search
outputs = self(**model_inputs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_gpt2.py", line 615, in forward
use_cache=use_cache,
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_gpt2.py", line 499, in forward
use_cache=use_cache,
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_gpt2.py", line 236, in forward
use_cache=use_cache,
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_gpt2.py", line 191, in forward
present = torch.stack((key.transpose(-2, -1), value)) # transpose to have same shapes for stacking
RuntimeError: CUDA out of memory. Tried to allocate 84.00 MiB (GPU 0; 14.76 GiB total capacity; 13.69 GiB already allocated; 37.44 MiB free; 13.93 GiB reserved in total by PyTorch)
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Linux-5.3.0-1018-gcp-x86_64-with-glibc2.10
- Python version: 3.8.2
- PyTorch version (GPU?): 1.5.0+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?:
- Using distributed or parallel set-up in script?: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4314/timeline | completed | null | null |
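A hedged sketch of the `model.half()` suggestion from the comments above: casting GPT-2 XL to FP16 roughly halves VRAM, which can make batched generation (`num_return_sequences`) fit on a single 16 GB card even though the script itself does not support multi-GPU. Generation arguments mirror the script's flags; lengths are reduced to keep the example cheap, and actual memory headroom depends on the GPU.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained("gpt2-xl").half().to("cuda").eval()

prompt = "If God is defined as something that is all powerful and all knowing,"
input_ids = tokenizer.encode(prompt, return_tensors="pt").to("cuda")

with torch.no_grad():
    outputs = model.generate(
        input_ids,
        max_length=200,          # shorter than the script's 1000 for the sketch
        do_sample=True,
        temperature=0.7,
        num_return_sequences=5,  # batched sampling in FP16
    )

for sequence in outputs:
    print(tokenizer.decode(sequence, skip_special_tokens=True))
```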
https://api.github.com/repos/huggingface/transformers/issues/4313 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4313/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4313/comments | https://api.github.com/repos/huggingface/transformers/issues/4313/events | https://github.com/huggingface/transformers/pull/4313 | 616,629,970 | MDExOlB1bGxSZXF1ZXN0NDE2NzA0MDQ1 | 4,313 | Update README.md | {
"login": "savasy",
"id": 6584825,
"node_id": "MDQ6VXNlcjY1ODQ4MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6584825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/savasy",
"html_url": "https://github.com/savasy",
"followers_url": "https://api.github.com/users/savasy/followers",
"following_url": "https://api.github.com/users/savasy/following{/other_user}",
"gists_url": "https://api.github.com/users/savasy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/savasy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/savasy/subscriptions",
"organizations_url": "https://api.github.com/users/savasy/orgs",
"repos_url": "https://api.github.com/users/savasy/repos",
"events_url": "https://api.github.com/users/savasy/events{/privacy}",
"received_events_url": "https://api.github.com/users/savasy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4313/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4313",
"html_url": "https://github.com/huggingface/transformers/pull/4313",
"diff_url": "https://github.com/huggingface/transformers/pull/4313.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4313.patch",
"merged_at": 1589310106000
} |