url: stringlengths (62 to 66)
repository_url: stringclasses (1 value)
labels_url: stringlengths (76 to 80)
comments_url: stringlengths (71 to 75)
events_url: stringlengths (69 to 73)
html_url: stringlengths (50 to 56)
id: int64 (377M to 2.15B)
node_id: stringlengths (18 to 32)
number: int64 (1 to 29.2k)
title: stringlengths (1 to 487)
user: dict
labels: list
state: stringclasses (2 values)
locked: bool (2 classes)
assignee: dict
assignees: list
comments: sequence
created_at: int64 (1.54k to 1.71k)
updated_at: int64 (1.54k to 1.71k)
closed_at: int64 (1.54k to 1.71k)
author_association: stringclasses (4 values)
active_lock_reason: stringclasses (2 values)
body: stringlengths (0 to 234k)
reactions: dict
timeline_url: stringlengths (71 to 75)
state_reason: stringclasses (3 values)
draft: bool (2 classes)
pull_request: dict
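The column listing above can be transcribed into a plain Python mapping for quick sanity checks on rows. A minimal sketch (the `SCHEMA` dict and `validate` helper are illustrative, not part of any library; column names are taken from the listing):

```python
# Illustrative sketch: the dataset's columns and their Python-level types,
# transcribed from the schema listing above. The `validate` helper is
# hypothetical, not part of the datasets library.
SCHEMA = {
    "url": str, "repository_url": str, "labels_url": str,
    "comments_url": str, "events_url": str, "html_url": str,
    "id": int, "node_id": str, "number": int, "title": str,
    "user": dict, "labels": list, "state": str, "locked": bool,
    "assignee": dict, "assignees": list, "comments": list,
    "created_at": int, "updated_at": int, "closed_at": int,
    "author_association": str, "active_lock_reason": str,
    "body": str, "reactions": dict, "timeline_url": str,
    "state_reason": str, "draft": bool, "pull_request": dict,
}

def validate(row: dict) -> list:
    """Return a list of (column, problem) pairs for a single row.

    Some columns may be None in practice (e.g. assignee, active_lock_reason,
    state_reason, closed_at, draft, pull_request for plain issues).
    """
    nullable = {"assignee", "active_lock_reason", "state_reason",
                "pull_request", "closed_at", "draft"}
    problems = []
    for col, typ in SCHEMA.items():
        if col not in row:
            problems.append((col, "missing"))
        elif row[col] is None and col not in nullable:
            problems.append((col, "unexpected null"))
        elif row[col] is not None and not isinstance(row[col], typ):
            problems.append((col, f"expected {typ.__name__}"))
    return problems
```

Rows like the ones below can then be checked field by field before any downstream processing.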
https://api.github.com/repos/huggingface/transformers/issues/5020
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5020/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5020/comments
https://api.github.com/repos/huggingface/transformers/issues/5020/events
https://github.com/huggingface/transformers/issues/5020
638,998,796
MDU6SXNzdWU2Mzg5OTg3OTY=
5,020
Run error when running the PPLM example!
{ "login": "huanghonggit", "id": 49581245, "node_id": "MDQ6VXNlcjQ5NTgxMjQ1", "avatar_url": "https://avatars.githubusercontent.com/u/49581245?v=4", "gravatar_id": "", "url": "https://api.github.com/users/huanghonggit", "html_url": "https://github.com/huanghonggit", "followers_url": "https://api.github.com/users/huanghonggit/followers", "following_url": "https://api.github.com/users/huanghonggit/following{/other_user}", "gists_url": "https://api.github.com/users/huanghonggit/gists{/gist_id}", "starred_url": "https://api.github.com/users/huanghonggit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/huanghonggit/subscriptions", "organizations_url": "https://api.github.com/users/huanghonggit/orgs", "repos_url": "https://api.github.com/users/huanghonggit/repos", "events_url": "https://api.github.com/users/huanghonggit/events{/privacy}", "received_events_url": "https://api.github.com/users/huanghonggit/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,592
1,593
1,593
NONE
null
# ❓ Questions & Help

<!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and TensorFlow enthusiasts can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. -->

## Details
<!-- Description of your issue -->

![image](https://user-images.githubusercontent.com/49581245/84684669-94247480-af6b-11ea-993b-a2eed85ac805.png)

Running other models afterwards also reports errors like this.

<!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. -->

**A link to original question on Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5020/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5020/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5019
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5019/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5019/comments
https://api.github.com/repos/huggingface/transformers/issues/5019/events
https://github.com/huggingface/transformers/issues/5019
638,998,129
MDU6SXNzdWU2Mzg5OTgxMjk=
5,019
Not able to reproduce XNLI results from Google's mBERT weights.
{ "login": "bsinghpratap", "id": 7297516, "node_id": "MDQ6VXNlcjcyOTc1MTY=", "avatar_url": "https://avatars.githubusercontent.com/u/7297516?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bsinghpratap", "html_url": "https://github.com/bsinghpratap", "followers_url": "https://api.github.com/users/bsinghpratap/followers", "following_url": "https://api.github.com/users/bsinghpratap/following{/other_user}", "gists_url": "https://api.github.com/users/bsinghpratap/gists{/gist_id}", "starred_url": "https://api.github.com/users/bsinghpratap/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bsinghpratap/subscriptions", "organizations_url": "https://api.github.com/users/bsinghpratap/orgs", "repos_url": "https://api.github.com/users/bsinghpratap/repos", "events_url": "https://api.github.com/users/bsinghpratap/events{/privacy}", "received_events_url": "https://api.github.com/users/bsinghpratap/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Edit: For some reason the model: xlm-mlm-tlm-xnli15-1024 (https://huggingface.co/transformers/pretrained_models.html), also doesn't learn properly. The train accuracy never goes beyond 0.35.\r\n\r\nFix for XLM is provided here: https://github.com/huggingface/transformers/pull/5035\r\n\r\nThe issue of mBERT (weights from Google) still not resolved.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,592
1,598
1,598
NONE
null
# 🐛 Bug

Hello, I am using mBERT from https://github.com/google-research/bert/blob/master/multilingual.md with the run_xnli.py file.

Language: training the model on English (en) and testing it on French (fr).

The problem arises when I download mBERT's weights from https://github.com/google-research/bert/blob/master/multilingual.md and then use https://github.com/huggingface/transformers/blob/master/src/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py to get PyTorch-compatible weights. I then used those weights with the run_xnli.py provided here: https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_xnli.py, with the language set to fr and train_language set to en. The model doesn't learn properly and never gets above 0.40 on fr.

However, if I use the weights provided directly by Hugging Face for bert-base-multilingual-cased (as mentioned here: https://huggingface.co/transformers/v2.3.0/examples.html), it works fine and mBERT achieves SOTA performance. Would it be possible to understand why this might be happening?

The task I am working on is XNLI: training on English (en) and zero-shot testing on French (fr).

Steps to reproduce the behavior:

1. Get the mBERT model from the link provided here: https://github.com/google-research/bert/blob/master/multilingual.md
2. Get the PyTorch-compatible weights using https://github.com/huggingface/transformers/blob/master/src/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py
3. Copy vocab.txt and config.json from the TF weights. Add "model_type": "bert" to the config.
4. Run the example run_xnli.py provided here: https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_xnli.py
5. Default parameters can be used, with model_path set to the PyTorch weights of mBERT.

## Environment Info

- `transformers` version: 2.11
- Python version: 3.7
- PyTorch version (GPU): 1.5.0
- TensorFlow version (GPU): 2.2.0
- Using GPU in script: Yes
- Using a parallel setup across 3 1080ti GPUs
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5019/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5019/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5018
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5018/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5018/comments
https://api.github.com/repos/huggingface/transformers/issues/5018/events
https://github.com/huggingface/transformers/issues/5018
638,989,851
MDU6SXNzdWU2Mzg5ODk4NTE=
5,018
bart-large-xsum config task_specific_params['summarization_params'] wrong
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1845609017, "node_id": "MDU6TGFiZWwxODQ1NjA5MDE3", "url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq", "name": "seq2seq", "color": "fef2c0", "default": false, "description": "" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "set to `{}`" ]
1,592
1,592
1,592
CONTRIBUTOR
null
They are bart-large-cnn-params.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5018/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5018/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5017
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5017/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5017/comments
https://api.github.com/repos/huggingface/transformers/issues/5017/events
https://github.com/huggingface/transformers/issues/5017
638,941,743
MDU6SXNzdWU2Mzg5NDE3NDM=
5,017
Add CRF layer after Transformer model
{ "login": "yuyan-z", "id": 64955334, "node_id": "MDQ6VXNlcjY0OTU1MzM0", "avatar_url": "https://avatars.githubusercontent.com/u/64955334?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuyan-z", "html_url": "https://github.com/yuyan-z", "followers_url": "https://api.github.com/users/yuyan-z/followers", "following_url": "https://api.github.com/users/yuyan-z/following{/other_user}", "gists_url": "https://api.github.com/users/yuyan-z/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuyan-z/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuyan-z/subscriptions", "organizations_url": "https://api.github.com/users/yuyan-z/orgs", "repos_url": "https://api.github.com/users/yuyan-z/repos", "events_url": "https://api.github.com/users/yuyan-z/events{/privacy}", "received_events_url": "https://api.github.com/users/yuyan-z/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "You can directly feed the output of a transformer's hidden states into a CRF implementation like this https://github.com/s14t284/TorchCRF\r\n\r\nHope it helps!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "> I've read a paper titled \"Named Entity Recognition in Chinese Electronic Medical Records Using Transformer-CRF\".\r\n> It takes Transformer's output as CRF's input, as shown in the figure.\r\n> Which function could I use to implement it? model.add() doesn't work.\r\n> ![Screenshot_20200615_173206_cn wps moffice_eng](https://user-images.githubusercontent.com/64955334/84675939-7d782080-af5f-11ea-8d9d-9dc812b2229a.png)\r\n\r\n小哥哥 实现了没", "This repository have showed how to add a CRF layer on transformers to get a better performance on token classification task.\r\nhttps://github.com/shushanxingzhe/transformers_ner", "I don't think that this implementation is good. \r\nFirst, it doesn't take into account the fact that WP get padding index (usually -100) which is not expected in torchcrf and also it go over all the tags, also the one you won't use like the not first WP of a token (token==space separated string)", "Does anyone have a clean implementation of a BERTCRF? Preferably in a Jupyter notebook?" ]
1,592
1,661
1,598
NONE
null
I've read a paper titled "Named Entity Recognition in Chinese Electronic Medical Records Using Transformer-CRF". It takes the Transformer's output as the CRF's input, as shown in the figure. Which function could I use to implement it? model.add() doesn't work.

![Screenshot_20200615_173206_cn wps moffice_eng](https://user-images.githubusercontent.com/64955334/84675939-7d782080-af5f-11ea-8d9d-9dc812b2229a.png)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5017/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5017/timeline
completed
null
null
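The approach suggested in the comments on the issue above, feeding a transformer's per-token scores into a CRF, ultimately reduces to decoding with emission and transition scores. A minimal pure-Python sketch of the Viterbi decoding step (illustrative only; a real CRF layer such as the TorchCRF package linked in the comments works on batched tensors and also handles training):

```python
from typing import List, Sequence

def viterbi_decode(emissions: Sequence[Sequence[float]],
                   transitions: Sequence[Sequence[float]]) -> List[int]:
    """Return the highest-scoring tag sequence.

    emissions: [seq_len][num_tags] per-token scores (e.g. a transformer's logits)
    transitions: [num_tags][num_tags] score of moving from tag i to tag j
    """
    num_tags = len(transitions)
    # score[j] = best score of any path ending in tag j at the current position
    score = list(emissions[0])
    history = []
    for emission in emissions[1:]:
        prev = list(score)
        backptr = []
        for j in range(num_tags):
            # Best previous tag for ending in tag j here.
            best_i = max(range(num_tags), key=lambda i: prev[i] + transitions[i][j])
            score[j] = prev[best_i] + transitions[best_i][j] + emission[j]
            backptr.append(best_i)
        history.append(backptr)
    # Backtrack from the best final tag.
    best_tag = max(range(num_tags), key=lambda j: score[j])
    path = [best_tag]
    for backptr in reversed(history):
        best_tag = backptr[best_tag]
        path.append(best_tag)
    path.reverse()
    return path
```

In the Transformer-CRF setup described in the paper, `emissions` would be the per-token logits from the encoder and `transitions` a learned tag-transition matrix, which is why simply calling something like model.add() is not enough.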
https://api.github.com/repos/huggingface/transformers/issues/5016
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5016/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5016/comments
https://api.github.com/repos/huggingface/transformers/issues/5016/events
https://github.com/huggingface/transformers/issues/5016
638,938,658
MDU6SXNzdWU2Mzg5Mzg2NTg=
5,016
Cached TF files cannot be loaded with `from_pretrained` without internet connection
{ "login": "amaiya", "id": 47191980, "node_id": "MDQ6VXNlcjQ3MTkxOTgw", "avatar_url": "https://avatars.githubusercontent.com/u/47191980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amaiya", "html_url": "https://github.com/amaiya", "followers_url": "https://api.github.com/users/amaiya/followers", "following_url": "https://api.github.com/users/amaiya/following{/other_user}", "gists_url": "https://api.github.com/users/amaiya/gists{/gist_id}", "starred_url": "https://api.github.com/users/amaiya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amaiya/subscriptions", "organizations_url": "https://api.github.com/users/amaiya/orgs", "repos_url": "https://api.github.com/users/amaiya/repos", "events_url": "https://api.github.com/users/amaiya/events{/privacy}", "received_events_url": "https://api.github.com/users/amaiya/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! This issue should have been solved by https://github.com/huggingface/transformers/pull/5116", "Hi @LysandreJik: I just wanted to let you know that this issue is still not resolved as of v3.02. If you ensure the model files (and vocab, etc.) exist in the cache, and then you turn off wifi/internet, model loading does not work in TensorFlow (but does work in PyTorch):\r\n\r\nThis **does not** work:\r\n```python\r\n# wifi off and model files exist in cache\r\nfrom transformers import *\r\nmodel = TFAutoModelForSequenceClassification.from_pretrained('distilbert-base-uncased')\r\n# error is: Cannot load weights for distilbert-base-uncased\r\n```\r\n\r\nThis **does** work:\r\n```python\r\n# wifi off and model files exist in cache\r\nfrom transformers import *\r\nmodel = AutoModelForSequenceClassification.from_pretrained('distilbert-base-uncased')\r\n```\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Thanks, @LysandreJik . I can confirm that PR #6091 resolves this issue (tested by manually applying fix to a local `transformers==3.3.0` installation)." ]
1,592
1,601
1,601
NONE
null
# 🐛 Bug

## Information

For TF models in the latest version of `transformers` (v2.11.0), when the internet is turned off, `from_pretrained` throws an error even when the model has previously been downloaded to the cache:

```python
# does not work (wifi/internet turned off)
model = TFBertModel.from_pretrained('bert-base-uncased', local_files_only=True)
```

The error is:

```
OSError: Can't load weights for 'bert-base-uncased'. Make sure that:
- 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'bert-base-uncased' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.
```

The above error occurs regardless of how `local_files_only` is set. This can be problematic in certain settings (e.g., air-gapped networks, firewalls).

By contrast, for PyTorch models with NO internet, `from_pretrained` works perfectly and correctly loads files from the cache both when `local_files_only=False` and `local_files_only=True`, a useful parameter recently added in PR #2930 by @BramVanroy and merged by @LysandreJik:

```python
# works and is fast (wifi/internet turned off)
model = BertModel.from_pretrained('bert-base-uncased', local_files_only=True)

# works but is slow due to outgoing connection attempts (wifi/internet turned off)
model = BertModel.from_pretrained('bert-base-uncased', local_files_only=False)
```

## Environment Info

- `transformers` version: 2.11.0
- Platform: Linux-4.15.0-88-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5016/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5016/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5015
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5015/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5015/comments
https://api.github.com/repos/huggingface/transformers/issues/5015/events
https://github.com/huggingface/transformers/pull/5015
638,920,444
MDExOlB1bGxSZXF1ZXN0NDM0NTk1MTYz
5,015
Make DataCollator a callable
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5015?src=pr&el=h1) Report\n> Merging [#5015](https://codecov.io/gh/huggingface/transformers/pull/5015?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/66bcfbb130a3a60d873597fdca05a8d539ef3401&el=desc) will **increase** coverage by `0.06%`.\n> The diff coverage is `73.91%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5015/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5015?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5015 +/- ##\n==========================================\n+ Coverage 77.20% 77.27% +0.06% \n==========================================\n Files 128 128 \n Lines 21854 21845 -9 \n==========================================\n+ Hits 16873 16880 +7 \n+ Misses 4981 4965 -16 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5015?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `89.65% <70.00%> (+0.42%)` | :arrow_up: |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <100.00%> (ø)` | |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.43% <100.00%> (-0.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5015?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5015?src=pr&el=footer). Last update [66bcfbb...88370ec](https://codecov.io/gh/huggingface/transformers/pull/5015?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "LGTM – we should however document this change in the next release notes, as it's breaking for end-users who would have implemented a custom data_collator" ]
1,592
1,592
1,592
COLLABORATOR
null
As discussed with @julien-c, change the `DataCollator` to be a callable (and the trainer now can take any function as data collator). I removed the abstract class and made a type alias to keep the type annotations instead.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5015/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5015/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5015", "html_url": "https://github.com/huggingface/transformers/pull/5015", "diff_url": "https://github.com/huggingface/transformers/pull/5015.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5015.patch", "merged_at": 1592236714000 }
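The pattern the PR above describes, replacing an abstract base class with a type alias for a callable, can be illustrated in isolation (toy names, not the actual trainer code):

```python
from typing import Any, Callable, Dict, List

# Type alias standing in for the old abstract class: any function that turns
# a list of examples into a batch dict qualifies as a data collator.
DataCollator = Callable[[List[Any]], Dict[str, Any]]

def default_collate(examples: List[Any]) -> Dict[str, Any]:
    # Toy collator: just gather the examples (a real one would stack tensors).
    return {"batch": list(examples)}

def run_trainer_step(collator: DataCollator, examples: List[Any]) -> Dict[str, Any]:
    # A trainer parameterized this way accepts any plain function, lambda,
    # or callable object; no subclassing required.
    return collator(examples)
```

This keeps the type annotations while letting users pass ordinary functions, which is also why the change is breaking for anyone who subclassed the old abstract class.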
https://api.github.com/repos/huggingface/transformers/issues/5014
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5014/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5014/comments
https://api.github.com/repos/huggingface/transformers/issues/5014/events
https://github.com/huggingface/transformers/pull/5014
638,910,201
MDExOlB1bGxSZXF1ZXN0NDM0NTg2NzU5
5,014
Add bart-base
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5014?src=pr&el=h1) Report\n> Merging [#5014](https://codecov.io/gh/huggingface/transformers/pull/5014?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/66bcfbb130a3a60d873597fdca05a8d539ef3401&el=desc) will **increase** coverage by `0.05%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5014/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5014?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5014 +/- ##\n==========================================\n+ Coverage 77.20% 77.26% +0.05% \n==========================================\n Files 128 128 \n Lines 21854 21854 \n==========================================\n+ Hits 16873 16885 +12 \n+ Misses 4981 4969 -12 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5014?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.12% <ø> (ø)` | |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.25% <0.00%> (-0.24%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at 
Codecov](https://codecov.io/gh/huggingface/transformers/pull/5014?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5014?src=pr&el=footer). Last update [66bcfbb...0dd162e](https://codecov.io/gh/huggingface/transformers/pull/5014?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "add a model card too if needed/useful", "Will add model cards for this and bart-large at some point." ]
1,592
1,592
1,592
CONTRIBUTOR
null
Also adds two `@slow` integration tests for fill-mask tasks. I made some guesses about the correct conversion, which will hopefully be confirmed by the authors in this [issue](https://github.com/pytorch/fairseq/issues/2242).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5014/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5014/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5014", "html_url": "https://github.com/huggingface/transformers/pull/5014", "diff_url": "https://github.com/huggingface/transformers/pull/5014.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5014.patch", "merged_at": 1592242167000 }
https://api.github.com/repos/huggingface/transformers/issues/5013
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5013/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5013/comments
https://api.github.com/repos/huggingface/transformers/issues/5013/events
https://github.com/huggingface/transformers/pull/5013
638,906,887
MDExOlB1bGxSZXF1ZXN0NDM0NTgzOTg3
5,013
[Modelcard] xlm-roberta-squadv2
{ "login": "flozi00", "id": 47894090, "node_id": "MDQ6VXNlcjQ3ODk0MDkw", "avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/flozi00", "html_url": "https://github.com/flozi00", "followers_url": "https://api.github.com/users/flozi00/followers", "following_url": "https://api.github.com/users/flozi00/following{/other_user}", "gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}", "starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flozi00/subscriptions", "organizations_url": "https://api.github.com/users/flozi00/orgs", "repos_url": "https://api.github.com/users/flozi00/repos", "events_url": "https://api.github.com/users/flozi00/events{/privacy}", "received_events_url": "https://api.github.com/users/flozi00/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5013?src=pr&el=h1) Report\n> Merging [#5013](https://codecov.io/gh/huggingface/transformers/pull/5013?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f7c93b3ceec341c3c794e9fc18939bc5d50b0fc2&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5013/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5013?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5013 +/- ##\n=======================================\n Coverage 77.26% 77.26% \n=======================================\n Files 128 128 \n Lines 21856 21856 \n=======================================\n+ Hits 16887 16888 +1 \n+ Misses 4969 4968 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5013?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5013/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5013?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5013?src=pr&el=footer). Last update [f7c93b3...ca606b8](https://codecov.io/gh/huggingface/transformers/pull/5013?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,592
1,592
1,592
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5013/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5013/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5013", "html_url": "https://github.com/huggingface/transformers/pull/5013", "diff_url": "https://github.com/huggingface/transformers/pull/5013.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5013.patch", "merged_at": 1592865600000 }
https://api.github.com/repos/huggingface/transformers/issues/5012
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5012/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5012/comments
https://api.github.com/repos/huggingface/transformers/issues/5012/events
https://github.com/huggingface/transformers/pull/5012
638,889,746
MDExOlB1bGxSZXF1ZXN0NDM0NTcwMTMy
5,012
Raise errors that are not raised now
{ "login": "harupy", "id": 17039389, "node_id": "MDQ6VXNlcjE3MDM5Mzg5", "avatar_url": "https://avatars.githubusercontent.com/u/17039389?v=4", "gravatar_id": "", "url": "https://api.github.com/users/harupy", "html_url": "https://github.com/harupy", "followers_url": "https://api.github.com/users/harupy/followers", "following_url": "https://api.github.com/users/harupy/following{/other_user}", "gists_url": "https://api.github.com/users/harupy/gists{/gist_id}", "starred_url": "https://api.github.com/users/harupy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/harupy/subscriptions", "organizations_url": "https://api.github.com/users/harupy/orgs", "repos_url": "https://api.github.com/users/harupy/repos", "events_url": "https://api.github.com/users/harupy/events{/privacy}", "received_events_url": "https://api.github.com/users/harupy/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5012?src=pr&el=h1) Report\n> Merging [#5012](https://codecov.io/gh/huggingface/transformers/pull/5012?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f7c93b3ceec341c3c794e9fc18939bc5d50b0fc2&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5012/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5012?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5012 +/- ##\n=======================================\n Coverage 77.26% 77.26% \n=======================================\n Files 128 128 \n Lines 21856 21856 \n=======================================\n+ Hits 16887 16888 +1 \n+ Misses 4969 4968 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5012?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5012/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5012/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.57% <0.00%> (+0.31%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5012?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5012?src=pr&el=footer). Last update [f7c93b3...d643ec8](https://codecov.io/gh/huggingface/transformers/pull/5012?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Pinging @VictorSanh here", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I'll close this." ]
1,592
1,598
1,598
CONTRIBUTOR
null
Raise errors that are not raised now.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5012/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5012/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5012", "html_url": "https://github.com/huggingface/transformers/pull/5012", "diff_url": "https://github.com/huggingface/transformers/pull/5012.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5012.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/5011
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5011/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5011/comments
https://api.github.com/repos/huggingface/transformers/issues/5011/events
https://github.com/huggingface/transformers/pull/5011
638,870,524
MDExOlB1bGxSZXF1ZXN0NDM0NTU0MzA3
5,011
[Modelcard] bart-squadv2
{ "login": "flozi00", "id": 47894090, "node_id": "MDQ6VXNlcjQ3ODk0MDkw", "avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/flozi00", "html_url": "https://github.com/flozi00", "followers_url": "https://api.github.com/users/flozi00/followers", "following_url": "https://api.github.com/users/flozi00/following{/other_user}", "gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}", "starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flozi00/subscriptions", "organizations_url": "https://api.github.com/users/flozi00/orgs", "repos_url": "https://api.github.com/users/flozi00/repos", "events_url": "https://api.github.com/users/flozi00/events{/privacy}", "received_events_url": "https://api.github.com/users/flozi00/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5011?src=pr&el=h1) Report\n> Merging [#5011](https://codecov.io/gh/huggingface/transformers/pull/5011?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/66bcfbb130a3a60d873597fdca05a8d539ef3401&el=desc) will **increase** coverage by `0.05%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5011/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5011?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5011 +/- ##\n==========================================\n+ Coverage 77.20% 77.26% +0.05% \n==========================================\n Files 128 128 \n Lines 21854 21854 \n==========================================\n+ Hits 16873 16885 +12 \n+ Misses 4981 4969 -12 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5011?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5011?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5011?src=pr&el=footer). Last update [66bcfbb...19b3655](https://codecov.io/gh/huggingface/transformers/pull/5011?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Hi @flozi00 , this is great ! If possible can you link my original training colab so people can train their own models easily if needed. You can find it [here](https://colab.research.google.com/drive/1I5cK1M_0dLaf5xoewh6swcm5nAInfwHy?usp=sharing) \r\nThanks !" ]
1,592
1,592
1,592
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5011/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5011/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5011", "html_url": "https://github.com/huggingface/transformers/pull/5011", "diff_url": "https://github.com/huggingface/transformers/pull/5011.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5011.patch", "merged_at": 1592865620000 }
https://api.github.com/repos/huggingface/transformers/issues/5010
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5010/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5010/comments
https://api.github.com/repos/huggingface/transformers/issues/5010/events
https://github.com/huggingface/transformers/issues/5010
638,806,360
MDU6SXNzdWU2Mzg4MDYzNjA=
5,010
How can I print output attention on test data?
{ "login": "laetokang", "id": 49485939, "node_id": "MDQ6VXNlcjQ5NDg1OTM5", "avatar_url": "https://avatars.githubusercontent.com/u/49485939?v=4", "gravatar_id": "", "url": "https://api.github.com/users/laetokang", "html_url": "https://github.com/laetokang", "followers_url": "https://api.github.com/users/laetokang/followers", "following_url": "https://api.github.com/users/laetokang/following{/other_user}", "gists_url": "https://api.github.com/users/laetokang/gists{/gist_id}", "starred_url": "https://api.github.com/users/laetokang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/laetokang/subscriptions", "organizations_url": "https://api.github.com/users/laetokang/orgs", "repos_url": "https://api.github.com/users/laetokang/repos", "events_url": "https://api.github.com/users/laetokang/events{/privacy}", "received_events_url": "https://api.github.com/users/laetokang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You should do it in the forward function of the model you are using, so probably:\r\n```python\r\nmodel = BartForConditionalGeneration.from_pretrained(\"bart-large\")\r\nmodel(input_ids, output_attentions=True)\r\n```" ]
1,592
1,592
1,592
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiasts can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> I fine-tuned a BART model to carry out a summarization task. I want to print the output attention scores on test data, but I don't know for which layer I should set "output_attentions" to True. <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5010/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5010/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5009
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5009/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5009/comments
https://api.github.com/repos/huggingface/transformers/issues/5009/events
https://github.com/huggingface/transformers/pull/5009
638,798,443
MDExOlB1bGxSZXF1ZXN0NDM0NDk0ODg1
5,009
Create README.md for finetuned BERT model
{ "login": "fran-martinez", "id": 7083298, "node_id": "MDQ6VXNlcjcwODMyOTg=", "avatar_url": "https://avatars.githubusercontent.com/u/7083298?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fran-martinez", "html_url": "https://github.com/fran-martinez", "followers_url": "https://api.github.com/users/fran-martinez/followers", "following_url": "https://api.github.com/users/fran-martinez/following{/other_user}", "gists_url": "https://api.github.com/users/fran-martinez/gists{/gist_id}", "starred_url": "https://api.github.com/users/fran-martinez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fran-martinez/subscriptions", "organizations_url": "https://api.github.com/users/fran-martinez/orgs", "repos_url": "https://api.github.com/users/fran-martinez/repos", "events_url": "https://api.github.com/users/fran-martinez/events{/privacy}", "received_events_url": "https://api.github.com/users/fran-martinez/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5009?src=pr&el=h1) Report\n> Merging [#5009](https://codecov.io/gh/huggingface/transformers/pull/5009?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ebab096e864a619717a497089d864d10e21bc536&el=desc) will **increase** coverage by `0.06%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5009/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5009?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5009 +/- ##\n==========================================\n+ Coverage 77.26% 77.33% +0.06% \n==========================================\n Files 128 128 \n Lines 21854 21854 \n==========================================\n+ Hits 16886 16900 +14 \n+ Misses 4968 4954 -14 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5009?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5009/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5009/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.25% <0.00%> (-0.24%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5009/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5009/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `33.43% <0.00%> (+4.77%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at 
Codecov](https://codecov.io/gh/huggingface/transformers/pull/5009?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5009?src=pr&el=footer). Last update [ebab096...97c4e50](https://codecov.io/gh/huggingface/transformers/pull/5009?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,592
1,592
1,592
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5009/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5009/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5009", "html_url": "https://github.com/huggingface/transformers/pull/5009", "diff_url": "https://github.com/huggingface/transformers/pull/5009.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5009.patch", "merged_at": 1592865570000 }
https://api.github.com/repos/huggingface/transformers/issues/5008
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5008/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5008/comments
https://api.github.com/repos/huggingface/transformers/issues/5008/events
https://github.com/huggingface/transformers/pull/5008
638,773,781
MDExOlB1bGxSZXF1ZXN0NDM0NDc0NDAy
5,008
Add model card for StackOBERTflow-comments-small
{ "login": "furunkel", "id": 1113873, "node_id": "MDQ6VXNlcjExMTM4NzM=", "avatar_url": "https://avatars.githubusercontent.com/u/1113873?v=4", "gravatar_id": "", "url": "https://api.github.com/users/furunkel", "html_url": "https://github.com/furunkel", "followers_url": "https://api.github.com/users/furunkel/followers", "following_url": "https://api.github.com/users/furunkel/following{/other_user}", "gists_url": "https://api.github.com/users/furunkel/gists{/gist_id}", "starred_url": "https://api.github.com/users/furunkel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/furunkel/subscriptions", "organizations_url": "https://api.github.com/users/furunkel/orgs", "repos_url": "https://api.github.com/users/furunkel/repos", "events_url": "https://api.github.com/users/furunkel/events{/privacy}", "received_events_url": "https://api.github.com/users/furunkel/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5008?src=pr&el=h1) Report\n> Merging [#5008](https://codecov.io/gh/huggingface/transformers/pull/5008?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ebab096e864a619717a497089d864d10e21bc536&el=desc) will **increase** coverage by `0.39%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5008/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5008?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5008 +/- ##\n==========================================\n+ Coverage 77.26% 77.66% +0.39% \n==========================================\n Files 128 128 \n Lines 21854 21854 \n==========================================\n+ Hits 16886 16973 +87 \n+ Misses 4968 4881 -87 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5008?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5008/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5008/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5008?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5008?src=pr&el=footer). Last update [ebab096...bd9e483](https://codecov.io/gh/huggingface/transformers/pull/5008?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,592
1,592
1,592
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5008/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5008/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5008", "html_url": "https://github.com/huggingface/transformers/pull/5008", "diff_url": "https://github.com/huggingface/transformers/pull/5008.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5008.patch", "merged_at": 1592865563000 }
https://api.github.com/repos/huggingface/transformers/issues/5007
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5007/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5007/comments
https://api.github.com/repos/huggingface/transformers/issues/5007/events
https://github.com/huggingface/transformers/pull/5007
638,771,928
MDExOlB1bGxSZXF1ZXN0NDM0NDcyODQy
5,007
Create README.md
{ "login": "fran-martinez", "id": 7083298, "node_id": "MDQ6VXNlcjcwODMyOTg=", "avatar_url": "https://avatars.githubusercontent.com/u/7083298?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fran-martinez", "html_url": "https://github.com/fran-martinez", "followers_url": "https://api.github.com/users/fran-martinez/followers", "following_url": "https://api.github.com/users/fran-martinez/following{/other_user}", "gists_url": "https://api.github.com/users/fran-martinez/gists{/gist_id}", "starred_url": "https://api.github.com/users/fran-martinez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fran-martinez/subscriptions", "organizations_url": "https://api.github.com/users/fran-martinez/orgs", "repos_url": "https://api.github.com/users/fran-martinez/repos", "events_url": "https://api.github.com/users/fran-martinez/events{/privacy}", "received_events_url": "https://api.github.com/users/fran-martinez/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,592
1,592
1,592
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5007/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5007/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5007", "html_url": "https://github.com/huggingface/transformers/pull/5007", "diff_url": "https://github.com/huggingface/transformers/pull/5007.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5007.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/5006
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5006/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5006/comments
https://api.github.com/repos/huggingface/transformers/issues/5006/events
https://github.com/huggingface/transformers/pull/5006
638,746,169
MDExOlB1bGxSZXF1ZXN0NDM0NDUwNzc2
5,006
[Ignore] Added data collator for XLNet and added related calls
{ "login": "shngt", "id": 20009551, "node_id": "MDQ6VXNlcjIwMDA5NTUx", "avatar_url": "https://avatars.githubusercontent.com/u/20009551?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shngt", "html_url": "https://github.com/shngt", "followers_url": "https://api.github.com/users/shngt/followers", "following_url": "https://api.github.com/users/shngt/following{/other_user}", "gists_url": "https://api.github.com/users/shngt/gists{/gist_id}", "starred_url": "https://api.github.com/users/shngt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shngt/subscriptions", "organizations_url": "https://api.github.com/users/shngt/orgs", "repos_url": "https://api.github.com/users/shngt/repos", "events_url": "https://api.github.com/users/shngt/events{/privacy}", "received_events_url": "https://api.github.com/users/shngt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,592
1,593
1,592
CONTRIBUTOR
null
Added modified data collator for XLNet and added related calls in `examples/language-modeling/run_language_modeling.py`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5006/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5006/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5006", "html_url": "https://github.com/huggingface/transformers/pull/5006", "diff_url": "https://github.com/huggingface/transformers/pull/5006.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5006.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/5005
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5005/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5005/comments
https://api.github.com/repos/huggingface/transformers/issues/5005/events
https://github.com/huggingface/transformers/pull/5005
638,723,686
MDExOlB1bGxSZXF1ZXN0NDM0NDMzMDA3
5,005
Increase pipeline support for ONNX export.
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5005?src=pr&el=h1) Report\n> Merging [#5005](https://codecov.io/gh/huggingface/transformers/pull/5005?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9931f817b75ecb2c8bb08b6e9d4cbec4b0933935&el=desc) will **increase** coverage by `0.37%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5005/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5005?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5005 +/- ##\n==========================================\n+ Coverage 76.89% 77.26% +0.37% \n==========================================\n Files 128 128 \n Lines 21854 21854 \n==========================================\n+ Hits 16804 16886 +82 \n+ Misses 5050 4968 -82 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5005?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5005/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.79% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5005/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `75.85% <0.00%> (+19.75%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5005?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5005?src=pr&el=footer). Last update [9931f81...9f4da89](https://codecov.io/gh/huggingface/transformers/pull/5005?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,592
1,592
1,592
MEMBER
null
Supports all kinds of models for all the pipelines we have, except `summarization`, because `triu` is not supported by ONNX.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5005/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5005/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5005", "html_url": "https://github.com/huggingface/transformers/pull/5005", "diff_url": "https://github.com/huggingface/transformers/pull/5005.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5005.patch", "merged_at": 1592241239000 }
https://api.github.com/repos/huggingface/transformers/issues/5004
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5004/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5004/comments
https://api.github.com/repos/huggingface/transformers/issues/5004/events
https://github.com/huggingface/transformers/issues/5004
638,714,422
MDU6SXNzdWU2Mzg3MTQ0MjI=
5,004
[🚀 Feature request] upload bart-base checkpoint
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "Thanks for letting me know @patil-suraj !\r\n\r\nThis is almost done, blocked on 1 minor [issue](https://github.com/pytorch/fairseq/issues/2242)\r\n\r\nIf I don't get an answer there by Wednesday I will assume that the tokenizer is the same as bart.large and move forward.", "Thanks @sshleifer for immediate response and PR :)" ]
1,592
1,592
1,592
MEMBER
null
# 🚀 Feature request `bart-base` checkpoint is now available in fairseq repo https://github.com/pytorch/fairseq/tree/master/examples/bart#pre-trained-models so now it can be made available here `facebook/bart-base` @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5004/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5004/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5003
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5003/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5003/comments
https://api.github.com/repos/huggingface/transformers/issues/5003/events
https://github.com/huggingface/transformers/issues/5003
638,706,719
MDU6SXNzdWU2Mzg3MDY3MTk=
5,003
Cannot import transformers due to issue with signal.py (SIGKILL)
{ "login": "scottleith", "id": 9306342, "node_id": "MDQ6VXNlcjkzMDYzNDI=", "avatar_url": "https://avatars.githubusercontent.com/u/9306342?v=4", "gravatar_id": "", "url": "https://api.github.com/users/scottleith", "html_url": "https://github.com/scottleith", "followers_url": "https://api.github.com/users/scottleith/followers", "following_url": "https://api.github.com/users/scottleith/following{/other_user}", "gists_url": "https://api.github.com/users/scottleith/gists{/gist_id}", "starred_url": "https://api.github.com/users/scottleith/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/scottleith/subscriptions", "organizations_url": "https://api.github.com/users/scottleith/orgs", "repos_url": "https://api.github.com/users/scottleith/repos", "events_url": "https://api.github.com/users/scottleith/events{/privacy}", "received_events_url": "https://api.github.com/users/scottleith/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Just had to reinstall. I installed pytorch AFTER transformers, and I cloned the transformers repository. Just uninstalled and reinstalled with pip install transformers and the issue went away." ]
1,592
1,592
1,592
NONE
null
# 🐛 Bug Can't import transformers, get a problem with signal.py. I see that an issue was recently fixed... should I install a nightly build, or...? >>> import transformers 2020-06-15 05:40:54.098560: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\15194\anaconda3\envs\ai_env\lib\site-packages\transformers\__init__.py", line 371, in <module> from .benchmark import PyTorchBenchmark, PyTorchBenchmarkArguments File "C:\Users\15194\anaconda3\envs\ai_env\lib\site-packages\transformers\benchmark\__init__.py", line 10, in <module> from .benchmark import PyTorchBenchmark File "C:\Users\15194\anaconda3\envs\ai_env\lib\site-packages\transformers\benchmark\benchmark.py", line 32, in <module> from .benchmark_utils import Benchmark, Memory, measure_peak_memory_cpu, start_memory_tracing, stop_memory_tracing File "C:\Users\15194\anaconda3\envs\ai_env\lib\site-packages\transformers\benchmark\benchmark_utils.py", line 19, in <module> from signal import SIGKILL **ImportError: cannot import name 'SIGKILL' from 'signal' (C:\Users\15194\anaconda3\envs\ai_env\lib\signal.py)** ## Expected behavior The package to import. ## Environment info - `transformers` version: 2.11 - Platform: Windows 10 - Python version: 3.8.3 - PyTorch version (GPU?): 1.5.0, GPU = yes ( torch.cuda.is_available == True). - Tensorflow version (GPU?): 2.2.0; GPU = yes. - Using GPU in script?: No. - Using distributed or parallel set-up in script?: No.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5003/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5003/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5002
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5002/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5002/comments
https://api.github.com/repos/huggingface/transformers/issues/5002/events
https://github.com/huggingface/transformers/issues/5002
638,669,614
MDU6SXNzdWU2Mzg2Njk2MTQ=
5,002
Illegal memory access (cudaErrorIllegalAddress)
{ "login": "prajjwal1", "id": 24690051, "node_id": "MDQ6VXNlcjI0NjkwMDUx", "avatar_url": "https://avatars.githubusercontent.com/u/24690051?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prajjwal1", "html_url": "https://github.com/prajjwal1", "followers_url": "https://api.github.com/users/prajjwal1/followers", "following_url": "https://api.github.com/users/prajjwal1/following{/other_user}", "gists_url": "https://api.github.com/users/prajjwal1/gists{/gist_id}", "starred_url": "https://api.github.com/users/prajjwal1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prajjwal1/subscriptions", "organizations_url": "https://api.github.com/users/prajjwal1/orgs", "repos_url": "https://api.github.com/users/prajjwal1/repos", "events_url": "https://api.github.com/users/prajjwal1/events{/privacy}", "received_events_url": "https://api.github.com/users/prajjwal1/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "Reducing the batch size further doesn't raise this error. But a lot of RAM is left empty. If this is an issue, then RAM demand exceeded error should be raised.", "I've seen this error, and think it happens right before `OutOfMemory`.\r\nI agree the traceback should be different.\r\nMarking wontfix for now since this is a torch/apex issue, as you suggest, not a transformers issue. ", "I don't think it's an Apex issue also because I ran my code without fp16 integration earlier. Mostly a pytorch issue. I am not sure how RAM usage exceeds in such a short time. Initially, 10 Gigs of RAM is left and suddenly this error pops up. Halving the batch size helped but there are no signs of memory leakage. Not really sure what's happening. ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,592
1,598
1,598
CONTRIBUTOR
null
# 🐛 Bug ## Information This bug/problem has been discussed on [Pytorch](https://github.com/pytorch/pytorch/issues/21819)/[Apex](https://github.com/NVIDIA/apex/issues/319) and [here](https://github.com/huggingface/transformers/issues/460) (bot marked it as stale) also. I'm using Albert on GLUE (although this issue is model/dataset agnostic). I've made slight modifications in my train loop (as compared to `train()` in `Trainer`). The main one which throws this error is when I compute the gradients: ``` grad = torch.autograd.grad(loss, model.parameters(), allow_unused=True) ``` where loss is simply `model(**inputs)[0]` I'm using Pytorch 1.5.0+cu101, transformers 2.11 on one GPU, no multi-GPU, although the instance has 2 (restricted via CUDA_VISIBLE_DEVICES=0). I tried with `torch.cuda.set_device()` also. Can you suggest a workaround?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5002/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5002/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5001
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5001/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5001/comments
https://api.github.com/repos/huggingface/transformers/issues/5001/events
https://github.com/huggingface/transformers/issues/5001
638,666,056
MDU6SXNzdWU2Mzg2NjYwNTY=
5,001
Segmentation fault when loading pretrained file
{ "login": "davidz123", "id": 27669457, "node_id": "MDQ6VXNlcjI3NjY5NDU3", "avatar_url": "https://avatars.githubusercontent.com/u/27669457?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidz123", "html_url": "https://github.com/davidz123", "followers_url": "https://api.github.com/users/davidz123/followers", "following_url": "https://api.github.com/users/davidz123/following{/other_user}", "gists_url": "https://api.github.com/users/davidz123/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidz123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidz123/subscriptions", "organizations_url": "https://api.github.com/users/davidz123/orgs", "repos_url": "https://api.github.com/users/davidz123/repos", "events_url": "https://api.github.com/users/davidz123/events{/privacy}", "received_events_url": "https://api.github.com/users/davidz123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I am having the same problem - I had my code working about a week ago. Then, I installed the library (`pip install transformers`) on a new machine and now it crashes when I try to load any pre-trained model (e.g `BertModel.from_pretrained('bert-base-uncased')`).\r\nI tried downgrading to 2.9.1 and 2.10 and the problem persisted.\r\n\r\nPyTorch version: 1.4.0\r\nGPU type: 'Tesla V100-SXM2-16GB'", "This is the full debug log output:\r\n```\r\nDEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): s3.amazonaws.com:443\r\nDEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 \"HEAD /models.huggingface.co/bert/bert-base-uncased-config.json HTTP/1.1\" 200 0\r\nDEBUG:filelock:Attempting to acquire lock 140382636847512 on /home/ec2-user/.cache/torch/transformers/4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.7156163d5fdc189c3016baca0775ffce230789d7fa2a42ef516483e4ca884517.lock\r\nINFO:filelock:Lock 140382636847512 acquired on /home/ec2-user/.cache/torch/transformers/4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.7156163d5fdc189c3016baca0775ffce230789d7fa2a42ef516483e4ca884517.lock\r\nINFO:transformers.file_utils:https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json not found in cache or force_download set to True, downloading to /home/ec2-user/.cache/torch/transformers/tmpppid3hrz\r\nDEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): s3.amazonaws.com:443\r\nDEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 \"GET /models.huggingface.co/bert/bert-base-uncased-config.json HTTP/1.1\" 200 433\r\nHBox(children=(FloatProgress(value=0.0, description='Downloading', max=433.0, style=ProgressStyle(description_…\r\nINFO:transformers.file_utils:storing https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json in cache at 
/home/ec2-user/.cache/torch/transformers/4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.7156163d5fdc189c3016baca0775ffce230789d7fa2a42ef516483e4ca884517\r\nINFO:transformers.file_utils:creating metadata file for /home/ec2-user/.cache/torch/transformers/4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.7156163d5fdc189c3016baca0775ffce230789d7fa2a42ef516483e4ca884517\r\nDEBUG:filelock:Attempting to release lock 140382636847512 on /home/ec2-user/.cache/torch/transformers/4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.7156163d5fdc189c3016baca0775ffce230789d7fa2a42ef516483e4ca884517.lock\r\nINFO:filelock:Lock 140382636847512 released on /home/ec2-user/.cache/torch/transformers/4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.7156163d5fdc189c3016baca0775ffce230789d7fa2a42ef516483e4ca884517.lock\r\nINFO:transformers.configuration_utils:loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json from cache at /home/ec2-user/.cache/torch/transformers/4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.7156163d5fdc189c3016baca0775ffce230789d7fa2a42ef516483e4ca884517\r\nINFO:transformers.configuration_utils:Model config BertConfig {\r\n \"architectures\": [\r\n \"BertForMaskedLM\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 512,\r\n \"model_type\": \"bert\",\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 12,\r\n \"pad_token_id\": 0,\r\n \"type_vocab_size\": 2,\r\n \"vocab_size\": 30522\r\n}\r\n\r\nDEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): cdn.huggingface.co:443\r\nDEBUG:urllib3.connectionpool:https://cdn.huggingface.co:443 \"HEAD /bert-base-uncased-pytorch_model.bin HTTP/1.1\" 200 
0\r\nDEBUG:filelock:Attempting to acquire lock 140384545811816 on /home/ec2-user/.cache/torch/transformers/f2ee78bdd635b758cc0a12352586868bef80e47401abe4c4fcc3832421e7338b.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157.lock\r\nINFO:filelock:Lock 140384545811816 acquired on /home/ec2-user/.cache/torch/transformers/f2ee78bdd635b758cc0a12352586868bef80e47401abe4c4fcc3832421e7338b.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157.lock\r\nINFO:transformers.file_utils:https://cdn.huggingface.co/bert-base-uncased-pytorch_model.bin not found in cache or force_download set to True, downloading to /home/ec2-user/.cache/torch/transformers/tmp9qzw5qor\r\nDEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): cdn.huggingface.co:443\r\n\r\nDEBUG:urllib3.connectionpool:https://cdn.huggingface.co:443 \"GET /bert-base-uncased-pytorch_model.bin HTTP/1.1\" 200 440473133\r\nHBox(children=(FloatProgress(value=0.0, description='Downloading', max=440473133.0, style=ProgressStyle(descri…\r\nINFO:transformers.file_utils:storing https://cdn.huggingface.co/bert-base-uncased-pytorch_model.bin in cache at /home/ec2-user/.cache/torch/transformers/f2ee78bdd635b758cc0a12352586868bef80e47401abe4c4fcc3832421e7338b.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157\r\nINFO:transformers.file_utils:creating metadata file for /home/ec2-user/.cache/torch/transformers/f2ee78bdd635b758cc0a12352586868bef80e47401abe4c4fcc3832421e7338b.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157\r\nDEBUG:filelock:Attempting to release lock 140384545811816 on /home/ec2-user/.cache/torch/transformers/f2ee78bdd635b758cc0a12352586868bef80e47401abe4c4fcc3832421e7338b.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157.lock\r\nINFO:filelock:Lock 140384545811816 released on 
/home/ec2-user/.cache/torch/transformers/f2ee78bdd635b758cc0a12352586868bef80e47401abe4c4fcc3832421e7338b.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157.lock\r\n```", "> I am having the same problem - I had my code working about a week ago. Then, I installed the library (`pip install transformers`) on a new machine and now it crashes when I try to load any pre-trained model (e.g `BertModel.from_pretrained('bert-base-uncased')`).\r\n> I tried downgrading to 2.9.1 and 2.10 and the problem persisted.\r\n> \r\n> PyTorch version: 1.4.0\r\n> GPU type: 'Tesla V100-SXM2-16GB'\r\n\r\nThanks for your reply, How did you make your code work before?", "So, I just realized that today I was using a different version of PyTorch (I am working with Amazon's SageMaker and had to spin up a new instance).\r\nI upgraded my PyTorch to 10.5 and cudatools to 10.2 (`conda install pytorch torchvision cudatoolkit=10.2 -c pytorch`) and it just started working again..hope that helps", "@akornilo There is something wrong with my pytorch,turning pytorch version to 1.5+cuda9.2 makes it works. Thx for your advice.\r\n\r\n", "Glad you could resolve your issue! Feel free to reopen if you see the same issue down the road.", "> I am having the same problem - I had my code working about a week ago. Then, I installed the library (`pip install transformers`) on a new machine and now it crashes when I try to load any pre-trained model (e.g `BertModel.from_pretrained('bert-base-uncased')`).\r\n> I tried downgrading to 2.9.1 and 2.10 and the problem persisted.\r\n> \r\n> PyTorch version: 1.4.0\r\n> GPU type: 'Tesla V100-SXM2-16GB'\r\n\r\nDowngrade sentencepiece to 0.1.91. This worked for me after being stuck at the same problem as yours\r\n" ]
1,592
1,603
1,592
NONE
null
When loading the model weights file using `model.from_pretrained`, a segmentation fault error occurs.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5001/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5001/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5000
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5000/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5000/comments
https://api.github.com/repos/huggingface/transformers/issues/5000/events
https://github.com/huggingface/transformers/issues/5000
638,649,389
MDU6SXNzdWU2Mzg2NDkzODk=
5,000
Accessing scores for the entire vocabulary in GPT2
{ "login": "shampp", "id": 55344772, "node_id": "MDQ6VXNlcjU1MzQ0Nzcy", "avatar_url": "https://avatars.githubusercontent.com/u/55344772?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shampp", "html_url": "https://github.com/shampp", "followers_url": "https://api.github.com/users/shampp/followers", "following_url": "https://api.github.com/users/shampp/following{/other_user}", "gists_url": "https://api.github.com/users/shampp/gists{/gist_id}", "starred_url": "https://api.github.com/users/shampp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shampp/subscriptions", "organizations_url": "https://api.github.com/users/shampp/orgs", "repos_url": "https://api.github.com/users/shampp/repos", "events_url": "https://api.github.com/users/shampp/events{/privacy}", "received_events_url": "https://api.github.com/users/shampp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You would need to use `GPT2LMHeadModel` for that, and use a softmax layer to have scores. Here's an example:\r\n\r\n```py\r\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer\r\nimport torch\r\n\r\nmodel = GPT2LMHeadModel.from_pretrained(\"gpt2\")\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\n\r\ninputs = tokenizer.encode(\"This is just\", return_tensors=\"pt\")\r\noutput = model(inputs)\r\n\r\nmodel_output = output[0]\r\nlast_token_prediction = model_output[:, -1]\r\nlast_token_softmax = torch.softmax(last_token_prediction, dim=-1).squeeze()\r\n\r\nn = 10\r\ntop_n_values = last_token_softmax.topk(n)\r\n\r\nfor index, value in zip(top_n_values.indices, top_n_values.values):\r\n print(\"Score: \", value.tolist())\r\n print(\"This is just\" + tokenizer.decode(index.tolist()))\r\n```" ]
1,592
1,592
1,592
NONE
null
I want to access the prediction scores from the GPT2 model. Following the example given in the docstring, I am using the code ``` output = model(input_ids,labels=input_ids) logit = output[1][0][-1,:].detach().numpy() ``` Is it correct? I am expecting the logit variable to be the same size as the vocabulary, with corresponding scores.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5000/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5000/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4999
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4999/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4999/comments
https://api.github.com/repos/huggingface/transformers/issues/4999/events
https://github.com/huggingface/transformers/pull/4999
638,630,251
MDExOlB1bGxSZXF1ZXN0NDM0MzU4MjIx
4,999
Improve ONNX logging
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4999?src=pr&el=h1) Report\n> Merging [#4999](https://codecov.io/gh/huggingface/transformers/pull/4999?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9931f817b75ecb2c8bb08b6e9d4cbec4b0933935&el=desc) will **increase** coverage by `0.31%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4999/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4999?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4999 +/- ##\n==========================================\n+ Coverage 76.89% 77.20% +0.31% \n==========================================\n Files 128 128 \n Lines 21854 21854 \n==========================================\n+ Hits 16804 16872 +68 \n+ Misses 5050 4982 -68 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4999?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.35% <0.00%> (-2.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/4999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `75.85% <0.00%> (+19.75%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4999?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4999?src=pr&el=footer). Last update [9931f81...a7a5c18](https://codecov.io/gh/huggingface/transformers/pull/4999?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,592
1,592
1,592
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4999/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4999/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4999", "html_url": "https://github.com/huggingface/transformers/pull/4999", "diff_url": "https://github.com/huggingface/transformers/pull/4999.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4999.patch", "merged_at": 1592211892000 }
https://api.github.com/repos/huggingface/transformers/issues/4998
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4998/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4998/comments
https://api.github.com/repos/huggingface/transformers/issues/4998/events
https://github.com/huggingface/transformers/pull/4998
638,622,488
MDExOlB1bGxSZXF1ZXN0NDM0MzUxODU3
4,998
Remove deprecation warning TF2.2
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The only CI message is :\r\n\r\n```\r\nwould reformat /home/circleci/transformers/src/transformers/trainer_tf.py\r\nOh no! 💥 💔 💥\r\n1 file would be reformatted, 299 files would be left unchanged.\r\n```\r\n\r\nHow can I see what to change to fit the code style ?", "Thanks for the PR, but merging this, will remove the trainer from being compliant with TF <= 2.1. Which is not what we want yet. But we keep it here when we will be ready to do so ;)" ]
1,592
1,592
1,592
CONTRIBUTOR
null
`strategy.experimental_run_v2()` is deprecated in favor of `strategy.run()` https://github.com/tensorflow/tensorflow/blob/2b96f3662bd776e277f86997659e61046b56c315/tensorflow/python/distribute/distribute_lib.py#L953-L957 Fix #4992
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4998/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4998/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4998", "html_url": "https://github.com/huggingface/transformers/pull/4998", "diff_url": "https://github.com/huggingface/transformers/pull/4998.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4998.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4997
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4997/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4997/comments
https://api.github.com/repos/huggingface/transformers/issues/4997/events
https://github.com/huggingface/transformers/pull/4997
638,613,246
MDExOlB1bGxSZXF1ZXN0NDM0MzQ0NDI3
4,997
Fix importing transformers on Windows - SIGKILL not defined
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4997?src=pr&el=h1) Report\n> Merging [#4997](https://codecov.io/gh/huggingface/transformers/pull/4997?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9931f817b75ecb2c8bb08b6e9d4cbec4b0933935&el=desc) will **increase** coverage by `0.72%`.\n> The diff coverage is `66.66%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4997/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4997?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4997 +/- ##\n==========================================\n+ Coverage 76.89% 77.61% +0.72% \n==========================================\n Files 128 128 \n Lines 21854 21856 +2 \n==========================================\n+ Hits 16804 16964 +160 \n+ Misses 5050 4892 -158 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4997?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4997/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `72.96% <66.66%> (-0.14%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4997/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.71% <0.00%> (-1.56%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/4997/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `75.85% <0.00%> (+19.75%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4997/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: |\n\n------\n\n[Continue to 
review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4997?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4997?src=pr&el=footer). Last update [9931f81...d6f30c5](https://codecov.io/gh/huggingface/transformers/pull/4997?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Great, thanks @mfuntowicz " ]
1,592
1,592
1,592
MEMBER
null
On Windows platform there is no such `signal.SIGKILL`, we need to send `CTRL_C_EVENT` which is `Ctrl+C`. This PR aims at: 1. Not importing the `signal.SIGKILL` on Windows platforms as it's undefined 2. Using the right `signal.CTRL_C_EVENT` on Windows platforms and `signal.SIGKILL` on Unix*.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4997/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4997/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4997", "html_url": "https://github.com/huggingface/transformers/pull/4997", "diff_url": "https://github.com/huggingface/transformers/pull/4997.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4997.patch", "merged_at": 1592242618000 }
https://api.github.com/repos/huggingface/transformers/issues/4996
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4996/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4996/comments
https://api.github.com/repos/huggingface/transformers/issues/4996/events
https://github.com/huggingface/transformers/issues/4996
638,562,594
MDU6SXNzdWU2Mzg1NjI1OTQ=
4,996
❓ How to use TFTrainer on TPU ? Unable to destroy remote tensor handles
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "It seems to be related to the size of dataset : if I use `--training_steps` instead of `--num_epochs` I don't have the error.\r\n\r\n---\r\n\r\nBut a similar error appear at evaluation time :\r\n\r\n```\r\ntensorflow.python.framework.errors_impl.InvalidArgumentError: {{function_node __inference__evaluate_steps_517418}} Compilation failure: Output shapes of then and else branches do not match: (pred[1]) vs. (pred[])\r\n [[{{node lossed_bart/model/decoder/cond}}]]\r\n TPU compilation failed\r\n [[tpu_compile_succeeded_assert/_17738495357405513889/_9]]\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,592
1,598
1,598
CONTRIBUTOR
null
# ❓ Questions & Help I'm trying to train my model on TPU using `TFTrainer`. Training starts fine, but after a few training steps, I'm having this error : > Unable to destroy remote tensor handles. If you are running a tf.function, it usually indicates some op in the graph gets an error: 4 root error(s) found. I don't know what I am doing wrong, any help is greatly appreciated. --- Here is the full stacktrace : ``` 2020/06/15 05:14:31 - INFO - transformers.trainer_tf - Epoch 2 Step 1500 Train Loss 3.0905 2020/06/15 05:17:33 - INFO - transformers.trainer_tf - Epoch 2 Step 2000 Train Loss 2.8351 2020/06/15 05:20:35 - INFO - transformers.trainer_tf - Epoch 2 Step 2500 Train Loss 3.1043 2020-06-15 05:22:53.919745: W tensorflow/core/distributed_runtime/eager/remote_tensor_handle_data.cc:76] Unable to destroy remote tensor handles. If you are running a tf.function, it usually indicates some op in the graph gets an error: 4 root error(s) found. (0) Cancelled: {{function_node __inference__accumulate_next_440382}} Function was cancelled before it was started (1) Cancelled: {{function_node __inference__accumulate_next_440382}} Function was cancelled before it was started (2) Cancelled: {{function_node __inference__accumulate_next_440382}} Function was cancelled before it was started (3) Out of range: {{function_node __inference__accumulate_next_440382}} End of sequence [[{{node IteratorGetNext_6}}]] 0 successful operations. 5 derived errors ignored. 
Traceback (most recent call last): File "train.py", line 136, in <module> main() File "train.py", line 112, in main trainer.train() File "/home/me/.venv/x/lib/python3.6/site-packages/transformers/trainer_tf.py", line 274, in train for training_loss in self._training_steps(train_ds, optimizer): File "/home/me/.venv/x/lib/python3.6/site-packages/transformers/trainer_tf.py", line 319, in _training_steps self._apply_gradients(optimizer) File "/home/me/.venv/x/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 580, in __call__ result = self._call(*args, **kwds) File "/home/me/.venv/x/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 611, in _call return self._stateless_fn(*args, **kwds) # pylint: disable=not-callable File "/home/me/.venv/x/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 2420, in __call__ return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access File "/home/me/.venv/x/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1665, in _filtered_call self.captured_inputs) File "/home/me/.venv/x/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1746, in _call_flat ctx, args, cancellation_manager=cancellation_manager)) File "/home/me/.venv/x/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 598, in call ctx=ctx) File "/home/me/.venv/x/lib/python3.6/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute inputs, attrs, num_outputs) tensorflow.python.framework.errors_impl.OutOfRangeError: 4 root error(s) found. 
(0) Cancelled: {{function_node __inference__accumulate_next_440382}} Function was cancelled before it was started (1) Cancelled: {{function_node __inference__accumulate_next_440382}} Function was cancelled before it was started (2) Cancelled: {{function_node __inference__accumulate_next_440382}} Function was cancelled before it was started (3) Out of range: {{function_node __inference__accumulate_next_440382}} End of sequence [[{{node IteratorGetNext_6}}]] 0 successful operations. 5 derived errors ignored. [Op:__inference__apply_gradients_373775] ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4996/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4996/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4995
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4995/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4995/comments
https://api.github.com/repos/huggingface/transformers/issues/4995/events
https://github.com/huggingface/transformers/pull/4995
638,557,475
MDExOlB1bGxSZXF1ZXN0NDM0Mjk4MzY2
4,995
append keyword arguments to the output
{ "login": "Kyeongpil", "id": 6302455, "node_id": "MDQ6VXNlcjYzMDI0NTU=", "avatar_url": "https://avatars.githubusercontent.com/u/6302455?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Kyeongpil", "html_url": "https://github.com/Kyeongpil", "followers_url": "https://api.github.com/users/Kyeongpil/followers", "following_url": "https://api.github.com/users/Kyeongpil/following{/other_user}", "gists_url": "https://api.github.com/users/Kyeongpil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Kyeongpil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Kyeongpil/subscriptions", "organizations_url": "https://api.github.com/users/Kyeongpil/orgs", "repos_url": "https://api.github.com/users/Kyeongpil/repos", "events_url": "https://api.github.com/users/Kyeongpil/events{/privacy}", "received_events_url": "https://api.github.com/users/Kyeongpil/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,592
1,598
1,598
CONTRIBUTOR
null
prepare_inputs_for_generation function receives keyword arguments (model_specific_kwargs) However, these arguments are not ignored. I fix this codes to return arguments including these keyword arguments. Actually, I don't have an idea about the purpose of prepare_logits_for_generation function, which only returns the logit values.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4995/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4995/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4995", "html_url": "https://github.com/huggingface/transformers/pull/4995", "diff_url": "https://github.com/huggingface/transformers/pull/4995.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4995.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4994
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4994/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4994/comments
https://api.github.com/repos/huggingface/transformers/issues/4994/events
https://github.com/huggingface/transformers/issues/4994
638,506,361
MDU6SXNzdWU2Mzg1MDYzNjE=
4,994
🐛 TFTrainer not working on TPU (TF2.2)
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834054694, "node_id": "MDU6TGFiZWwxODM0MDU0Njk0", "url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow", "name": "TensorFlow", "color": "FF6F00", "default": false, "description": "Anything TensorFlow" } ]
closed
false
null
[]
[ "Currently as a work-around I set the optimizer as an attribute and remove the argument :\r\n\r\nAfter this line :\r\nhttps://github.com/huggingface/transformers/blob/9931f817b75ecb2c8bb08b6e9d4cbec4b0933935/src/transformers/trainer_tf.py#L235\r\n\r\nI add :\r\n\r\n```python\r\nself.optimizer = optimizer\r\n```\r\n\r\nAnd replace the argument optimizer :\r\n\r\nhttps://github.com/huggingface/transformers/blob/9931f817b75ecb2c8bb08b6e9d4cbec4b0933935/src/transformers/trainer_tf.py#L326-L335\r\n\r\n```python\r\n def _step(self):\r\n \"\"\"Applies gradients and resets accumulation.\"\"\"\r\n gradient_scale = self.gradient_accumulator.step * self.args.strategy.num_replicas_in_sync\r\n gradients = [\r\n gradient / tf.cast(gradient_scale, gradient.dtype) for gradient in self.gradient_accumulator.gradients\r\n ]\r\n gradients = [(tf.clip_by_value(grad, -self.args.max_grad_norm, self.args.max_grad_norm)) for grad in gradients]\r\n\r\n self.optimizer.apply_gradients(list(zip(gradients, self.model.trainable_variables)))\r\n self.gradient_accumulator.reset()\r\n```\r\n\r\nAnd finally replace the call :\r\n\r\nhttps://github.com/huggingface/transformers/blob/9931f817b75ecb2c8bb08b6e9d4cbec4b0933935/src/transformers/trainer_tf.py#L324\r\n\r\n```python\r\nself.args.strategy.experimental_run_v2(self._step)\r\n```\r\n\r\n---\r\n\r\nNot closing as it's only a work-around. Any cleaner solution to put in a PR ?", "Hello !\r\n\r\nNice finding! TPUs with TF Trainer is currently under developement and not works for several cases. If you really need to train your model with TPUs I suggest you to use the PyTorch version of the trainer. Full support of TPUs for the TF Trainer will arrive I hope this month.\r\n\r\nBut if you are ready to make PRs, you are welcome to do so :)", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,592
1,598
1,598
CONTRIBUTOR
null
# 🐛 Bug ## Information The problem arises when using: * [ ] the official example scripts * [x] my own modified scripts The tasks I am working on is: * [x] an official GLUE/SQUaD task: CNN/DM * [ ] my own task or dataset ## To reproduce Steps to reproduce the behavior: 1. Install `transformers` from `master` 2. Run TPU training using `TFTrainer` I get the following error : >TypeError: Failed to convert object of type <class 'transformers.optimization_tf.AdamWeightDecay'> to Tensor. Contents: <transformers.optimization_tf.AdamWeightDecay object at 0x7faddc7cfe80>. Consider casting elements to a supported type. --- Here : https://github.com/huggingface/transformers/blob/9931f817b75ecb2c8bb08b6e9d4cbec4b0933935/src/transformers/trainer_tf.py#L324 we pass `optimizer` as arguments. But according to the documentation in TF : https://github.com/tensorflow/tensorflow/blob/2b96f3662bd776e277f86997659e61046b56c315/tensorflow/python/distribute/distribute_lib.py#L890-L891 >All arguments in `args` or `kwargs` should either be nest of tensors or `tf.distribute.DistributedValues` containing tensors or composite tensors. ## Environment info - `transformers` version: 2.11.0 - Platform: Linux-4.9.0-9-amd64-x86_64-with-debian-9.12 - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.0 (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: TPU training
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4994/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4994/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4993
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4993/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4993/comments
https://api.github.com/repos/huggingface/transformers/issues/4993/events
https://github.com/huggingface/transformers/issues/4993
638,504,383
MDU6SXNzdWU2Mzg1MDQzODM=
4,993
Importing transformers causes segmentation fault when setting cuda device
{ "login": "jcyk", "id": 11582128, "node_id": "MDQ6VXNlcjExNTgyMTI4", "avatar_url": "https://avatars.githubusercontent.com/u/11582128?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jcyk", "html_url": "https://github.com/jcyk", "followers_url": "https://api.github.com/users/jcyk/followers", "following_url": "https://api.github.com/users/jcyk/following{/other_user}", "gists_url": "https://api.github.com/users/jcyk/gists{/gist_id}", "starred_url": "https://api.github.com/users/jcyk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jcyk/subscriptions", "organizations_url": "https://api.github.com/users/jcyk/orgs", "repos_url": "https://api.github.com/users/jcyk/repos", "events_url": "https://api.github.com/users/jcyk/events{/privacy}", "received_events_url": "https://api.github.com/users/jcyk/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @jcyk, \r\n\r\nwhen trying to reproduce this error with PyTorch 1.5.0, there is no problem.\r\n\r\nHowever, when I run your code with PyTorch 1.4.0 (as you did), I get the following error: \r\n\r\n```python \r\n1.4.0\r\n2.11.0\r\nFalse\r\nTraceback (most recent call last):\r\n File \"./bug_4993.py\", line 16, in <module>\r\n main(0)\r\n File \"./bug_4993.py\", line 8, in main\r\n torch.cuda.set_device(local_rank)\r\n File \"/home/patrick/anaconda3/envs/pytorch_1_4/lib/python3.8/site-packages/torch/cuda/__init__.py\", line 292, in set_device\r\n torch._C._cuda_setDevice(device)\r\nAttributeError: module 'torch._C' has no attribute '_cuda_setDevice'\r\n```\r\n, which is not related to `transformers`.\r\n\r\nAlso when going to PyTorch 1.4 documentation: https://pytorch.org/docs/1.4.0/cuda.html#torch.cuda.set_device\r\n\r\nYou can see that `set_device` is not recommended and that you should use https://pytorch.org/docs/1.4.0/cuda.html#torch.cuda.device instead. \r\n\r\nCould you try using this function instead and see what happens? Also, it's very hard to trace back\r\n `Segmentation fault (core dumped)` errors. Can you try getting a more explicit error message?", "hi @patrickvonplaten , thanks for your reply.\r\n\r\nPlease notice that if i remove the line `import transformers`, the problem will disappear. That is why I suspect there is a problem with `transformers`. 
Please see the following two examples.\r\n\r\ncode0\r\n```\r\nimport torch\r\n#import transformers\r\n\r\ndef main(local_rank):\r\n device = torch.device('cuda', local_rank)\r\n x = torch.tensor([1,2,3], device=device)\r\n\r\nif __name__ == \"__main__\":\r\n print (torch.__version__)\r\n #print (transformers.__version__)\r\n print (torch.cuda.is_available())\r\n main(0)\r\n```\r\noutput0\r\n```\r\n1.4.0\r\nTrue\r\n```\r\n\r\ncode1\r\n```\r\nimport torch\r\nimport transformers\r\n\r\ndef main(local_rank):\r\n device = torch.device('cuda', local_rank)\r\n x = torch.tensor([1,2,3], device=device)\r\n\r\nif __name__ == \"__main__\":\r\n print (torch.__version__)\r\n print (transformers.__version__)\r\n print (torch.cuda.is_available())\r\n main(0)\r\n```\r\noutput1\r\n```\r\n1.4.0\r\n2.11.0\r\nTrue\r\nSegmentation fault (core dumped)\r\n```\r\n\r\n", "I am experiencing this exact same problem, and updating to pytorch 1.5 is not an option. did you have any success figuring this out?", "EDIT: This problem is caused by the sentencepiece dependency. It goes away if I comment out all references to this dependency. This will break `xlnet`, `xlm_roberta`, `marian`, `t5`, `albert`, `reformer`, and `camembert`, but if you are using any of the non-sentencepiece models, this should solve your problem.", "@daphnei thx for pointing out this! \r\nThe solution for me was to upgrade torch to `1.5.1+cu92`, and downgrade transformers version to `2.6.0.`\r\nQuite weird problem!\r\n\r\n", "That seems like a better fix than my hack! Unfortunately, I'm using a managed machine which doesn't have the CUDA version to support Torch 1.5. ", "> @daphnei thx for pointing out this!\r\n> The solution for me was to upgrade torch to `1.5.1+cu92`, and downgrade transformers version to `2.6.0.`\r\n> Quite weird problem!\r\n\r\nI have met the exactly same problem with you, and did you find the root cause? 
is this issue same to report 'fix segmentation fault' [#2207](https://github.com/huggingface/transformers/pull/2207)?\r\nThe following were my test results with different versions:\r\n1. Segmentation fault (core dumped): torch 1.2.0 ; transformers 2.11.0;cuda 10.0\r\n2. Segmentation fault (core dumped): torch 1.2.0 ; transformers 2.6.0;cuda 10.0\r\n3. Segmentation fault (core dumped): torch 1.2.0 ; transformers 2.5.1;cuda 10.0\r\n4. Success: torch 1.1.0 ; transformers 2.5.1;cuda 10.0\r\n\r\nBTW, is there a torch release of ‘1.5.1+cuda10.0’? \r\nThanks.\r\n\r\n\r\n\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "1.5.1+cu10.2 it's okay, but my device is Telsa k40m,Here is the Error: `RuntimeError: CUDA error: no kernel image is available for execution on the device` not support higher version PyTorch version。Here is my Test:\r\n\r\n- pytorch==1.2.0~1.3.0\r\n It works well in Telsa K40m, running `python -c \"from transformers import pipeline; print(pipeline('sentiment-analysis')('I hate you'))\"` Segmentation fault (core dumped) responded\r\n- pytorch==1.5.0\r\n it's some error for `import torch; a= torch.Tensor(5,3); a=a.cuda(); a` ; RuntimeError: CUDA error: no kernel image is available for execution on the device \r\n\r\nMy Env is :\r\n> NVIDIA-SMI 450.51.05 Driver Version: 450.51.05 CUDA Version: 11.0", "I slove this question!\r\nand I've outlined two solutions:\r\n\r\n- Update your torch, such as from 1.2.0 to 1.7\r\n\r\n- reduce the version of the associated package, such as `sentencepiece` : from 0.1.94 to 0.1.91 and delete `dataclasses`\r\n\r\n\r\nI tried two above solutions and they both work! And because some reason I cannot upgrade the cuda version to adapt torch1.7, so I use the second solution : )\r\n\r\nMy Env is : \r\n`transformers 3.5.0\r\npython 3.7 \r\nCUDA 10.1 \r\npytorch 1.2.0`\r\n" ]
1,592
1,605
1,599
NONE
null
# 🐛 Bug ## Information The problem arises when using: my own modified scripts: (give details below) ## To reproduce ``` import torch import transformers def main(local_rank): torch.cuda.set_device(local_rank) device = torch.device('cuda', local_rank) if __name__ == "__main__": print (torch.__version__) print (transformers.__version__) print (torch.cuda.is_available()) main(0) ``` ## Expected behavior ``` 1.4.0 2.11.0 True Segmentation fault (core dumped) ``` if commenting out `import transformers`, everything will be fine. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.11.0 - Platform: linux - Python version: 3.6.8 - PyTorch version (GPU?): 1.4.0 GPU - Tensorflow version (GPU?): None - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4993/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/4993/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4992
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4992/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4992/comments
https://api.github.com/repos/huggingface/transformers/issues/4992/events
https://github.com/huggingface/transformers/issues/4992
638,467,072
MDU6SXNzdWU2Mzg0NjcwNzI=
4,992
[TFTrainer] Tensorflow Warning : experimental_run_v2 is deprecated
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "closes by #4998" ]
1,592
1,592
1,592
CONTRIBUTOR
null
# 🐛 Bug When running my code with TFTrainer, I'm receiving : >WARNING - tensorflow - From /home/me/.venv/bart/lib/python3.6/site-packages/transformers/trainer_tf.py:355: StrategyBase.experimental_run_v2 (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version. Instructions for updating: renamed to `run` Is it expected ? How can I remove this warning ? ## Information The problem arises when using: * [ ] the official example scripts * [x] my own modified scripts The tasks I am working on is: * [x] an official GLUE/SQUaD task: CNN/DM * [ ] my own task or dataset ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.11.0 - Platform: Linux-4.9.0-9-amd64-x86_64-with-debian-9.12 - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.0 (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: Using TPU training
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4992/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4992/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4991
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4991/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4991/comments
https://api.github.com/repos/huggingface/transformers/issues/4991/events
https://github.com/huggingface/transformers/issues/4991
638,438,083
MDU6SXNzdWU2Mzg0MzgwODM=
4,991
evaluating with trainer.py with TPU results in sudden RAM spike and crash
{ "login": "LeonieWeissweiler", "id": 30300891, "node_id": "MDQ6VXNlcjMwMzAwODkx", "avatar_url": "https://avatars.githubusercontent.com/u/30300891?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LeonieWeissweiler", "html_url": "https://github.com/LeonieWeissweiler", "followers_url": "https://api.github.com/users/LeonieWeissweiler/followers", "following_url": "https://api.github.com/users/LeonieWeissweiler/following{/other_user}", "gists_url": "https://api.github.com/users/LeonieWeissweiler/gists{/gist_id}", "starred_url": "https://api.github.com/users/LeonieWeissweiler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LeonieWeissweiler/subscriptions", "organizations_url": "https://api.github.com/users/LeonieWeissweiler/orgs", "repos_url": "https://api.github.com/users/LeonieWeissweiler/repos", "events_url": "https://api.github.com/users/LeonieWeissweiler/events{/privacy}", "received_events_url": "https://api.github.com/users/LeonieWeissweiler/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Any update on this? I've got a similar issue " ]
1592
1618
1598
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Roberta with the official how to train from scratch example Language I am using the model on (English, Chinese ...): Esperanto The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) ## To reproduce Steps to reproduce the behavior: 1. follow the official how to train with esperanto example 2. define a second dataset for testing, both datasets are TextDatasets, the train set is around 300MB, the test set around 3MB 3. run trainer.train() with evaluate_during_training The evaluation loop runs a few examples before crashing the colab session due to an unknown reason. I have managed to increase the number of examples from 2 to 6 by reducing the per_device_eval_batch_size. My eval dataset only contains 31 examples. When checking the colab logs after crashing, it seems that trainer.py tried to allocate 4GB of memory before crashing, which seems unrealistic to me given the size of the datasets. Specifically (I think this is the relevant part, please correct me if I'm wrong): 2020-06-14 21:45:53.187694: E tensorflow/compiler/xla/xla_client/xla_util.cc:76] (1) Resource exhausted: Attempting to reserve 2.78G at the bottom of memory. That was not possible. There are 2.75G free, 0B reserved, and 2.75G reservable. 
When switching to GPU, I encounter a similar issue, but this time with a complete stack trace because this doesn't crash the runtime: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-14-10e96b9a36a3> in <module>() ----> 1 trainer.train() 2 trainer.save_model("./drive/My Drive/models/roberta/output") 2 frames <ipython-input-11-34c6e8e4c40d> in train(self, model_path) 507 508 if self.args.evaluate_during_training: --> 509 self.evaluate() 510 511 if self.args.save_steps > 0 and self.global_step % self.args.save_steps == 0: <ipython-input-11-34c6e8e4c40d> in evaluate(self, eval_dataset, prediction_loss_only) 706 eval_dataloader = self.get_eval_dataloader(eval_dataset) 707 print(2) --> 708 output = self._prediction_loop(eval_dataloader, description="Evaluation") 709 print(3) 710 self._log(output.metrics) <ipython-input-11-34c6e8e4c40d> in _prediction_loop(self, dataloader, description, prediction_loss_only) 775 preds = logits.detach() 776 else: --> 777 preds = torch.cat((preds, logits.detach()), dim=0) 778 if inputs.get("labels") is not None: 779 if label_ids is None: RuntimeError: CUDA out of memory. Tried to allocate 6.75 GiB (GPU 0; 15.90 GiB total capacity; 7.99 GiB already allocated; 6.40 GiB free; 8.81 GiB reserved in total by PyTorch) ## Expected behavior I expected the eval loss or error to be computed. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.11.0 - Platform: colab - Python version: 3.6 - Using GPU in script?: no, TPU with xla - Using distributed or parallel set-up in script?: in theory, but only 1 TPU core available
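The GPU traceback above points at the `torch.cat((preds, logits.detach()), dim=0)` accumulation: every batch's logits stay on the accelerator, so the running `preds` tensor grows with the size of the whole eval set rather than one batch. A hedged sketch of the usual workaround (the helper name and loop are illustrative, not Trainer internals) is to move each batch to host memory before concatenating:

```python
import torch

def accumulate_predictions(batches):
    """Concatenate per-batch logits on the CPU instead of the device.

    Keeping the running `preds` tensor on the GPU/TPU makes its memory
    footprint grow with the entire eval set; copying each batch to host
    memory first keeps only one batch on the device at a time.
    """
    preds = None
    for logits in batches:
        logits = logits.detach().cpu()  # drop the device copy after this batch
        preds = logits if preds is None else torch.cat((preds, logits), dim=0)
    return preds

# Toy usage: four "batches" of logits, 8 examples x 2 labels each.
batches = [torch.randn(8, 2) for _ in range(4)]
preds = accumulate_predictions(batches)
print(tuple(preds.shape))  # (32, 2)
```

The same idea applies to the `label_ids` accumulation a few lines below in the traceback.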
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4991/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4991/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4990
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4990/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4990/comments
https://api.github.com/repos/huggingface/transformers/issues/4990/events
https://github.com/huggingface/transformers/issues/4990
638417369
MDU6SXNzdWU2Mzg0MTczNjk=
4990
Save the Dataset for training GPT2
{ "login": "gaps013", "id": 9932320, "node_id": "MDQ6VXNlcjk5MzIzMjA=", "avatar_url": "https://avatars.githubusercontent.com/u/9932320?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gaps013", "html_url": "https://github.com/gaps013", "followers_url": "https://api.github.com/users/gaps013/followers", "following_url": "https://api.github.com/users/gaps013/following{/other_user}", "gists_url": "https://api.github.com/users/gaps013/gists{/gist_id}", "starred_url": "https://api.github.com/users/gaps013/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gaps013/subscriptions", "organizations_url": "https://api.github.com/users/gaps013/orgs", "repos_url": "https://api.github.com/users/gaps013/repos", "events_url": "https://api.github.com/users/gaps013/events{/privacy}", "received_events_url": "https://api.github.com/users/gaps013/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1592
1598
1598
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**: I am trying to train GPT2 on a custom dataset, the training text file is over 100GB, when creating the Dataset for training, it is taking too long and I was hoping if there is any way to save the dataset for later use. I am using hugging face blog post on how to train transformers from scratch as a guide (https://huggingface.co/blog/how-to-train). Thank You
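One common answer to the question above (not an official transformers API at the time; the cache path and helper name here are hypothetical) is to run the slow tokenization pass once and pickle the resulting examples, so later runs only reload the cache:

```python
import pickle
from pathlib import Path

CACHE = Path("cached_train_examples.pkl")  # hypothetical cache location

def load_or_build(build_examples):
    """Reuse pickled tokenized examples if present, else build and save them."""
    if CACHE.exists():
        with CACHE.open("rb") as f:
            return pickle.load(f)
    examples = build_examples()  # the slow pass over the large corpus
    with CACHE.open("wb") as f:
        pickle.dump(examples, f)
    return examples

# Toy stand-in for a TextDataset's tokenized examples (lists of token ids).
examples = load_or_build(lambda: [[101, 7592, 102], [101, 2088, 102]])
print(examples)  # [[101, 7592, 102], [101, 2088, 102]]
```

For a 100 GB corpus, the pickled token ids are typically far smaller than the raw text, and the second run skips tokenization entirely.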
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4990/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4990/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4989
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4989/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4989/comments
https://api.github.com/repos/huggingface/transformers/issues/4989/events
https://github.com/huggingface/transformers/pull/4989
638390424
MDExOlB1bGxSZXF1ZXN0NDM0MTcyNzQw
4989
attention
{ "login": "yang-chenyu104", "id": 61871674, "node_id": "MDQ6VXNlcjYxODcxNjc0", "avatar_url": "https://avatars.githubusercontent.com/u/61871674?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yang-chenyu104", "html_url": "https://github.com/yang-chenyu104", "followers_url": "https://api.github.com/users/yang-chenyu104/followers", "following_url": "https://api.github.com/users/yang-chenyu104/following{/other_user}", "gists_url": "https://api.github.com/users/yang-chenyu104/gists{/gist_id}", "starred_url": "https://api.github.com/users/yang-chenyu104/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yang-chenyu104/subscriptions", "organizations_url": "https://api.github.com/users/yang-chenyu104/orgs", "repos_url": "https://api.github.com/users/yang-chenyu104/repos", "events_url": "https://api.github.com/users/yang-chenyu104/events{/privacy}", "received_events_url": "https://api.github.com/users/yang-chenyu104/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1592
1592
1592
NONE
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4989/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4989/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4989", "html_url": "https://github.com/huggingface/transformers/pull/4989", "diff_url": "https://github.com/huggingface/transformers/pull/4989.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4989.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4988
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4988/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4988/comments
https://api.github.com/repos/huggingface/transformers/issues/4988/events
https://github.com/huggingface/transformers/issues/4988
638381103
MDU6SXNzdWU2MzgzODExMDM=
4988
error while instantiating model
{ "login": "kevinghst", "id": 18520313, "node_id": "MDQ6VXNlcjE4NTIwMzEz", "avatar_url": "https://avatars.githubusercontent.com/u/18520313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kevinghst", "html_url": "https://github.com/kevinghst", "followers_url": "https://api.github.com/users/kevinghst/followers", "following_url": "https://api.github.com/users/kevinghst/following{/other_user}", "gists_url": "https://api.github.com/users/kevinghst/gists{/gist_id}", "starred_url": "https://api.github.com/users/kevinghst/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kevinghst/subscriptions", "organizations_url": "https://api.github.com/users/kevinghst/orgs", "repos_url": "https://api.github.com/users/kevinghst/repos", "events_url": "https://api.github.com/users/kevinghst/events{/privacy}", "received_events_url": "https://api.github.com/users/kevinghst/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1592
1598
1598
NONE
null
Hi, For my experiments, I need to make some changes to the forward pass of Roberta sequence classification model. Thus, I copied **RobertaForSequenceClassification** from _modeling_roberta.py_ into a [separate file](https://github.com/kevinghst/mixmatch_from_scratch/blob/master/models_roberta.py) in my repo. And I try to instantiate it in my _main.py_ like this: ``` model = RobertaForSequenceClassification.from_pretrained( 'roberta-base', num_labels = NUM_LABELS[cfg.task], ) ``` However, it gives me the following stacktrace: ---------------------------------------------------------------------------------------------------------- Traceback (most recent call last): File "/scratch/wz1232/anaconda3/envs/mixmatch/lib/python3.6/site-packages/torch/serialization.py", line 186, in _check_seekable f.seek(f.tell()) AttributeError: 'NoneType' object has no attribute 'seek' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/scratch/wz1232/anaconda3/envs/mixmatch/lib/python3.6/site-packages/transformers/modeling_utils.py", line 516, in from_pretrained state_dict = torch.load(resolved_archive_file, map_location="cpu") File "/scratch/wz1232/anaconda3/envs/mixmatch/lib/python3.6/site-packages/torch/serialization.py", line 368, in load return _load(f, map_location, pickle_module) File "/scratch/wz1232/anaconda3/envs/mixmatch/lib/python3.6/site-packages/torch/serialization.py", line 517, in _load _check_seekable(f) File "/scratch/wz1232/anaconda3/envs/mixmatch/lib/python3.6/site-packages/torch/serialization.py", line 189, in _check_seekable raise_err_msg(["seek", "tell"], e) File "/scratch/wz1232/anaconda3/envs/mixmatch/lib/python3.6/site-packages/torch/serialization.py", line 182, in raise_err_msg raise type(e)(msg) AttributeError: 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "main.py", line 208, in <module> num_labels = NUM_LABELS[cfg.task], File "/scratch/wz1232/anaconda3/envs/mixmatch/lib/python3.6/site-packages/transformers/modeling_utils.py", line 519, in from_pretrained "Unable to load weights from pytorch checkpoint file. " OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. ----------------------------------------------------------------------------------------------------------- It doesn't occur when I import **RobertaForSequenceClassification** straight from the library, as opposed to from my own file containing the copied code. Why is this?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4988/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4988/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4987
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4987/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4987/comments
https://api.github.com/repos/huggingface/transformers/issues/4987/events
https://github.com/huggingface/transformers/pull/4987
638359035
MDExOlB1bGxSZXF1ZXN0NDM0MTQ5ODAy
4987
Fix Inconsistent NER Grouping (Pipeline)
{ "login": "enzoampil", "id": 39557688, "node_id": "MDQ6VXNlcjM5NTU3Njg4", "avatar_url": "https://avatars.githubusercontent.com/u/39557688?v=4", "gravatar_id": "", "url": "https://api.github.com/users/enzoampil", "html_url": "https://github.com/enzoampil", "followers_url": "https://api.github.com/users/enzoampil/followers", "following_url": "https://api.github.com/users/enzoampil/following{/other_user}", "gists_url": "https://api.github.com/users/enzoampil/gists{/gist_id}", "starred_url": "https://api.github.com/users/enzoampil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/enzoampil/subscriptions", "organizations_url": "https://api.github.com/users/enzoampil/orgs", "repos_url": "https://api.github.com/users/enzoampil/repos", "events_url": "https://api.github.com/users/enzoampil/events{/privacy}", "received_events_url": "https://api.github.com/users/enzoampil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4987?src=pr&el=h1) Report\n> Merging [#4987](https://codecov.io/gh/huggingface/transformers/pull/4987?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **decrease** coverage by `1.40%`.\n> The diff coverage is `91.66%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4987/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4987?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4987 +/- ##\n==========================================\n- Coverage 77.83% 76.43% -1.41% \n==========================================\n Files 141 141 \n Lines 24634 24638 +4 \n==========================================\n- Hits 19175 18832 -343 \n- Misses 5459 5806 +347 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4987?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/4987/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.16% <91.66%> (+0.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/4987/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.62% <0.00%> (-73.11%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4987/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4987/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |\n| 
[src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/4987/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `85.75% <0.00%> (-7.85%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4987/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/4987/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.02% <0.00%> (-2.18%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4987/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.30% <0.00%> (-1.54%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4987/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.86% <0.00%> (-0.46%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4987/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.55% <0.00%> (-0.41%)` | :arrow_down: |\n| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/4987/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4987?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4987?src=pr&el=footer). 
Last update [58cca47...05f50d9](https://codecov.io/gh/huggingface/transformers/pull/4987?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Looks great, but this sort of code/feature also looks like a perfect candidate for more unit-testing coverage.\r\n\r\nWhat do you think?", "Agree, can add these as test cases in [test_pipelines](https://github.com/huggingface/transformers/blob/master/tests/test_pipelines.py).", "That would be great @enzoampil!", "@julien-c I've added the original issue bug as a test case (Number 1 in the original post). Do note that I only included it in the `torch` version because `mrm8488/bert-spanish-cased-finetuned-ner` seems to only work for torch. Please let me know if this is enough for this PR.\r\n\r\nFor future PRs to add new test cases coming from issues found on top of this (e.g. those from issue #5077), I was hoping to get some guidance on how we'd include them to the test coverage without making it too heavy. For context, different cases are typically based on different models, which means we'll have to run separate models to add them as test cases.", "I think we should try to make the tests more unitary, meaning that for instance you would feed them fixed model outputs (no actual forward pass) and check that the actual formatted output is correct.\r\n\r\nThis might require splitting the call method in smaller more testable functions, which is totally fine IMO.", "I see what you mean. Yes, that makes more sense than running different models. Will work on this.", "@julien-c @LysandreJik I've performed the following adjustments to the PR:\r\n\r\n1. 
**I've separated the `group_entities` function from the raw NER forward pass altogether so that it's easy to run tests that feed fixed model outputs and check that the actual formatted output is correct.**\r\n\r\n`group_entities` now takes as an argument a list[dict] of raw NER model outputs, and converts them to the *grouped* equivalent.\r\n\r\n2. **I've added a new `NerPipelineTests` class in `test_pipelines` which contains all the NER related tests, and includes new tests for the `group_entities` function.**\r\n\r\nThe test simply confirms if the expected formatted output (grouped) is equivalent to the actual formatted output given the raw model outputs. For the test cases, I used the two samples from the original PR post. It should be straight forward to continue adding test cases moving forward.\r\n\r\nPlease do let me know what you guys think! :smile:", "Yes, looks good. I would add some typings to (at least) the `group_entities` and `group_sub_entities` but we can do that in a subsequent PR.", "@LysandreJik @julien-c Thanks for the feedback. I've added typings for the `group_entities` and `group_sub_entities` functions :smile: " ]
1592
1594
1594
CONTRIBUTOR
null
### This PR solves issue #4816 by: 1. Applying entity grouping to similar entity types with different prefixes (i.e. `B` and `I`) 2. Ensuring that separate entities at the last filtered index are no longer excluded from grouping. Running the sample script below (based on reference issue #4816) returns the expected results. Do note that the `entity_group` is based on the `entity_type` of the first entity in the group. ``` from transformers import pipeline NER_MODEL = "mrm8488/bert-spanish-cased-finetuned-ner" nlp_ner = pipeline("ner", model=NER_MODEL, grouped_entities=True, tokenizer=(NER_MODEL, {"use_fast": False})) t = """Consuelo Araújo Noguera, ministra de cultura del presidente Andrés Pastrana (1998.2002) fue asesinada por las Farc luego de haber permanecido secuestrada por algunos meses.""" nlp_ner(t) [{'entity_group': 'B-PER', 'score': 0.9710702640669686, 'word': 'Consuelo Araújo Noguera'}, {'entity_group': 'B-PER', 'score': 0.9997273534536362, 'word': 'Andrés Pastrana'}, {'entity_group': 'B-ORG', 'score': 0.8589080572128296, 'word': 'Farc'}] ``` I also ran another test to ensure that number 2 (separate entity at the last index) is working properly. I confirmed that it is working properly now. ``` nlp = pipeline('ner', grouped_entities=False) nlp("Enzo works at the the UN") [{'entity': 'I-PER', 'index': 1, 'score': 0.9968166351318359, 'word': 'En'}, {'entity': 'I-PER', 'index': 2, 'score': 0.9957635998725891, 'word': '##zo'}, {'entity': 'I-ORG', 'index': 7, 'score': 0.9986497163772583, 'word': 'UN'}] nlp2 = pipeline('ner', grouped_entities=True) nlp2("Enzo works at the the UN") [{'entity_group': 'I-PER', 'score': 0.9962901175022125, 'word': 'Enzo'}, {'entity_group': 'I-ORG', 'score': 0.9986497163772583, 'word': 'UN'}] ``` You can test these out yourself in this colab [notebook](https://colab.research.google.com/drive/1D0xK7MSOQcxOCAe8hpnpFVqdoKrmejSS?usp=sharing). cc @dav009 @mfuntowicz
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4987/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4987/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4987", "html_url": "https://github.com/huggingface/transformers/pull/4987", "diff_url": "https://github.com/huggingface/transformers/pull/4987.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4987.patch", "merged_at": 1594239498000 }
https://api.github.com/repos/huggingface/transformers/issues/4986
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4986/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4986/comments
https://api.github.com/repos/huggingface/transformers/issues/4986/events
https://github.com/huggingface/transformers/issues/4986
638343631
MDU6SXNzdWU2MzgzNDM2MzE=
4986
BertTokenizer: ValueError: Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.
{ "login": "mariusjohan", "id": 49961316, "node_id": "MDQ6VXNlcjQ5OTYxMzE2", "avatar_url": "https://avatars.githubusercontent.com/u/49961316?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariusjohan", "html_url": "https://github.com/mariusjohan", "followers_url": "https://api.github.com/users/mariusjohan/followers", "following_url": "https://api.github.com/users/mariusjohan/following{/other_user}", "gists_url": "https://api.github.com/users/mariusjohan/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariusjohan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariusjohan/subscriptions", "organizations_url": "https://api.github.com/users/mariusjohan/orgs", "repos_url": "https://api.github.com/users/mariusjohan/repos", "events_url": "https://api.github.com/users/mariusjohan/events{/privacy}", "received_events_url": "https://api.github.com/users/mariusjohan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The mistake is on me. I forgot to download the tokenizer😂", "I am getting the same error. What exactly do you mean by download the tokenizer? Doesn't it come with the transformers package?", "I think what he meant was that use used the class, and not the instance, to encode text. You should always initialize the class:\r\n\r\n```py\r\nfrom transformers import BertTokenizer\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-cased\")\r\n# or\r\ntokenizer = BertTokenizer(vocabfile)\r\n\r\n# now you can encode\r\ntext = 'A quick brown fox jumps over' # Just a dummy text\r\nmodel_inputs = tokenizer.encode_plus(text)\r\n```", "@LysandreJik, while I have you. I know this aint the right place to ask you, but.\n\nI’ve seen that you’re about to release the Electra modeling for question answering, and I’ve written a small script for training the electra discriminator for question answering, and I’m about to train the model. \nso Would it be useful for you if I trained the model, or are you already doing that?", "Hi @mariusjohan, we welcome all models here :) The [hub](https://huggingface.co/models) is a very easy way to share models. The way you're training it will surely be different to other trainings, so sharing it on the hub with details of how you trained it is always welcome!", "Ok, this is still not working for me. 
I am running the run_squad.py script and I keep getting the error.\r\n\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/multiprocessing/pool.py\", line 119, in worker\r\n result = (True, func(*args, **kwds))\r\n File \"/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/multiprocessing/pool.py\", line 44, in mapstar\r\n return list(map(*args))\r\n File \"/kriviv-10T/transformers/transformers/src/transformers/data/processors/squad.py\", line 142, in squad_convert_example_to_features\r\n return_token_type_ids=True,\r\n File \"/kriviv-10T/transformers/transformers/src/transformers/tokenization_utils_base.py\", line 1521, in encode_plus\r\n **kwargs,\r\n File \"/kriviv-10T/transformers/transformers/src/transformers/tokenization_utils.py\", line 356, in _encode_plus\r\n second_ids = get_input_ids(text_pair) if text_pair is not None else None\r\n File \"/kriviv-10T/transformers/transformers/src/transformers/tokenization_utils.py\", line 343, in get_input_ids\r\n f\"Input {text} is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.\"\r\nValueError: Input [] is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.", "The reason I got the error, was because I forgot to initialize the tokenize module, and therefore it thinks the self argument is the input_ids and then you’re not giving it the real input_ids argument. And ofc, the system was way complex than the example I gave, so maybe try to check how the tokenization module is giving. Maybe also check your inputs and so on if you haven’t already. Sadly I can first fix it in a few hours.", "@vkrishnamurthy11 Did it help?", "I'm still facing the same issue:\r\nValueError: Input [] is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.\r\nWhile trying to run run_squad.py. 
I'm trying to train and test it with:\r\nhttps://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json\r\nhttps://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json \r\n ", "Facing the same issue as Sarang when training on Squad using run_squad.py . Is this a known bug?", "Same issue here when running squad_convert_examples_to_features in my own code. ", "I don't know if it helps, but the reason was because I failed to use _**.from_pretrained**_ function. Maybe check for that. So maybe print out the **_self_** argument", "I resolved this issue by updating all packages I was using for training to the newest version. In my experience you need to have:\r\n1.9.0 - torch\r\n0.10.0 - torchtext\r\n4.11.3 - transformers\r\nor newer...\r\n\r\nPS: You can check the version you are currently using with:\r\nprint(torch.__version__)\r\nprint(torchtext.__version__)\r\nprint(transformers.__version__)", "> I resolved this issue by updating all packages I was using for training to the newest version. 
In my experience you need to have: 1.9.0 - torch 0.10.0 - torchtext 4.11.3 - transformers or newer...\r\n> \r\n> PS: You can check the version you are currently using with: print(torch.**version**) print(torchtext.**version**) print(transformers.**version**)\r\n\r\nimport torch\r\nimport torchtext\r\nimport transformers\r\nimport numpy as np\r\nimport os\r\nimport collections\r\n\r\nos.makedirs('./data', exist_ok=True)\r\ntrain_dataset, test_dataset = torchtext.datasets.AG_NEWS(root='./data')\r\nclasses = ['World', 'Sports', 'Business', 'Sci/Tech']\r\n\r\ntrain_dataset = list(train_dataset)\r\ntest_dataset = list(test_dataset)\r\n\r\nbert_model = 'bert-base-uncased'\r\n\r\ntokenizer = transformers.BertTokenizer.from_pretrained(bert_model)\r\n\r\nMAX_SEQ_LEN = 128\r\nPAD_INDEX = tokenizer.convert_tokens_to_ids(tokenizer.pad_token)\r\nUNK_INDEX = tokenizer.convert_tokens_to_ids(tokenizer.unk_token)\r\n\r\n\r\ndef pad_bert(b):\r\n # b is the list of tuples of length batch_size\r\n # - first element of a tuple = label,\r\n # - second = feature (text sequence)\r\n # build vectorized sequence\r\n v = [tokenizer.encode(x[1]) for x in b]\r\n # compute max length of a sequence in this minibatch\r\n l = max(map(len, v))\r\n return ( # tuple of two tensors - labels and features\r\n torch.LongTensor([t[0] for t in b]),\r\n torch.stack([torch.nn.functional.pad(torch.tensor(t), (0, l - len(t)), mode='constant', value=0) for t in v])\r\n )\r\n\r\n\r\ntrain_loader = torch.utils.data.DataLoader(train_dataset, batch_size=8, collate_fn=pad_bert, shuffle=True)\r\ntest_loader = torch.utils.data.DataLoader(test_dataset, batch_size=8, collate_fn=pad_bert)\r\n\r\nmodel = transformers.BertForSequenceClassification.from_pretrained(bert_model, num_labels=4)\r\n\r\noptimizer = torch.optim.Adam(model.parameters(), lr=2e-5)\r\n\r\nreport_freq = 50\r\niterations = 500 # make this larger to train for longer time!\r\n\r\nmodel.train()\r\n\r\ni, c = 0, 0\r\nacc_loss = 0\r\nacc_acc = 0\r\n\r\nfor 
labels, texts in train_loader:\r\n labels = labels - 1 # get labels in the range 0-3\r\n texts = texts\r\n loss, out = model(texts, labels=labels)[:2]\r\n labs = out.argmax(dim=1)\r\n acc = torch.mean((labs == labels).type(torch.float32))\r\n optimizer.zero_grad()\r\n loss.backward()\r\n optimizer.step()\r\n acc_loss += loss\r\n acc_acc += acc\r\n i += 1\r\n c += 1\r\n if i % report_freq == 0:\r\n print(f\"Loss = {acc_loss.item() / c}, Accuracy = {acc_acc.item() / c}\")\r\n c = 0\r\n acc_loss = 0\r\n acc_acc = 0\r\n iterations -= 1\r\n if not iterations:\r\n break\r\n\r\nmodel.eval()\r\niterations = 100\r\nacc = 0\r\ni = 0\r\nfor labels, texts in test_loader:\r\n labels = labels - 1\r\n texts = texts\r\n _, out = model(texts, labels=labels)[:2]\r\n labs = out.argmax(dim=1)\r\n acc += torch.mean((labs == labels).type(torch.float32))\r\n i += 1\r\n if i > iterations: break\r\n\r\nprint(f\"Final accuracy: {acc.item() / i}\")", "> Ok, this is still not working for me. I am running the run_squad.py script and I keep getting the error.\r\n> \r\n> Traceback (most recent call last): File \"/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/multiprocessing/pool.py\", line 119, in worker result = (True, func(*args, **kwds)) File \"/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/multiprocessing/pool.py\", line 44, in mapstar return list(map(*args)) File \"/kriviv-10T/transformers/transformers/src/transformers/data/processors/squad.py\", line 142, in squad_convert_example_to_features return_token_type_ids=True, File \"/kriviv-10T/transformers/transformers/src/transformers/tokenization_utils_base.py\", line 1521, in encode_plus **kwargs, File \"/kriviv-10T/transformers/transformers/src/transformers/tokenization_utils.py\", line 356, in _encode_plus second_ids = get_input_ids(text_pair) if text_pair is not None else None File \"/kriviv-10T/transformers/transformers/src/transformers/tokenization_utils.py\", line 343, in get_input_ids f\"Input {text} is not valid. 
Should be a string, a list/tuple of strings or a list/tuple of integers.\" ValueError: Input [] is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.\r\n\r\nCheck whether your input is empty. I got similar errors and found when passing empty string to tokenizer, it will get this error.", "hi, @mariusjohan, where this file? `vocabfile`", "IS THERE A RESOLVE??!\r\n", "## One Possible Reason for 'ValueError: Input is not valid' (caused by pandas)\r\n\r\nIn my case, I got this problem, because in the dataset, one sample row is 'None' (Type string).\r\nHowever when I load dataset, **the pandas will automatically transform the str 'None' into Nonetype 'nan',**\r\nand cause the ValueError when I doing tokenization. \r\n**Same problem will occur when you use huggingface load_dataset or dataset.from_csv function**, because they seems to actually use pandas for reading files.\r\n\r\nHere is my code meet problem:\r\n```\r\nfrom datasets import Dataset\r\nimport pandas as pd\r\ntestset = pd.read_csv('./rotate_tomato/test.tsv', sep='\\t') \r\ntestset = Dataset.from_pandas(testset)\r\nmodel = \"distilbert-base-uncased\"\r\ntokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')\r\n\r\ndef tokz(x):\r\n return tokenizer(x['Phrase'], padding=True, truncation=True, return_tensors=\"pt\")\r\n\r\ntestset_tokz = testset.map(tokz, batched=True)\r\n```\r\n\r\n`ValueError: Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.`\r\n\r\nI recommend to check if your dataset has special string like 'None', 'NA' etc....\r\n\r\n## My solution\r\n\r\n`testset = pd.read_csv('./rotate_tomato/test.tsv', sep='\\t', keep_default_na=False)`\r\n\r\nI simply set keep_default_na=False to prevent pandas to detect na values and transform then into Nonetype...\r\nAnd then everything goes well." ]
1,592
1,694
1,592
NONE
null
# 🐛 Bug ## Information Tokenizer I am using is BertTokenizer and I've also tried using AlbertTokenizer, but it does not have any effect. So I'm thinking that the bug is in the base tokenizer Language I am using the model on is English, but I don't believe that's the issue. The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Version: ```transformers==2.11.0``` 2. Run this code ```python from transformers import BertModel, BertTokenizer text = 'A quick brown fox jumps over' # Just a dummy text BertTokenizer.encode_plus( text.split(' '), None, add_special_tokens = True, max_length = 512) ``` 3. This should be the error ``` Traceback (most recent call last): File "classification.py", line 23, in <module> max_length = 512) File "D:\Programmering\Python\lib\site-packages\transformers\tokenization_utils.py", line 1576, in encode_plus first_ids = get_input_ids(text) File "D:\Programmering\Python\lib\site-packages\transformers\tokenization_utils.py", line 1556, in get_input_ids "Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers." ValueError: Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers. ``` And yes, I've tried just inputting a string, and I still got the same error. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior I want the encoder_plus function to return an encoded version of the input sequence. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.11.0 - Platform: Windows - Python version: 3.7.4 - PyTorch version (GPU?): 1.5.0+cpu - Tensorflow version (GPU?): (Not used) - Using GPU in script?: Nope - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4986/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4986/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4985
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4985/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4985/comments
https://api.github.com/repos/huggingface/transformers/issues/4985/events
https://github.com/huggingface/transformers/issues/4985
638,337,697
MDU6SXNzdWU2MzgzMzc2OTc=
4,985
Word Embedding input to GPT-2
{ "login": "jinga-lala", "id": 34444901, "node_id": "MDQ6VXNlcjM0NDQ0OTAx", "avatar_url": "https://avatars.githubusercontent.com/u/34444901?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jinga-lala", "html_url": "https://github.com/jinga-lala", "followers_url": "https://api.github.com/users/jinga-lala/followers", "following_url": "https://api.github.com/users/jinga-lala/following{/other_user}", "gists_url": "https://api.github.com/users/jinga-lala/gists{/gist_id}", "starred_url": "https://api.github.com/users/jinga-lala/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jinga-lala/subscriptions", "organizations_url": "https://api.github.com/users/jinga-lala/orgs", "repos_url": "https://api.github.com/users/jinga-lala/repos", "events_url": "https://api.github.com/users/jinga-lala/events{/privacy}", "received_events_url": "https://api.github.com/users/jinga-lala/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You can pass embeddings to the model by using the `input_embeds` keyword argument:\r\n```\r\nresult = model(input_embeds=...)\r\n```\r\nThey should be if shape `batch_size, sequence_length, hidden_size)`.", "Hi @sgugger \r\nI tried the following code snippet but it is giving an error\r\n<pre>\r\n from transformers import GPT2Tokenizer, GPT2LMHeadModel\r\n modelGPT = GPT2LMHeadModel.from_pretrained('gpt2')\r\n modelGPT(input_embeds=out[2][-1]) #where out[2][-1] is of shape [64, 60, 768] --> [batch_size, sequence_length, hidden_size]\r\n</pre>\r\n\r\nError:\r\n<pre>\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-81-40a402b51d0d> in <module>()\r\n----> 3 modelGPT(input_embeds=out[2][-1])\r\n\r\n/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)\r\n 548 result = self._slow_forward(*input, **kwargs)\r\n 549 else:\r\n--> 550 result = self.forward(*input, **kwargs)\r\n 551 for hook in self._forward_hooks.values():\r\n 552 hook_result = hook(self, input, result)\r\n\r\nTypeError: forward() got an unexpected keyword argument 'input_embeds'\r\n</pre>", "Sorry I made a typo, I meant `inputs_embeds` (with an s).", "Worked like a charm! Thanks" ]
1,592
1,592
1,592
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> Is there any way in which we can give custom word embeddings as input to GPT-2 instead of tokenized words?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4985/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4985/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4984
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4984/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4984/comments
https://api.github.com/repos/huggingface/transformers/issues/4984/events
https://github.com/huggingface/transformers/issues/4984
638,316,830
MDU6SXNzdWU2MzgzMTY4MzA=
4,984
FillMaskPipeline return word-piece
{ "login": "orena1", "id": 8983713, "node_id": "MDQ6VXNlcjg5ODM3MTM=", "avatar_url": "https://avatars.githubusercontent.com/u/8983713?v=4", "gravatar_id": "", "url": "https://api.github.com/users/orena1", "html_url": "https://github.com/orena1", "followers_url": "https://api.github.com/users/orena1/followers", "following_url": "https://api.github.com/users/orena1/following{/other_user}", "gists_url": "https://api.github.com/users/orena1/gists{/gist_id}", "starred_url": "https://api.github.com/users/orena1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/orena1/subscriptions", "organizations_url": "https://api.github.com/users/orena1/orgs", "repos_url": "https://api.github.com/users/orena1/repos", "events_url": "https://api.github.com/users/orena1/events{/privacy}", "received_events_url": "https://api.github.com/users/orena1/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I do not think generally there's a way of knowing that a given token is usually a sub-token vs. a complete \"word\"\r\n\r\nMaybe a heuristic could be to do some kind of greedy decoding, iteratively adding a second `<mask>` after the first filled one and checking if the output score is above a certain threshold/above the previous one.", "@julien-c @orena1 So knowing for one sub-token whether it is complete or not is on at the \"token\" level. What about a span text?\r\n\r\nThe closest I have found it https://github.com/huggingface/transformers/issues/3972", "Hi @Diego999, I actually did not find a way to test/train span text on BART, although the paper mention that they train the model using span-text.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,592
1,598
1,598
CONTRIBUTOR
null
# 🚀 . Feature request Not sure exactly if this is a bug/feature request/ or just me not understanding correctly :). I am trying to use the [FillMaskPipeline](https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines.py#L739) but as far as I understand the pipeline returns a single word-piece for a \<mask\>. But the mask could be of a word or even a text-spans (for the case of BART). Sample code: ```python from transformers import pipeline nlp =pipeline("fill-mask",model='bart-large') nlp(f'Their expression was divine; and as they glanced at me timidly but with parted' f' lips in great <mask>, I forgot all thoughts of their conversion in feelings ' f'that were far more earthly.') #missing word is bewilderment, this is from librispeech .. {'sequence': '<s> Their expression was divine; and as they glanced at me timidly but with parted lips in great bewild, I forgot all thoughts of their conversion in feelings that were far more earthly.</s>', 'score': 7.772801473038271e-05, 'token': 33304}, print(tokenizer.decode(33304)) ' bewild' ``` As can be seen above, one of the outputs is "bewild" which is the first word-piece in bewilderment: ```python [tokenizer.decode(i)+' ' for i in tokenizer.batch_encode_plus(['bewilderment'])['input_ids'][0]] ['<s> ', ' bewild ', 'er ', 'ment ', '</s> '] ``` ## Your contribution I assume we could go to the next output as see if it is a word-piece and if it is add it to the first token. not sure exactly if this is correct, specially when it seems that the size of the output is exactly as the number of input word-pieces. Thanks
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4984/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4984/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4983
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4983/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4983/comments
https://api.github.com/repos/huggingface/transformers/issues/4983/events
https://github.com/huggingface/transformers/issues/4983
638,312,233
MDU6SXNzdWU2MzgzMTIyMzM=
4,983
Getting very bad F1 Scores when training SQUAD v2.0 with robertadistil-base
{ "login": "manishiitg", "id": 1370315, "node_id": "MDQ6VXNlcjEzNzAzMTU=", "avatar_url": "https://avatars.githubusercontent.com/u/1370315?v=4", "gravatar_id": "", "url": "https://api.github.com/users/manishiitg", "html_url": "https://github.com/manishiitg", "followers_url": "https://api.github.com/users/manishiitg/followers", "following_url": "https://api.github.com/users/manishiitg/following{/other_user}", "gists_url": "https://api.github.com/users/manishiitg/gists{/gist_id}", "starred_url": "https://api.github.com/users/manishiitg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manishiitg/subscriptions", "organizations_url": "https://api.github.com/users/manishiitg/orgs", "repos_url": "https://api.github.com/users/manishiitg/repos", "events_url": "https://api.github.com/users/manishiitg/events{/privacy}", "received_events_url": "https://api.github.com/users/manishiitg/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,592
1,598
1,598
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> https://stackoverflow.com/questions/62370365/getting-very-bad-f1-scores-when-training-squad-v2-0-with-robertadistil-base Here is my notebook https://github.com/manishiitg/ML_Experiments/blob/master/squad_huggingface_experiment_with_Trainer_TPU.ipynb I am getting very bad f1 scores. Any help on what i am doing wrong?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4983/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4983/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4982
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4982/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4982/comments
https://api.github.com/repos/huggingface/transformers/issues/4982/events
https://github.com/huggingface/transformers/issues/4982
638,305,570
MDU6SXNzdWU2MzgzMDU1NzA=
4,982
Why run_language_modelling.py does not use segment embeddings or language embeddings?
{ "login": "HuihuiChyan", "id": 29159002, "node_id": "MDQ6VXNlcjI5MTU5MDAy", "avatar_url": "https://avatars.githubusercontent.com/u/29159002?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HuihuiChyan", "html_url": "https://github.com/HuihuiChyan", "followers_url": "https://api.github.com/users/HuihuiChyan/followers", "following_url": "https://api.github.com/users/HuihuiChyan/following{/other_user}", "gists_url": "https://api.github.com/users/HuihuiChyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/HuihuiChyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HuihuiChyan/subscriptions", "organizations_url": "https://api.github.com/users/HuihuiChyan/orgs", "repos_url": "https://api.github.com/users/HuihuiChyan/repos", "events_url": "https://api.github.com/users/HuihuiChyan/events{/privacy}", "received_events_url": "https://api.github.com/users/HuihuiChyan/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,592
1,598
1,598
NONE
null
I am trying to train Bert and XLM on my own data. But I found that it seems run_language_modelling.py only tokenizes input text into ids, but it doesn't create token_type_ids or langs for Bert or XLM. `batch_encoding = tokenizer.batch_encode_plus(lines, add_special_tokens=True, max_length=block_size)` `self.examples = batch_encoding["input_ids"]` As you can see from the codes, the examples only contain input_ids. I set breakpoints in the modeling_bert.py and modeling_xlm.py, and there is no token_type_ids or langs as a part of input to Bert or XLM. For the downstream tasks, Bert and XLM always use segment embeddings or language embeddings as a part of their input. Why don't we use them in the pretraining step? If we do not use segment embeddings or language embeddings during pretraining, but use them in fine-tuning, shouldn't that cause bias? Sorry if misunderstood something, but I am quite confused now.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4982/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4982/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4981
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4981/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4981/comments
https://api.github.com/repos/huggingface/transformers/issues/4981/events
https://github.com/huggingface/transformers/pull/4981
638,276,010
MDExOlB1bGxSZXF1ZXN0NDM0MDg5MTUz
4,981
Create README.md
{ "login": "manishiitg", "id": 1370315, "node_id": "MDQ6VXNlcjEzNzAzMTU=", "avatar_url": "https://avatars.githubusercontent.com/u/1370315?v=4", "gravatar_id": "", "url": "https://api.github.com/users/manishiitg", "html_url": "https://github.com/manishiitg", "followers_url": "https://api.github.com/users/manishiitg/followers", "following_url": "https://api.github.com/users/manishiitg/following{/other_user}", "gists_url": "https://api.github.com/users/manishiitg/gists{/gist_id}", "starred_url": "https://api.github.com/users/manishiitg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manishiitg/subscriptions", "organizations_url": "https://api.github.com/users/manishiitg/orgs", "repos_url": "https://api.github.com/users/manishiitg/repos", "events_url": "https://api.github.com/users/manishiitg/events{/privacy}", "received_events_url": "https://api.github.com/users/manishiitg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,592
1,592
1,592
NONE
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4981/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4981/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4981", "html_url": "https://github.com/huggingface/transformers/pull/4981", "diff_url": "https://github.com/huggingface/transformers/pull/4981.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4981.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4980
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4980/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4980/comments
https://api.github.com/repos/huggingface/transformers/issues/4980/events
https://github.com/huggingface/transformers/issues/4980
638,199,177
MDU6SXNzdWU2MzgxOTkxNzc=
4,980
keras
{ "login": "jeffreyjohnens", "id": 15865099, "node_id": "MDQ6VXNlcjE1ODY1MDk5", "avatar_url": "https://avatars.githubusercontent.com/u/15865099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jeffreyjohnens", "html_url": "https://github.com/jeffreyjohnens", "followers_url": "https://api.github.com/users/jeffreyjohnens/followers", "following_url": "https://api.github.com/users/jeffreyjohnens/following{/other_user}", "gists_url": "https://api.github.com/users/jeffreyjohnens/gists{/gist_id}", "starred_url": "https://api.github.com/users/jeffreyjohnens/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jeffreyjohnens/subscriptions", "organizations_url": "https://api.github.com/users/jeffreyjohnens/orgs", "repos_url": "https://api.github.com/users/jeffreyjohnens/repos", "events_url": "https://api.github.com/users/jeffreyjohnens/events{/privacy}", "received_events_url": "https://api.github.com/users/jeffreyjohnens/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,592
1,592
1,592
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4980/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4980/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4979
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4979/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4979/comments
https://api.github.com/repos/huggingface/transformers/issues/4979/events
https://github.com/huggingface/transformers/pull/4979
638,186,591
MDExOlB1bGxSZXF1ZXN0NDM0MDIzODgx
4,979
Add Code Coverage and Black badges to README
{ "login": "bharatr21", "id": 13381361, "node_id": "MDQ6VXNlcjEzMzgxMzYx", "avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bharatr21", "html_url": "https://github.com/bharatr21", "followers_url": "https://api.github.com/users/bharatr21/followers", "following_url": "https://api.github.com/users/bharatr21/following{/other_user}", "gists_url": "https://api.github.com/users/bharatr21/gists{/gist_id}", "starred_url": "https://api.github.com/users/bharatr21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bharatr21/subscriptions", "organizations_url": "https://api.github.com/users/bharatr21/orgs", "repos_url": "https://api.github.com/users/bharatr21/repos", "events_url": "https://api.github.com/users/bharatr21/events{/privacy}", "received_events_url": "https://api.github.com/users/bharatr21/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4979?src=pr&el=h1) Report\n> Merging [#4979](https://codecov.io/gh/huggingface/transformers/pull/4979?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/403d3098572ac308416653648456a940860da39e&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4979/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4979?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4979 +/- ##\n==========================================\n- Coverage 77.20% 77.20% -0.01% \n==========================================\n Files 128 128 \n Lines 21851 21851 \n==========================================\n- Hits 16870 16869 -1 \n- Misses 4981 4982 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4979?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4979/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4979?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4979?src=pr&el=footer). Last update [403d309...5be7204](https://codecov.io/gh/huggingface/transformers/pull/4979?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. 
Thank you for your contributions.\n" ]
1,592
1,598
1,598
CONTRIBUTOR
null
Add Black badge to README based on this: https://github.com/huggingface/transformers/blob/403d3098572ac308416653648456a940860da39e/.circleci/config.yml#L101
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4979/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4979/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4979", "html_url": "https://github.com/huggingface/transformers/pull/4979", "diff_url": "https://github.com/huggingface/transformers/pull/4979.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4979.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4978
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4978/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4978/comments
https://api.github.com/repos/huggingface/transformers/issues/4978/events
https://github.com/huggingface/transformers/pull/4978
638,155,038
MDExOlB1bGxSZXF1ZXN0NDM0MDAxMzQz
4,978
Output hidden states
{ "login": "drjosephliu", "id": 22230085, "node_id": "MDQ6VXNlcjIyMjMwMDg1", "avatar_url": "https://avatars.githubusercontent.com/u/22230085?v=4", "gravatar_id": "", "url": "https://api.github.com/users/drjosephliu", "html_url": "https://github.com/drjosephliu", "followers_url": "https://api.github.com/users/drjosephliu/followers", "following_url": "https://api.github.com/users/drjosephliu/following{/other_user}", "gists_url": "https://api.github.com/users/drjosephliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/drjosephliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/drjosephliu/subscriptions", "organizations_url": "https://api.github.com/users/drjosephliu/orgs", "repos_url": "https://api.github.com/users/drjosephliu/repos", "events_url": "https://api.github.com/users/drjosephliu/events{/privacy}", "received_events_url": "https://api.github.com/users/drjosephliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1834056761, "node_id": "MDU6TGFiZWwxODM0MDU2NzYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling", "name": "Core: Modeling", "color": "FF8446", "default": false, "description": "Internals of the library; Models." } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4978?src=pr&el=h1) Report\n> Merging [#4978](https://codecov.io/gh/huggingface/transformers/pull/4978?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/68e19f1c228c92d5d800533f558faff24b57127a&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `93.85%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4978/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4978?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4978 +/- ##\n==========================================\n+ Coverage 77.93% 77.94% +0.01% \n==========================================\n Files 137 137 \n Lines 23475 23511 +36 \n==========================================\n+ Hits 18295 18326 +31 \n- Misses 5180 5185 +5 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4978?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/4978/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tbWJ0LnB5) | `22.11% <ø> (ø)` | |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4978/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.78% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/4978/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `25.82% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4978/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `74.48% <ø> (ø)` | |\n| 
[src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4978/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `79.85% <64.28%> (-0.19%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/4978/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `93.20% <85.71%> (-0.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4978/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `80.43% <100.00%> (-0.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4978/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.23% <100.00%> (-0.02%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4978/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.21% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4978/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `99.14% <100.00%> (ø)` | |\n| ... and [24 more](https://codecov.io/gh/huggingface/transformers/pull/4978/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4978?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4978?src=pr&el=footer). Last update [68e19f1...ddaeb44](https://codecov.io/gh/huggingface/transformers/pull/4978?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "@drjosephliu - this is really great work! I added two additional tests for output hidden states that have to pass for all models. Can you fix the remaining models? :-) I think after that we are good to merge!", "This is great, looking forward to having this in master:)", "Just pushed the fixes. Fingers crossed for a hole in one here !", "Looks like one more merge conflict :(. Otherwise LGTM, epic contribution! Also patrick has left at least one unresolved comment w.r.t Pipfile", "I added the refactorisation for mobilebert, so the relevant tests should be passing now. However, there are a bunch of tokenisation tests which are now failing instead and I can't figure out why as I'm pretty certain the changes I made didn't affect the tokenisers whatsoever. FWIW, those tests were already failing on master, so I hope they're unrelated to the changes I've made. Anyways, let me know if there are any additional edits that need to be made.\r\n\r\nP.S. what a coincidence to bump into you here @sshleifer ! Glad to see you settling in here.", "Haha small world!\r\n\r\nI think there is a git issue causing it to appear that 188 files are changed in the PR.\r\n\r\nI don't think it's horribly damaging, (the LHS looks wrong, the RHS looks correct in the diff viewer), but if you have an easy way to resolve it that would be nice.\r\n", "Well crap, it seems like it's gotten worse. I'm out of ideas here, because I've `git fetch upstream` and `git merge upstream/master`, so it's telling me everything's up to date. It seems like it's comparing against an old version of master that's a couple commits ago. Perhaps I can try deleting my local master, creating and pulling a new master, merging output_hidden_states into master and then push and make a new PR. I'm wondering if that will work? Unless you have any other ideas.", "Ok, so with a bit of git ninja'ing, it's no longer showing ~200 files changed. 
Hoping this works now.", "@drjosephliu - Amazing work! Thank's a lot for this. Really helps the library to become more flexible :-) \r\n\r\nThe PR is good to merge for me.\r\n\r\nPinging @LysandreJik to verify since it's a big one. Should be merged today though I hope :-) ", "Awesome, no problem and great working with y'all ! " ]
1,592
1,592
1,592
CONTRIBUTOR
null
Attempts to close issue #3879 by refactoring all models to take in an extra argument `output_hidden_states` in the `forward()` method. @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4978/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4978/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4978", "html_url": "https://github.com/huggingface/transformers/pull/4978", "diff_url": "https://github.com/huggingface/transformers/pull/4978.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4978.patch", "merged_at": 1592835045000 }
https://api.github.com/repos/huggingface/transformers/issues/4977
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4977/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4977/comments
https://api.github.com/repos/huggingface/transformers/issues/4977/events
https://github.com/huggingface/transformers/pull/4977
638,142,142
MDExOlB1bGxSZXF1ZXN0NDMzOTkyMDQ1
4,977
[model card] model card for bart-large-finetuned-squadv1
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4977?src=pr&el=h1) Report\n> Merging [#4977](https://codecov.io/gh/huggingface/transformers/pull/4977?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ca5e1cdf8e314288bd0242a531815a6c75d8178e&el=desc) will **decrease** coverage by `0.70%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4977/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4977?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4977 +/- ##\n==========================================\n- Coverage 77.26% 76.56% -0.71% \n==========================================\n Files 128 128 \n Lines 21851 21851 \n==========================================\n- Hits 16884 16730 -154 \n- Misses 4967 5121 +154 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4977?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0.00%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.56% <0.00%> (-2.58%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.35% <0.00%> (-2.30%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `38.08% <0.00%> (-1.41%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `72.80% <0.00%> (-0.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4977?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4977?src=pr&el=footer). Last update [ca5e1cd...e9e1a0d](https://codecov.io/gh/huggingface/transformers/pull/4977?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "could you also add a metadata link to the dataset, as demonstrated in https://github.com/huggingface/transformers/commit/ca5e1cdf8e314288bd0242a531815a6c75d8178e? Thanks!", "> could you also add a metadata link to the dataset, as demonstrated in [ca5e1cd](https://github.com/huggingface/transformers/commit/ca5e1cdf8e314288bd0242a531815a6c75d8178e)? Thanks!\r\n\r\nSure", "Great, thanks @patil-suraj ", "with link to dataset: https://huggingface.co/valhalla/bart-large-finetuned-squadv1" ]
1,592
1,592
1,592
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4977/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4977/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4977", "html_url": "https://github.com/huggingface/transformers/pull/4977", "diff_url": "https://github.com/huggingface/transformers/pull/4977.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4977.patch", "merged_at": 1592213982000 }
https://api.github.com/repos/huggingface/transformers/issues/4976
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4976/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4976/comments
https://api.github.com/repos/huggingface/transformers/issues/4976/events
https://github.com/huggingface/transformers/pull/4976
638,138,200
MDExOlB1bGxSZXF1ZXN0NDMzOTg5MjM4
4,976
Fix parameter 'output_attentions' docstring
{ "login": "ZhuBaohe", "id": 35796307, "node_id": "MDQ6VXNlcjM1Nzk2MzA3", "avatar_url": "https://avatars.githubusercontent.com/u/35796307?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZhuBaohe", "html_url": "https://github.com/ZhuBaohe", "followers_url": "https://api.github.com/users/ZhuBaohe/followers", "following_url": "https://api.github.com/users/ZhuBaohe/following{/other_user}", "gists_url": "https://api.github.com/users/ZhuBaohe/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZhuBaohe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZhuBaohe/subscriptions", "organizations_url": "https://api.github.com/users/ZhuBaohe/orgs", "repos_url": "https://api.github.com/users/ZhuBaohe/repos", "events_url": "https://api.github.com/users/ZhuBaohe/events{/privacy}", "received_events_url": "https://api.github.com/users/ZhuBaohe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4976?src=pr&el=h1) Report\n> Merging [#4976](https://codecov.io/gh/huggingface/transformers/pull/4976?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ca5e1cdf8e314288bd0242a531815a6c75d8178e&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4976/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4976?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4976 +/- ##\n=======================================\n Coverage 77.26% 77.26% \n=======================================\n Files 128 128 \n Lines 21851 21851 \n=======================================\n Hits 16884 16884 \n Misses 4967 4967 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4976?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `80.48% <ø> (ø)` | |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.26% <ø> (ø)` | |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.21% <ø> (ø)` | |\n| [src/transformers/modeling\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/4976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `99.14% <ø> (ø)` | |\n| 
[src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.50% <ø> (ø)` | |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `78.16% <ø> (ø)` | |\n| [src/transformers/modeling\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/4976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `84.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `85.43% <ø> (ø)` | |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.02% <ø> (ø)` | |\n| ... and [26 more](https://codecov.io/gh/huggingface/transformers/pull/4976/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4976?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4976?src=pr&el=footer). Last update [ca5e1cd...a921f63](https://codecov.io/gh/huggingface/transformers/pull/4976?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Great! Thanks a lot @ZhuBaohe !" ]
1,592
1,592
1,592
CONTRIBUTOR
null
This PR fixes parameter 'output_attentions' docstring as follow: 1. Remove duplicate parameter docstring in class OpenAIGPTModel. 2. Fix docstring due to web page display problems.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4976/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4976/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4976", "html_url": "https://github.com/huggingface/transformers/pull/4976", "diff_url": "https://github.com/huggingface/transformers/pull/4976.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4976.patch", "merged_at": 1592163374000 }
https://api.github.com/repos/huggingface/transformers/issues/4975
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4975/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4975/comments
https://api.github.com/repos/huggingface/transformers/issues/4975/events
https://github.com/huggingface/transformers/pull/4975
638,117,028
MDExOlB1bGxSZXF1ZXN0NDMzOTc0MTMx
4,975
Create README.md
{ "login": "ipuneetrathore", "id": 10783869, "node_id": "MDQ6VXNlcjEwNzgzODY5", "avatar_url": "https://avatars.githubusercontent.com/u/10783869?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ipuneetrathore", "html_url": "https://github.com/ipuneetrathore", "followers_url": "https://api.github.com/users/ipuneetrathore/followers", "following_url": "https://api.github.com/users/ipuneetrathore/following{/other_user}", "gists_url": "https://api.github.com/users/ipuneetrathore/gists{/gist_id}", "starred_url": "https://api.github.com/users/ipuneetrathore/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ipuneetrathore/subscriptions", "organizations_url": "https://api.github.com/users/ipuneetrathore/orgs", "repos_url": "https://api.github.com/users/ipuneetrathore/repos", "events_url": "https://api.github.com/users/ipuneetrathore/events{/privacy}", "received_events_url": "https://api.github.com/users/ipuneetrathore/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4975?src=pr&el=h1) Report\n> Merging [#4975](https://codecov.io/gh/huggingface/transformers/pull/4975?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ca5e1cdf8e314288bd0242a531815a6c75d8178e&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4975/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4975?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4975 +/- ##\n==========================================\n- Coverage 77.26% 77.26% -0.01% \n==========================================\n Files 128 128 \n Lines 21851 21851 \n==========================================\n- Hits 16884 16883 -1 \n- Misses 4967 4968 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4975?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4975?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4975?src=pr&el=footer). Last update [ca5e1cd...03bddda](https://codecov.io/gh/huggingface/transformers/pull/4975?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,592
1,592
1,592
CONTRIBUTOR
null
Adding readme file for the model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4975/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4975/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4975", "html_url": "https://github.com/huggingface/transformers/pull/4975", "diff_url": "https://github.com/huggingface/transformers/pull/4975.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4975.patch", "merged_at": 1592225031000 }
https://api.github.com/repos/huggingface/transformers/issues/4974
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4974/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4974/comments
https://api.github.com/repos/huggingface/transformers/issues/4974/events
https://github.com/huggingface/transformers/pull/4974
638,116,201
MDExOlB1bGxSZXF1ZXN0NDMzOTczNTI4
4,974
Patch 4
{ "login": "ipuneetrathore", "id": 10783869, "node_id": "MDQ6VXNlcjEwNzgzODY5", "avatar_url": "https://avatars.githubusercontent.com/u/10783869?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ipuneetrathore", "html_url": "https://github.com/ipuneetrathore", "followers_url": "https://api.github.com/users/ipuneetrathore/followers", "following_url": "https://api.github.com/users/ipuneetrathore/following{/other_user}", "gists_url": "https://api.github.com/users/ipuneetrathore/gists{/gist_id}", "starred_url": "https://api.github.com/users/ipuneetrathore/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ipuneetrathore/subscriptions", "organizations_url": "https://api.github.com/users/ipuneetrathore/orgs", "repos_url": "https://api.github.com/users/ipuneetrathore/repos", "events_url": "https://api.github.com/users/ipuneetrathore/events{/privacy}", "received_events_url": "https://api.github.com/users/ipuneetrathore/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Merging readme file. " ]
1,592
1,592
1,592
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4974/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4974/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4974", "html_url": "https://github.com/huggingface/transformers/pull/4974", "diff_url": "https://github.com/huggingface/transformers/pull/4974.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4974.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4973
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4973/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4973/comments
https://api.github.com/repos/huggingface/transformers/issues/4973/events
https://github.com/huggingface/transformers/issues/4973
638,109,341
MDU6SXNzdWU2MzgxMDkzNDE=
4,973
How to make my own dataset to use BART summarization?
{ "login": "laetokang", "id": 49485939, "node_id": "MDQ6VXNlcjQ5NDg1OTM5", "avatar_url": "https://avatars.githubusercontent.com/u/49485939?v=4", "gravatar_id": "", "url": "https://api.github.com/users/laetokang", "html_url": "https://github.com/laetokang", "followers_url": "https://api.github.com/users/laetokang/followers", "following_url": "https://api.github.com/users/laetokang/following{/other_user}", "gists_url": "https://api.github.com/users/laetokang/gists{/gist_id}", "starred_url": "https://api.github.com/users/laetokang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/laetokang/subscriptions", "organizations_url": "https://api.github.com/users/laetokang/orgs", "repos_url": "https://api.github.com/users/laetokang/repos", "events_url": "https://api.github.com/users/laetokang/events{/privacy}", "received_events_url": "https://api.github.com/users/laetokang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,592
1,592
1,592
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> Hello I'm trying to use BART summarization model. I have a dataset that take the form of dataframe which has two columns 'document' and 'summary'. Q1. I read this Readme.md. > line.this should make a directory called cnn_dm/ with files like test.source. To use your own data, copy that files format. Each article to be summarized is on its own I don't understand this sentences well. So I just saw the cnn datasets(train.source, train.target, test.source, test.target). **How can i distinguish between each documents?** Q2. **How can i change my own datasets into the cnn files format?** <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4973/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4973/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4972
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4972/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4972/comments
https://api.github.com/repos/huggingface/transformers/issues/4972/events
https://github.com/huggingface/transformers/issues/4972
638,108,893
MDU6SXNzdWU2MzgxMDg4OTM=
4,972
Run run_tf_glue.py has bugs
{ "login": "ltang666", "id": 3445470, "node_id": "MDQ6VXNlcjM0NDU0NzA=", "avatar_url": "https://avatars.githubusercontent.com/u/3445470?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ltang666", "html_url": "https://github.com/ltang666", "followers_url": "https://api.github.com/users/ltang666/followers", "following_url": "https://api.github.com/users/ltang666/following{/other_user}", "gists_url": "https://api.github.com/users/ltang666/gists{/gist_id}", "starred_url": "https://api.github.com/users/ltang666/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ltang666/subscriptions", "organizations_url": "https://api.github.com/users/ltang666/orgs", "repos_url": "https://api.github.com/users/ltang666/repos", "events_url": "https://api.github.com/users/ltang666/events{/privacy}", "received_events_url": "https://api.github.com/users/ltang666/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,592
1,593
1,593
NONE
null
do as https://github.com/huggingface/transformers/tree/master/examples/text-classification suggests, but has following bugs showing **/home/admin/tensorflow_datasets/glue/cola/1.0.0/glue-train.tfrecord-00000-of-00001; No such file or directory** my dataset_info.json copy from https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/testing/metadata/glue/cola/1.0.0/dataset_info.json `06/13/2020 14:21:11 - INFO - absl - Field info.citation from disk and from code do not match. Keeping the one from code. 06/13/2020 14:21:11 - INFO - absl - Field info.location from disk and from code do not match. Keeping the one from code. 06/13/2020 14:21:11 - INFO - absl - Reusing dataset glue (/home/admin/tensorflow_datasets/glue/cola/1.0.0) 06/13/2020 14:21:11 - INFO - absl - Constructing tf.data.Dataset for split train, from /home/admin/tensorflow_datasets/glue/cola/1.0.0 dddddddddddddd: <PrefetchDataset shapes: {idx: (), label: (), sentence: ()}, types: {idx: tf.int32, label: tf.int64, sentence: tf.string}> 2020-06-13 14:21:12.132176: W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at iterator_ops.cc:929 : Not found: /home/admin/tensorflow_datasets/glue/cola/1.0.0/glue-train.tfrecord-00000-of-00001; No such file or directory Traceback (most recent call last): File "./examples/text-classification/run_tf_glue.py", line 229, in <module> main() File "./examples/text-classification/run_tf_glue.py", line 175, in main if training_args.do_train File "./examples/text-classification/run_tf_glue.py", line 55, in get_tfds return glue_convert_examples_to_features(ds, tokenizer, max_seq_length, task_name) File "/export/sdb/test/tools/anaconda3/envs/tf2env/lib/python3.6/site-packages/transformers/data/processors/glue.py", line 62, in glue_convert_examples_to_features return _tf_glue_convert_examples_to_features(examples, tokenizer, max_length=max_length, task=task) File 
"/export/sdb/test/tools/anaconda3/envs/tf2env/lib/python3.6/site-packages/transformers/data/processors/glue.py", line 79, in _tf_glue_convert_examples_to_features examples = [processor.tfds_map(processor.get_example_from_tensor_dict(example)) for example in examples] File "/export/sdb/test/tools/anaconda3/envs/tf2env/lib/python3.6/site-packages/transformers/data/processors/glue.py", line 79, in <listcomp> examples = [processor.tfds_map(processor.get_example_from_tensor_dict(example)) for example in examples] File "/export/sdb/test/tools/anaconda3/envs/tf2env/lib/python3.6/site-packages/tensorflow_core/python/data/ops/iterator_ops.py", line 622, in __next__ return self.next() File "/export/sdb/test/tools/anaconda3/envs/tf2env/lib/python3.6/site-packages/tensorflow_core/python/data/ops/iterator_ops.py", line 666, in next return self._next_internal() File "/export/sdb/test/tools/anaconda3/envs/tf2env/lib/python3.6/site-packages/tensorflow_core/python/data/ops/iterator_ops.py", line 651, in _next_internal output_shapes=self._flat_output_shapes) File "/export/sdb/test/tools/anaconda3/envs/tf2env/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_dataset_ops.py", line 2673, in iterator_get_next_sync _six.raise_from(_core._status_to_exception(e.code, message), None) File "<string>", line 3, in raise_from tensorflow.python.framework.errors_impl.NotFoundError: /home/admin/tensorflow_datasets/glue/cola/1.0.0/glue-train.tfrecord-00000-of-00001; No such file or directory [Op:IteratorGetNextSync] run.sh: line 15: 15230 Segmentation fault (core dumped) CUDA_VISIBLE_DEVICES=2 python ./examples/text-classification/run_tf_glue.py --model_name_or_path bert-base-uncased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 12 --per_device_eval_batch_size=8 --per_device_train_batch_size=8 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /home/admin/test/transformers/tmp/$TASK_NAME/ `
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4972/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4972/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4971
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4971/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4971/comments
https://api.github.com/repos/huggingface/transformers/issues/4971/events
https://github.com/huggingface/transformers/issues/4971
638,093,892
MDU6SXNzdWU2MzgwOTM4OTI=
4,971
huggingface distillbert classification using multiprocessing
{ "login": "avinash-indix", "id": 28525421, "node_id": "MDQ6VXNlcjI4NTI1NDIx", "avatar_url": "https://avatars.githubusercontent.com/u/28525421?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avinash-indix", "html_url": "https://github.com/avinash-indix", "followers_url": "https://api.github.com/users/avinash-indix/followers", "following_url": "https://api.github.com/users/avinash-indix/following{/other_user}", "gists_url": "https://api.github.com/users/avinash-indix/gists{/gist_id}", "starred_url": "https://api.github.com/users/avinash-indix/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avinash-indix/subscriptions", "organizations_url": "https://api.github.com/users/avinash-indix/orgs", "repos_url": "https://api.github.com/users/avinash-indix/repos", "events_url": "https://api.github.com/users/avinash-indix/events{/privacy}", "received_events_url": "https://api.github.com/users/avinash-indix/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "hi, did you find a solution to this problem?" ]
1,592
1,595
1,592
NONE
null
I am trying to use torch multiprocessing to parallelize the predictions from two separate huggingface distillbert classification models. It seems to be deadlocked at the prediction step. I am using python 3.6.5, torch 1.5.0 and huggingface transformers version 2.11.0. The output from running the code is ``` Tree enc done Begin tree prediction<------(Comment: Both begin tree End tree predictions<------- and end tree predictions) 0.03125429153442383 Dn prediction Dn enc done Begin dn predictions<------(Comment: Both begin dn End dn predictions<------- and end dn predictions) 0.029727697372436523 ----------Done sequential predictions------------- --------Start Parallel predictions-------------- Tree prediction Tree enc done Begin tree prediction. <------(Comment: Process is deadlocked after this) Dn prediction Dn enc done Begin dn predictions. <-------(Comment: Process is deadlocked after this) ``` and the code is ``` def predict(sentences =[], tokenizer=tokenizer,models=(tree_model,dn_model,None)): MAX_SENTENCE_LENGTH = 16 start = time.time() input_ids = [] attention_masks = [] predictions = [] tree_model = models[0] dn_model = models[1] if models[0]: print("Tree prediction") if models[1]: print("Dn prediction") for sent in sentences: encoded_dict = tokenizer.encode_plus( sent, add_special_tokens = True, max_length = MAX_SENTENCE_LENGTH, pad_to_max_length = True, return_attention_mask = True, return_tensors = 'pt', ) # Add the encoded sentence to the list. input_ids.append(encoded_dict['input_ids']) # And its attention mask (simply differentiates padding from non-padding). attention_masks.append(encoded_dict['attention_mask']) if tree_model: print("Tree enc done") if dn_model: print("Dn enc done") # Convert the lists into tensors. 
new_input_ids = torch.cat(input_ids, dim=0) new_attention_masks = torch.cat(attention_masks, dim=0) with torch.no_grad(): # Forward pass, calculate logit predictions if tree_model: print("Begin tree prediction") outputs = tree_model(new_input_ids, attention_mask=new_attention_masks) print("End tree predictions") else: print("Begin dn predictions") outputs = dn_model(new_input_ids, attention_mask=new_attention_masks) print("End dn predictions") logits = outputs[0] logits = logits.detach().cpu() print(time.time()-start) predictions = logits return predictions def get_tree_prediction(sentence, tokenizer=tokenizer,models=(tree_model,dn_model, None)): return predict(sentences =[sentence], tokenizer=tokenizer,models=models) def get_dn_prediction(sentence, tokenizer=tokenizer,models=(tree_model,dn_model, None)): return predict(sentences =[sentence], tokenizer=tokenizer,models=models) if __name__ == '__main__': sentence = "hello world" processes = [] get_tree_prediction(sentence, tokenizer, (tree_model,None,None)) get_dn_prediction(sentence, tokenizer, (None,dn_model,None)) print("----------Done sequential predictions-------------") print('\n--------Start Parallel predictions--------------') tr_p = mp.Process(target=get_tree_prediction, args=(sentence, tokenizer, (tree_model,None,None))) tr_p.start() processes.append(tr_p) dn_p = mp.Process(target=get_dn_prediction, args=(sentence, tokenizer, (None,dn_model,None))) dn_p.start() processes.append(dn_p) for p in processes: p.join() ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4971/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4971/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4970
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4970/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4970/comments
https://api.github.com/repos/huggingface/transformers/issues/4970/events
https://github.com/huggingface/transformers/pull/4970
638,092,766
MDExOlB1bGxSZXF1ZXN0NDMzOTU2MzY0
4,970
Handle unexpected weights in checkpoint loading (tf to pytorch)
{ "login": "leogao2", "id": 54557097, "node_id": "MDQ6VXNlcjU0NTU3MDk3", "avatar_url": "https://avatars.githubusercontent.com/u/54557097?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leogao2", "html_url": "https://github.com/leogao2", "followers_url": "https://api.github.com/users/leogao2/followers", "following_url": "https://api.github.com/users/leogao2/following{/other_user}", "gists_url": "https://api.github.com/users/leogao2/gists{/gist_id}", "starred_url": "https://api.github.com/users/leogao2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leogao2/subscriptions", "organizations_url": "https://api.github.com/users/leogao2/orgs", "repos_url": "https://api.github.com/users/leogao2/repos", "events_url": "https://api.github.com/users/leogao2/events{/privacy}", "received_events_url": "https://api.github.com/users/leogao2/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,592
1,599
1,599
CONTRIBUTOR
null
If the checkpoint has additional weights, then the current code will fail to load instead of ignoring those weights.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4970/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4970/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4970", "html_url": "https://github.com/huggingface/transformers/pull/4970", "diff_url": "https://github.com/huggingface/transformers/pull/4970.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4970.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4969
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4969/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4969/comments
https://api.github.com/repos/huggingface/transformers/issues/4969/events
https://github.com/huggingface/transformers/issues/4969
638,050,018
MDU6SXNzdWU2MzgwNTAwMTg=
4,969
Request: pretrained distilgpt2-medium, distilgpt2-large models
{ "login": "joeyism", "id": 7503144, "node_id": "MDQ6VXNlcjc1MDMxNDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/7503144?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeyism", "html_url": "https://github.com/joeyism", "followers_url": "https://api.github.com/users/joeyism/followers", "following_url": "https://api.github.com/users/joeyism/following{/other_user}", "gists_url": "https://api.github.com/users/joeyism/gists{/gist_id}", "starred_url": "https://api.github.com/users/joeyism/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joeyism/subscriptions", "organizations_url": "https://api.github.com/users/joeyism/orgs", "repos_url": "https://api.github.com/users/joeyism/repos", "events_url": "https://api.github.com/users/joeyism/events{/privacy}", "received_events_url": "https://api.github.com/users/joeyism/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I'd also be interested in this. The current distilgpt2 is great for use-cases that need cheap/fast compute, but distilled versions of the larger gpt2 models (medium, large, xl) would also be super useful. For example, I am able to fit up to gpt2-large on my GPU, but I'm unable to fit gpt2-xl, which means I can't use it. If there was a distilled version of gpt2-xl which was smaller, that might make it usable for more people.\r\n\r\nAre there any plans to distill any larger versions of gpt2?\r\n\r\nThanks!", "Yes we can probably work on that.\r\nThere is a bit of work + exploration to do: it is possible that we'll have to use model parallelism tricks to be able to train it in a reasonable time (I haven't checked yet).\r\nApplying the distillation to gpt2-xl the way we did for distilgpt2 (same ratios) would still result in a model that is bigger than gpt2-medium (24L, 1600 hidden dim). Would that fit your use-case?\r\n\r\n(sorry for the delayed answer, I don't usually check issues without being pinged/tagged).", "> Applying the distillation to gpt2-xl the way we did for distilgpt2 (same ratios) would still result in a model that is bigger than gpt2-medium (24L, 1600 hidden dim). Would that fit your use-case?\r\n\r\nYes, if we could squish the performance of gpt2-xl into something sized between gpt2-medium and gpt2-large, that would be really useful!", "> Yes we can probably work on that.\r\n> There is a bit of work + exploration to do: it is possible that we'll have to use model parallelism tricks to be able to train it in a reasonable time (I haven't checked yet).\r\n> Applying the distillation to gpt2-xl the way we did for distilgpt2 (same ratios) would still result in a model that is bigger than gpt2-medium (24L, 1600 hidden dim). 
Would that fit your use-case?\r\n> \r\n> (sorry for the delayed answer, I don't usually check issues without being pinged/tagged).\r\n\r\nEven a distilgpt2-large would work for my use case", "I am also interested in a distilled version of the larger models. For our use-case, this would go a long way to improving cost/performance/feasibility.\r\n", "Bumping this - any word on availability of the medium/large distilled models ?", "> Bumping this - any word on availability of the medium/large distilled models ?\r\n\r\nI am currently working on it! :)\r\n", "any news on this?", "Any news on this ? 😊", "I would be extremely interested in having GPT2-XL distilled to the size of GPT2-L or smaller. Consumer-grade GPUs currently top out at around 8GB VRAM, which is enough to run inference using GPT2-L but is not enough for GPT2-XL. Unless you can find a beefier GPU than that, it will only become possible to efficiently run GPT2-XL on a desktop PC when someone trains a distilled model." ]
1,592
1,652
1,598
NONE
null
# Plans for distilgpt2-medium and distilgpt2-large ## Motivation While distilgpt2 is useful, I was wondering if there are any plans to create a distilgpt2-medium and distilgpt2-large. I'm also wondering how the result of distilgpt2-medium compare to gpt2, and distilgpt2-large compare to gpt2-medium, in size and performance. Maybe it's not even worth it to have those pretrained, if distilgpt2-medium is larger than gpt2 and perform worse.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4969/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4969/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4968
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4968/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4968/comments
https://api.github.com/repos/huggingface/transformers/issues/4968/events
https://github.com/huggingface/transformers/pull/4968
638,045,517
MDExOlB1bGxSZXF1ZXN0NDMzOTE5MzY4
4,968
Eli5 examples
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4968?src=pr&el=h1) Report\n> Merging [#4968](https://codecov.io/gh/huggingface/transformers/pull/4968?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/439aa1d6e9c953069f75fc23c737221d0df2c977&el=desc) will **increase** coverage by `0.88%`.\n> The diff coverage is `48.36%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4968/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4968?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4968 +/- ##\n==========================================\n+ Coverage 76.45% 77.34% +0.88% \n==========================================\n Files 130 133 +3 \n Lines 22024 22146 +122 \n==========================================\n+ Hits 16839 17128 +289 \n+ Misses 5185 5018 -167 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4968?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.33% <ø> (ø)` | |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.12% <ø> (ø)` | |\n| [src/transformers/modeling\\_retribert.py](https://codecov.io/gh/huggingface/transformers/pull/4968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZXRyaWJlcnQucHk=) | `34.24% <34.24%> (ø)` | |\n| [src/transformers/configuration\\_retribert.py](https://codecov.io/gh/huggingface/transformers/pull/4968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JldHJpYmVydC5weQ==) | `34.78% <34.78%> (ø)` | |\n| 
[src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.16% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.02% <100.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.76% <100.00%> (+0.17%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.67% <100.00%> (+0.05%)` | :arrow_up: |\n| [src/transformers/tokenization\\_retribert.py](https://codecov.io/gh/huggingface/transformers/pull/4968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmV0cmliZXJ0LnB5) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.00% <0.00%> (-0.41%)` | :arrow_down: |\n| ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/4968/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4968?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4968?src=pr&el=footer). Last update [439aa1d...f05664d](https://codecov.io/gh/huggingface/transformers/pull/4968?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Finally managed to add the doc after a bit of a rebasing hell :) \r\n\r\nWill merge tomorrow morning if there aren't any further comments.", "@yjernite Tried to use your bart model but I cant load the decoder.\r\nThere are only pytorch model and config.json uploaded to the model-hub", "> There are only pytorch model and config.json uploaded to the model-hub\r\n\r\nHow did you load the model, could you add some minimum reproduction code?\r\nAlso, this might be better as an issue :) \r\n" ]
1,592
1,592
1,592
MEMBER
null
This PR adds Explain Like I'm Five scripts and models to Transformers. The `examples/eli5` folder contains training code for the dense retriever and to fine-tune a BART model, the jupyter notebook for the [blog post](https://yjernite.github.io/lfqa.html), and the code for the live demo. The RetriBert model implements the dense passage retriever. It's basically a wrapper for two Bert models and projection matrices, but it does gradient checkpointing in a way that is very different from [a concurrent PR](https://github.com/huggingface/transformers/pull/4659) and I thought it would be easier to write its own class for now and see if we can merge later. The Bart files are only modified to add a reference to the ELI5 fine-tuned model on the model repo.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4968/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4968/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4968", "html_url": "https://github.com/huggingface/transformers/pull/4968", "diff_url": "https://github.com/huggingface/transformers/pull/4968.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4968.patch", "merged_at": 1592339819000 }
https://api.github.com/repos/huggingface/transformers/issues/4967
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4967/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4967/comments
https://api.github.com/repos/huggingface/transformers/issues/4967/events
https://github.com/huggingface/transformers/issues/4967
638,043,727
MDU6SXNzdWU2MzgwNDM3Mjc=
4,967
Add Linformer model
{ "login": "AaronFriel", "id": 788800, "node_id": "MDQ6VXNlcjc4ODgwMA==", "avatar_url": "https://avatars.githubusercontent.com/u/788800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AaronFriel", "html_url": "https://github.com/AaronFriel", "followers_url": "https://api.github.com/users/AaronFriel/followers", "following_url": "https://api.github.com/users/AaronFriel/following{/other_user}", "gists_url": "https://api.github.com/users/AaronFriel/gists{/gist_id}", "starred_url": "https://api.github.com/users/AaronFriel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AaronFriel/subscriptions", "organizations_url": "https://api.github.com/users/AaronFriel/orgs", "repos_url": "https://api.github.com/users/AaronFriel/repos", "events_url": "https://api.github.com/users/AaronFriel/events{/privacy}", "received_events_url": "https://api.github.com/users/AaronFriel/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "Here is an pytorch implementation\r\nhttps://github.com/tatp22/linformer-pytorch", "Just another implementation by the authors\r\nhttps://github.com/facebookresearch/pytext/pull/1407\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Any Tensorflow implementation?" ]
1,592
1,606
1,601
NONE
null
# 🌟 New model addition ## Model description ### Linformer: Self-Attention with Linear Complexity Paper published June 9th on ArXiv: https://arxiv.org/abs/2006.04768 Large transformer models have shown extraordinary success in achieving state-of-the-art results in many natural language processing applications. However, training and deploying these models can be prohibitively costly for long sequences, as the standard self-attention mechanism of the Transformer uses O(n²) time and space with respect to sequence length. In this paper, we demonstrate that the self-attention mechanism can be approximated by a low-rank matrix. We further exploit this finding to propose a new self-attention mechanism, which reduces the overall self-attention complexity from O(n²) to O(n) in both time and space. The resulting linear transformer, the **Linformer**, performs on par with standard Transformer models, while being much more memory- and time-efficient. ## Open source status * [ ] the model implementation is available: (give details) * [ ] the model weights are available: (give details) * [x] who are the authors: Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4967/reactions", "total_count": 56, "+1": 5, "-1": 0, "laugh": 0, "hooray": 16, "confused": 0, "heart": 0, "rocket": 24, "eyes": 11 }
https://api.github.com/repos/huggingface/transformers/issues/4967/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4966
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4966/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4966/comments
https://api.github.com/repos/huggingface/transformers/issues/4966/events
https://github.com/huggingface/transformers/issues/4966
638,012,658
MDU6SXNzdWU2MzgwMTI2NTg=
4,966
Spanbert TACRED model not found, despite model card
{ "login": "michaelroyzen", "id": 45830328, "node_id": "MDQ6VXNlcjQ1ODMwMzI4", "avatar_url": "https://avatars.githubusercontent.com/u/45830328?v=4", "gravatar_id": "", "url": "https://api.github.com/users/michaelroyzen", "html_url": "https://github.com/michaelroyzen", "followers_url": "https://api.github.com/users/michaelroyzen/followers", "following_url": "https://api.github.com/users/michaelroyzen/following{/other_user}", "gists_url": "https://api.github.com/users/michaelroyzen/gists{/gist_id}", "starred_url": "https://api.github.com/users/michaelroyzen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/michaelroyzen/subscriptions", "organizations_url": "https://api.github.com/users/michaelroyzen/orgs", "repos_url": "https://api.github.com/users/michaelroyzen/repos", "events_url": "https://api.github.com/users/michaelroyzen/events{/privacy}", "received_events_url": "https://api.github.com/users/michaelroyzen/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Vocab file for that model is indeed missig, /cc @mrm8488 \r\n\r\nBut in the meantime I think you can use:\r\n\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(\"SpanBERT/spanbert-base-cased\")\r\n```", "I will upload it ASAP. Thank you for letting me know!", "Tokenizer files have been uploaded!!! @michaelroyzen ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,598
1,598
NONE
null
I am trying to load this model: https://huggingface.co/mrm8488/spanbert-large-finetuned-tacred in Transformers 2.11.0. My exact code is: ` tokenizer = AutoTokenizer.from_pretrained("mrm8488/spanbert-large-finetuned-tacred") model = AutoModel.from_pretrained("mrm8488/spanbert-large-finetuned-tacred")`. However, I get a 'not found' error: `OSError: Model name 'mrm8488/spanbert-base-finetuned-tacred' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'mrm8488/spanbert-base-finetuned-tacred' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url. `. Guidance would be appreciated. Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4966/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4966/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4965
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4965/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4965/comments
https://api.github.com/repos/huggingface/transformers/issues/4965/events
https://github.com/huggingface/transformers/issues/4965
638,010,501
MDU6SXNzdWU2MzgwMTA1MDE=
4,965
How to Paraphrase with GPT2?
{ "login": "BigSalmon2", "id": 61605789, "node_id": "MDQ6VXNlcjYxNjA1Nzg5", "avatar_url": "https://avatars.githubusercontent.com/u/61605789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BigSalmon2", "html_url": "https://github.com/BigSalmon2", "followers_url": "https://api.github.com/users/BigSalmon2/followers", "following_url": "https://api.github.com/users/BigSalmon2/following{/other_user}", "gists_url": "https://api.github.com/users/BigSalmon2/gists{/gist_id}", "starred_url": "https://api.github.com/users/BigSalmon2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BigSalmon2/subscriptions", "organizations_url": "https://api.github.com/users/BigSalmon2/orgs", "repos_url": "https://api.github.com/users/BigSalmon2/repos", "events_url": "https://api.github.com/users/BigSalmon2/events{/privacy}", "received_events_url": "https://api.github.com/users/BigSalmon2/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "You should rather use a seq2seq model for paraphrasing like T5 or BART. But if you want to do it using GPT-2 then maybe you can use this format\r\n\r\n`input: input_text paraphrase: parahrase_text`\r\n\r\n while training, set attention mask to 0 on the paraphrased text\r\n\r\nand when generating just pass `input: input_text paraphrase: ` and sample till the `eos` token", "Thank you. I'll be sure to try that! I know there are datasets online that I can use, but are there edits I should make to ensure that I get a suitable output? I've read something about cheating, and I want to avoid that. (I'm sorry if I am using the wrong terminology)", "Can we use `T5` or `BART` like we would `GPT-2`?", "hi @shamoons , yes you can, have a look at this https://madewithml.com/projects/1094/paraphrase-any-question-with-t5-text-to-text-transformer/", "This looks like it’s only for questions. What about arbitrary sentences?", "> This looks like it’s only for questions. What about arbitrary sentences?\r\n\r\nYou can do it with arbitrary sentences as well, but you'll need to fine-tune it yourself.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "> hi @shamoons , yes you can, have a look at this https://madewithml.com/projects/1094/paraphrase-any-question-with-t5-text-to-text-transformer/\r\nThis url is gone.\r\n", "@kingglory \r\n\r\nThis is the [url](https://github.com/ramsrigouthamg/Paraphrase-any-question-with-T5-Text-To-Text-Transfer-Transformer-) for the repo of that project.\r\n\r\nAlso there are quite a few paraphrase models on the [hub](https://huggingface.co/models?search=paraphrase) that you can try" ]
1,591
1,618
1,600
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4965/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4965/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4964
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4964/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4964/comments
https://api.github.com/repos/huggingface/transformers/issues/4964/events
https://github.com/huggingface/transformers/issues/4964
638,003,308
MDU6SXNzdWU2MzgwMDMzMDg=
4,964
GPT: Weights not being initialized
{ "login": "phaniram-sayapaneni", "id": 20259719, "node_id": "MDQ6VXNlcjIwMjU5NzE5", "avatar_url": "https://avatars.githubusercontent.com/u/20259719?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phaniram-sayapaneni", "html_url": "https://github.com/phaniram-sayapaneni", "followers_url": "https://api.github.com/users/phaniram-sayapaneni/followers", "following_url": "https://api.github.com/users/phaniram-sayapaneni/following{/other_user}", "gists_url": "https://api.github.com/users/phaniram-sayapaneni/gists{/gist_id}", "starred_url": "https://api.github.com/users/phaniram-sayapaneni/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phaniram-sayapaneni/subscriptions", "organizations_url": "https://api.github.com/users/phaniram-sayapaneni/orgs", "repos_url": "https://api.github.com/users/phaniram-sayapaneni/repos", "events_url": "https://api.github.com/users/phaniram-sayapaneni/events{/privacy}", "received_events_url": "https://api.github.com/users/phaniram-sayapaneni/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "Never mind, resolved now!!\r\nThanks!! ", "How did you resolve?" ]
1,591
1,593
1,591
NONE
null
Hello covid-19 survivors, I have been trying to use GPT for token classification, however currently there is none from hugging face, hence I copied your code from berttokenclassification and stitched the below code. But it says all the weights not initialized. Did I make a mistake, please help me!!! ` class GPTClassifier(OpenAIGPTPreTrainedModel): def __init__(self, config): super().__init__(config) self.num_labels = 2 self.gpt = OpenAIGPTModel(config) self.dropout = nn.Dropout(0.1) self.classifier = nn.Linear(768, 2) self.init_weights() @add_start_docstrings( """GPT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. """, ) def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, ): outputs = self.gpt( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, ) sequence_output = outputs[0] sequence_output = self.dropout(sequence_output) logits = self.classifier(sequence_output) outputs = (logits,) + outputs[2:] # add hidden states and attention if they are here if labels is not None: loss_fct = CrossEntropyLoss() # Only keep active parts of the loss if attention_mask is not None: active_loss = attention_mask.view(-1) == 1 active_logits = logits.view(-1, self.num_labels) active_labels = torch.where( active_loss, labels.view(-1), torch.tensor(loss_fct.ignore_index).type_as(labels) ) loss = loss_fct(active_logits, active_labels) else: loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) outputs = (loss,) + outputs return outputs # (loss), scores, (hidden_states), (attentions) ` weights not initalized: `Weights of GPTClassifier not initialized from pretrained model: ['gpt.tokens_embed.weight', 'gpt.positions_embed.weight', 'gpt.h.0.attn.bias', 'gpt.h.0.attn.c_attn.weight', 
'gpt.h.0.attn.c_attn.bias', 'gpt.h.0.attn.c_proj.weight', 'gpt.h.0.attn.c_proj.bias', 'gpt.h.0.ln_1.weight', 'gpt.h.0.ln_1.bias', 'gpt.h.0.mlp.c_fc.weight', 'gpt.h.0.mlp.c_fc.bias', 'gpt.h.0.mlp.c_proj.weight', 'gpt.h.0.mlp.c_proj.bias', 'gpt.h.0.ln_2.weight', 'gpt.h.0.ln_2.bias', 'gpt.h.1.attn.bias', 'gpt.h.1.attn.c_attn.weight', 'gpt.h.1.attn.c_attn.bias', 'gpt.h.1.attn.c_proj.weight', 'gpt.h.1.attn.c_proj.bias', 'gpt.h.1.ln_1.weight', 'gpt.h.1.ln_1.bias', 'gpt.h.1.mlp.c_fc.weight', 'gpt.h.1.mlp.c_fc.bias', 'gpt.h.1.mlp.c_proj.weight', 'gpt.h.1.mlp.c_proj.bias', 'gpt.h.1.ln_2.weight', 'gpt.h.1.ln_2.bias', 'gpt.h.2.attn.bias', 'gpt.h.2.attn.c_attn.weight', 'gpt.h.2.attn.c_attn.bias', 'gpt.h.2.attn.c_proj.weight', 'gpt.h.2.attn.c_proj.bias', 'gpt.h.2.ln_1.weight', 'gpt.h.2.ln_1.bias', 'gpt.h.2.mlp.c_fc.weight', 'gpt.h.2.mlp.c_fc.bias', 'gpt.h.2.mlp.c_proj.weight', 'gpt.h.2.mlp.c_proj.bias', 'gpt.h.2.ln_2.weight', 'gpt.h.2.ln_2.bias', 'gpt.h.3.attn.bias', 'gpt.h.3.attn.c_attn.weight', 'gpt.h.3.attn.c_attn.bias', 'gpt.h.3.attn.c_proj.weight', 'gpt.h.3.attn.c_proj.bias', 'gpt.h.3.ln_1.weight', 'gpt.h.3.ln_1.bias', 'gpt.h.3.mlp.c_fc.weight', 'gpt.h.3.mlp.c_fc.bias', 'gpt.h.3.mlp.c_proj.weight', 'gpt.h.3.mlp.c_proj.bias', 'gpt.h.3.ln_2.weight', 'gpt.h.3.ln_2.bias', 'gpt.h.4.attn.bias', 'gpt.h.4.attn.c_attn.weight', 'gpt.h.4.attn.c_attn.bias', 'gpt.h.4.attn.c_proj.weight', 'gpt.h.4.attn.c_proj.bias', 'gpt.h.4.ln_1.weight', 'gpt.h.4.ln_1.bias', 'gpt.h.4.mlp.c_fc.weight', 'gpt.h.4.mlp.c_fc.bias', 'gpt.h.4.mlp.c_proj.weight', 'gpt.h.4.mlp.c_proj.bias', 'gpt.h.4.ln_2.weight', 'gpt.h.4.ln_2.bias', 'gpt.h.5.attn.bias', 'gpt.h.5.attn.c_attn.weight', 'gpt.h.5.attn.c_attn.bias', 'gpt.h.5.attn.c_proj.weight', 'gpt.h.5.attn.c_proj.bias', 'gpt.h.5.ln_1.weight', 'gpt.h.5.ln_1.bias', 'gpt.h.5.mlp.c_fc.weight', 'gpt.h.5.mlp.c_fc.bias', 'gpt.h.5.mlp.c_proj.weight', 'gpt.h.5.mlp.c_proj.bias', 'gpt.h.5.ln_2.weight', 'gpt.h.5.ln_2.bias', 'gpt.h.6.attn.bias', 'gpt.h.6.attn.c_attn.weight', 
'gpt.h.6.attn.c_attn.bias', 'gpt.h.6.attn.c_proj.weight', 'gpt.h.6.attn.c_proj.bias', 'gpt.h.6.ln_1.weight', 'gpt.h.6.ln_1.bias', 'gpt.h.6.mlp.c_fc.weight', 'gpt.h.6.mlp.c_fc.bias', 'gpt.h.6.mlp.c_proj.weight', 'gpt.h.6.mlp.c_proj.bias', 'gpt.h.6.ln_2.weight', 'gpt.h.6.ln_2.bias', 'gpt.h.7.attn.bias', 'gpt.h.7.attn.c_attn.weight', 'gpt.h.7.attn.c_attn.bias', 'gpt.h.7.attn.c_proj.weight', 'gpt.h.7.attn.c_proj.bias', 'gpt.h.7.ln_1.weight', 'gpt.h.7.ln_1.bias', 'gpt.h.7.mlp.c_fc.weight', 'gpt.h.7.mlp.c_fc.bias', 'gpt.h.7.mlp.c_proj.weight', 'gpt.h.7.mlp.c_proj.bias', 'gpt.h.7.ln_2.weight', 'gpt.h.7.ln_2.bias', 'gpt.h.8.attn.bias', 'gpt.h.8.attn.c_attn.weight', 'gpt.h.8.attn.c_attn.bias', 'gpt.h.8.attn.c_proj.weight', 'gpt.h.8.attn.c_proj.bias', 'gpt.h.8.ln_1.weight', 'gpt.h.8.ln_1.bias', 'gpt.h.8.mlp.c_fc.weight', 'gpt.h.8.mlp.c_fc.bias', 'gpt.h.8.mlp.c_proj.weight', 'gpt.h.8.mlp.c_proj.bias', 'gpt.h.8.ln_2.weight', 'gpt.h.8.ln_2.bias', 'gpt.h.9.attn.bias', 'gpt.h.9.attn.c_attn.weight', 'gpt.h.9.attn.c_attn.bias', 'gpt.h.9.attn.c_proj.weight', 'gpt.h.9.attn.c_proj.bias', 'gpt.h.9.ln_1.weight', 'gpt.h.9.ln_1.bias', 'gpt.h.9.mlp.c_fc.weight', 'gpt.h.9.mlp.c_fc.bias', 'gpt.h.9.mlp.c_proj.weight', 'gpt.h.9.mlp.c_proj.bias', 'gpt.h.9.ln_2.weight', 'gpt.h.9.ln_2.bias', 'gpt.h.10.attn.bias', 'gpt.h.10.attn.c_attn.weight', 'gpt.h.10.attn.c_attn.bias', 'gpt.h.10.attn.c_proj.weight', 'gpt.h.10.attn.c_proj.bias', 'gpt.h.10.ln_1.weight', 'gpt.h.10.ln_1.bias', 'gpt.h.10.mlp.c_fc.weight', 'gpt.h.10.mlp.c_fc.bias', 'gpt.h.10.mlp.c_proj.weight', 'gpt.h.10.mlp.c_proj.bias', 'gpt.h.10.ln_2.weight', 'gpt.h.10.ln_2.bias', 'gpt.h.11.attn.bias', 'gpt.h.11.attn.c_attn.weight', 'gpt.h.11.attn.c_attn.bias', 'gpt.h.11.attn.c_proj.weight', 'gpt.h.11.attn.c_proj.bias', 'gpt.h.11.ln_1.weight', 'gpt.h.11.ln_1.bias', 'gpt.h.11.mlp.c_fc.weight', 'gpt.h.11.mlp.c_fc.bias', 'gpt.h.11.mlp.c_proj.weight', 'gpt.h.11.mlp.c_proj.bias', 'gpt.h.11.ln_2.weight', 'gpt.h.11.ln_2.bias', 'classifier.weight', 
'classifier.bias'] I0612 13:58:30.055731 4472821184 modeling_utils.py:460] Weights from pretrained model not used in GPTClassifier: ['tokens_embed.weight', 'positions_embed.weight', 'h.0.attn.bias', 'h.0.attn.c_attn.weight', 'h.0.attn.c_attn.bias', 'h.0.attn.c_proj.weight', 'h.0.attn.c_proj.bias', 'h.0.ln_1.weight', 'h.0.ln_1.bias', 'h.0.mlp.c_fc.weight', 'h.0.mlp.c_fc.bias', 'h.0.mlp.c_proj.weight', 'h.0.mlp.c_proj.bias', 'h.0.ln_2.weight', 'h.0.ln_2.bias', 'h.1.attn.bias', 'h.1.attn.c_attn.weight', 'h.1.attn.c_attn.bias', 'h.1.attn.c_proj.weight', 'h.1.attn.c_proj.bias', 'h.1.ln_1.weight', 'h.1.ln_1.bias', 'h.1.mlp.c_fc.weight', 'h.1.mlp.c_fc.bias', 'h.1.mlp.c_proj.weight', 'h.1.mlp.c_proj.bias', 'h.1.ln_2.weight', 'h.1.ln_2.bias', 'h.2.attn.bias', 'h.2.attn.c_attn.weight', 'h.2.attn.c_attn.bias', 'h.2.attn.c_proj.weight', 'h.2.attn.c_proj.bias', 'h.2.ln_1.weight', 'h.2.ln_1.bias', 'h.2.mlp.c_fc.weight', 'h.2.mlp.c_fc.bias', 'h.2.mlp.c_proj.weight', 'h.2.mlp.c_proj.bias', 'h.2.ln_2.weight', 'h.2.ln_2.bias', 'h.3.attn.bias', 'h.3.attn.c_attn.weight', 'h.3.attn.c_attn.bias', 'h.3.attn.c_proj.weight', 'h.3.attn.c_proj.bias', 'h.3.ln_1.weight', 'h.3.ln_1.bias', 'h.3.mlp.c_fc.weight', 'h.3.mlp.c_fc.bias', 'h.3.mlp.c_proj.weight', 'h.3.mlp.c_proj.bias', 'h.3.ln_2.weight', 'h.3.ln_2.bias', 'h.4.attn.bias', 'h.4.attn.c_attn.weight', 'h.4.attn.c_attn.bias', 'h.4.attn.c_proj.weight', 'h.4.attn.c_proj.bias', 'h.4.ln_1.weight', 'h.4.ln_1.bias', 'h.4.mlp.c_fc.weight', 'h.4.mlp.c_fc.bias', 'h.4.mlp.c_proj.weight', 'h.4.mlp.c_proj.bias', 'h.4.ln_2.weight', 'h.4.ln_2.bias', 'h.5.attn.bias', 'h.5.attn.c_attn.weight', 'h.5.attn.c_attn.bias', 'h.5.attn.c_proj.weight', 'h.5.attn.c_proj.bias', 'h.5.ln_1.weight', 'h.5.ln_1.bias', 'h.5.mlp.c_fc.weight', 'h.5.mlp.c_fc.bias', 'h.5.mlp.c_proj.weight', 'h.5.mlp.c_proj.bias', 'h.5.ln_2.weight', 'h.5.ln_2.bias', 'h.6.attn.bias', 'h.6.attn.c_attn.weight', 'h.6.attn.c_attn.bias', 'h.6.attn.c_proj.weight', 'h.6.attn.c_proj.bias', 
'h.6.ln_1.weight', 'h.6.ln_1.bias', 'h.6.mlp.c_fc.weight', 'h.6.mlp.c_fc.bias', 'h.6.mlp.c_proj.weight', 'h.6.mlp.c_proj.bias', 'h.6.ln_2.weight', 'h.6.ln_2.bias', 'h.7.attn.bias', 'h.7.attn.c_attn.weight', 'h.7.attn.c_attn.bias', 'h.7.attn.c_proj.weight', 'h.7.attn.c_proj.bias', 'h.7.ln_1.weight', 'h.7.ln_1.bias', 'h.7.mlp.c_fc.weight', 'h.7.mlp.c_fc.bias', 'h.7.mlp.c_proj.weight', 'h.7.mlp.c_proj.bias', 'h.7.ln_2.weight', 'h.7.ln_2.bias', 'h.8.attn.bias', 'h.8.attn.c_attn.weight', 'h.8.attn.c_attn.bias', 'h.8.attn.c_proj.weight', 'h.8.attn.c_proj.bias', 'h.8.ln_1.weight', 'h.8.ln_1.bias', 'h.8.mlp.c_fc.weight', 'h.8.mlp.c_fc.bias', 'h.8.mlp.c_proj.weight', 'h.8.mlp.c_proj.bias', 'h.8.ln_2.weight', 'h.8.ln_2.bias', 'h.9.attn.bias', 'h.9.attn.c_attn.weight', 'h.9.attn.c_attn.bias', 'h.9.attn.c_proj.weight', 'h.9.attn.c_proj.bias', 'h.9.ln_1.weight', 'h.9.ln_1.bias', 'h.9.mlp.c_fc.weight', 'h.9.mlp.c_fc.bias', 'h.9.mlp.c_proj.weight', 'h.9.mlp.c_proj.bias', 'h.9.ln_2.weight', 'h.9.ln_2.bias', 'h.10.attn.bias', 'h.10.attn.c_attn.weight', 'h.10.attn.c_attn.bias', 'h.10.attn.c_proj.weight', 'h.10.attn.c_proj.bias', 'h.10.ln_1.weight', 'h.10.ln_1.bias', 'h.10.mlp.c_fc.weight', 'h.10.mlp.c_fc.bias', 'h.10.mlp.c_proj.weight', 'h.10.mlp.c_proj.bias', 'h.10.ln_2.weight', 'h.10.ln_2.bias', 'h.11.attn.bias', 'h.11.attn.c_attn.weight', 'h.11.attn.c_attn.bias', 'h.11.attn.c_proj.weight', 'h.11.attn.c_proj.bias', 'h.11.ln_1.weight', 'h.11.ln_1.bias', 'h.11.mlp.c_fc.weight', 'h.11.mlp.c_fc.bias', 'h.11.mlp.c_proj.weight', 'h.11.mlp.c_proj.bias', 'h.11.ln_2.weight', 'h.11.ln_2.bias']`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4964/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4964/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4963
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4963/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4963/comments
https://api.github.com/repos/huggingface/transformers/issues/4963/events
https://github.com/huggingface/transformers/issues/4963
637,947,824
MDU6SXNzdWU2Mzc5NDc4MjQ=
4,963
Cannot load optimizer and lr_scheduler states with TPU training
{ "login": "misrasaurabh1", "id": 1271289, "node_id": "MDQ6VXNlcjEyNzEyODk=", "avatar_url": "https://avatars.githubusercontent.com/u/1271289?v=4", "gravatar_id": "", "url": "https://api.github.com/users/misrasaurabh1", "html_url": "https://github.com/misrasaurabh1", "followers_url": "https://api.github.com/users/misrasaurabh1/followers", "following_url": "https://api.github.com/users/misrasaurabh1/following{/other_user}", "gists_url": "https://api.github.com/users/misrasaurabh1/gists{/gist_id}", "starred_url": "https://api.github.com/users/misrasaurabh1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/misrasaurabh1/subscriptions", "organizations_url": "https://api.github.com/users/misrasaurabh1/orgs", "repos_url": "https://api.github.com/users/misrasaurabh1/repos", "events_url": "https://api.github.com/users/misrasaurabh1/events{/privacy}", "received_events_url": "https://api.github.com/users/misrasaurabh1/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false } ]
[ "Thanks @misrasaurabh1, I'll look into it.\r\nWe'll have to add such a test into the TPU CI once we have it (sooner rather than later).", "Any updates on getting rid of this error?\r\nMakes it hard to use TPUs because preemptible machines cannot be used in Google Cloud if there is no way to resume from checkpoints.\r\n\r\nThanks!", "I encountered the same issue which I found to be due to the fact that the script cannot map the optimizer to the\r\nproper tpu device, here's the line in question:\r\nhttps://github.com/huggingface/transformers/blob/d088d744adb4e5aa45262a34acab3ae9e81de169/src/transformers/trainer.py#L403\r\n\r\nMy solution was to replace \r\n\r\n```\r\noptimizer.load_state_dict(\r\n torch.load(os.path.join(model_path, \"optimizer.pt\"), map_location=self.args.device)\r\n )\r\n\r\n```\r\nby:\r\n\r\n```\r\nif is_torch_tpu_available():\r\n \r\n # load state_dict on CPU and then transfer object to xla device\r\n optimizer.load_state_dict(torch.load(os.path.join(model_path, \"optimizer.pt\")))\r\n xm.send_cpu_data_to_device(optimizer,xm.xla_device())\r\nelse:\r\n optimizer.load_state_dict(\r\n torch.load(os.path.join(model_path, \"optimizer.pt\"), map_location=self.args.device)\r\n )\r\n```\r\nthat seemed to have done the trick with torch-xla-nightly. 
hope this helps\r\n", "> I encountered the same issue which I found to be due to the fact that the script cannot map the optimizer to the\r\n> proper tpu device, here's the line in question:\r\n> \r\n> https://github.com/huggingface/transformers/blob/d088d744adb4e5aa45262a34acab3ae9e81de169/src/transformers/trainer.py#L403\r\n> \r\n> My solution was to replace\r\n> \r\n> ```\r\n> optimizer.load_state_dict(\r\n> torch.load(os.path.join(model_path, \"optimizer.pt\"), map_location=self.args.device)\r\n> )\r\n> ```\r\n> \r\n> by:\r\n> \r\n> ```\r\n> if is_torch_tpu_available():\r\n> \r\n> # load state_dict on CPU and then transfer object to xla device\r\n> optimizer.load_state_dict(torch.load(os.path.join(model_path, \"optimizer.pt\")))\r\n> xm.send_cpu_data_to_device(optimizer,xm.xla_device())\r\n> else:\r\n> optimizer.load_state_dict(\r\n> torch.load(os.path.join(model_path, \"optimizer.pt\"), map_location=self.args.device)\r\n> )\r\n> ```\r\n> \r\n> that seemed to have done the trick with torch-xla-nightly. 
hope this helps\r\n\r\nI tried:\r\n```python\r\nprint(device)\r\n# BUG: can't simply map to the XLA device at the moment\r\nif TPU_ACCELERATOR:\r\n # load state_dict on CPU and then transfer object to XLA device\r\n net.load_state_dict(torch.load(model_load_file, map_location=\"cpu\"))\r\n xm.send_cpu_data_to_device(net, device)\r\nelse:\r\n net.load_state_dict(torch.load(model_load_file, map_location=device))\r\n```\r\nand it failed with:\r\n```\r\nxla:4\r\n<ipython-input-4-bbc8442c330c> in <module>\r\n 96 # load state_dict on CPU and then transfer object to XLA device\r\n 97 net.load_state_dict(torch.load(model_load_file, map_location=\"cpu\"))\r\n---> 98 xm.send_cpu_data_to_device(net, device)\r\n 99 else:\r\n 100 net.load_state_dict(torch.load(model_load_file, map_location=device))\r\n\r\n/opt/conda/lib/python3.7/site-packages/torch_xla/core/xla_model.py in send_cpu_data_to_device(data, device)\r\n 629 return type(v) == torch.Tensor and v.device.type == 'cpu'\r\n 630 \r\n--> 631 return ToXlaTensorArena(convert_fn, select_fn).transform(data)\r\n 632 \r\n 633 \r\n\r\n/opt/conda/lib/python3.7/site-packages/torch_xla/core/xla_model.py in transform(self, inputs)\r\n 312 self._collect_tensors(inputs)\r\n 313 self._convert()\r\n--> 314 return self._replace_tensors(inputs)\r\n 315 \r\n 316 \r\n\r\n/opt/conda/lib/python3.7/site-packages/torch_xla/core/xla_model.py in _replace_tensors(self, inputs)\r\n 306 \r\n 307 return xu.for_each_instance_rewrite(inputs, lambda x: self._select_fn(x),\r\n--> 308 convert_fn)\r\n 309 \r\n 310 def transform(self, inputs):\r\n\r\n/opt/conda/lib/python3.7/site-packages/torch_xla/utils/utils.py in for_each_instance_rewrite(value, select_fn, fn)\r\n 197 def for_each_instance_rewrite(value, select_fn, fn):\r\n 198 rwmap = dict()\r\n--> 199 return _for_each_instance_rewrite(value, select_fn, fn, rwmap)\r\n 200 \r\n 201 \r\n\r\n/opt/conda/lib/python3.7/site-packages/torch_xla/utils/utils.py in _for_each_instance_rewrite(value, select_fn, fn, 
rwmap)\r\n 188 rwmap[id(value)] = result\r\n 189 for k in result.__dict__.keys():\r\n--> 190 v = _for_each_instance_rewrite(result.__dict__[k], select_fn, fn, rwmap)\r\n 191 result.__dict__[k] = v\r\n 192 else:\r\n\r\n/opt/conda/lib/python3.7/site-packages/torch_xla/utils/utils.py in _for_each_instance_rewrite(value, select_fn, fn, rwmap)\r\n 189 for k in result.__dict__.keys():\r\n 190 v = _for_each_instance_rewrite(result.__dict__[k], select_fn, fn, rwmap)\r\n--> 191 result.__dict__[k] = v\r\n 192 else:\r\n 193 rwmap[id(value)] = result\r\n\r\nTypeError: 'mappingproxy' object does not support item assignment\r\n```", "How about something simpler like\r\n```python\r\n# load state_dict on CPU and then transfer object to XLA device\r\nnet.load_state_dict(torch.load(model_load_file, map_location=\"cpu\"))\r\nnet.to(device)\r\n```\r\n?\r\n\r\nI think it does the job just fine.", "> I encountered the same issue which I found to be due to the fact that the script cannot map the optimizer to the\r\n> proper tpu device, here's the line in question:\r\n> https://github.com/huggingface/transformers/blob/d088d744adb4e5aa45262a34acab3ae9e81de169/src/transformers/trainer.py#L403\r\n> \r\n> My solution was to replace\r\n> \r\n> ```\r\n> optimizer.load_state_dict(\r\n> torch.load(os.path.join(model_path, \"optimizer.pt\"), map_location=self.args.device)\r\n> )\r\n> ```\r\n> \r\n> by:\r\n> \r\n> ```\r\n> if is_torch_tpu_available():\r\n> \r\n> # load state_dict on CPU and then transfer object to xla device\r\n> optimizer.load_state_dict(torch.load(os.path.join(model_path, \"optimizer.pt\")))\r\n> xm.send_cpu_data_to_device(optimizer,xm.xla_device())\r\n> else:\r\n> optimizer.load_state_dict(\r\n> torch.load(os.path.join(model_path, \"optimizer.pt\"), map_location=self.args.device)\r\n> )\r\n> ```\r\n> \r\n> that seemed to have done the trick with torch-xla-nightly. 
hope this helps\r\n\r\nThis works, however, the progress bar starts from 0, and then, just takes a load of time to come to the step where the checkpoint is present! How to tackle that? I am training on the cloud (tpu v 3.8) and using xla_spawn script to distribute training among cores", "@LysandreJik Any updates on this bug? this prevents resuming training from a checkpoint on TPUs", "I am also having the same problem on loading a model from TPU, and resume the training. Any solutions?" ]
1,591
1,603
1,603
CONTRIBUTOR
null
# 🐛 Bug When restarting training and loading the optimizer.pt and scheduler.pt, the training crashes as the existing code does not know how to load it with TPU. ## Information The stacktrace - ``` Exception in device=TPU:5: don't know how to restore data location of torch.FloatStorage (tagged with xla:0) Traceback (most recent call last): File "/home/saurabh/venv/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn fn(gindex, *args) File "/home/saurabh/<retracted>", line 334, in _mp_fn main() File "/home/saurabh/<retracted>", line 303, in main trainer.train(model_path=model_path) File "/home/saurabh/venv/lib/python3.6/site-packages/transformers/trainer.py", line 386, in train torch.load(os.path.join(model_path, "optimizer.pt"), map_location=self.args.device) File "/home/saurabh/venv/lib/python3.6/site-packages/torch/serialization.py", line 584, in load return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args) File "/home/saurabh/venv/lib/python3.6/site-packages/torch/serialization.py", line 764, in _legacy_load result = unpickler.load() File "/home/saurabh/venv/lib/python3.6/site-packages/torch/serialization.py", line 720, in persistent_load deserialized_objects[root_key] = restore_location(obj, location) File "/home/saurabh/venv/lib/python3.6/site-packages/torch/serialization.py", line 802, in restore_location return default_restore_location(storage, str(map_location)) File "/home/saurabh/venv/lib/python3.6/site-packages/torch/serialization.py", line 179, in default_restore_location + location + ")") RuntimeError: don't know how to restore data location of torch.FloatStorage (tagged with xla:0) ``` This happens when loading a partially trained model. 
A reference implementation is this https://github.com/pytorch-tpu/fairseq/blob/tpu/fairseq/trainer.py#L195 With a discussion here https://github.com/pytorch/xla/issues/1343 Model I am using (Bert, XLNet ...): any model Language I am using the model on (English, Chinese ...): The problem arises when using: * [x] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Train any model on TPU, wait for a checkpoint to happen 2. move the tokenizer files to the checkpoint dir (another bug, where the trainer expects the tokenizer configs to be present at the same directory as checkpoint dir, that only happens at the very end of training, not at one of the earlier checkpoints) 3. Restart training again from the checkpoint on TPU <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Trainer loads the optimizer and scheduler to TPU and starts training. <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! 
--> - `transformers` version: 2.11.0 (master) - Platform: Linux-5.3.0-1026-gcp-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0a0+6bdfd6a (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: False - Using distributed or parallel set-up in script?: yes, 8 way with xla_spawn.py
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4963/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4963/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4962
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4962/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4962/comments
https://api.github.com/repos/huggingface/transformers/issues/4962/events
https://github.com/huggingface/transformers/issues/4962
637,946,366
MDU6SXNzdWU2Mzc5NDYzNjY=
4,962
How to implement differential learning rates and still ensure "weight_decay" = 0 for the the parameters it should?
{ "login": "ohmeow", "id": 14000, "node_id": "MDQ6VXNlcjE0MDAw", "avatar_url": "https://avatars.githubusercontent.com/u/14000?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ohmeow", "html_url": "https://github.com/ohmeow", "followers_url": "https://api.github.com/users/ohmeow/followers", "following_url": "https://api.github.com/users/ohmeow/following{/other_user}", "gists_url": "https://api.github.com/users/ohmeow/gists{/gist_id}", "starred_url": "https://api.github.com/users/ohmeow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ohmeow/subscriptions", "organizations_url": "https://api.github.com/users/ohmeow/orgs", "repos_url": "https://api.github.com/users/ohmeow/repos", "events_url": "https://api.github.com/users/ohmeow/events{/privacy}", "received_events_url": "https://api.github.com/users/ohmeow/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,598
1,598
CONTRIBUTOR
null
I notice that most models are trained with the following parameter groups: ``` no_decay = ["bias", "LayerNorm.weight"] optimizer_grouped_parameters = [ { "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], "weight_decay": 1e-5, }, { "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], "weight_decay": 0.0, }, ] optimizer = AdamW(optimizer_grouped_parameters, lr=5e-3, eps=1e-9) ``` **Questions:** 1. Why do we set `weight_decay` = 0.0 for the `no_decay` named parameters? 2. Assuming we need to set `weight_decay` = 0.0 as such, how would we extend the above to allow for differential learning rates (which may be especially helpful for models that include both an encoder and decoder stack)? 3. Have there been any tests to demonstrate the relative effectiveness of differential learning rates for the various transformer models?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4962/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4962/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4961
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4961/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4961/comments
https://api.github.com/repos/huggingface/transformers/issues/4961/events
https://github.com/huggingface/transformers/issues/4961
637,910,097
MDU6SXNzdWU2Mzc5MTAwOTc=
4,961
Extending Encoder Decoder to GPT-2
{ "login": "satwikkottur", "id": 3007753, "node_id": "MDQ6VXNlcjMwMDc3NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/3007753?v=4", "gravatar_id": "", "url": "https://api.github.com/users/satwikkottur", "html_url": "https://github.com/satwikkottur", "followers_url": "https://api.github.com/users/satwikkottur/followers", "following_url": "https://api.github.com/users/satwikkottur/following{/other_user}", "gists_url": "https://api.github.com/users/satwikkottur/gists{/gist_id}", "starred_url": "https://api.github.com/users/satwikkottur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/satwikkottur/subscriptions", "organizations_url": "https://api.github.com/users/satwikkottur/orgs", "repos_url": "https://api.github.com/users/satwikkottur/repos", "events_url": "https://api.github.com/users/satwikkottur/events{/privacy}", "received_events_url": "https://api.github.com/users/satwikkottur/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "It's on the roadmap :-) ", "Thank you! Look forward to it :)", "Hi - I've actually been working on this myself the past couple days, should I submit a PR when finished? ", "That'd be great!", "Will do - likely sometime this week. ", "@djw1809 Any update on the PR? :)", "@patrickvonplaten Hello Patrick, I am watching with much interest EncodeDecoder from transformers :) . Any updates on supporting GPT2 with EncodeDecoder ?", "Got sidetracked with other research - coming back to it in several days,\nworking on my end, just need to play nice with the rest of the repo.\n\nOn Tue, Jul 7, 2020 at 3:32 PM Mihai Ilie <[email protected]> wrote:\n\n> @patrickvonplaten <https://github.com/patrickvonplaten> Hello Patrick, I\n> am watching with much interest EncodeDecoder from transformers :) . Any\n> updates on supporting GPT2 with EncodeDecoder ?\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/4961#issuecomment-655170674>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AG3PODZXYPBB33F4CBSNZLDR2OO7VANCNFSM4N4QTZQA>\n> .\n>\n\n\n-- \nDylan Weber, Research Assistant | PhD Candidate\nSchool of Math and Statistical Sciences\nWXLR642/BYENG593 Arizona State University\n", "@djw1809 - also feel free to already open a PR with unfinished code yet so that I can take a look early on and help you :-) ", "Working on it now. Also linking this PR: #4483", "@patrickvonplaten Hello Patrick. \r\nAs I see from https://github.com/huggingface/transformers/commit/1d6e71e1167dea9e026391ec5a1a2d7ec33d22af current cross attention implimentation assume that encoder have same hidden size as GPT-2. I have encoder with hidden size 512 and want to combine it with GPT-2 medium with hidden size 1024. I have done it by Fairseq code and now want to do same by Huggingface. Could you update your solution to support any suitable encoder hidden space size?", "Hey @Squire-tomsk, \r\n\r\nI see what you mean - this would mean to add a new config param for each model that has cross-attention...is this common practice? Would be great if you could open a new issue for that :-) ", "Done https://github.com/huggingface/transformers/issues/6645", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,604
1,604
NONE
null
Adding GPT2 initialization for EncoderDecoder model as pointed out in the issue below. > Currently, only Bert works as a decoder. We might add GPT2 in a couple of weeks. Note that no model has `cross-attention` layers if it is not already an encoder-decoder model (like Bart or T5) and in this case it does not make sense to use the encoder-decoder wrapper. The model is initialized with random weights for the cross attention layers which will have to be fine-tuned. I agree, that this should be made clearer in the documentation! _Originally posted by @patrickvonplaten in https://github.com/huggingface/transformers/issues/4517#issuecomment-638058577_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4961/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4961/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4960
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4960/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4960/comments
https://api.github.com/repos/huggingface/transformers/issues/4960/events
https://github.com/huggingface/transformers/issues/4960
637,873,847
MDU6SXNzdWU2Mzc4NzM4NDc=
4,960
unexpected keyword argument 'lm_labels' when using BertModel as Decoder with EncoderDecoderModel
{ "login": "utkd", "id": 2101558, "node_id": "MDQ6VXNlcjIxMDE1NTg=", "avatar_url": "https://avatars.githubusercontent.com/u/2101558?v=4", "gravatar_id": "", "url": "https://api.github.com/users/utkd", "html_url": "https://github.com/utkd", "followers_url": "https://api.github.com/users/utkd/followers", "following_url": "https://api.github.com/users/utkd/following{/other_user}", "gists_url": "https://api.github.com/users/utkd/gists{/gist_id}", "starred_url": "https://api.github.com/users/utkd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/utkd/subscriptions", "organizations_url": "https://api.github.com/users/utkd/orgs", "repos_url": "https://api.github.com/users/utkd/repos", "events_url": "https://api.github.com/users/utkd/events{/privacy}", "received_events_url": "https://api.github.com/users/utkd/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I'm facing the same problem. Since #4874 it seems like it should be just `labels` instead of `lm_labels`. According to the documentation it should do masked language modeling-loss, but from my debugging it seems like it actually does next word prediction-loss.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,598
1,598
NONE
null
The `BertModel.forward()` method does not expect a `lm_labels` and `masked_lm_labels` arguments. Yet, it looks like the `EncoderDecoderModel.forward()` method calls it's decoder's `forward()` method with those arguments which throws a TypeError when a BertModel is used as a decoder. Am I using the BertModel incorrectly? I can get rid of the error by modifying the EncoderDecoderModel to not use those arguments for the decoder. Exact Error: ``` File "/Users/utkarsh/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/Users/utkarsh/Projects/ai4code/transformers/bert2bert/models.py", line 12, in forward dec_out, dec_cls, enc_out, enc_cls = self.bertmodel(input_ids=inputs, attention_mask=input_masks, decoder_input_ids=targets, decoder_attention_mask=target_masks) File "/Users/utkarsh/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/Users/utkarsh/anaconda3/envs/py37/lib/python3.7/site-packages/transformers/modeling_encoder_decoder.py", line 283, in forward **kwargs_decoder, File "/Users/utkarsh/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) TypeError: forward() got an unexpected keyword argument 'lm_labels' ``` Relevant part of the code: ``` encoder = BertModel(enc_config) dec_config = BertConfig(...,is_decoder=True) decoder = BertModel(dec_config) model = EncoderDecoderModel(encoder=encoder, decoder=decoder) ``` ... `dec_out, dec_cls, enc_out, enc_cls = model(input_ids=inputs, attention_mask=input_masks, decoder_input_ids=targets, decoder_attention_mask=target_masks)`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4960/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4960/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4959
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4959/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4959/comments
https://api.github.com/repos/huggingface/transformers/issues/4959/events
https://github.com/huggingface/transformers/pull/4959
637,765,350
MDExOlB1bGxSZXF1ZXN0NDMzNjkxOTA5
4,959
Add AlbertForMultipleChoice
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4959?src=pr&el=h1) Report\n> Merging [#4959](https://codecov.io/gh/huggingface/transformers/pull/4959?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/02e5f79662d72cccdca81a47e3001a5f6d36e5b1&el=desc) will **increase** coverage by `0.76%`.\n> The diff coverage is `86.43%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4959/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4959?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4959 +/- ##\n==========================================\n+ Coverage 76.46% 77.23% +0.76% \n==========================================\n Files 128 128 \n Lines 21502 21818 +316 \n==========================================\n+ Hits 16442 16851 +409 \n+ Misses 5060 4967 -93 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4959?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <ø> (ø)` | |\n| [src/transformers/modeling\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/4959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/4959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tbWJ0LnB5) | `22.11% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/4959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jYW1lbWJlcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/4959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `25.65% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG1fcm9iZXJ0YS5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG1fcm9iZXJ0YS5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `19.04% <0.00%> (ø)` | |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.48% <36.00%> (-0.14%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.58% <55.55%> (-3.02%)` | :arrow_down: |\n| ... and [47 more](https://codecov.io/gh/huggingface/transformers/pull/4959/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4959?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4959?src=pr&el=footer). Last update [02e5f79...ef1e404](https://codecov.io/gh/huggingface/transformers/pull/4959?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Very clean!" ]
1,591
1,591
1,591
COLLABORATOR
null
Cleaning up the rebase and opening a fresh PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4959/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4959/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4959", "html_url": "https://github.com/huggingface/transformers/pull/4959", "diff_url": "https://github.com/huggingface/transformers/pull/4959.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4959.patch", "merged_at": 1591986020000 }
https://api.github.com/repos/huggingface/transformers/issues/4958
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4958/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4958/comments
https://api.github.com/repos/huggingface/transformers/issues/4958/events
https://github.com/huggingface/transformers/issues/4958
637,683,576
MDU6SXNzdWU2Mzc2ODM1NzY=
4,958
Issue with an inline code comment
{ "login": "manishiitg", "id": 1370315, "node_id": "MDQ6VXNlcjEzNzAzMTU=", "avatar_url": "https://avatars.githubusercontent.com/u/1370315?v=4", "gravatar_id": "", "url": "https://api.github.com/users/manishiitg", "html_url": "https://github.com/manishiitg", "followers_url": "https://api.github.com/users/manishiitg/followers", "following_url": "https://api.github.com/users/manishiitg/following{/other_user}", "gists_url": "https://api.github.com/users/manishiitg/gists{/gist_id}", "starred_url": "https://api.github.com/users/manishiitg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manishiitg/subscriptions", "organizations_url": "https://api.github.com/users/manishiitg/orgs", "repos_url": "https://api.github.com/users/manishiitg/repos", "events_url": "https://api.github.com/users/manishiitg/events{/privacy}", "received_events_url": "https://api.github.com/users/manishiitg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,591
1,592
1,592
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Language I am using the model on (English, Chinese ...): The problem arises when using: * [x ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: https://github.com/huggingface/transformers/blob/d4c2cb402d6674211726fd5f4803d1090664e438/examples/question-answering/run_squad.py#L322 This specific comment written is wrong. I wasted lot of time due to this comment. i and feature_index are not the same no :) <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4958/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4958/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4957
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4957/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4957/comments
https://api.github.com/repos/huggingface/transformers/issues/4957/events
https://github.com/huggingface/transformers/issues/4957
637,619,157
MDU6SXNzdWU2Mzc2MTkxNTc=
4,957
Memory leakage with bert-large-uncased-whole-word-masking-finetuned-squad
{ "login": "ashispapu", "id": 9023527, "node_id": "MDQ6VXNlcjkwMjM1Mjc=", "avatar_url": "https://avatars.githubusercontent.com/u/9023527?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ashispapu", "html_url": "https://github.com/ashispapu", "followers_url": "https://api.github.com/users/ashispapu/followers", "following_url": "https://api.github.com/users/ashispapu/following{/other_user}", "gists_url": "https://api.github.com/users/ashispapu/gists{/gist_id}", "starred_url": "https://api.github.com/users/ashispapu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ashispapu/subscriptions", "organizations_url": "https://api.github.com/users/ashispapu/orgs", "repos_url": "https://api.github.com/users/ashispapu/repos", "events_url": "https://api.github.com/users/ashispapu/events{/privacy}", "received_events_url": "https://api.github.com/users/ashispapu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, @ashispapu, you might wanna wrap the inference code with\r\n`with torch.no_grad():`", "Thanks for the response. Should not it be grad disabled by default like other models in transformer during inference.", "I don't think so, how would the model know that you are doing training or inference. You can use `pipeline` for question answering, that takes care of such things " ]
1,591
1,592
1,592
NONE
null
# 🐛 Bug ## Information I am using 'bert-large-uncased-whole-word-masking-finetuned-squad' and observed an memory leak during the inference. Below the code snippet to reproduce it tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad") model = AutoModelForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad") device = 'cuda' question = 'what is your question' answer = 'This is the answer' def tokenize_qa(tokenizer,question, answer_context): input_ids = tokenizer.encode(question, answer_context, max_length=512) tokens = tokenizer.convert_ids_to_tokens(input_ids) sep_index = input_ids.index( tokenizer.sep_token_id) num_seg_a = sep_index + 1 num_seg_b = len(input_ids) - num_seg_a segment_ids = [0] * num_seg_a + [1] * num_seg_b assert len(segment_ids) == len(input_ids) return input_ids, segment_ids, tokens ###################### Inference Code ########################## input_ids, segment_ids, tokens= tokenize_qa(tokenizer,question,answer_context) start_scores, end_scores = model(torch.tensor([input_ids], device= device),token_type_ids=torch.tensor([segment_ids], device= device)) When i run the inference code multiple times it accumulates large gpu memory and finally runs OOM. Please check let me know if anything wrong here. Thank you. Language I am using the model : English, - `transformers` version: '2.11.0' - Platform: Ubuntu 18.04 - Python version:3.6.9 - PyTorch version (GPU): 1.4.0
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4957/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4957/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4956
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4956/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4956/comments
https://api.github.com/repos/huggingface/transformers/issues/4956/events
https://github.com/huggingface/transformers/issues/4956
637,603,755
MDU6SXNzdWU2Mzc2MDM3NTU=
4,956
RobertaForMaskedLM Failing for example code given on Hugging Face' documentation page
{ "login": "tanulsingh", "id": 48856885, "node_id": "MDQ6VXNlcjQ4ODU2ODg1", "avatar_url": "https://avatars.githubusercontent.com/u/48856885?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tanulsingh", "html_url": "https://github.com/tanulsingh", "followers_url": "https://api.github.com/users/tanulsingh/followers", "following_url": "https://api.github.com/users/tanulsingh/following{/other_user}", "gists_url": "https://api.github.com/users/tanulsingh/gists{/gist_id}", "starred_url": "https://api.github.com/users/tanulsingh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tanulsingh/subscriptions", "organizations_url": "https://api.github.com/users/tanulsingh/orgs", "repos_url": "https://api.github.com/users/tanulsingh/repos", "events_url": "https://api.github.com/users/tanulsingh/events{/privacy}", "received_events_url": "https://api.github.com/users/tanulsingh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Which version of the library are you using? The documentation corresponds to the master branch and the change in the argument names was pretty recent, so I don't think it's in the latest release yet. You should either [install from source](https://github.com/huggingface/transformers#from-source) or change the code to use the (soon-to-be deprecated) argument `masked_lm_labels` (see the documentation for the latest release [here](https://huggingface.co/transformers/v2.5.0/model_doc/bert.html#bertformaskedlm)).", "Thank you @sgugger for the latest documentation. I reinstalled transformers again yesterday sO I guess I am using latest version\r\n I was scratching my head due to this" ]
1,591
1,593
1,593
NONE
null
# 🐛 Bug from transformers import RobertaTokenizer, RobertaForMaskedLM import torch tokenizer = RobertaTokenizer.from_pretrained('roberta-base') model = RobertaForMaskedLM.from_pretrained('roberta-base') input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1 outputs = model(input_ids, labels=input_ids) loss, prediction_scores = outputs[:2] I am using this example code given on hugging face's documentation page , but this also gives an error TypeError: forward() got an unexpected keyword argument 'labels' The doc clearly states that forward accepts labels ,what's wrong here?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4956/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4956/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4955
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4955/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4955/comments
https://api.github.com/repos/huggingface/transformers/issues/4955/events
https://github.com/huggingface/transformers/issues/4955
637,599,851
MDU6SXNzdWU2Mzc1OTk4NTE=
4,955
How to use fine-tuned BERT to fill <mask>
{ "login": "Odimmsun", "id": 32326519, "node_id": "MDQ6VXNlcjMyMzI2NTE5", "avatar_url": "https://avatars.githubusercontent.com/u/32326519?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Odimmsun", "html_url": "https://github.com/Odimmsun", "followers_url": "https://api.github.com/users/Odimmsun/followers", "following_url": "https://api.github.com/users/Odimmsun/following{/other_user}", "gists_url": "https://api.github.com/users/Odimmsun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Odimmsun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Odimmsun/subscriptions", "organizations_url": "https://api.github.com/users/Odimmsun/orgs", "repos_url": "https://api.github.com/users/Odimmsun/repos", "events_url": "https://api.github.com/users/Odimmsun/events{/privacy}", "received_events_url": "https://api.github.com/users/Odimmsun/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi @Odimmsun , `BertForSequenceClassification` is used for classification task and not for mak-filling. You can use the pre-trained BERT model as it is for make-filling. There is pipeline for this task which you can find here https://huggingface.co/transformers/usage.html#masked-language-modeling\r\n\r\nBasically what you'll need to do is this\r\n\r\n```python3\r\nfrom transformers import pipeline\r\n\r\nnlp = pipeline(\"fill-mask\")\r\nprint(nlp(f\"HuggingFace is creating a {nlp.tokenizer.mask_token} that the community uses to solve NLP tasks.\"))\r\n```\r\n\r\nyou can also pass your own model to the pipeline using the `model` parameter.", "> Hi @Odimmsun , `BertForSequenceClassification` is used for classification task and not for mak-filling. You can use the pre-trained BERT model as it is for make-filling. There is pipeline for this task which you can find here https://huggingface.co/transformers/usage.html#masked-language-modeling\r\n> \r\n> Basically what you'll need to do is this\r\n> \r\n> ```python\r\n> from transformers import pipeline\r\n> \r\n> nlp = pipeline(\"fill-mask\")\r\n> print(nlp(f\"HuggingFace is creating a {nlp.tokenizer.mask_token} that the community uses to solve NLP tasks.\"))\r\n> ```\r\n> \r\n> you can also pass your own model to the pipeline using the `model` parameter.\r\n\r\nHi @patil-suraj ,thank you for your reply. Now, if i use \"nlp = pipeline(\"fill-mask\")\", the model i used is not fine-tuned, but i want to use a fine-tuned model to fill mask. How should i do to implement this?\r\n\r\n", "The pre-trained BERT model does mask-filling out of the box, but if you want to use your own fine-tuned model then just pass the model(path or url) to the `model` parameter .\r\n\r\n```python3\r\nnlp = pipeline(\"fill-mask\", model=\"your_model_path\")\r\n```", "\\`---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-12-d85876ca5c10> in <module>()\r\n 1 my_model.to('cpu')\r\n 2 nlp = pipeline(task='fill-mask', model=my_model, tokenizer=tokenizer)\r\n----> 3 nlp('我是你<mask>爸爸')\r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)\r\n 804 values, predictions = topk.values.numpy(), topk.indices.numpy()\r\n 805 else:\r\n--> 806 masked_index = (input_ids == self.tokenizer.mask_token_id).nonzero().item()\r\n 807 logits = outputs[i, masked_index, :]\r\n 808 probs = logits.softmax(dim=0)\r\n\r\nValueError: only one element tensors can be converted to Python `scalars`\r\n\\`\r\nhi, my model is fine-tuned by BertForSequenceClassification. Then i use it to fill mask, the error raised as above. Now, i am confused.", "`BertForSequenceClassification` is meant for classification, it won't work for mask-filling task. If you want to fine-tune BERT for masked lm task then you should use BertForMaskedLM", "Check this example for how to fine-tune bert for masked LM\r\n\r\nhttps://github.com/huggingface/transformers/tree/master/examples/language-modeling#robertabertdistilbert-and-masked-language-modeling", "Thank @patil-suraj, do you know how to use BART with in-filling scheme (where spans of text are replaced with a single mask token)? I have not seen this pipline \r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "> `---------------------------------------------------------------------------\r\n> ValueError Traceback (most recent call last)\r\n> in ()\r\n> 1 my_model.to('cpu')\r\n> 2 nlp = pipeline(task='fill-mask', model=my_model, tokenizer=tokenizer)\r\n> ----> 3 nlp('我是你爸爸')\r\n> \r\n> /usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in **call**(self, *args, **kwargs)\r\n> 804 values, predictions = topk.values.numpy(), topk.indices.numpy()\r\n> 805 else:\r\n> --> 806 masked_index = (input_ids == self.tokenizer.mask_token_id).nonzero().item()\r\n> 807 logits = outputs[i, masked_index, :]\r\n> 808 probs = logits.softmax(dim=0)\r\n> \r\n> ValueError: only one element tensors can be converted to Python `scalars`\r\n> `\r\n> hi, my model is fine-tuned by BertForSequenceClassification. Then i use it to fill mask, the error raised as above. Now, i am confused.\r\n\r\n老哥稳!" ]
1,591
1,600
1,598
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> I have fine-tuned a BERT model by classification task, use transformers.BertForSequenceClassification. Now, i want use this model to fill mask when i give a input like: 'my dog \<mask\> beautiful'. Can this be implemented? I would really appreciate if someone can teach me how to implement.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4955/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4955/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4954
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4954/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4954/comments
https://api.github.com/repos/huggingface/transformers/issues/4954/events
https://github.com/huggingface/transformers/pull/4954
637,557,800
MDExOlB1bGxSZXF1ZXN0NDMzNTIyMTA5
4,954
ElectraForMultipleChoice
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4954?src=pr&el=h1) Report\n> Merging [#4954](https://codecov.io/gh/huggingface/transformers/pull/4954?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/efeb75b8054cc299698cf8bc09f395ada2660745&el=desc) will **increase** coverage by `0.08%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4954/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4954?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4954 +/- ##\n==========================================\n+ Coverage 77.24% 77.33% +0.08% \n==========================================\n Files 133 133 \n Lines 22134 22166 +32 \n==========================================\n+ Hits 17097 17141 +44 \n+ Misses 5037 5025 -12 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4954?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4954/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.16% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4954/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.76% <ø> (ø)` | |\n| [src/transformers/configuration\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4954/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VsZWN0cmEucHk=) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4954/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `80.12% <100.00%> (+1.95%)` | :arrow_up: |\n| 
[src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4954/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4954/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4954/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.57% <0.00%> (+1.55%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4954?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4954?src=pr&el=footer). Last update [efeb75b...9cb71f9](https://codecov.io/gh/huggingface/transformers/pull/4954?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "hi @LysandreJik could you tell me why these tests are failing now ? Thanks.", "Looks like @LysandreJik deleted a function by mistake during the merge (the `create_and_check_electra_for_multiple_choice`), could you add it back?", "My bad, sorry about that. Thanks for the fix!" ]
1,591
1,592
1,592
MEMBER
null
This PR adds `ElectraForMultipleChoice`, one of the missing models in this [project](https://github.com/huggingface/transformers/projects/17). Since pooled outputs are needed for multiple choice, it also adds an `ElectraPooler` class. @sgugger , @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4954/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4954/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4954", "html_url": "https://github.com/huggingface/transformers/pull/4954", "diff_url": "https://github.com/huggingface/transformers/pull/4954.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4954.patch", "merged_at": 1592506775000 }
https://api.github.com/repos/huggingface/transformers/issues/4953
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4953/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4953/comments
https://api.github.com/repos/huggingface/transformers/issues/4953/events
https://github.com/huggingface/transformers/pull/4953
637,546,349
MDExOlB1bGxSZXF1ZXN0NDMzNTEzMzI2
4,953
Added feature to move added tokens in vocabulary for Transformer-XL
{ "login": "RafaelWO", "id": 38643099, "node_id": "MDQ6VXNlcjM4NjQzMDk5", "avatar_url": "https://avatars.githubusercontent.com/u/38643099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RafaelWO", "html_url": "https://github.com/RafaelWO", "followers_url": "https://api.github.com/users/RafaelWO/followers", "following_url": "https://api.github.com/users/RafaelWO/following{/other_user}", "gists_url": "https://api.github.com/users/RafaelWO/gists{/gist_id}", "starred_url": "https://api.github.com/users/RafaelWO/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RafaelWO/subscriptions", "organizations_url": "https://api.github.com/users/RafaelWO/orgs", "repos_url": "https://api.github.com/users/RafaelWO/repos", "events_url": "https://api.github.com/users/RafaelWO/events{/privacy}", "received_events_url": "https://api.github.com/users/RafaelWO/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4953?src=pr&el=h1) Report\n> Merging [#4953](https://codecov.io/gh/huggingface/transformers/pull/4953?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5620033115e571013325d017bcca92991b0a4ace&el=desc) will **increase** coverage by `0.72%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4953/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4953?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4953 +/- ##\n==========================================\n+ Coverage 76.49% 77.21% +0.72% \n==========================================\n Files 128 128 \n Lines 21745 21756 +11 \n==========================================\n+ Hits 16633 16799 +166 \n+ Misses 5112 4957 -155 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4953?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.21% <100.00%> (+1.53%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.49% <0.00%> (-0.12%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `73.09% <0.00%> (+0.29%)` | :arrow_up: |\n| 
[src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (+0.77%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.48% <0.00%> (+1.40%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `99.14% <0.00%> (+2.57%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4953?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4953?src=pr&el=footer). Last update [5620033...265ea74](https://codecov.io/gh/huggingface/transformers/pull/4953?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Left some nitpicks, but overall this looks great :-) ", "LGTM" ]
1,591
1,593
1,592
CONTRIBUTOR
null
As discussed in #3554, the tokens in the tokenizer have to be shifted when adding a new token to (i.e., resizing) an embedding layer other than the last one. Of course, this applies only to an `AdaptiveEmbedding` with more than one layer. This implementation adds a function to move an added token in the tokenizer to a specific position. This is closely related to PR #4759
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4953/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4953/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4953", "html_url": "https://github.com/huggingface/transformers/pull/4953", "diff_url": "https://github.com/huggingface/transformers/pull/4953.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4953.patch", "merged_at": 1592833253000 }
https://api.github.com/repos/huggingface/transformers/issues/4952
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4952/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4952/comments
https://api.github.com/repos/huggingface/transformers/issues/4952/events
https://github.com/huggingface/transformers/issues/4952
637,472,648
MDU6SXNzdWU2Mzc0NzI2NDg=
4,952
Data used for training MarianMT models
{ "login": "yashkant", "id": 17530895, "node_id": "MDQ6VXNlcjE3NTMwODk1", "avatar_url": "https://avatars.githubusercontent.com/u/17530895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yashkant", "html_url": "https://github.com/yashkant", "followers_url": "https://api.github.com/users/yashkant/followers", "following_url": "https://api.github.com/users/yashkant/following{/other_user}", "gists_url": "https://api.github.com/users/yashkant/gists{/gist_id}", "starred_url": "https://api.github.com/users/yashkant/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yashkant/subscriptions", "organizations_url": "https://api.github.com/users/yashkant/orgs", "repos_url": "https://api.github.com/users/yashkant/repos", "events_url": "https://api.github.com/users/yashkant/events{/privacy}", "received_events_url": "https://api.github.com/users/yashkant/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, @yashkant, you might be able to find these details here https://github.com/Helsinki-NLP/Opus-MT" ]
1,591
1,591
1,591
NONE
null
# ❓ Questions & Help Are there specifications available on the amount and type of training data used for the released MarianMT [models](https://huggingface.co/transformers/model_doc/marian.html)? ## Details <!-- Description of your issue --> I want to use these models for back translation in order to augment my current training data (the [VQA dataset](http://visualqa.org/), which is in English). I understand that for this, I could use any number of models in conjunction with `en` as both source and target language. However, I am concerned about the domain mismatch of the VQA dataset vs. the training data of the MT model. For example, using the `en-ROMANCE` and `ROMANCE-en` models, I find that the back-translation of the input sentence `['>>lmo<< What do you think are the children playing with?']` is `['What do you think of the youth of the jubilee?']` Also, do you have any suggestions/guidelines on which/how many models to use, particularly for this use case? I did not find this question suitable for SO, hence I posted it here directly. Thank you for releasing the models!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4952/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4952/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4951
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4951/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4951/comments
https://api.github.com/repos/huggingface/transformers/issues/4951/events
https://github.com/huggingface/transformers/pull/4951
637,464,478
MDExOlB1bGxSZXF1ZXN0NDMzNDQ4NTA2
4,951
[examples] SummarizationModule improvements
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1845609017, "node_id": "MDU6TGFiZWwxODQ1NjA5MDE3", "url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq", "name": "seq2seq", "color": "fef2c0", "default": false, "description": "" }, { "id": 1936351150, "node_id": "MDU6TGFiZWwxOTM2MzUxMTUw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Examples", "name": "Examples", "color": "d4c5f9", "default": false, "description": "Which is related to examples in general" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4951?src=pr&el=h1) Report\n> Merging [#4951](https://codecov.io/gh/huggingface/transformers/pull/4951?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c852036b4abca2c20e1adf92eda48472a7d84ef0&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4951/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4951?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4951 +/- ##\n=======================================\n Coverage 77.41% 77.42% \n=======================================\n Files 130 130 \n Lines 22023 22023 \n=======================================\n+ Hits 17050 17051 +1 \n+ Misses 4973 4972 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4951?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4951/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.26% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4951/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4951/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4951?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4951?src=pr&el=footer). Last update [c852036...b7e1d5e](https://codecov.io/gh/huggingface/transformers/pull/4951?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Merging now. Happy to address post-merge comments!" ]
1,591
1,592
1,592
CONTRIBUTOR
null
This PR makes the SummarizationTrainer much more usable, and when improvements are not unique to summarization, they are implemented in `lightning_base.py` instead. - **Checkpointing** Before this PR, the code saves 5GB of PL checkpoints per epoch; now SummarizationTrainer saves the best checkpoint based on ROUGE 2 score, and also saves it in huggingface `save_pretrained` format using the `on_save_checkpoint` hook. This will help resolve lots of confusion in various issues about how to load the pl checkpoints. The current summarization code can only accept bs=1 and takes 24h to run 1 epoch on CNN DM. With the following changes, you can train much faster, if you wish. The docs suggested that larger batch sizes were possible with default params, which is fixed. ### Changes to Allow Faster Summarization Training *these are all optional and turned off by default* 1) freezing: before this PR, it was basically only possible to finetune with batch size 2-4 on a 16GB system. With `--freeze_embeds` and `--freeze_encoder`, you can get the batch size MUCH higher, towards 32. I've seen strong results with these options. 2) On CNNDM and XSUM the datasets are 200K examples, and epochs are very long. For this reason it is preferable to run validation (and get a rouge score) more frequently, but with previous params each `validation_step` took 1hr. By passing `--n_val=1000 --val_check_interval=0.25`, you can run validation 4x per epoch and it only takes 3 minutes. It also allows the config's beam search parameters to be used, rather than hardcoding faster but lower-scoring ones. 3) `{train|val|test}_max_target_length`: I have found it preferable to truncate train summaries to 56 for XSUM and CNNDM, but doing this for val/test artificially inflates rouge scores. So these clargs are separated. Changes to `lightning_base` - Number of trainable parameters and total parameters are logged by default.
- All possible `pl.Trainer` clargs are passed through `add_generic_args` (Inspired by @nateraw) ### WandbLogger - `--logger wandb` will instantiate a default wandb logger. - `--logger wandb_shared` will post results to [here](https://app.wandb.ai/sshleifer/hf_summarization/table?workspace=user-), so that the community can compare hyperparameter settings empirically. - the default logger is still tensorboard logger because it doesn't require making an account. ### Distillation - `SummarizationDistiller` and `T5SummarizationDistiller` are checked in. This code was sent to me by a researcher who wishes to remain anonymous. DM to discuss.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4951/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4951/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4951", "html_url": "https://github.com/huggingface/transformers/pull/4951", "diff_url": "https://github.com/huggingface/transformers/pull/4951.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4951.patch", "merged_at": 1592416295000 }
https://api.github.com/repos/huggingface/transformers/issues/4950
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4950/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4950/comments
https://api.github.com/repos/huggingface/transformers/issues/4950/events
https://github.com/huggingface/transformers/issues/4950
637,422,058
MDU6SXNzdWU2Mzc0MjIwNTg=
4,950
GPT2DoubleHeadsModel Unexpected node type: onnx::Sub
{ "login": "mihail911", "id": 2789441, "node_id": "MDQ6VXNlcjI3ODk0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/2789441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mihail911", "html_url": "https://github.com/mihail911", "followers_url": "https://api.github.com/users/mihail911/followers", "following_url": "https://api.github.com/users/mihail911/following{/other_user}", "gists_url": "https://api.github.com/users/mihail911/gists{/gist_id}", "starred_url": "https://api.github.com/users/mihail911/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mihail911/subscriptions", "organizations_url": "https://api.github.com/users/mihail911/orgs", "repos_url": "https://api.github.com/users/mihail911/repos", "events_url": "https://api.github.com/users/mihail911/events{/privacy}", "received_events_url": "https://api.github.com/users/mihail911/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @mihail911, thanks for reporting the issue and the script to reproduce.\r\n\r\nI can confirm the issue, as it seems to happen on PyTorch side, I suspect it's a bug on their side. @tianleiwu should we forward the issue on PyTorch issue tracker? \r\n\r\nSlightly updated the script to avoid errors: \r\n\r\n```python\r\nimport torch\r\nfrom transformers import (GPT2Config, GPT2Model, GPT2Tokenizer, GPT2DoubleHeadsModel)\r\n\r\n# use_cache is True by default in GPT2Model. Here we wrap a class to disable past state output.\r\nclass GPT2DoubleHeadsModelNoPastState(GPT2DoubleHeadsModel):\r\n def __init__(self, config):\r\n super().__init__(config)\r\n\r\n def forward(self, input_ids, token_type_ids):\r\n return super().forward(input_ids, past=None, attention_mask=None, token_type_ids=token_type_ids, use_cache=False)\r\n\r\nmodel_name=\"gpt2\"\r\nconfig = GPT2Config.from_pretrained(model_name)\r\ntokenizer = GPT2Tokenizer.from_pretrained(model_name)\r\nmodel = GPT2DoubleHeadsModelNoPastState.from_pretrained(model_name)\r\n\r\nexample_inputs = tokenizer.encode_plus(\"This is a sample input\", return_tensors=\"pt\")\r\ndel example_inputs[\"attention_mask\"]\r\nexample_outputs = model(**example_inputs)\r\n\r\ninput_names = ['input_ids', 'token_type_ids']\r\noutput_names=['output_1', 'output_2']\r\ndynamic_axes={'input_ids': {0: 'batch_size', 1: 'num_choices', 2: 'seq_len'}, \r\n 'token_type_ids': {0: 'batch_size', 1: 'num_choices', 2: 'seq_len'},\r\n 'output_1': {0: 'batch_size', 1: 'num_choices', 2: 'seq_len', 3: 'vocab_size'}, \r\n 'output_2': {0: 'batch_size', 1: 'num_choices'}\r\n}\r\noutput_path='gpt2.onnx'\r\ntorch.onnx.export(model=model,\r\n args=(example_inputs[input_names[0]].unsqueeze(0), example_inputs[input_names[1]].unsqueeze(0)),\r\n f=output_path,\r\n input_names=input_names,\r\n output_names=output_names,\r\n example_outputs=example_outputs,\r\n dynamic_axes=dynamic_axes,\r\n do_constant_folding=True,\r\n opset_version=11,\r\n 
use_external_data_format=False)\r\n```", "@mfuntowicz, I've forwarded the issue to the developer of pytorch onnx exporter.\r\n\r\nI did narrow down the issue to [one line](https://github.com/huggingface/transformers/blob/ca5e1cdf8e314288bd0242a531815a6c75d8178e/src/transformers/modeling_utils.py#L2056). A walk-around is to add int() to cast data type:\r\n\r\nBefore:\r\n```\r\ncls_index = torch.full_like(hidden_states[..., :1, :], hidden_states.shape[-2] - 1, dtype=torch.long,)\r\n```\r\n\r\nAfter:\r\n```\r\ncls_index = torch.full_like(hidden_states[..., :1, :], int(hidden_states.shape[-2]) - 1, dtype=torch.long,)\r\n```\r\n@mihail911, could you try this (need install transformers from source) to see whether you can export the model?\r\n\r\n\r\n\r\n", "Yes I am able to export the model. Thanks @tianleiwu @mfuntowicz!" ]
1,591
1,592
1,592
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): GPT2DoubleHeadsModel Language I am using the model on (English, Chinese ...): English The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: I've been following the ipython notebook provided [here](https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb) 1. Take an off-the-shelf pretrained `gpt` model and export to onnx format using the following script: ``` import torch from transformers import (GPT2Config, GPT2Model, GPT2Tokenizer, GPT2DoubleHeadsModel) # use_cache is True by default in GPT2Model. Here we wrap a class to disable past state output. class GPT2DoubleHeadsModelNoPastState(GPT2DoubleHeadsModel): def __init__(self, config): super().__init__(config) def forward(self, input_ids, token_type_ids): return super().forward(input_ids, past=None, attention_mask=None, token_type_ids=token_type_ids, use_cache=False) model_name="gpt2" config = GPT2Config.from_pretrained(model_name) tokenizer = GPT2Tokenizer.from_pretrained(model_name) model = GPT2DoubleHeadsModelNoPastState.from_pretrained(model_name) example_inputs = tokenizer.encode_plus("This is a sample input", return_tensors="pt") del example_inputs["attention_mask"] example_outputs = model(**example_inputs) input_names = ['input_ids', 'token_type_ids'] output_names=['output_1', 'output_2'] dynamic_axes={'input_ids': {0: 'batch_size', 1: 'num_choices', 2: 'seq_len'}, 'token_type_ids': {0: 'batch_size', 1: 'num_choices', 2: 'seq_len'}, 'output_1': {0: 'batch_size', 1: 'num_choices', 2: 'seq_len', 3: 'vocab_size'}, 'output_2': {0: 'batch_size', 1: 'num_choices'} } output_path='gpt2.onnx' torch.onnx.export(model=model, args=(example_inputs[input_names[0]].unsqueeze(0),
example_inputs[input_names[1]].unsqueeze(0)), f=output_path, input_names=input_names, output_names=output_names, example_outputs=example_outputs, dynamic_axes=dynamic_axes, do_constant_folding=True, opset_version=11, use_external_data_format=False) ``` This script is based off of #4805 2. After invoking the above, I get the error: ``` .../torch/onnx/symbolic_helper.py", line 87... RuntimeError: Unexpected node type: onnx::Sub ``` ## Expected behavior I would expect this to work successfully, and unfortunately I'm not exactly sure how to interpret this error. There's not a lot of documentation online. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: Commit 0e1869cc286d607f1598506be7bd1312b76ca82c - Onnxruntime: 1.3.0 - Python version: 3.6.10 - PyTorch version (GPU?): 1.5.0+cu101 - Using GPU in script?: Yes Thanks for your help! @mfuntowicz @tianleiwu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4950/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4950/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4949
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4949/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4949/comments
https://api.github.com/repos/huggingface/transformers/issues/4949/events
https://github.com/huggingface/transformers/pull/4949
637,410,581
MDExOlB1bGxSZXF1ZXN0NDMzNDA2Mjk0
4,949
[mbart] Fix fp16 testing logic
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4949?src=pr&el=h1) Report\n> Merging [#4949](https://codecov.io/gh/huggingface/transformers/pull/4949?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/473808da0d476792070f0e7dfebcf1121a12a34f&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4949/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4949?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4949 +/- ##\n=======================================\n Coverage 77.14% 77.14% \n=======================================\n Files 128 128 \n Lines 21745 21745 \n=======================================\n Hits 16775 16775 \n Misses 4970 4970 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4949?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4949/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4949/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.79% <0.00%> (+0.40%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4949?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4949?src=pr&el=footer). Last update [473808d...a9ebdd5](https://codecov.io/gh/huggingface/transformers/pull/4949?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,591
1,591
1,591
CONTRIBUTOR
null
the expected logits must be in the same dtype as the resulting logits.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4949/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4949/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4949", "html_url": "https://github.com/huggingface/transformers/pull/4949", "diff_url": "https://github.com/huggingface/transformers/pull/4949.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4949.patch", "merged_at": 1591927895000 }
https://api.github.com/repos/huggingface/transformers/issues/4948
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4948/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4948/comments
https://api.github.com/repos/huggingface/transformers/issues/4948/events
https://github.com/huggingface/transformers/pull/4948
637,409,273
MDExOlB1bGxSZXF1ZXN0NDMzNDA1MjY0
4,948
[wip] Send slack message if self-scheduled runner fails
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4948?src=pr&el=h1) Report\n> Merging [#4948](https://codecov.io/gh/huggingface/transformers/pull/4948?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/473808da0d476792070f0e7dfebcf1121a12a34f&el=desc) will **increase** coverage by `0.06%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4948/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4948?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4948 +/- ##\n==========================================\n+ Coverage 77.14% 77.21% +0.06% \n==========================================\n Files 128 128 \n Lines 21745 21745 \n==========================================\n+ Hits 16775 16790 +15 \n+ Misses 4970 4955 -15 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4948?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4948/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4948/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4948/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.79% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4948/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at 
Codecov](https://codecov.io/gh/huggingface/transformers/pull/4948?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4948?src=pr&el=footer). Last update [473808d...7329595](https://codecov.io/gh/huggingface/transformers/pull/4948?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "@julien-c do you know an easy way to test whether this works without merging? ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,601
1,601
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4948/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4948/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4948", "html_url": "https://github.com/huggingface/transformers/pull/4948", "diff_url": "https://github.com/huggingface/transformers/pull/4948.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4948.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4947
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4947/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4947/comments
https://api.github.com/repos/huggingface/transformers/issues/4947/events
https://github.com/huggingface/transformers/issues/4947
637,403,720
MDU6SXNzdWU2Mzc0MDM3MjA=
4,947
Trainer.evaluate does not support seq2seq models
{ "login": "Guitaricet", "id": 2821124, "node_id": "MDQ6VXNlcjI4MjExMjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2821124?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Guitaricet", "html_url": "https://github.com/Guitaricet", "followers_url": "https://api.github.com/users/Guitaricet/followers", "following_url": "https://api.github.com/users/Guitaricet/following{/other_user}", "gists_url": "https://api.github.com/users/Guitaricet/gists{/gist_id}", "starred_url": "https://api.github.com/users/Guitaricet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Guitaricet/subscriptions", "organizations_url": "https://api.github.com/users/Guitaricet/orgs", "repos_url": "https://api.github.com/users/Guitaricet/repos", "events_url": "https://api.github.com/users/Guitaricet/events{/privacy}", "received_events_url": "https://api.github.com/users/Guitaricet/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi @Guitaricet , if you only want to evaluate for loss (AFAIK this is the case for seq2seq models) then you can set `prediction_loss_only` to `True`", "Hi! Thank you, but I need the metrics too. Workaround was to inherit from `Trainer` and override `_prediction_loop`. ", "That sounds like a reasonable solution, but we should document this somewhere. Pinging @sgugger on this:)", "Yes, documentation about trainer would be awesome! Would love to contribute", "Still no updates on this issue?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,602
1,602
NONE
null
# 🐛 Bug ## Information Hi! I can't thank you enough for Transformers. I know that the Trainer is still under development, but would like to report this just to know the current status. Currently `Trainer._prediction_loop` assumes that different batches of data have the same shape. Specifically, [this line](https://github.com/huggingface/transformers/blob/473808da0d476792070f0e7dfebcf1121a12a34f/src/transformers/trainer.py#L786) ```python preds = torch.cat((preds, logits.detach()), dim=0) ``` This does not allow to use Trainer.evaluate for models with a variable output (e.g. seq2seq models). One of the possible solutions is to pad all batches to the same length, but it is pretty inefficient. The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. create seq2seq model 2. pad batches in such a way that each batch is padded to the maximum length within batch 3. create Trainer for the model, call .evaluate() ``` Traceback (most recent call last): File "/home/vlialin/miniconda3/lib/python3.7/site-packages/transformers/trainer.py", line 509, in train self.evaluate() File "/home/vlialin/miniconda3/lib/python3.7/site-packages/transformers/trainer.py", line 696, in evaluate output = self._prediction_loop(eval_dataloader, description="Evaluation") File "/home/vlialin/miniconda3/lib/python3.7/site-packages/transformers/trainer.py", line 767, in _prediction_loop preds = torch.cat((preds, logits.detach()), dim=0) RuntimeError: Sizes of tensors must match except in dimension 0. Got 29 and 22 in dimension 1 ``` ## Expected behavior Trainer is able to evaluate Seq2seq ## Environment info - `transformers` version: 2.11 - Platform: Linux - Python version: 3.7.6 - PyTorch version (GPU?): 1.5.0 - Tensorflow version (GPU?): 2.2.0 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4947/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4947/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4946
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4946/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4946/comments
https://api.github.com/repos/huggingface/transformers/issues/4946/events
https://github.com/huggingface/transformers/pull/4946
637,372,899
MDExOlB1bGxSZXF1ZXN0NDMzMzc1Nzc2
4,946
feat(TFTrainer): improve logging
{ "login": "borisdayma", "id": 715491, "node_id": "MDQ6VXNlcjcxNTQ5MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/borisdayma", "html_url": "https://github.com/borisdayma", "followers_url": "https://api.github.com/users/borisdayma/followers", "following_url": "https://api.github.com/users/borisdayma/following{/other_user}", "gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}", "starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions", "organizations_url": "https://api.github.com/users/borisdayma/orgs", "repos_url": "https://api.github.com/users/borisdayma/repos", "events_url": "https://api.github.com/users/borisdayma/events{/privacy}", "received_events_url": "https://api.github.com/users/borisdayma/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Integration test error does not seem to be related to this PR.\r\n\r\n@jplu On another note, I got strange results while using `run_tf_glue` as learning rate goes quickly to 0.\r\n\r\nCommand:\r\n`run_tf_glue.py --model_name_or_path bert-base-cased --task_name MRPC --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/MRPC/ --overwrite_output_dir --logging_dir log --evaluate_during_training --eval_steps 50 --logging_steps 10`\r\n\r\nGraph\r\n![image](https://user-images.githubusercontent.com/715491/84448088-5e9c2600-ac0f-11ea-9994-bdb64c84c483.png)\r\n\r\nW&B Run: https://app.wandb.ai/borisd13/huggingface/runs/21rxop7c", "Good you have restarted a new PR!\r\n\r\nHumm for the loss that drop quickly to 0 I think it might come from your side because I get normal evolution even after multiple runs. Here an output:\r\n\r\n```\r\n06/12/2020 10:05:38 - INFO - transformers.trainer_tf - ***** Running training *****\r\n06/12/2020 10:05:38 - INFO - transformers.trainer_tf - Num examples = 3668\r\n06/12/2020 10:05:38 - INFO - transformers.trainer_tf - Num Epochs = 3\r\n06/12/2020 10:05:38 - INFO - transformers.trainer_tf - Total optimization steps = 29\r\nWARNING:tensorflow:From /home/jplu/transformers/src/transformers/trainer_tf.py:355: StrategyBase.experimental_run_v2 (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version.\r\nInstructions for updating:\r\nrenamed to `run`\r\n06/12/2020 10:05:38 - WARNING - tensorflow - From /home/jplu/transformers/src/transformers/trainer_tf.py:355: StrategyBase.experimental_run_v2 (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version.\r\nInstructions for updating:\r\nrenamed to `run`\r\nWARNING:tensorflow:From /opt/anaconda3/envs/jplu-transformers/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py:1817: calling 
BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\r\nInstructions for updating:\r\nIf using Keras pass *_constraint arguments to layers.\r\n06/12/2020 10:05:47 - WARNING - tensorflow - From /opt/anaconda3/envs/jplu-transformers/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py:1817: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\r\nInstructions for updating:\r\nIf using Keras pass *_constraint arguments to layers.\r\nINFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).\r\n06/12/2020 10:06:17 - INFO - tensorflow - Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).\r\nINFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).\r\n06/12/2020 10:06:17 - INFO - tensorflow - Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).\r\nINFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).\r\n06/12/2020 10:06:53 - INFO - tensorflow - Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).\r\nINFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).\r\n06/12/2020 10:06:53 - INFO - tensorflow - Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).\r\nINFO:tensorflow:batch_all_reduce: 201 all-reduces with algorithm = nccl, num_packs = 
1\r\n06/12/2020 10:08:32 - INFO - tensorflow - batch_all_reduce: 201 all-reduces with algorithm = nccl, num_packs = 1\r\nINFO:tensorflow:batch_all_reduce: 201 all-reduces with algorithm = nccl, num_packs = 1\r\n06/12/2020 10:08:48 - INFO - tensorflow - batch_all_reduce: 201 all-reduces with algorithm = nccl, num_packs = 1\r\n06/12/2020 10:09:46 - INFO - transformers.trainer_tf - Epoch 1 Step 10 Train Loss 0.6610\r\n06/12/2020 10:10:02 - INFO - transformers.trainer_tf - Epoch 1 Step 20 Train Loss 0.5626\r\nINFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).\r\n06/12/2020 10:10:46 - INFO - tensorflow - Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).\r\nINFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).\r\n06/12/2020 10:10:46 - INFO - tensorflow - Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).\r\n06/12/2020 10:11:15 - INFO - transformers.trainer_tf - Epoch 2 Step 30 Train Loss 0.5585\r\n06/12/2020 10:11:31 - INFO - transformers.trainer_tf - Epoch 2 Step 40 Train Loss 0.5492\r\n06/12/2020 10:11:47 - INFO - transformers.trainer_tf - ***** Running Evaluation *****\r\n06/12/2020 10:11:47 - INFO - transformers.trainer_tf - Batch size = 32\r\nINFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).\r\n06/12/2020 10:11:53 - INFO - tensorflow - Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).\r\nINFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).\r\n06/12/2020 10:11:53 - INFO - tensorflow - Reduce to 
/job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).\r\nINFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).\r\n06/12/2020 10:12:04 - INFO - tensorflow - Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).\r\nINFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).\r\n06/12/2020 10:12:04 - INFO - tensorflow - Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).\r\n06/12/2020 10:12:08 - INFO - transformers.trainer_tf - Epoch 2 Step 50 Validation Metrics {'eval_eval_loss': 0.5300102, 'eval_eval_acc': 0.7230392156862745, 'eval_eval_f1': 0.8274809160305343, 'eval_eval_acc_and_f1': 0.7752600658584043, 'learning_rate': 0.0}\r\n06/12/2020 10:12:08 - INFO - transformers.trainer_tf - Epoch 2 Step 50 Train Loss 0.6273\r\n06/12/2020 10:13:21 - INFO - transformers.trainer_tf - Epoch 3 Step 60 Train Loss 0.6168\r\n06/12/2020 10:13:37 - INFO - transformers.trainer_tf - Epoch 3 Step 70 Train Loss 0.5616\r\n06/12/2020 10:13:53 - INFO - transformers.trainer_tf - Epoch 3 Step 80 Train Loss 0.5551\r\n06/12/2020 10:14:05 - INFO - transformers.trainer_tf - Saving model in /tmp/MRPC/\r\n```\r\n\r\nI used the exact same command line than yours.", "I seem to be having a different value of `self.train_steps`.\r\n\r\n```\r\nNum examples = 3668\r\nNum Epochs = 3\r\nTotal optimization steps = 115\r\n```\r\nHere is my full log output: https://app.wandb.ai/borisd13/huggingface/runs/1u5spvau/logs\r\n\r\nActually I just checked and I'm getting the exact same problem in the `master` branch so i guess the issue I'm having is independent from this PR.\r\n\r\nLet me know if you need any modification on this PR.", "Sorry I don't see 
what is the issue in your logs, all the values seems ok. I get 29 steps because I run over 4 gpus.", "Here is a gist with full output: https://gist.github.com/borisdayma/62fd9338aae4f373a9c0709a8961f5bc\r\n\r\nIt's probably independent. I could file a separate issue.", "I still don't see what is the issue sorry, everything looks normal.", "@jplu actually I noticed you have the same issue.\r\n\r\n```\r\n06/12/2020 10:12:08 - INFO - transformers.trainer_tf - Epoch 2 Step 50 Validation Metrics {'eval_eval_loss': 0.5300102, 'eval_eval_acc': 0.7230392156862745, 'eval_eval_f1': 0.8274809160305343, 'eval_eval_acc_and_f1': 0.7752600658584043, 'learning_rate': 0.0}\r\n```\r\n\r\nIn current master, learning_rate is displayed only at eval. It should not be 0 at this specific step.\r\nIf you confirm let me know if you want me to file an issue.\r\n\r\nIn any case, since it's independent from this PR, let me know if you approve it.\r\nIt's always difficult to stay in sync with master so I'd like to make all necessary changes as soon as possible since it's a difficult PR.", "This is normal, it is the decay of the LR so at some point it gets 0. It is ok. I will take some time to review your PR this weekend.", "Oh I see what you mean, yes it gets to 0 at the end of the first epoch, and it shouldn't, it is fixed from my side, PR will be here so no worries we can focus on your logging improvement. Thanks!!", "@jplu I addressed your comments. Let me know if I understood correctly:\r\n\r\n* `global_step` moved to `__init__`\r\n* `epoch` added to logs directly within training loop\r\n* In addition, I also added `epoch` in eval loop when called from training loop. 
If we log training loss every 20 steps and we log evaluation metrics every 50 steps, we need to make sure we add `epoch`\r\n\r\nI'm going to add a comment for a further simplification of the code if you'd like.", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4946?src=pr&el=h1) Report\n> Merging [#4946](https://codecov.io/gh/huggingface/transformers/pull/4946?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/473808da0d476792070f0e7dfebcf1121a12a34f&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `18.18%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4946/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4946?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4946 +/- ##\n==========================================\n- Coverage 77.14% 77.14% -0.01% \n==========================================\n Files 128 128 \n Lines 21745 21773 +28 \n==========================================\n+ Hits 16775 16796 +21 \n- Misses 4970 4977 +7 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4946?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `18.98% <11.76%> (-0.07%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <100.00%> (+0.14%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (ø)` | |\n| 
[src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.79% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4946?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4946?src=pr&el=footer). Last update [473808d...28f004b](https://codecov.io/gh/huggingface/transformers/pull/4946?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Ok, we almost see the end of the tunnel. Just few comments yet :)", "The way it is now, `epoch` and `global_step` will just be 0 in eval mode only." ]
1,591
1,592
1,592
CONTRIBUTOR
null
This PR supersedes PR #4756 I wanted initially to refactor logging between `Trainer` and `TFTrainer` but there seems to be too many differences between them for it to make sense (distributed training, logging methods, `wandb.watch`…). Logging is now handled directly within each Trainer and all comments from previous PR have been applied here. Notes: * `TFTrainer`: @jplu There should be the same modifications we discussed. I just moved the check of `global_step` within logging for better readability. * `TFTrainer._log` does not have a tqdm iterator argument (unlike in `Trainer`) since you're not using it at the moment but it could be useful in the future * `Trainer`: main update is to handle the case where we do only evaluations (no training)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4946/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4946/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4946", "html_url": "https://github.com/huggingface/transformers/pull/4946", "diff_url": "https://github.com/huggingface/transformers/pull/4946.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4946.patch", "merged_at": 1592244378000 }
https://api.github.com/repos/huggingface/transformers/issues/4945
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4945/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4945/comments
https://api.github.com/repos/huggingface/transformers/issues/4945/events
https://github.com/huggingface/transformers/issues/4945
637,357,132
MDU6SXNzdWU2MzczNTcxMzI=
4,945
Unable to evaluate on fine-tuned bart for summarization
{ "login": "cshivade", "id": 43888804, "node_id": "MDQ6VXNlcjQzODg4ODA0", "avatar_url": "https://avatars.githubusercontent.com/u/43888804?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cshivade", "html_url": "https://github.com/cshivade", "followers_url": "https://api.github.com/users/cshivade/followers", "following_url": "https://api.github.com/users/cshivade/following{/other_user}", "gists_url": "https://api.github.com/users/cshivade/gists{/gist_id}", "starred_url": "https://api.github.com/users/cshivade/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cshivade/subscriptions", "organizations_url": "https://api.github.com/users/cshivade/orgs", "repos_url": "https://api.github.com/users/cshivade/repos", "events_url": "https://api.github.com/users/cshivade/events{/privacy}", "received_events_url": "https://api.github.com/users/cshivade/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Got around this by running the following code to generate the config.json and then running evaluate_cnn.py as above.\r\n\r\n```\r\nfrom lightning_base import BaseTransformer\r\nfrom finetune import SummarizationTrainer\r\nimport torch\r\nfrom argparse import Namespace\r\n\r\nargs = Namespace(adam_epsilon=1e-08, cache_dir='', config_name='', data_dir='PATH_TO_DATA', do_predict=False, do_train=False, eval_batch_size=4, learning_rate=3e-05, max_source_length=400, max_target_length=400, model_name_or_path='facebook/bart-large', n_gpu=8, num_train_epochs=3, output_dir='PATH_TO_OUTPUT', tokenizer_name='', train_batch_size=4, warmup_steps=0, weight_decay=0.0)\r\n\r\nmodel = SummarizationTrainer(args)\r\nmodel = model.load_from_checkpoint('PATH_TO_CHECKPOINT')\r\ntorch.save(model.state_dict(), args.output_dir + '/pytorch_model.bin')\r\nmodel.config.to_json_file(args.output_dir + '/config.json')\r\n```" ]
1,591
1,592
1,592
NONE
null
Hello, I used the [finetune_bart.sh ](https://github.com/huggingface/transformers/blob/master/examples/summarization/finetune_bart.sh) and was able to finetune the model on my task. I checked the output directory and checkpoints were stored and there were no errors. Following the training, i ran evaluate_cnn.py using this command. ``` python evaluate_cnn.py <path_to_test.source> test_generations.txt <model-name> --score_path rouge_scores.txt ``` specified [here](https://github.com/huggingface/transformers/tree/master/examples/summarization). I made sure the above command points to test.source for the finetuning task. However, the evaluation is unable to load the checkpoint and throws the following error ``` OSError: Can't load config for '/..PATH_TO../wiki_bart'. Make sure that: - '/..PATH_TO../wiki_bart' is a correct model identifier listed on 'https://huggingface.co/models' - or '/..PATH_TO../wiki_bart' is the correct path to a directory containing a config.json file ``` Seems like it is looking for the config.json which is not present along with the checkpoint files. I saw the finetune.py has a do_predict argument like do_train. Is that to be used instead of evaluate_cnn.py? Can you please help?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4945/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4945/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4944
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4944/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4944/comments
https://api.github.com/repos/huggingface/transformers/issues/4944/events
https://github.com/huggingface/transformers/pull/4944
637,353,574
MDExOlB1bGxSZXF1ZXN0NDMzMzU5NzQ5
4,944
Refactor proposition for multiple choice models
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4944?src=pr&el=h1) Report\n> Merging [#4944](https://codecov.io/gh/huggingface/transformers/pull/4944?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/473808da0d476792070f0e7dfebcf1121a12a34f&el=desc) will **decrease** coverage by `0.03%`.\n> The diff coverage is `91.66%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4944/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4944?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4944 +/- ##\n==========================================\n- Coverage 77.14% 77.11% -0.04% \n==========================================\n Files 128 128 \n Lines 21745 21713 -32 \n==========================================\n- Hits 16775 16743 -32 \n Misses 4970 4970 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4944?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4944/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.48% <90.00%> (-0.02%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4944/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `87.86% <100.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4944/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `92.75% <100.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4944/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.35% <100.00%> (-0.43%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4944/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.26% <100.00%> (-0.10%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4944/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4944?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4944?src=pr&el=footer). Last update [473808d...95d5bad](https://codecov.io/gh/huggingface/transformers/pull/4944?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,598
1,598
COLLABORATOR
null
Opening the discussion about refactoring the dupe code in all task-specific models. This is a proposal of design that still leaves the specific classes and their docstrings, does not change the name of their attributes for backward compatibility but delegates the actual forward method to a task-specific method in `PreTrainedModel`. This is the initial development to get feedback and suggestions for improvements :-) An alternative is to have directly a model with multiple choice built for any architecture that supports the task. Explored that in [this gist](https://gist.github.com/sgugger/edc345943c92b155e0d73ef7a1897c21) if you want to see what it could look like. The main problem with this approach is that the model-specific arguments get hidden in kwargs, not sure if this is a blocker or not.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4944/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4944/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4944", "html_url": "https://github.com/huggingface/transformers/pull/4944", "diff_url": "https://github.com/huggingface/transformers/pull/4944.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4944.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4943
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4943/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4943/comments
https://api.github.com/repos/huggingface/transformers/issues/4943/events
https://github.com/huggingface/transformers/pull/4943
637,304,752
MDExOlB1bGxSZXF1ZXN0NDMzMzE5NTc5
4,943
NER: fix construction of input examples for RoBERTa
{ "login": "stefan-it", "id": 20651387, "node_id": "MDQ6VXNlcjIwNjUxMzg3", "avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stefan-it", "html_url": "https://github.com/stefan-it", "followers_url": "https://api.github.com/users/stefan-it/followers", "following_url": "https://api.github.com/users/stefan-it/following{/other_user}", "gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}", "starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions", "organizations_url": "https://api.github.com/users/stefan-it/orgs", "repos_url": "https://api.github.com/users/stefan-it/repos", "events_url": "https://api.github.com/users/stefan-it/events{/privacy}", "received_events_url": "https://api.github.com/users/stefan-it/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4943?src=pr&el=h1) Report\n> Merging [#4943](https://codecov.io/gh/huggingface/transformers/pull/4943?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/473808da0d476792070f0e7dfebcf1121a12a34f&el=desc) will **increase** coverage by `0.06%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4943/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4943?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4943 +/- ##\n==========================================\n+ Coverage 77.14% 77.20% +0.06% \n==========================================\n Files 128 128 \n Lines 21745 21745 \n==========================================\n+ Hits 16775 16789 +14 \n+ Misses 4970 4956 -14 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4943?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4943/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4943/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4943?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4943?src=pr&el=footer). Last update [473808d...e9eb626](https://codecov.io/gh/huggingface/transformers/pull/4943?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "LGTM, thanks!" ]
1,591
1,592
1,592
COLLABORATOR
null
Hi, this PR avoids adding an extra `</s>` symbol, so that the final sequence ends with `</s> </s>`. The documentation clearly states, that ther's only one `</s>` expected at the end of a single sequence: https://huggingface.co/transformers/model_doc/roberta.html#transformers.RobertaTokenizer.build_inputs_with_special_tokens The `fairseq` reference implementation shows also only one `</s>` at the end: https://github.com/pytorch/fairseq/tree/master/examples/roberta#apply-byte-pair-encoding-bpe-to-input-text Technically, the `sep_token_extra` argument is set to `False` now - I didn't remove this parameter, so future models/tokenizers can use it (when they really need it). This fixes #4755.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4943/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4943/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4943", "html_url": "https://github.com/huggingface/transformers/pull/4943", "diff_url": "https://github.com/huggingface/transformers/pull/4943.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4943.patch", "merged_at": 1592224241000 }
https://api.github.com/repos/huggingface/transformers/issues/4942
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4942/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4942/comments
https://api.github.com/repos/huggingface/transformers/issues/4942/events
https://github.com/huggingface/transformers/issues/4942
637,293,439
MDU6SXNzdWU2MzcyOTM0Mzk=
4,942
Dataloader in Trainer num_workers > 0
{ "login": "Diego999", "id": 1092464, "node_id": "MDQ6VXNlcjEwOTI0NjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1092464?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Diego999", "html_url": "https://github.com/Diego999", "followers_url": "https://api.github.com/users/Diego999/followers", "following_url": "https://api.github.com/users/Diego999/following{/other_user}", "gists_url": "https://api.github.com/users/Diego999/gists{/gist_id}", "starred_url": "https://api.github.com/users/Diego999/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Diego999/subscriptions", "organizations_url": "https://api.github.com/users/Diego999/orgs", "repos_url": "https://api.github.com/users/Diego999/repos", "events_url": "https://api.github.com/users/Diego999/events{/privacy}", "received_events_url": "https://api.github.com/users/Diego999/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I also have the same issue.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,602
1,602
NONE
null
# ❓ Questions & Help I notice that when doing inference/training with a pre-trained language model in a Trainer, only one worker is used. For training it's not a problem as the batch size is usually small, however, for inference, I can largely increase it. However, it seems that https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L263 only supports num_workers=0. I tried to modified it manually and put num_workers=10, but no effect. Am I doing something wrong in my reasoning? Thanks for your help!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4942/reactions", "total_count": 8, "+1": 8, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4942/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4941
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4941/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4941/comments
https://api.github.com/repos/huggingface/transformers/issues/4941/events
https://github.com/huggingface/transformers/pull/4941
637,259,841
MDExOlB1bGxSZXF1ZXN0NDMzMjgxNTg5
4,941
[Benchmark] fix indentation error
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4941?src=pr&el=h1) Report\n> Merging [#4941](https://codecov.io/gh/huggingface/transformers/pull/4941?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/699541c4b34479451c91b3c6c204d904f62bed83&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `0.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4941/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4941?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4941 +/- ##\n==========================================\n- Coverage 77.17% 77.17% -0.01% \n==========================================\n Files 128 128 \n Lines 21723 21722 -1 \n==========================================\n- Hits 16764 16763 -1 \n Misses 4959 4959 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4941?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/4941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `69.42% <0.00%> (+0.56%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4941?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4941?src=pr&el=footer). 
Last update [699541c...7f4d433](https://codecov.io/gh/huggingface/transformers/pull/4941?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,591
1,591
1,591
MEMBER
null
Because of a wrong indentation, the memory usage was calculated incorrectly
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4941/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4941/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4941", "html_url": "https://github.com/huggingface/transformers/pull/4941", "diff_url": "https://github.com/huggingface/transformers/pull/4941.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4941.patch", "merged_at": 1591903682000 }
https://api.github.com/repos/huggingface/transformers/issues/4940
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4940/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4940/comments
https://api.github.com/repos/huggingface/transformers/issues/4940/events
https://github.com/huggingface/transformers/pull/4940
637,243,143
MDExOlB1bGxSZXF1ZXN0NDMzMjY3Njc4
4,940
Delay decay schedule until the end of warmup
{ "login": "amodaresi", "id": 15783079, "node_id": "MDQ6VXNlcjE1NzgzMDc5", "avatar_url": "https://avatars.githubusercontent.com/u/15783079?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amodaresi", "html_url": "https://github.com/amodaresi", "followers_url": "https://api.github.com/users/amodaresi/followers", "following_url": "https://api.github.com/users/amodaresi/following{/other_user}", "gists_url": "https://api.github.com/users/amodaresi/gists{/gist_id}", "starred_url": "https://api.github.com/users/amodaresi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amodaresi/subscriptions", "organizations_url": "https://api.github.com/users/amodaresi/orgs", "repos_url": "https://api.github.com/users/amodaresi/repos", "events_url": "https://api.github.com/users/amodaresi/events{/privacy}", "received_events_url": "https://api.github.com/users/amodaresi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4940?src=pr&el=h1) Report\n> Merging [#4940](https://codecov.io/gh/huggingface/transformers/pull/4940?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6293eb04dfef704e87a6e0b358848ffc41587b4f&el=desc) will **increase** coverage by `0.40%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4940/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4940?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4940 +/- ##\n==========================================\n+ Coverage 77.20% 77.60% +0.40% \n==========================================\n Files 128 128 \n Lines 21746 21746 \n==========================================\n+ Hits 16789 16877 +88 \n+ Misses 4957 4869 -88 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4940?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `57.27% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+23.24%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at 
Codecov](https://codecov.io/gh/huggingface/transformers/pull/4940?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4940?src=pr&el=footer). Last update [6293eb0...96ae67a](https://codecov.io/gh/huggingface/transformers/pull/4940?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "@jplu what do you think of this?", "Fix #5098" ]
1,591
1,595
1,593
CONTRIBUTOR
null
The decay schedule should start at the end of the warmup steps. Without this simple edit, learning rate will drop at the end of the warmup steps, like the figure shown below: <img width="378" alt="image" src="https://user-images.githubusercontent.com/15783079/84426617-c5820500-ac38-11ea-9579-2c0d5f374e33.png">
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4940/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4940/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4940", "html_url": "https://github.com/huggingface/transformers/pull/4940", "diff_url": "https://github.com/huggingface/transformers/pull/4940.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4940.patch", "merged_at": 1593011910000 }
https://api.github.com/repos/huggingface/transformers/issues/4939
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4939/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4939/comments
https://api.github.com/repos/huggingface/transformers/issues/4939/events
https://github.com/huggingface/transformers/pull/4939
637,207,304
MDExOlB1bGxSZXF1ZXN0NDMzMjM4NDMz
4,939
[cleanup] Hoist ModelTester objects to top level
{ "login": "aretius", "id": 18247856, "node_id": "MDQ6VXNlcjE4MjQ3ODU2", "avatar_url": "https://avatars.githubusercontent.com/u/18247856?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aretius", "html_url": "https://github.com/aretius", "followers_url": "https://api.github.com/users/aretius/followers", "following_url": "https://api.github.com/users/aretius/following{/other_user}", "gists_url": "https://api.github.com/users/aretius/gists{/gist_id}", "starred_url": "https://api.github.com/users/aretius/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aretius/subscriptions", "organizations_url": "https://api.github.com/users/aretius/orgs", "repos_url": "https://api.github.com/users/aretius/repos", "events_url": "https://api.github.com/users/aretius/events{/privacy}", "received_events_url": "https://api.github.com/users/aretius/received_events", "type": "User", "site_admin": false }
[ { "id": 2139563322, "node_id": "MDU6TGFiZWwyMTM5NTYzMzIy", "url": "https://api.github.com/repos/huggingface/transformers/labels/cleanup", "name": "cleanup", "color": "e7fc49", "default": false, "description": "" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4939?src=pr&el=h1) Report\n> Merging [#4939](https://codecov.io/gh/huggingface/transformers/pull/4939?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f9f8a5312e92541ff9a5f483fc4907ec87da876e&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4939/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4939?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4939 +/- ##\n=======================================\n Coverage 77.39% 77.40% \n=======================================\n Files 130 130 \n Lines 22018 22018 \n=======================================\n+ Hits 17041 17042 +1 \n+ Misses 4977 4976 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4939?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.73% <0.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4939?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4939?src=pr&el=footer). Last update [f9f8a53...16a4a1e](https://codecov.io/gh/huggingface/transformers/pull/4939?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I think we can investigate code reuse between `ModelTester`s in a separate PR, given the enormity of this one :)", "I like the changes! I only checked the files superficially. I trust @aretius and @sshleifer that no tests functionality was changed of got lost :-)", "Thanks, @sshleifer :)\r\nWould be glad to pick up code reuse in a separate PR \r\nYes @patrickvonplaten no test functionality was lost!", "Merge conflict resolved @LysandreJik " ]
1,591
1,592
1,592
CONTRIBUTOR
null
Fixes #4902 @sshleifer All tests seem to pass. Please review!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4939/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4939/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4939", "html_url": "https://github.com/huggingface/transformers/pull/4939", "diff_url": "https://github.com/huggingface/transformers/pull/4939.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4939.patch", "merged_at": 1592309024000 }
https://api.github.com/repos/huggingface/transformers/issues/4938
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4938/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4938/comments
https://api.github.com/repos/huggingface/transformers/issues/4938/events
https://github.com/huggingface/transformers/issues/4938
637,115,995
MDU6SXNzdWU2MzcxMTU5OTU=
4,938
T5ForConditionalGeneration fp16 issues
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,598
1,598
CONTRIBUTOR
null
This is a continuation of and very related to https://github.com/huggingface/transformers/issues/4586 but the issue here is `NaN` loss during finetuning, rather than `nan` in model outputs. ### Instructions to reproduce From `examples/summarization` Setup cnn tiny: ```bash wget https://s3.amazonaws.com/datasets.huggingface.co/summarization/cnn_tiny.tgz tar -xzvf cnn_tiny.tgz rm cnn_tiny.tgz export OUTPUT_DIR_NAME=bart_utest_output export CURRENT_DIR=${PWD} export OUTPUT_DIR=${CURRENT_DIR}/${OUTPUT_DIR_NAME} # Make output directory if it doesn't exist mkdir -p $OUTPUT_DIR # Add parent directory to python path to access lightning_base.py and utils.py export PYTHONPATH="../":"${PYTHONPATH}" ``` ### Call finetune.py with 'O1' ```bash python finetune.py \ --data_dir=cnn_tiny/ \ --model_name_or_path=t5-large \ --learning_rate=3e-5 \ --train_batch_size=1 \ --eval_batch_size=2 \ --output_dir=$OUTPUT_DIR \ --num_train_epochs=10 \ --n_gpu=1 \ --fp16 \ --fp16_opt_level=O1 \ --do_train $@ ``` Uses 16GB, but loss is always NAN. ### Call finetune.py without fp16 ```bash mkdir t5_large_fp32 python finetune.py \ --data_dir=cnn_tiny/ \ --model_name_or_path=t5-large \ --learning_rate=3e-5 \ --train_batch_size=1 \ --eval_batch_size=1 \ --output_dir=t5_large_fp32 \ --num_train_epochs=10 \ --n_gpu=1 \ --do_train $@ ``` Result: OOM on 16GB card, uses 19GB on RTX (24GB Card) If you try `t5_base`, it seems to use 8GB of GPU ram in fp16, and no issues.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4938/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4938/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4937
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4937/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4937/comments
https://api.github.com/repos/huggingface/transformers/issues/4937/events
https://github.com/huggingface/transformers/issues/4937
637,061,552
MDU6SXNzdWU2MzcwNjE1NTI=
4,937
What is the different options for pooler_type in Bert config ?
{ "login": "ClementViricel", "id": 19490120, "node_id": "MDQ6VXNlcjE5NDkwMTIw", "avatar_url": "https://avatars.githubusercontent.com/u/19490120?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ClementViricel", "html_url": "https://github.com/ClementViricel", "followers_url": "https://api.github.com/users/ClementViricel/followers", "following_url": "https://api.github.com/users/ClementViricel/following{/other_user}", "gists_url": "https://api.github.com/users/ClementViricel/gists{/gist_id}", "starred_url": "https://api.github.com/users/ClementViricel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ClementViricel/subscriptions", "organizations_url": "https://api.github.com/users/ClementViricel/orgs", "repos_url": "https://api.github.com/users/ClementViricel/repos", "events_url": "https://api.github.com/users/ClementViricel/events{/privacy}", "received_events_url": "https://api.github.com/users/ClementViricel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! The pooler is actually a linear layer, that is used on top of the BERT encoder (the hidden layers). It's not doing a pooling operation like average or max pooling.\r\n", "Ok i read that somewhere. But my is on what component ? If you use a pooler on the last layer of bert encoder you will get an output of size [batch, step, dense_unit], but i am sure that the pooler output is a squeeze_first output. Why is there a information about pooler_type in the config, if we cannot change the pooler_type ?" ]
1,591
1,592
1,592
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> I want to change the pooling type at the top of the output hidden states of Bert. I search in the documentation and find nothing. Can anyone help me ? I just want the different option of pooling (max, average etc.). Here's a piece of code to see the option i am talking about. `import transformers encoder = transformers.TFBertModel.from_pretrained("bert-base-uncased") encoder.config` <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4937/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4937/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4936
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4936/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4936/comments
https://api.github.com/repos/huggingface/transformers/issues/4936/events
https://github.com/huggingface/transformers/pull/4936
636,949,805
MDExOlB1bGxSZXF1ZXN0NDMzMDIzNjgz
4,936
[Model card] model card for electra-base QA model
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4936?src=pr&el=h1) Report\n> Merging [#4936](https://codecov.io/gh/huggingface/transformers/pull/4936?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/699541c4b34479451c91b3c6c204d904f62bed83&el=desc) will **increase** coverage by `0.40%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4936/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4936?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4936 +/- ##\n==========================================\n+ Coverage 76.77% 77.17% +0.40% \n==========================================\n Files 128 128 \n Lines 21723 21723 \n==========================================\n+ Hits 16677 16764 +87 \n+ Misses 5046 4959 -87 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4936?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4936/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.79% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4936/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (+0.77%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/4936/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `75.85% <0.00%> (+19.75%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4936?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4936?src=pr&el=footer). Last update [699541c...4301a6e](https://codecov.io/gh/huggingface/transformers/pull/4936?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,591
1,591
1,591
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4936/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4936/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4936", "html_url": "https://github.com/huggingface/transformers/pull/4936", "diff_url": "https://github.com/huggingface/transformers/pull/4936.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4936.patch", "merged_at": 1591895795000 }
https://api.github.com/repos/huggingface/transformers/issues/4935
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4935/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4935/comments
https://api.github.com/repos/huggingface/transformers/issues/4935/events
https://github.com/huggingface/transformers/issues/4935
636,917,712
MDU6SXNzdWU2MzY5MTc3MTI=
4,935
name 'ElectraForSequenceClassification' is not defined
{ "login": "Gemini77", "id": 40814484, "node_id": "MDQ6VXNlcjQwODE0NDg0", "avatar_url": "https://avatars.githubusercontent.com/u/40814484?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Gemini77", "html_url": "https://github.com/Gemini77", "followers_url": "https://api.github.com/users/Gemini77/followers", "following_url": "https://api.github.com/users/Gemini77/following{/other_user}", "gists_url": "https://api.github.com/users/Gemini77/gists{/gist_id}", "starred_url": "https://api.github.com/users/Gemini77/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Gemini77/subscriptions", "organizations_url": "https://api.github.com/users/Gemini77/orgs", "repos_url": "https://api.github.com/users/Gemini77/repos", "events_url": "https://api.github.com/users/Gemini77/events{/privacy}", "received_events_url": "https://api.github.com/users/Gemini77/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Gemini77 , what is your transformers version ? \r\n\r\n`ElectraForSequenceClassification` is available in transformers >=2.10.0 version" ]
1,591
1,592
1,592
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4935/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4935/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4934
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4934/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4934/comments
https://api.github.com/repos/huggingface/transformers/issues/4934/events
https://github.com/huggingface/transformers/issues/4934
636,916,731
MDU6SXNzdWU2MzY5MTY3MzE=
4,934
Using LongformerForQuestionAnswering on large documents (40K+ characters)
{ "login": "zirlman", "id": 24474083, "node_id": "MDQ6VXNlcjI0NDc0MDgz", "avatar_url": "https://avatars.githubusercontent.com/u/24474083?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zirlman", "html_url": "https://github.com/zirlman", "followers_url": "https://api.github.com/users/zirlman/followers", "following_url": "https://api.github.com/users/zirlman/following{/other_user}", "gists_url": "https://api.github.com/users/zirlman/gists{/gist_id}", "starred_url": "https://api.github.com/users/zirlman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zirlman/subscriptions", "organizations_url": "https://api.github.com/users/zirlman/orgs", "repos_url": "https://api.github.com/users/zirlman/repos", "events_url": "https://api.github.com/users/zirlman/events{/privacy}", "received_events_url": "https://api.github.com/users/zirlman/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "_What model of GPU are you using ?_\r\n\r\nI didn't open your notebook, that's what I was looking for :\r\n```\r\nCUDA out of memory. Tried to allocate 4.92 GiB (GPU 0; 11.17 GiB total capacity; 4.64 GiB already allocated; 3.69 GiB free; 7.12 GiB reserved in total by PyTorch)\r\nDetached cpu cpu\r\n```", "> What model of GPU are you using ?\r\n\r\n@tuanardouin I was running the example on Colab only. So most of the time it was K80. But even on P100 it ran out of memory. I've expected that the model was performance expensive, but not this much 😱 ", "This notebook might help :-) https://github.com/patrickvonplaten/notebooks/blob/master/How_to_evaluate_Longformer_on_TriviaQA_using_NLP.ipynb" ]
1,591
1,592
1,592
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiasts can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> So I decided to try the new LongformerForQuestionAnswering model on a larger input. From the [paper](https://arxiv.org/pdf/2004.05150.pdf) & the [source code](https://huggingface.co/transformers/_modules/transformers/modeling_longformer.html#LongformerForQuestionAnswering) I've understood that large document processing can be achieved by batching the input sequence in a certain format. For QA task the format goes like this: **question<\/sep><\/sep>context_block** where **<\/sep>** represents the separator token. This pattern allows the model to place global attention on the question. Feel free to correct me if I'm wrong. To test my assumption I've made a [notebook](https://colab.research.google.com/drive/1bEglkGcTXM_ZvdqbUT2_csp_TsRFPJwx?usp=sharing), but after running it I receive a `CUDA run out of memory error`. To get more insight, I've decided to process each batch record separately (`answer = get_answer(text, question, False)` ) but none of the records gave a reasonable answer. So my questions are as follows: 1. Is my assumption for processing longer documents correct? 2. If so, what are the possible solutions for the memory error? I was thinking about a sliding window approach on the batch records 🤔 3. What could be the reason for such poor results when processing a single batch record? 4. This is more a PyTorch question, but shouldn't this code snippet empty the allocated GPU storage? Am I missing something here? As far as I know, CUDA storage is cleared when detaching the variable back to the CPU. Snippet: ```python if torch.cuda.is_available(): input_ids = input_ids.to("cpu") attention_mask = attention_mask.to("cpu") ``` Thank you 🤗
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4934/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4934/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4933
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4933/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4933/comments
https://api.github.com/repos/huggingface/transformers/issues/4933/events
https://github.com/huggingface/transformers/pull/4933
636,869,702
MDExOlB1bGxSZXF1ZXN0NDMyOTU3MTM5
4,933
[AutoModel] Split AutoModelWithLMHead into clm, mlm, encoder-decoder
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4933?src=pr&el=h1) Report\n> Merging [#4933](https://codecov.io/gh/huggingface/transformers/pull/4933?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e80d6c689bd62f805a5c8d77ec0cc3b09f240d14&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `60.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4933/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4933?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4933 +/- ##\n==========================================\n+ Coverage 77.10% 77.11% +0.01% \n==========================================\n Files 128 128 \n Lines 21723 21769 +46 \n==========================================\n+ Hits 16749 16788 +39 \n- Misses 4974 4981 +7 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4933?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.58% <55.55%> (-7.82%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.21% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/4933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.20% <100.00%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.79% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.48% <0.00%> (+0.23%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4933?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4933?src=pr&el=footer). Last update [e80d6c6...7117cb1](https://codecov.io/gh/huggingface/transformers/pull/4933?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This all looks good to me. I'd just simplify `AutoModelForSeq2SeqCausalLM` to `AutoModelForSeq2SeqLM` and the related names since I don't think there exists an encoder/decoder setup for LM where the decoder doesn't have a causal mask, so causal doesn't really add any needed info to the name.", "I like this but if we merge this, I think we should make a note to actually remove the `AutoModelWithLMHead` at the next (or next-next?) major release otherwise users will still use it and it will be confusing", "Merging to unblock the encoder decoder framework to work correctly. Pinging @thomwolf for notification.", "Yes, I'm happy with this!", "what is the different when i pass a ones.tril() mask to the bertencoder" ]
1,591
1,597
1,591
MEMBER
null
This is a follow-up PR of #4874. In #4874, `BertModelForMaskedLM` , was split into a clm `BertLMHeadModel` and mlm Bert Model `BertForMaskedLM`. In order for the encoder-decoder framework to work correctly, the `BertLMHeadModel` needs to be loaded, when instantiating an `EncoderDecoder.from_encoder_decoder_pretrained()`, therefore a new `AutoModelForCausalLM` has to be created. This PR deprecates `AutoModelWithLMHead` and introduces: - `AutoModelForCausalLM` for Autoregressive models - `AutoModelForMaskedLM` for Autoencoding models - `AutoModelForSeq2SeqCausalLM` for Sequence-to-sequence models with causal LM for the decoder @julien-c @LysandreJik @sgugger @thomwolf @sshleifer -> `AutoModelWithLMHead` still works as before so no breaking changes, but it's deprecated. All AutoModels are exposed in the `__init__` (don't really see a reason why they shouldn't). What do you guys think about the naming? **IMPORTANT:** #4874 and this PR might introduce some breaking changes for the encoder-decoder framework: Instead of using `AutoModelWithLMHead` one has to use `AutoModelForCausalLM` for the decoder model now and instead of using `BertForMaskedLM` one should use `BertLMHeadModel` from now one. There are **no breaking changes** for the encoder-decoder user-facing functions `.from_pretrained()` and `.from_encoder_decoder_pretrained()`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4933/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4933/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4933", "html_url": "https://github.com/huggingface/transformers/pull/4933", "diff_url": "https://github.com/huggingface/transformers/pull/4933.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4933.patch", "merged_at": 1591948910000 }
https://api.github.com/repos/huggingface/transformers/issues/4932
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4932/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4932/comments
https://api.github.com/repos/huggingface/transformers/issues/4932/events
https://github.com/huggingface/transformers/issues/4932
636,813,821
MDU6SXNzdWU2MzY4MTM4MjE=
4,932
Training ELECTRA model on TPU with the help of Trainer or TFTrainer classes
{ "login": "DevKretov", "id": 38000417, "node_id": "MDQ6VXNlcjM4MDAwNDE3", "avatar_url": "https://avatars.githubusercontent.com/u/38000417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DevKretov", "html_url": "https://github.com/DevKretov", "followers_url": "https://api.github.com/users/DevKretov/followers", "following_url": "https://api.github.com/users/DevKretov/following{/other_user}", "gists_url": "https://api.github.com/users/DevKretov/gists{/gist_id}", "starred_url": "https://api.github.com/users/DevKretov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DevKretov/subscriptions", "organizations_url": "https://api.github.com/users/DevKretov/orgs", "repos_url": "https://api.github.com/users/DevKretov/repos", "events_url": "https://api.github.com/users/DevKretov/events{/privacy}", "received_events_url": "https://api.github.com/users/DevKretov/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,598
1,598
NONE
null
# ❓ Questions & Help Hi there, I am trying to train Electra model from scratch using HF's Trainer interface. My primary source is this colab: https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb#scrollTo=ri2BIQKqjfHm There are several questions about Electra training specifics: 1. Which model to use: ElectraForPreTraining or ElectraForMaskedLM? From my perspective, both seem like appropriate choice, however <model>ForMaskedLM is been used in the Colab notebook above. 2. If I opt for PyTorch model, I am stuck with LineByLineTextDataset. The way LineByLineTextDataset is implemented makes me use small datasets (the whole dataset is loaded into RAM and is preprocessed, so I cannot make use of really huge amount of text (hundreds of millions of sentences, for example). I tried to inherit my own Dataset type from torch's IterableDataset class, but it doesn't support sampling, which is a further step of making the whole pipeline functional. Does transformers framework offer another variant of Dataset class? I would much prefer having something similar to TFRecords dataset from TensorFlow 2. 3. If I still overcome the second issue, I see some crucial differences between trainer classes written for torch and TF. The most important one is Data Collator. TF Trainer doesn't offer such a parameter in class constructor, whereas the torch's Trainer does. If I'm not mistaken, data collator is responsible for masking input tokens, so running Electra model on TF Trainer is not correct, since I haven't found any implicit masking inside TF model. That's why I'm asking how to train Electra model with the help of TF Trainer class? Since choosing Torch's Trainer seems impossible due to the Datasets being loaded into RAM, the only variant that seems logical to me, is to train Electra model on TPU with TF Records dataset and thus making use of TF Trainer, which doesn't have a vital data preparation step of masking out 15% of tokens. If I'm mistaken, please give me a hint. Thanks :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4932/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4932/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4931
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4931/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4931/comments
https://api.github.com/repos/huggingface/transformers/issues/4931/events
https://github.com/huggingface/transformers/issues/4931
636,776,077
MDU6SXNzdWU2MzY3NzYwNzc=
4,931
BertTokenizer from own vocab meet problem
{ "login": "Yangxiaojun1230", "id": 59246446, "node_id": "MDQ6VXNlcjU5MjQ2NDQ2", "avatar_url": "https://avatars.githubusercontent.com/u/59246446?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Yangxiaojun1230", "html_url": "https://github.com/Yangxiaojun1230", "followers_url": "https://api.github.com/users/Yangxiaojun1230/followers", "following_url": "https://api.github.com/users/Yangxiaojun1230/following{/other_user}", "gists_url": "https://api.github.com/users/Yangxiaojun1230/gists{/gist_id}", "starred_url": "https://api.github.com/users/Yangxiaojun1230/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Yangxiaojun1230/subscriptions", "organizations_url": "https://api.github.com/users/Yangxiaojun1230/orgs", "repos_url": "https://api.github.com/users/Yangxiaojun1230/repos", "events_url": "https://api.github.com/users/Yangxiaojun1230/events{/privacy}", "received_events_url": "https://api.github.com/users/Yangxiaojun1230/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "ok, I fix it by using lower case" ]
1,591
1,591
1,591
NONE
null
# ❓ Questions & Help I set up own vocab.txt and the item like 3-213,3A23 and so on. When use BertTokenizer.from_pretrained() it couldn't tokenize the '3-213'. Even when I write 'A' in vocab.txt ,and couldn't encode('A'). Do anyone know how to fix this issue? <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4931/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4931/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4930
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4930/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4930/comments
https://api.github.com/repos/huggingface/transformers/issues/4930/events
https://github.com/huggingface/transformers/pull/4930
636,769,794
MDExOlB1bGxSZXF1ZXN0NDMyODc5MzYw
4,930
Update setup.py for sentencepiece.
{ "login": "soni-vikas", "id": 26042392, "node_id": "MDQ6VXNlcjI2MDQyMzky", "avatar_url": "https://avatars.githubusercontent.com/u/26042392?v=4", "gravatar_id": "", "url": "https://api.github.com/users/soni-vikas", "html_url": "https://github.com/soni-vikas", "followers_url": "https://api.github.com/users/soni-vikas/followers", "following_url": "https://api.github.com/users/soni-vikas/following{/other_user}", "gists_url": "https://api.github.com/users/soni-vikas/gists{/gist_id}", "starred_url": "https://api.github.com/users/soni-vikas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/soni-vikas/subscriptions", "organizations_url": "https://api.github.com/users/soni-vikas/orgs", "repos_url": "https://api.github.com/users/soni-vikas/repos", "events_url": "https://api.github.com/users/soni-vikas/events{/privacy}", "received_events_url": "https://api.github.com/users/soni-vikas/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4930?src=pr&el=h1) Report\n> Merging [#4930](https://codecov.io/gh/huggingface/transformers/pull/4930?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/699541c4b34479451c91b3c6c204d904f62bed83&el=desc) will **increase** coverage by `0.39%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4930/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4930?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4930 +/- ##\n==========================================\n+ Coverage 76.77% 77.16% +0.39% \n==========================================\n Files 128 128 \n Lines 21723 21723 \n==========================================\n+ Hits 16677 16763 +86 \n+ Misses 5046 4960 -86 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4930?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (+0.77%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/4930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `75.85% <0.00%> (+19.75%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4930?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4930?src=pr&el=footer). Last update [699541c...ad6c947](https://codecov.io/gh/huggingface/transformers/pull/4930?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,598
1,598
NONE
null
The `sentencepiece` library has been upgraded to version `0.1.92`, which is incompatible with `transformers==2.8.0`. `sentencepiece==0.1.92` gives a segmentation fault (core dumped).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4930/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4930/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4930", "html_url": "https://github.com/huggingface/transformers/pull/4930", "diff_url": "https://github.com/huggingface/transformers/pull/4930.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4930.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/4929
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4929/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4929/comments
https://api.github.com/repos/huggingface/transformers/issues/4929/events
https://github.com/huggingface/transformers/pull/4929
636,763,163
MDExOlB1bGxSZXF1ZXN0NDMyODc0NTM5
4,929
[ElectraForQuestionAnswering] fix qa example in doc
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4929?src=pr&el=h1) Report\n> Merging [#4929](https://codecov.io/gh/huggingface/transformers/pull/4929?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e80d6c689bd62f805a5c8d77ec0cc3b09f240d14&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4929/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4929?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4929 +/- ##\n=======================================\n Coverage 77.10% 77.11% \n=======================================\n Files 128 128 \n Lines 21723 21723 \n=======================================\n+ Hits 16749 16751 +2 \n+ Misses 4974 4972 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4929?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `78.16% <ø> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `78.81% <0.00%> (+0.19%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.48% <0.00%> (+0.23%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4929?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4929?src=pr&el=footer). Last update [e80d6c6...4a895d8](https://codecov.io/gh/huggingface/transformers/pull/4929?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,591
1,592
1,592
MEMBER
null
This PR fixes the QA example in the doc for `ElectraForQuestionAnswering`. In the last example, `token_type_ids` were not used; this PR fixes that. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4929/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4929/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4929", "html_url": "https://github.com/huggingface/transformers/pull/4929", "diff_url": "https://github.com/huggingface/transformers/pull/4929.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4929.patch", "merged_at": 1592430857000 }
https://api.github.com/repos/huggingface/transformers/issues/4928
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4928/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4928/comments
https://api.github.com/repos/huggingface/transformers/issues/4928/events
https://github.com/huggingface/transformers/issues/4928
636,760,845
MDU6SXNzdWU2MzY3NjA4NDU=
4,928
sentencepiece dependency must be a specific version.
{ "login": "soni-vikas", "id": 26042392, "node_id": "MDQ6VXNlcjI2MDQyMzky", "avatar_url": "https://avatars.githubusercontent.com/u/26042392?v=4", "gravatar_id": "", "url": "https://api.github.com/users/soni-vikas", "html_url": "https://github.com/soni-vikas", "followers_url": "https://api.github.com/users/soni-vikas/followers", "following_url": "https://api.github.com/users/soni-vikas/following{/other_user}", "gists_url": "https://api.github.com/users/soni-vikas/gists{/gist_id}", "starred_url": "https://api.github.com/users/soni-vikas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/soni-vikas/subscriptions", "organizations_url": "https://api.github.com/users/soni-vikas/orgs", "repos_url": "https://api.github.com/users/soni-vikas/repos", "events_url": "https://api.github.com/users/soni-vikas/events{/privacy}", "received_events_url": "https://api.github.com/users/soni-vikas/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,598
1,598
NONE
null
# 🐛 Bug ## Information The `sentencepiece` library has been upgraded to version `0.1.92`, which is incompatible with `transformers==2.8.0`. ## Error `segmentation fault (core dumped)` ## Location https://github.com/huggingface/transformers/blob/v2.8.0/setup.py#L113 ## Configurations - Platform: - Torch version: torch==1.3.1 - Transformers version: transformers==2.8.0 - Python version: 3.7.3 - Using GPU in the script?: No.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4928/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4928/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4927
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4927/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4927/comments
https://api.github.com/repos/huggingface/transformers/issues/4927/events
https://github.com/huggingface/transformers/issues/4927
636,755,292
MDU6SXNzdWU2MzY3NTUyOTI=
4,927
Incorrect loss values calculated for TPU training.
{ "login": "misrasaurabh1", "id": 1271289, "node_id": "MDQ6VXNlcjEyNzEyODk=", "avatar_url": "https://avatars.githubusercontent.com/u/1271289?v=4", "gravatar_id": "", "url": "https://api.github.com/users/misrasaurabh1", "html_url": "https://github.com/misrasaurabh1", "followers_url": "https://api.github.com/users/misrasaurabh1/followers", "following_url": "https://api.github.com/users/misrasaurabh1/following{/other_user}", "gists_url": "https://api.github.com/users/misrasaurabh1/gists{/gist_id}", "starred_url": "https://api.github.com/users/misrasaurabh1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/misrasaurabh1/subscriptions", "organizations_url": "https://api.github.com/users/misrasaurabh1/orgs", "repos_url": "https://api.github.com/users/misrasaurabh1/repos", "events_url": "https://api.github.com/users/misrasaurabh1/events{/privacy}", "received_events_url": "https://api.github.com/users/misrasaurabh1/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "@LysandreJik Any comments on this evaluation behavior with Trainer?", "Could it be solved by adding a `if self.is_world_master` prior to calling `self._log` (or directly within that function)?", "I think you are right. I was concerned that the evaluator is calculating all these eval loss values separately but the trainer is aggregating the eval values properly so the eval loss logged into wandb should be correct.\r\n``` \r\n def _prediction_loop( \r\n ......\r\n elif is_tpu_available():\r\n # tpu-comment: Get all predictions and labels from all worker shards of eval dataset\r\n if preds is not None:\r\n preds = xm.mesh_reduce(\"eval_preds\", preds, torch.cat)\r\n if label_ids is not None:\r\n label_ids = xm.mesh_reduce(\"eval_label_ids\", label_ids, torch.cat)\r\n```\r\nAnd yes we could still remove these eval loss values from log to remove the confusion it creates.", "Actually we added [this line](https://github.com/huggingface/transformers/blob/f9f8a5312e92541ff9a5f483fc4907ec87da876e/src/transformers/trainer.py#L574) recently so I don't think you should have any issue.", "Yes, the wandb graphs look good. My concern was that these console logs look incorrect.", "Ok, maybe we should wrap the entire logging (wandb + tensorboard + console) with \"is_world_master\" instead of doing it only for wandb.\r\n\r\n@julien-c what do you think? If that's the way to go I can submit a quick PR.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,591
1,598
1,598
CONTRIBUTOR
null
# 🐛 Bug Currently using Trainer on TPU calculates incorrect training and eval_during_training loss values. This leads to loss values logged on Wandb also being incorrect. ## Information The problem seems to be that with a PyTorch/XLA training setup with multiprocessing, each process trains and evals on disjoint (I believe) subsets of the training and validation set respectively. This leads to multiple train_loss and eval_during_training_loss values equaling the number of processes used. These loss values are also different. None of these values is the correct loss value, as the loss is calculated on the entire dataset and not on smaller subsets of it. The solution would be to aggregate these loss values with XLA operations into single train_loss and eval_loss values. The differing eval_loss values are evident in this console log ``` 06/11/2020 05:33:59 - INFO - transformers.trainer - ***** Running Evaluation ***** 06/11/2020 05:33:59 - INFO - transformers.trainer - Num examples = 5180 06/11/2020 05:33:59 - INFO - transformers.trainer - Batch size = 8 Evaluation: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81/81 [00:41<00:00, 1.96it/s] {"eval_loss": 2.407614219335862, "epoch": 0.06633645851760127, "step": 1500} 06/11/2020 05:34:40 - INFO - transformers.trainer - Saving model checkpoint to /home/saurabh/data/<retracted>/checkpoint-1500█████| 81/81 [00:41<00:00, 2.06it/s$ Evaluation: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81/81 [00:42<00:00, 1.89it/s$ {"eval_loss": 1.757087172181518, "epoch": 0.06633645851760127, "step": 1500} 06/11/2020 05:34:41 - INFO - transformers.trainer - Saving model checkpoint to /home/saurabh/data/<retracted>/checkpoint-1500 Evaluation: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81/81 [00:43<00:00, 1.87it/s$ {"eval_loss": 2.2870501747101915, "epoch": 0.06633645851760127, "step": 1500} 06/11/2020 05:34:42 - INFO - transformers.trainer - Saving model checkpoint to /home/saurabh/data/<retracted>/checkpoint-1500 Evaluation: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81/81 [00:43<00:00, 1.87it/s$ {"eval_loss": 2.3224751780062545, "epoch": 0.06633645851760127, "step": 1500} 06/11/2020 05:34:42 - INFO - transformers.trainer - Saving model checkpoint to /home/saurabh/data/<retracted>/checkpoint-1500 Evaluation: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81/81 [00:43<00:00, 1.84it/s] {"eval_loss": 2.339173612035351, "epoch": 0.06633645851760127, "step": 1500} 06/11/2020 05:34:42 - INFO - transformers.trainer - Saving model checkpoint to /home/saurabh/data/<retracted>/checkpoint-1500 Evaluation: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81/81 [00:43<00:00, 1.85it/s] {"eval_loss": 2.3176549371377924, "epoch": 0.06633645851760127, "step": 1500} 06/11/2020 05:34:42 - INFO - transformers.trainer - Saving model checkpoint to /home/saurabh/data/<retracted>/checkpoint-1500 Evaluation: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81/81 [00:43<00:00, 1.84it/s] {"eval_loss": 2.449997420664187, "epoch": 0.06633645851760127, "step": 1500} 06/11/2020 05:34:42 - INFO - transformers.trainer - Saving model checkpoint to /home/saurabh/data/<retracted>/checkpoint-1500 Evaluation: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81/81 [00:44<00:00, 1.84it/s] {"eval_loss": 2.18177890336072, "epoch": 0.06633645851760127, "step": 1500} ``` Model I am using (Bert, XLNet ...): Every model with PyTorch Trainer Language I am using the model on (English, Chinese ...): Doesn't matter The problem arises when using: * [x] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: Run any PyTorch/TPU training, for example a language modelling task <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> 1. Setup a PyTorch/XLA training environment ``` export TRAIN_FILE=/path/to/dataset/wiki.train.raw export TEST_FILE=/path/to/dataset/wiki.test.raw export WANDB_WATCH=false # Fixes bug https://github.com/huggingface/transformers/issues/4814 python xla_spawn.py --num_cores 8 language_modeling/run_language_modeling.py \ --output_dir=output \ --model_type=roberta \ --model_name_or_path=roberta-base \ --do_train \ --train_data_file=$TRAIN_FILE \ --do_eval \ --eval_data_file=$TEST_FILE \ --mlm --evaluate_during_training --per_device_train_batch_size=4 --per_device_eval_batch_size=4 ``` ## Expected behavior A single train_loss and eval_loss value per logging_step in console output and also with Wandb. <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.11.0 (master) - Platform: Linux-5.3.0-1026-gcp-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0a0+6bdfd6a (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: no - Using distributed or parallel set-up in script?: yes, 8 way TPU/XLA multiprocessing
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4927/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4927/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4926
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4926/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4926/comments
https://api.github.com/repos/huggingface/transformers/issues/4926/events
https://github.com/huggingface/transformers/pull/4926
636,738,653
MDExOlB1bGxSZXF1ZXN0NDMyODU2MDQx
4,926
Fixing TPU training by disabling wandb.watch gradients logging
{ "login": "misrasaurabh1", "id": 1271289, "node_id": "MDQ6VXNlcjEyNzEyODk=", "avatar_url": "https://avatars.githubusercontent.com/u/1271289?v=4", "gravatar_id": "", "url": "https://api.github.com/users/misrasaurabh1", "html_url": "https://github.com/misrasaurabh1", "followers_url": "https://api.github.com/users/misrasaurabh1/followers", "following_url": "https://api.github.com/users/misrasaurabh1/following{/other_user}", "gists_url": "https://api.github.com/users/misrasaurabh1/gists{/gist_id}", "starred_url": "https://api.github.com/users/misrasaurabh1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/misrasaurabh1/subscriptions", "organizations_url": "https://api.github.com/users/misrasaurabh1/orgs", "repos_url": "https://api.github.com/users/misrasaurabh1/repos", "events_url": "https://api.github.com/users/misrasaurabh1/events{/privacy}", "received_events_url": "https://api.github.com/users/misrasaurabh1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4926?src=pr&el=h1) Report\n> Merging [#4926](https://codecov.io/gh/huggingface/transformers/pull/4926?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e80d6c689bd62f805a5c8d77ec0cc3b09f240d14&el=desc) will **increase** coverage by `0.07%`.\n> The diff coverage is `0.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4926/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4926?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4926 +/- ##\n==========================================\n+ Coverage 77.10% 77.17% +0.07% \n==========================================\n Files 128 128 \n Lines 21723 21723 \n==========================================\n+ Hits 16749 16765 +16 \n+ Misses 4974 4958 -16 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4926?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.48% <0.00%> (+0.23%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.79% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4926?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4926?src=pr&el=footer). Last update [e80d6c6...d857f37](https://codecov.io/gh/huggingface/transformers/pull/4926?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,591
1,592
1,592
CONTRIBUTOR
null
Fixes issue https://github.com/huggingface/transformers/issues/4814 The PyTorch TPU trainer.py had a bug where training would freeze up during the logging step. On investigation, the culprit was found to be a `wandb.watch` call that was trying to log gradients. This operation is suspected to be unsupported by Wandb for TPUs. Waiting for confirmation of TPU gradient-logging support from the Wandb team.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4926/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4926/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4926", "html_url": "https://github.com/huggingface/transformers/pull/4926", "diff_url": "https://github.com/huggingface/transformers/pull/4926.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4926.patch", "merged_at": 1592431452000 }
https://api.github.com/repos/huggingface/transformers/issues/4925
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4925/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4925/comments
https://api.github.com/repos/huggingface/transformers/issues/4925/events
https://github.com/huggingface/transformers/pull/4925
636,669,342
MDExOlB1bGxSZXF1ZXN0NDMyODAwNjY1
4,925
Use dataloader_drop_last in TF dataset
{ "login": "setu4993", "id": 1833708, "node_id": "MDQ6VXNlcjE4MzM3MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/1833708?v=4", "gravatar_id": "", "url": "https://api.github.com/users/setu4993", "html_url": "https://github.com/setu4993", "followers_url": "https://api.github.com/users/setu4993/followers", "following_url": "https://api.github.com/users/setu4993/following{/other_user}", "gists_url": "https://api.github.com/users/setu4993/gists{/gist_id}", "starred_url": "https://api.github.com/users/setu4993/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/setu4993/subscriptions", "organizations_url": "https://api.github.com/users/setu4993/orgs", "repos_url": "https://api.github.com/users/setu4993/repos", "events_url": "https://api.github.com/users/setu4993/events{/privacy}", "received_events_url": "https://api.github.com/users/setu4993/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4925?src=pr&el=h1) Report\n> Merging [#4925](https://codecov.io/gh/huggingface/transformers/pull/4925?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e80d6c689bd62f805a5c8d77ec0cc3b09f240d14&el=desc) will **increase** coverage by `0.07%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4925/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4925?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4925 +/- ##\n==========================================\n+ Coverage 77.10% 77.17% +0.07% \n==========================================\n Files 128 128 \n Lines 21723 21723 \n==========================================\n+ Hits 16749 16765 +16 \n+ Misses 4974 4958 -16 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4925?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4925/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `19.04% <ø> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4925/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.79% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4925/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4925/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.48% <0.00%> (+0.23%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4925/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4925?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4925?src=pr&el=footer). Last update [e80d6c6...a9099e0](https://codecov.io/gh/huggingface/transformers/pull/4925?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "LGTM from a quick glance but I'll let @jplu chime in", "Nice! LGTM!!" ]
1,591
1,591
1,591
CONTRIBUTOR
null
Follows PR #4757, fixes issue #4891. I didn't realize then that `TFTrainingArguments` inherited `TrainingArguments` and all of the same options would be available there as well. Adding this to get to feature parity across PyTorch and TF trainers.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4925/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4925/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4925", "html_url": "https://github.com/huggingface/transformers/pull/4925", "diff_url": "https://github.com/huggingface/transformers/pull/4925.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4925.patch", "merged_at": 1591855882000 }
https://api.github.com/repos/huggingface/transformers/issues/4924
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4924/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4924/comments
https://api.github.com/repos/huggingface/transformers/issues/4924/events
https://github.com/huggingface/transformers/pull/4924
636,664,229
MDExOlB1bGxSZXF1ZXN0NDMyNzk2NTk1
4,924
Fix deprecation warnings due to invalid escape sequences.
{ "login": "tirkarthi", "id": 3972343, "node_id": "MDQ6VXNlcjM5NzIzNDM=", "avatar_url": "https://avatars.githubusercontent.com/u/3972343?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tirkarthi", "html_url": "https://github.com/tirkarthi", "followers_url": "https://api.github.com/users/tirkarthi/followers", "following_url": "https://api.github.com/users/tirkarthi/following{/other_user}", "gists_url": "https://api.github.com/users/tirkarthi/gists{/gist_id}", "starred_url": "https://api.github.com/users/tirkarthi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tirkarthi/subscriptions", "organizations_url": "https://api.github.com/users/tirkarthi/orgs", "repos_url": "https://api.github.com/users/tirkarthi/repos", "events_url": "https://api.github.com/users/tirkarthi/events{/privacy}", "received_events_url": "https://api.github.com/users/tirkarthi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4924?src=pr&el=h1) Report\n> Merging [#4924](https://codecov.io/gh/huggingface/transformers/pull/4924?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e80d6c689bd62f805a5c8d77ec0cc3b09f240d14&el=desc) will **increase** coverage by `0.07%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4924/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4924?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4924 +/- ##\n==========================================\n+ Coverage 77.10% 77.17% +0.07% \n==========================================\n Files 128 128 \n Lines 21723 21723 \n==========================================\n+ Hits 16749 16765 +16 \n+ Misses 4974 4958 -16 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4924?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4924/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `40.67% <100.00%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4924/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.79% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4924/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4924/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.48% <0.00%> (+0.23%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4924/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4924?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4924?src=pr&el=footer). Last update [e80d6c6...51acaab](https://codecov.io/gh/huggingface/transformers/pull/4924?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,591
1,592
1,592
CONTRIBUTOR
null
Fixes #3754
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4924/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4924/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4924", "html_url": "https://github.com/huggingface/transformers/pull/4924", "diff_url": "https://github.com/huggingface/transformers/pull/4924.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4924.patch", "merged_at": 1592430419000 }
https://api.github.com/repos/huggingface/transformers/issues/4923
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4923/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4923/comments
https://api.github.com/repos/huggingface/transformers/issues/4923/events
https://github.com/huggingface/transformers/issues/4923
636,637,964
MDU6SXNzdWU2MzY2Mzc5NjQ=
4,923
Documentation doesn't include instructions for applying BertModel to documents using GPU acceleration
{ "login": "rjurney", "id": 42149, "node_id": "MDQ6VXNlcjQyMTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/42149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rjurney", "html_url": "https://github.com/rjurney", "followers_url": "https://api.github.com/users/rjurney/followers", "following_url": "https://api.github.com/users/rjurney/following{/other_user}", "gists_url": "https://api.github.com/users/rjurney/gists{/gist_id}", "starred_url": "https://api.github.com/users/rjurney/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rjurney/subscriptions", "organizations_url": "https://api.github.com/users/rjurney/orgs", "repos_url": "https://api.github.com/users/rjurney/repos", "events_url": "https://api.github.com/users/rjurney/events{/privacy}", "received_events_url": "https://api.github.com/users/rjurney/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is a pretty basic question that's not really about the library:)\r\n\r\nCheck out the PyTorch doc, in particular, the [60 minute blitz](https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html). TL;DR: You'll need to move your model and inputs to the device you want with `.to(device)` calls.", "@julien-c Ok, thanks I appreciate it :)", "You're welcome:)\r\n\r\nClosing this for now", "@julien-c I am stuck in that I can't figure out how to get the string data to the GPU. Tensor.from_numpy does not support strings. So how do you achieve this? I get errors about the data not being on the same device as the GPU. It works on CPU but I've searched and searched and can't figure out how to get strings onto the GPU. Is there another method that uses encoded data? That I can send to the GPU.\r\n\r\nIf I get it working I'll create an example of GPU encoding :D", "It seems that the `encode()` method needs a brother that accepts tokenized/BERT encoded data from the first layer. Then the model will be applied to the data. This makes GPU support easy because you can create a Tensor out of data encoded by a BertTokenizer and send it to a GPU device. No such luck with strings.", "Actually I think I'm confused. You don't use encode() to apply the model, do you? You use SentenceTransformer(data)." ]
1,591
1,592
1,591
NONE
null
# 🐛 Bug ## Information I am using `BertModel` to encode my documents using the representation I've just fine-tuned and it is not using the GPU to do the encoding. This is bad because encoding is very slow. I have it running on many cores but would prefer GPU acceleration. It is not clear from the documentation how to use the GPU to encode documents using a trained `BertModel`. Model I am using (Bert, XLNet ...): Bert. Language I am using the model on (English, Chinese ...): English. The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Use the language model example to fine-tune a bert model 2. Load your BertModel and apply it to text documents: ```python import numpy as np import torch from transformers import BertModel, BertTokenizer tokenizer = BertTokenizer.from_pretrained('my_model') bert_model = BertModel.from_pretrained('my_model') docs = ['I like cats but also dogs', 'I like cars full of cats', 'I like apples but not oranges', 'I like tutus on ballerinas.'] docs = np.tile(docs, 5000) encoded_docs = [] for doc in docs[0:10]: tensor = torch.tensor(tokenizer.encode(doc, add_special_tokens=True)).unsqueeze(0) encoded_tensor = bert_model(tensor) encoded_ary = encoded_tensor[0].cpu().detach().numpy() encoded_docs.append(encoded_ary) encoded_docs ``` 3. While this is going run: ```bash gpustat -i 1 ``` 4. See that the GPUs aren't being used. ```bash [0] Tesla T4 | 31'C, 0 % | 0 / 15109 MB | [1] Tesla T4 | 31'C, 0 % | 0 / 15109 MB | [2] Tesla T4 | 32'C, 0 % | 0 / 15109 MB | [3] Tesla T4 | 30'C, 0 % | 0 / 15109 MB | ``` 5. Run this on many cores and see how incredibly slow it still is without GPUs. Get frustrated. 
## Expected behavior I want transformers/PyTorch to use the GPU to encode the text using the Bert model so it is much faster than doing it on the CPU. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: Ubuntu 18.04 Amazon Deep Learning AMI - Python version: Python 3.6.10 :: Anaconda, Inc. - PyTorch version (GPU?): torch==1.4.0 - Tensorflow version (GPU?): - Using GPU in script?: Trying - Using distributed or parallel set-up in script?: In reality I am running this in Dask in an apply, but the example should do the same thing.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4923/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4923/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4922
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4922/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4922/comments
https://api.github.com/repos/huggingface/transformers/issues/4922/events
https://github.com/huggingface/transformers/issues/4922
636,547,984
MDU6SXNzdWU2MzY1NDc5ODQ=
4,922
Unexpected behavior encoding token_type_ids in GPT models
{ "login": "sarahwie", "id": 8027676, "node_id": "MDQ6VXNlcjgwMjc2NzY=", "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sarahwie", "html_url": "https://github.com/sarahwie", "followers_url": "https://api.github.com/users/sarahwie/followers", "following_url": "https://api.github.com/users/sarahwie/following{/other_user}", "gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}", "starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions", "organizations_url": "https://api.github.com/users/sarahwie/orgs", "repos_url": "https://api.github.com/users/sarahwie/repos", "events_url": "https://api.github.com/users/sarahwie/events{/privacy}", "received_events_url": "https://api.github.com/users/sarahwie/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,591
1,598
1,594
NONE
null
# 🐛 Bug ## Information When `token_type_ids` are passed into the `GPT2Model` and subclasses, they're encoded using the `nn.Embedding` lookup table as the vocabulary ([line](https://github.com/huggingface/transformers/blob/3ae2e86baffc1fea8b8b93695fb5a10941fd63dc/src/transformers/modeling_gpt2.py#L477)). Because the token type ids output by `encode_plus()` are `{0,1}`, this seems to create unexpected behavior by using the same embedding vectors to represent both word-tokens 0 and 1 in the vocabulary and the very distinct token type ids. ## Expected Behavior Either: 1. instead of returning `token_type_ids` consisting of indices 0 and 1, extend the vocabulary by 2 and use those two ids as the `token_type_ids` in `encode_plus()`, or 2. addition of a separate `nn.Embedding` matrix for token_type_ids [here](https://github.com/huggingface/transformers/blob/3ae2e86baffc1fea8b8b93695fb5a10941fd63dc/src/transformers/modeling_gpt2.py#L352) (such as [in BERTModel](https://github.com/huggingface/transformers/blob/3ae2e86baffc1fea8b8b93695fb5a10941fd63dc/src/transformers/modeling_bert.py#L153)) 3. discourage or throw a warning when `token_type_ids` are passed to GPTModel instances
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4922/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4922/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4921
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4921/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4921/comments
https://api.github.com/repos/huggingface/transformers/issues/4921/events
https://github.com/huggingface/transformers/pull/4921
636,547,069
MDExOlB1bGxSZXF1ZXN0NDMyNzAwNDAz
4,921
Make multiple choice models work with input_embeds
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4921?src=pr&el=h1) Report\n> Merging [#4921](https://codecov.io/gh/huggingface/transformers/pull/4921?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/466aa57a45bfb9fc47d4b75d22c02c34b4b4b0fc&el=desc) will **increase** coverage by `0.07%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/4921/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/4921?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4921 +/- ##\n==========================================\n+ Coverage 77.06% 77.13% +0.07% \n==========================================\n Files 128 128 \n Lines 21649 21653 +4 \n==========================================\n+ Hits 16683 16702 +19 \n+ Misses 4966 4951 -15 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4921?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4921/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.31% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4921/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.02% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4921/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.78% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4921/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <100.00%> (+2.33%)` | :arrow_up: |\n| 
[src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4921/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (+0.15%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4921?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4921?src=pr&el=footer). Last update [466aa57...edd1ede](https://codecov.io/gh/huggingface/transformers/pull/4921?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,591
1,591
1,591
COLLABORATOR
null
Currently, all the multiple choice models will fail if we pass them `inputs_embeds` instead of `input_ids`. This PR fixes that on the PyTorch side and adapts the corresponding test for common models.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4921/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4921/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4921", "html_url": "https://github.com/huggingface/transformers/pull/4921", "diff_url": "https://github.com/huggingface/transformers/pull/4921.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4921.patch", "merged_at": 1591828714000 }