url (stringlengths 62-66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64 377M-2.15B) | node_id (stringlengths 18-32) | number (int64 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64 1.54k-1.71k) | updated_at (int64 1.54k-1.71k) | closed_at (int64 1.54k-1.71k ⌀) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0-234k ⌀) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses 3 values) | draft (bool 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/6521 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6521/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6521/comments | https://api.github.com/repos/huggingface/transformers/issues/6521/events | https://github.com/huggingface/transformers/pull/6521 | 679,793,878 | MDExOlB1bGxSZXF1ZXN0NDY4NDczMjIy | 6,521 | allow spaces in bash args with "$@" | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6521?src=pr&el=h1) Report\n> Merging [#6521](https://codecov.io/gh/huggingface/transformers/pull/6521?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fe61c05b85f98846779bb490a747875e7d54ec2a&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6521?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6521 +/- ##\n=======================================\n Coverage 80.59% 80.59% \n=======================================\n Files 156 156 \n Lines 28058 28058 \n=======================================\n Hits 22612 22612 \n Misses 5446 5446 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6521?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6521?src=pr&el=footer). Last update [2060181...4032ff5](https://codecov.io/gh/huggingface/transformers/pull/6521?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | Extends the bugfix for #6477 to more scripts.
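For reference, the behavior the quoting fixes (illustrative snippet, not taken from the diff itself):
```
# "$@" forwards each argument as a separate word, preserving embedded spaces;
# bare $@ re-splits every argument on whitespace
show () { for a in "$@"; do printf '[%s]\n' "$a"; done; }
pass_quoted () { show "$@"; }
pass_bare   () { show $@; }

pass_quoted "one arg" two   # -> [one arg] [two]
pass_bare   "one arg" two   # -> [one] [arg] [two]
```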
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6521/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6521",
"html_url": "https://github.com/huggingface/transformers/pull/6521",
"diff_url": "https://github.com/huggingface/transformers/pull/6521.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6521.patch",
"merged_at": 1597669596000
} |
https://api.github.com/repos/huggingface/transformers/issues/6520 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6520/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6520/comments | https://api.github.com/repos/huggingface/transformers/issues/6520/events | https://github.com/huggingface/transformers/issues/6520 | 679,773,983 | MDU6SXNzdWU2Nzk3NzM5ODM= | 6,520 | Can't load pegasus models. | {
"login": "aqdaskamal95",
"id": 46322332,
"node_id": "MDQ6VXNlcjQ2MzIyMzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/46322332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aqdaskamal95",
"html_url": "https://github.com/aqdaskamal95",
"followers_url": "https://api.github.com/users/aqdaskamal95/followers",
"following_url": "https://api.github.com/users/aqdaskamal95/following{/other_user}",
"gists_url": "https://api.github.com/users/aqdaskamal95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aqdaskamal95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aqdaskamal95/subscriptions",
"organizations_url": "https://api.github.com/users/aqdaskamal95/orgs",
"repos_url": "https://api.github.com/users/aqdaskamal95/repos",
"events_url": "https://api.github.com/users/aqdaskamal95/events{/privacy}",
"received_events_url": "https://api.github.com/users/aqdaskamal95/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Not sure what you mean here. Can you please post the traceback and the code that resulted in the error ?",
"This is what I got:\r\n```\r\nKeyError Traceback (most recent call last)\r\n\r\n<ipython-input-18-eb1fb8795ed4> in <module>()\r\n 2 from transformers import AutoTokenizer, AutoModelWithLMHead\r\n 3 \r\n----> 4 tokenizer = AutoTokenizer.from_pretrained(\"google/pegasus-multi_news\")\r\n 5 \r\n 6 cla = AutoModelWithLMHead.from_pretrained(\"google/pegasus-multi_news\")\r\n\r\n1 frames\r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)\r\n 204 config = kwargs.pop(\"config\", None)\r\n 205 if not isinstance(config, PretrainedConfig):\r\n--> 206 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)\r\n 207 \r\n 208 if \"bert-base-japanese\" in str(pretrained_model_name_or_path):\r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)\r\n 204 \r\n 205 if \"model_type\" in config_dict:\r\n--> 206 config_class = CONFIG_MAPPING[config_dict[\"model_type\"]]\r\n 207 return config_class.from_dict(config_dict, **kwargs)\r\n 208 else:\r\n\r\nKeyError: 'pegasus'\r\n```",
"Yep, I've got this too.\r\n\r\n",
"@HenryDashwood , @Kejia I can load both of these models on master branch.\r\n\r\nWhat version of transformers are you using, try doing this with master as it's not available in 3.0.2 release.\r\n\r\n`pip install -U git+https://github.com/huggingface/transformers.git`",
"Ah of course. Cheers!",
"@patil-suraj I tried installing the master version using the shared URL but still it wasn't updated to master version.\r\nCan you share more details for installing Transformers Master version?",
"Sorry, typo. should be `-U` and not `-u`\r\n\r\n`pip install -U git+https://github.com/huggingface/transformers.git`",
"I can't load the `PegasusTokenizer` for the checkpoint `google/pegasus-pubmed`:\r\n```Python\r\ntokenizer = PegasusTokenizer.from_pretrained(\"google/pegasus-pubmed\")\r\n```\r\nError:\r\n```\r\nTraceback (most recent call last):\r\n File \"src/download_model.py\", line 17, in <module>\r\n tokenizer = PegasusTokenizer.from_pretrained(config['model_name'])\r\n File \"/home/rafael/miniconda3/envs/torch/lib/python3.8/site-packages/transformers/tokenization_utils_base.py\", line 1584, in from_pretrained\r\n raise EnvironmentError(\r\nOSError: Model name 'google/pegasus-pubmed' was not found in tokenizers model name list (google/pegasus-xsum). We assumed 'google/pegasus-pubmed' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\r\n```\r\n\r\nWhen using the `AutoTokenizer`:\r\n```\r\ntokenizer = AutoTokenizer.from_pretrained(\"google/pegasus-pubmed\")\r\nTraceback (most recent call last):\r\n File \"/home/rafael/miniconda3/envs/torch/lib/python3.8/site-packages/transformers/configuration_utils.py\", line 368, in get_config_dict\r\n resolved_config_file = cached_path(\r\n File \"/home/rafael/miniconda3/envs/torch/lib/python3.8/site-packages/transformers/file_utils.py\", line 957, in cached_path\r\n raise EnvironmentError(\"file {} not found\".format(url_or_filename))\r\nOSError: file google/pegasus-pubmed/config.json not found\r\n```\r\n\r\nping @sshleifer \r\n\r\nI just installed it from master, still not working for me.\r\n\r\nEnvironment Info:\r\n- `transformers` version: 3.4.0\r\n- Platform: Linux-5.4.0-51-generic-x86_64-with-glibc2.10\r\n- Python version: 3.8.5\r\n- PyTorch version (GPU?): 1.6.0 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Using GPU in script?: no\r\n- Using distributed or parallel set-up in script?: no",
"Can't replicate on master. Please post `transformers-cli env` info when having download issues, and try solutions above in the thread before posting.",
"Ok I solved my issue. \r\n**FYI:** The problem was that I saved the loaded model in the directory `google/pegesus-pubmed` once in an invalid way and from now on the `from_pretrained` method tried to load it from the local path first which did not work. Sorry for bothering you!"
] | 1,597 | 1,603 | 1,603 | NONE | null | Hi,
I've tried loading Pegasus via PreTrainedModel and PreTrainedTokenizer but ran into a KeyError. I have transformers 3.0.2 - any idea why that might be happening? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6520/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/6520/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6519 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6519/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6519/comments | https://api.github.com/repos/huggingface/transformers/issues/6519/events | https://github.com/huggingface/transformers/pull/6519 | 679,764,196 | MDExOlB1bGxSZXF1ZXN0NDY4NDUxNzk0 | 6,519 | [WIP] Create SequenceClassification MultipleChoice and TokenClassification for tf_longformer | {
"login": "Groskilled",
"id": 6341192,
"node_id": "MDQ6VXNlcjYzNDExOTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6341192?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Groskilled",
"html_url": "https://github.com/Groskilled",
"followers_url": "https://api.github.com/users/Groskilled/followers",
"following_url": "https://api.github.com/users/Groskilled/following{/other_user}",
"gists_url": "https://api.github.com/users/Groskilled/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Groskilled/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Groskilled/subscriptions",
"organizations_url": "https://api.github.com/users/Groskilled/orgs",
"repos_url": "https://api.github.com/users/Groskilled/repos",
"events_url": "https://api.github.com/users/Groskilled/events{/privacy}",
"received_events_url": "https://api.github.com/users/Groskilled/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @Groskilled - I added a couple of comments. Let's try to first make the torch tests pass. Let me know if you need more help after implementing the comments.",
"Closing PR due to inactivity"
] | 1,597 | 1,603 | 1,603 | NONE | null | Tests for the new classes are not passing and there is no test yet for TFLongformerClassificationHead (and I am not sure it's needed).
@patrickvonplaten
Resolves #6401 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6519/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6519",
"html_url": "https://github.com/huggingface/transformers/pull/6519",
"diff_url": "https://github.com/huggingface/transformers/pull/6519.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6519.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6518 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6518/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6518/comments | https://api.github.com/repos/huggingface/transformers/issues/6518/events | https://github.com/huggingface/transformers/pull/6518 | 679,756,635 | MDExOlB1bGxSZXF1ZXN0NDY4NDQ2Mjk5 | 6,518 | [docs] Copy code button misses '...' prefixed code | {
"login": "romainr",
"id": 17945,
"node_id": "MDQ6VXNlcjE3OTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/17945?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/romainr",
"html_url": "https://github.com/romainr",
"followers_url": "https://api.github.com/users/romainr/followers",
"following_url": "https://api.github.com/users/romainr/following{/other_user}",
"gists_url": "https://api.github.com/users/romainr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/romainr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/romainr/subscriptions",
"organizations_url": "https://api.github.com/users/romainr/orgs",
"repos_url": "https://api.github.com/users/romainr/repos",
"events_url": "https://api.github.com/users/romainr/events{/privacy}",
"received_events_url": "https://api.github.com/users/romainr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Feels more like a false positive failure?"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | Tested in a local build of the docs.
E.g. just above https://huggingface.co/transformers/task_summary.html#causal-language-modeling,
Copy will not copy the full code, but only the first line, e.g.
`for token in top_5_tokens:`
Instead we should get it all:
```
for token in top_5_tokens:
print(sequence.replace(tokenizer.mask_token, tokenizer.decode([token])))
```
The snippet as it appears in the docs:
```
>>> for token in top_5_tokens:
...     print(sequence.replace(tokenizer.mask_token, tokenizer.decode([token])))
Distilled models are smaller than the models they mimic. Using them instead of the large versions would help reduce our carbon footprint.
Distilled models are smaller than the models they mimic. Using them instead of the large versions would help increase our carbon footprint.
Distilled models are smaller than the models they mimic. Using them instead of the large versions would help decrease our carbon footprint.
Distilled models are smaller than the models they mimic. Using them instead of the large versions would help offset our carbon footprint.
Distilled models are smaller than the models they mimic. Using them instead of the large versions would help improve our carbon footprint.
```
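The fix is presumably a prompt-handling option in `docs/source/conf.py`, along these lines (a sketch; option names per the sphinx-copybutton docs linked below):
```
# strip the ">>> " and "... " prompt prefixes when the copy button is used
copybutton_prompt_text = r">>> |\.\.\. "
copybutton_prompt_is_regexp = True
```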
Docs for the option fix:
https://sphinx-copybutton.readthedocs.io/en/latest/ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6518/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6518",
"html_url": "https://github.com/huggingface/transformers/pull/6518",
"diff_url": "https://github.com/huggingface/transformers/pull/6518.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6518.patch",
"merged_at": 1597916106000
} |
https://api.github.com/repos/huggingface/transformers/issues/6517 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6517/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6517/comments | https://api.github.com/repos/huggingface/transformers/issues/6517/events | https://github.com/huggingface/transformers/issues/6517 | 679,744,195 | MDU6SXNzdWU2Nzk3NDQxOTU= | 6,517 | Can't load t5-11b from pre-trained | {
"login": "saareliad",
"id": 22762845,
"node_id": "MDQ6VXNlcjIyNzYyODQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/22762845?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saareliad",
"html_url": "https://github.com/saareliad",
"followers_url": "https://api.github.com/users/saareliad/followers",
"following_url": "https://api.github.com/users/saareliad/following{/other_user}",
"gists_url": "https://api.github.com/users/saareliad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saareliad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saareliad/subscriptions",
"organizations_url": "https://api.github.com/users/saareliad/orgs",
"repos_url": "https://api.github.com/users/saareliad/repos",
"events_url": "https://api.github.com/users/saareliad/events{/privacy}",
"received_events_url": "https://api.github.com/users/saareliad/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @saareliad, \r\ncan you try:\r\n```python\r\nt5 = transformers.T5ForConditionalGeneration.from_pretrained('t5-11b', use_cdn = False)\r\n```\r\n\r\nAlso, see: https://github.com/huggingface/transformers/issues/5423\r\n\r\nBut the model cannot really be run before we take a closer look at: https://github.com/huggingface/transformers/pull/3578.",
"@patrickvonplaten mind adding a big disclaimer to the model card for this particular checkpoint? About what you just said (CDN limitation + model parallelism)",
"Thanks @patrickvonplaten ,\nOur work successfully adds (several types of) model parallellism and trains T5 and several other large transformers and is integrated with HF for quite a while.\n\nWill opensource it soon :)"
] | 1,597 | 1,597 | 1,597 | NONE | null | ## Environment info
- `transformers` version: 3.0.2
- Platform:
- Python version: 3.8.2
- PyTorch version 1.6
### Who can help
T5: @patrickvonplaten
## Information
The model I am using: T5
## To reproduce
Steps to reproduce the behavior:
```
import transformers
transformers.T5ForConditionalGeneration.from_pretrained("t5-11b")
```
```
OSError: Can't load weights for 't5-11b'. Make sure that:
- 't5-11b' is a correct model identifier listed on 'https://huggingface.co/models'
- or 't5-11b' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.
```
## Expected behavior
The model should be loaded.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6517/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6516 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6516/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6516/comments | https://api.github.com/repos/huggingface/transformers/issues/6516/events | https://github.com/huggingface/transformers/issues/6516 | 679,741,986 | MDU6SXNzdWU2Nzk3NDE5ODY= | 6,516 | How to enable sampling when using unigram tokenizers? | {
"login": "dennisylyung",
"id": 16577014,
"node_id": "MDQ6VXNlcjE2NTc3MDE0",
"avatar_url": "https://avatars.githubusercontent.com/u/16577014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dennisylyung",
"html_url": "https://github.com/dennisylyung",
"followers_url": "https://api.github.com/users/dennisylyung/followers",
"following_url": "https://api.github.com/users/dennisylyung/following{/other_user}",
"gists_url": "https://api.github.com/users/dennisylyung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dennisylyung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dennisylyung/subscriptions",
"organizations_url": "https://api.github.com/users/dennisylyung/orgs",
"repos_url": "https://api.github.com/users/dennisylyung/repos",
"events_url": "https://api.github.com/users/dennisylyung/events{/privacy}",
"received_events_url": "https://api.github.com/users/dennisylyung/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,603 | 1,603 | NONE | null | # ❓ Questions & Help
## Details
It is mentioned in the [Tokenizer summary][1] that when using unigram tokenizers, "you could sample one of the tokenization according to their probabilities". However, it does not detail how to do so.
Looking at the source code, `AlbertTokenizer`, `T5Tokenizer` and `XLNetTokenizer` all have a `sample` argument in their `_tokenize()` method, which in turn calls the encoding method from sentencepiece with sampling enabled.
```
def _tokenize(self, text, sample=False):
""" Tokenize a string. """
text = self.preprocess_text(text)
if not sample:
pieces = self.sp_model.EncodeAsPieces(text)
else:
pieces = self.sp_model.SampleEncodeAsPieces(text, 64, 0.1)
...
```
But when I check the usages of `_tokenize`, it seems to only ever be called without a `sample` argument, e.g. in `PreTrainedTokenizer`:
```
def split_on_tokens(tok_list, text):
if not text.strip():
return []
if not tok_list:
return self._tokenize(text)
```
I tried calling the `encode()` method of the tokenizer class with `sample=True`, but the keyword was not recognized.
```
albert_tokenizer = AlbertTokenizer('m.model')
albert_tokenizer.encode(msg, sample=True)
>> Keyword arguments {'sample': True} not recognized.
```
How should I enable sampling when using unigram tokenizers?
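One possible workaround (an untested sketch, not an official API) is to bypass `encode()` and call the tokenizer's underlying sentencepiece model directly, mirroring what `_tokenize(sample=True)` does above:
```
albert_tokenizer = AlbertTokenizer('m.model')
text = albert_tokenizer.preprocess_text(msg)
# same nbest_size=64 and alpha=0.1 values that _tokenize hard-codes
pieces = albert_tokenizer.sp_model.SampleEncodeAsPieces(text, 64, 0.1)
ids = albert_tokenizer.convert_tokens_to_ids(pieces)
```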
[1]: https://huggingface.co/transformers/tokenizer_summary.html#unigram
**A link to original question on the forum/Stack Overflow**:
[https://stackoverflow.com/questions/63436152/how-to-enable-sampling-when-using-unigram-tokenizers-in-huggingface](url) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6516/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6515 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6515/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6515/comments | https://api.github.com/repos/huggingface/transformers/issues/6515/events | https://github.com/huggingface/transformers/pull/6515 | 679,713,282 | MDExOlB1bGxSZXF1ZXN0NDY4NDE1MDYy | 6,515 | Support additional dictionaries for BERT Japanese tokenizers | {
"login": "singletongue",
"id": 17107587,
"node_id": "MDQ6VXNlcjE3MTA3NTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/17107587?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/singletongue",
"html_url": "https://github.com/singletongue",
"followers_url": "https://api.github.com/users/singletongue/followers",
"following_url": "https://api.github.com/users/singletongue/following{/other_user}",
"gists_url": "https://api.github.com/users/singletongue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/singletongue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/singletongue/subscriptions",
"organizations_url": "https://api.github.com/users/singletongue/orgs",
"repos_url": "https://api.github.com/users/singletongue/repos",
"events_url": "https://api.github.com/users/singletongue/events{/privacy}",
"received_events_url": "https://api.github.com/users/singletongue/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you, @JetRunner and @polm.\r\n\r\nI've fixed the test-related issues and it should be OK now."
] | 1,597 | 1,599 | 1,597 | CONTRIBUTOR | null | This PR is to support additional dictionaries for BERT Japanese tokenizers.
Specifically, we add support for the `unidic_lite` and `unidic` dictionaries.
Both dictionaries are pip-installable like `ipadic` and compatible with the `fugashi` package introduced in #6086 by @polm.
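Both should install straight from PyPI; the full `unidic` package additionally needs its dictionary data downloaded after install:
```
pip install fugashi unidic-lite
# or, for the full dictionary:
pip install fugashi unidic
python -m unidic download
```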
(We are going to release newly pre-trained BERT models using these dictionaries as well.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6515/reactions",
"total_count": 3,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6515/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6515",
"html_url": "https://github.com/huggingface/transformers/pull/6515",
"diff_url": "https://github.com/huggingface/transformers/pull/6515.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6515.patch",
"merged_at": 1597636824000
} |
https://api.github.com/repos/huggingface/transformers/issues/6514 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6514/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6514/comments | https://api.github.com/repos/huggingface/transformers/issues/6514/events | https://github.com/huggingface/transformers/issues/6514 | 679,706,279 | MDU6SXNzdWU2Nzk3MDYyNzk= | 6,514 | Unexpected output(prediction) for TokenClassification, using pipeline | {
"login": "himanshudce",
"id": 25169555,
"node_id": "MDQ6VXNlcjI1MTY5NTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/25169555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/himanshudce",
"html_url": "https://github.com/himanshudce",
"followers_url": "https://api.github.com/users/himanshudce/followers",
"following_url": "https://api.github.com/users/himanshudce/following{/other_user}",
"gists_url": "https://api.github.com/users/himanshudce/gists{/gist_id}",
"starred_url": "https://api.github.com/users/himanshudce/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/himanshudce/subscriptions",
"organizations_url": "https://api.github.com/users/himanshudce/orgs",
"repos_url": "https://api.github.com/users/himanshudce/repos",
"events_url": "https://api.github.com/users/himanshudce/events{/privacy}",
"received_events_url": "https://api.github.com/users/himanshudce/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,603 | 1,603 | NONE | null | I trained a language model from scratch on my language and fine-tuned it, but when predicting with the pipeline I am not getting a proper tag for each token. It looks like it is not tokenizing the words properly and is giving results for subword tokens instead. I also tried grouped_entities=True, but it does not help.
My code:
```
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer
from transformers import TokenClassificationPipeline
# Named entity recognition pipeline, passing in a specific model and tokenizer
model = AutoModelForTokenClassification.from_pretrained("./sumerianRoBERTo-finetune")
tokenizer = AutoTokenizer.from_pretrained("./sumerianRoBERTo-finetune")
nlp_grouped = TokenClassificationPipeline(
model=model,
grouped_entities=True,
tokenizer=tokenizer,
)
print(nlp_grouped('szu-nigin 1(u) 7(disz) 1/3(disz) gin2 ku3-babbar'))
```
Results:
```
[{'entity_group': 'N', 'score': 0.7584937413533529, 'word': '<s>szu-'}, {'entity_group': 'V', 'score': 0.7493271827697754, 'word': 'nigin'}, {'entity_group': 'NU', 'score': 0.9881511330604553, 'word': ' 1'}, {'entity_group': 'N', 'score': 0.8397139310836792, 'word': 'u'}, {'entity_group': 'NU', 'score': 0.7238532304763794, 'word': ') 7'}, {'entity_group': 'N', 'score': 0.6140500903129578, 'word': 'disz)'}, {'entity_group': 'NU', 'score': 0.9929361343383789, 'word': ' 1'}, {'entity_group': 'N', 'score': 0.993495523929596, 'word': '/'}, {'entity_group': 'NU', 'score': 0.9997004270553589, 'word': '3'}, {'entity_group': 'N', 'score': 0.7956433892250061, 'word': 'disz) gin'}, {'entity_group': 'NU', 'score': 0.9885044693946838, 'word': '2'}, {'entity_group': 'NE', 'score': 0.6853057146072388, 'word': ' ku'}, {'entity_group': 'N', 'score': 0.9291318953037262, 'word': '3-'}, {'entity_group': 'AJ', 'score': 0.5223987698554993, 'word': 'babbar'}, {'entity_group': 'N', 'score': 0.8513995409011841, 'word': '</s>'}]
```
And when grouped_entities=False, I get:
```
[{'word': '<s>', 'score': 0.5089993476867676, 'entity': 'N', 'index': 0}, {'word': 'szu', 'score': 0.9983197450637817, 'entity': 'N', 'index': 1}, {'word': '-', 'score': 0.7681621313095093, 'entity': 'N', 'index': 2}, {'word': 'nigin', 'score': 0.7493271827697754, 'entity': 'V', 'index': 3}, {'word': 'Ġ1', 'score': 0.9881511330604553, 'entity': 'NU', 'index': 4}, {'word': 'u', 'score': 0.8397139310836792, 'entity': 'N', 'index': 6}, {'word': ')', 'score': 0.4481121897697449, 'entity': 'NU', 'index': 7}, {'word': 'Ġ7', 'score': 0.9995942711830139, 'entity': 'NU', 'index': 8}, {'word': 'disz', 'score': 0.6592599749565125, 'entity': 'N', 'index': 10}, {'word': ')', 'score': 0.5688402056694031, 'entity': 'N', 'index': 11}, {'word': 'Ġ1', 'score': 0.9929361343383789, 'entity': 'NU', 'index': 12}, {'word': '/', 'score': 0.993495523929596, 'entity': 'N', 'index': 13}, {'word': '3', 'score': 0.9997004270553589, 'entity': 'NU', 'index': 14}, {'word': 'disz', 'score': 0.6896834969520569, 'entity': 'N', 'index': 16}, {'word': ')', 'score': 0.6974959969520569, 'entity': 'N', 'index': 17}, {'word': 'Ġgin', 'score': 0.9997506737709045, 'entity': 'N', 'index': 18}, {'word': '2', 'score': 0.9885044693946838, 'entity': 'NU', 'index': 19}, {'word': 'Ġku', 'score': 0.6853057146072388, 'entity': 'NE', 'index': 20}, {'word': '3', 'score': 0.901140570640564, 'entity': 'N', 'index': 21}, {'word': '-', 'score': 0.9571232199668884, 'entity': 'N', 'index': 22}, {'word': 'babbar', 'score': 0.5223987698554993, 'entity': 'AJ', 'index': 23}, {'word': '</s>', 'score': 0.8513995409011841, 'entity': 'N', 'index': 24}]
```
while I am just looking for one label per space-tokenized word.
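A rough post-processing workaround (a sketch, not a pipeline feature; it leans on the `Ġ` prefix that RoBERTa's byte-level BPE uses to mark a leading space):
```
nlp = TokenClassificationPipeline(model=model, tokenizer=tokenizer, grouped_entities=False)

def to_word_labels(preds):
    words = []
    for p in preds:
        piece = p["word"]
        if piece in ("<s>", "</s>"):             # skip special tokens
            continue
        if piece.startswith("Ġ") or not words:   # "Ġ" marks a new whitespace word
            words.append((piece.lstrip("Ġ"), p["entity"]))
        else:                                     # continuation subword: merge text,
            word, label = words[-1]               # keep the first subword's label
            words[-1] = (word + piece, label)
    return words

print(to_word_labels(nlp('szu-nigin 1(u) 7(disz) 1/3(disz) gin2 ku3-babbar')))
```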
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6514/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6513 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6513/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6513/comments | https://api.github.com/repos/huggingface/transformers/issues/6513/events | https://github.com/huggingface/transformers/issues/6513 | 679,703,746 | MDU6SXNzdWU2Nzk3MDM3NDY= | 6,513 | Longformer pretrained weights are not really pretrained? | {
"login": "dvirginz",
"id": 31047807,
"node_id": "MDQ6VXNlcjMxMDQ3ODA3",
"avatar_url": "https://avatars.githubusercontent.com/u/31047807?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dvirginz",
"html_url": "https://github.com/dvirginz",
"followers_url": "https://api.github.com/users/dvirginz/followers",
"following_url": "https://api.github.com/users/dvirginz/following{/other_user}",
"gists_url": "https://api.github.com/users/dvirginz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dvirginz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dvirginz/subscriptions",
"organizations_url": "https://api.github.com/users/dvirginz/orgs",
"repos_url": "https://api.github.com/users/dvirginz/repos",
"events_url": "https://api.github.com/users/dvirginz/events{/privacy}",
"received_events_url": "https://api.github.com/users/dvirginz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @dvirginz,\r\n\r\nCan you post a code snippet here so that we can reproduce your results or at least see what might have been falsely configured?",
"@patrickvonplaten Hi!\r\nI'm sorry to have troubled you, the problem was with how I initialized and loaded the weights for the model.\r\nIt is now working, Thanks!"
] | 1,597 | 1,597 | 1,597 | NONE | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Ubuntu 18.04
- Python version: 3.7
- PyTorch version (GPU?): 1.6 (Yes)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
@patrickvonplaten
Model I am using (Bert, XLNet ...): Longformer
The problem arises when using:
* [V] my own modified scripts
The task I am working on is:
* [V] my own task or dataset: (give details below)
## To reproduce
As a preprocessing step in my pipeline I train a pre-trained model on a subset of the Wikipedia dataset.
When using RoBERTa, the results of my fine-tuning steps are as follows (LM task):

Unfortunately, when simply plugging in Longformer (and pointing the pretrained path to `allenai/longformer-base-4096`), I get:

It seems like the weights are simply randomly initialized.
In both cases (RoBERTa and Longformer) I get the message stating that the weights have been initialized.
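For reference, the loading step is essentially the following (simplified sketch; `AutoModelWithLMHead` stands in for my actual training code):
```
from transformers import AutoModelWithLMHead, AutoTokenizer

model = AutoModelWithLMHead.from_pretrained("allenai/longformer-base-4096")
tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
```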
What went wrong?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6513/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6512 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6512/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6512/comments | https://api.github.com/repos/huggingface/transformers/issues/6512/events | https://github.com/huggingface/transformers/pull/6512 | 679,691,064 | MDExOlB1bGxSZXF1ZXN0NDY4Mzk4OTM3 | 6,512 | [doc] lighter 'make test' | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6512?src=pr&el=h1) Report\n> Merging [#6512](https://codecov.io/gh/huggingface/transformers/pull/6512?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d9cbc0350deaa7e146a8c8dbb6ad4dc9bd6afc4f&el=desc) will **decrease** coverage by `0.89%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6512?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6512 +/- ##\n==========================================\n- Coverage 80.37% 79.48% -0.90% \n==========================================\n Files 156 156 \n Lines 28058 28058 \n==========================================\n- Hits 22552 22302 -250 \n- Misses 5506 5756 +250 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6512?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <ø> (ø)` | |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.46% <ø> (-1.01%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <ø> (ø)` | |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.31% <0.00%> (-0.98%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.94% <0.00%> (+0.25%)` | :arrow_up: |\n| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/6512/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6512?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6512?src=pr&el=footer). Last update [24107c2...ca9b5e9](https://codecov.io/gh/huggingface/transformers/pull/6512?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | `make test` on my desktop with 12 CPU cores leads to `pytest -n 12`, which quickly starts swapping - I had to add a huge swap file, but it's still insane load-wise. So I'm proposing to document a lighter option.
And if you're open to adding a `make test-light` target or something like that, it would be most welcome. In my testing, `-n 2` and `-n 4` complete the test suite in roughly the same time when the GPU(s) are the bottleneck. I have been using `-n 3` so far for a balanced, not-too-high load while still finishing pretty fast. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6512/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6512",
"html_url": "https://github.com/huggingface/transformers/pull/6512",
"diff_url": "https://github.com/huggingface/transformers/pull/6512.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6512.patch",
"merged_at": 1597915466000
} |
https://api.github.com/repos/huggingface/transformers/issues/6511 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6511/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6511/comments | https://api.github.com/repos/huggingface/transformers/issues/6511/events | https://github.com/huggingface/transformers/pull/6511 | 679,688,485 | MDExOlB1bGxSZXF1ZXN0NDY4Mzk2OTkw | 6,511 | [doc] Summary of the models fixes | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6511?src=pr&el=h1) Report\n> Merging [#6511](https://codecov.io/gh/huggingface/transformers/pull/6511?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d9cbc0350deaa7e146a8c8dbb6ad4dc9bd6afc4f&el=desc) will **decrease** coverage by `0.43%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6511?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6511 +/- ##\n==========================================\n- Coverage 80.37% 79.94% -0.44% \n==========================================\n Files 156 156 \n Lines 28058 28058 \n==========================================\n- Hits 22552 22431 -121 \n- Misses 5506 5627 +121 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6511?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.91% <ø> (-0.69%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <ø> (-0.26%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <ø> (ø)` | |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/tokenization\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `45.31% <0.00%> (-50.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `66.66% <0.00%> (-32.50%)` | :arrow_down: |\n| [src/transformers/tokenization\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `51.66% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/6511/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6511?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6511?src=pr&el=footer). Last update [24107c2...78bd9e3](https://codecov.io/gh/huggingface/transformers/pull/6511?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | * improve readability - typos, punctuation, clearer sentences
2 questions:
1. https://huggingface.co/transformers/model_summary.html#t5
I think the example is incorrect in its target part: it seems to be missing "cute".
Before:
> For instance, if we have the sentence “My dog is very cute .”, and we decide to remove the token dog, is and cute, the input becomes “My <x> very <y> .” and the target is “<x> dog is <y> . <z>”
Proposed change:
> For instance, if we have the sentence “My dog is very cute .”, and we decide to remove the tokens: "dog", "is" and "cute", the encoder input becomes “My <x> very <y> .” and the target input becomes “<x> dog is <y> cute .<z>”
2.
At https://huggingface.co/transformers/model_summary.html#full-vs-sparse-attention
- in the "LSH attention" section:
> "The attention mask is modified to mask the current token (except at the first position) because it will give a query and key equal (so very similar to each other). "
It's missing a word at the end of the sentence. Is it "equal attention"?
- Also in the "Local attention" just after the previous section, it goes:
"This is shown in Figure 2d of the paper"
but there is no link or name of the paper it's referring to.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6511/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6511",
"html_url": "https://github.com/huggingface/transformers/pull/6511",
"diff_url": "https://github.com/huggingface/transformers/pull/6511.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6511.patch",
"merged_at": 1597651494000
} |
https://api.github.com/repos/huggingface/transformers/issues/6510 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6510/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6510/comments | https://api.github.com/repos/huggingface/transformers/issues/6510/events | https://github.com/huggingface/transformers/pull/6510 | 679,677,140 | MDExOlB1bGxSZXF1ZXN0NDY4Mzg5MDIy | 6,510 | new Makefile target: docs | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6510?src=pr&el=h1) Report\n> Merging [#6510](https://codecov.io/gh/huggingface/transformers/pull/6510?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/84d33317aec4e07ff2bc60721c81c9d519cefd3a&el=desc) will **increase** coverage by `2.74%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6510?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6510 +/- ##\n==========================================\n+ Coverage 77.77% 80.52% +2.74% \n==========================================\n Files 156 156 \n Lines 28094 28094 \n==========================================\n+ Hits 21850 22622 +772 \n+ Misses 6244 5472 -772 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6510?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.71% <0.00%> (+0.18%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.35% <0.00%> (+0.19%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <0.00%> (+0.83%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+1.25%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+1.36%)` | :arrow_up: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.82% <0.00%> (+1.63%)` | :arrow_up: |\n| [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `85.18% <0.00%> (+2.46%)` | :arrow_up: |\n| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/6510/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6510?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6510?src=pr&el=footer). Last update [84d3331...bd93fc8](https://codecov.io/gh/huggingface/transformers/pull/6510?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Yes, it does. Please let me know if it's a problem for future PRs. It surely adds to the reviewer's efforts I can see :(\r\n\r\nThere must be a way to configure github ui to ignore whitespace difference in diff, right?\r\n\r\nOn user side this can be done via adding &w=1:\r\nhttps://github.com/huggingface/transformers/pull/6510/files?diff=split&w=1\r\nor via UI: https://stackoverflow.com/a/51755490/9201239\r\n\r\nIt doesn't look as if there is a project-wide setting to do that. I looked but I don't see this option.\r\n\r\nIt looks like it was discussed here https://github.com/sindresorhus/refined-github/issues/191, but hasn't been resolved to a completion, only providing a UI to remove whitespace manually for each PR.\r\n"
] | 1,597 | 1,598 | 1,598 | CONTRIBUTOR | null | - add a new "docs" target to validate docs and document it
This is needed to avoid `build_doc` CI failures when editing docs.
Not sure why GitHub reports whitespace differences here - odd. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6510/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6510",
"html_url": "https://github.com/huggingface/transformers/pull/6510",
"diff_url": "https://github.com/huggingface/transformers/pull/6510.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6510.patch",
"merged_at": 1598545517000
} |
https://api.github.com/repos/huggingface/transformers/issues/6509 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6509/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6509/comments | https://api.github.com/repos/huggingface/transformers/issues/6509/events | https://github.com/huggingface/transformers/pull/6509 | 679,675,675 | MDExOlB1bGxSZXF1ZXN0NDY4Mzg3OTc1 | 6,509 | [doc] multiple corrections to "Summary of the tasks" | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6509?src=pr&el=h1) Report\n> Merging [#6509](https://codecov.io/gh/huggingface/transformers/pull/6509?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d9cbc0350deaa7e146a8c8dbb6ad4dc9bd6afc4f&el=desc) will **decrease** coverage by `1.16%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6509?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6509 +/- ##\n==========================================\n- Coverage 80.37% 79.21% -1.17% \n==========================================\n Files 156 156 \n Lines 28058 28058 \n==========================================\n- Hits 22552 22225 -327 \n- Misses 5506 5833 +327 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6509?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.91% <ø> (-0.69%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <ø> (-2.51%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <ø> (ø)` | |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `66.66% <0.00%> (-32.50%)` | :arrow_down: |\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `87.50% <0.00%> (-9.73%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |\n| ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6509/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6509?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6509?src=pr&el=footer). Last update [24107c2...bd1f4c5](https://codecov.io/gh/huggingface/transformers/pull/6509?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Good to go thanks. (side-note: we tend to default to two underscores for links as otherwise, sphinx will always associate the description to the same link).",
"> side-note: we tend to default to two underscores for links as otherwise, sphinx will always associate the description to the same link\r\n\r\nI will know now - thank you: fixed here https://github.com/huggingface/transformers/pull/6541"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | - improve readability - typos, punctuation, clearer sentences, numerical bullets where needed
- fix incorrect script names
- add missing scripts and add hyperlinks where missing
- a minor code improvement to aid understanding in the very last section (replaced output ids with translated text)
One thing I didn't know how to fix is the outdated reference to a script that no longer exists:
> If you would like to fine-tune a model on a summarization task, you may leverage the ``examples/summarization/bart/run_train.sh`` (leveraging pytorch-lightning) script.
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6509/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6509/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6509",
"html_url": "https://github.com/huggingface/transformers/pull/6509",
"diff_url": "https://github.com/huggingface/transformers/pull/6509.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6509.patch",
"merged_at": 1597679356000
} |
https://api.github.com/repos/huggingface/transformers/issues/6508 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6508/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6508/comments | https://api.github.com/repos/huggingface/transformers/issues/6508/events | https://github.com/huggingface/transformers/pull/6508 | 679,651,488 | MDExOlB1bGxSZXF1ZXN0NDY4MzcxMTMy | 6,508 | [doc] make the text more readable, fix some typos, add some disambiguation | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6508?src=pr&el=h1) Report\n> Merging [#6508](https://codecov.io/gh/huggingface/transformers/pull/6508?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d9cbc0350deaa7e146a8c8dbb6ad4dc9bd6afc4f&el=desc) will **increase** coverage by `0.16%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6508?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6508 +/- ##\n==========================================\n+ Coverage 80.37% 80.54% +0.16% \n==========================================\n Files 156 156 \n Lines 28058 28058 \n==========================================\n+ Hits 22552 22599 +47 \n+ Misses 5506 5459 -47 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6508?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6508/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.91% <ø> (-0.69%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6508/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <ø> (ø)` | |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6508/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <ø> (ø)` | |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6508/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <0.00%> (ø)` | |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6508/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6508/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6508/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6508/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.94% <0.00%> (+0.25%)` | :arrow_up: |\n| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6508/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6508?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6508?src=pr&el=footer). Last update [24107c2...76e7b5c](https://codecov.io/gh/huggingface/transformers/pull/6508?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6508/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6508",
"html_url": "https://github.com/huggingface/transformers/pull/6508",
"diff_url": "https://github.com/huggingface/transformers/pull/6508.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6508.patch",
"merged_at": 1597676878000
} |
https://api.github.com/repos/huggingface/transformers/issues/6507 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6507/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6507/comments | https://api.github.com/repos/huggingface/transformers/issues/6507/events | https://github.com/huggingface/transformers/issues/6507 | 679,638,433 | MDU6SXNzdWU2Nzk2Mzg0MzM= | 6,507 | trainer.train() fails on 'fmikaelian/flaubert-base-uncased-squad' fine-tuning SQuAD | {
"login": "BenoitDalFerro",
"id": 69694610,
"node_id": "MDQ6VXNlcjY5Njk0NjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/69694610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenoitDalFerro",
"html_url": "https://github.com/BenoitDalFerro",
"followers_url": "https://api.github.com/users/BenoitDalFerro/followers",
"following_url": "https://api.github.com/users/BenoitDalFerro/following{/other_user}",
"gists_url": "https://api.github.com/users/BenoitDalFerro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenoitDalFerro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenoitDalFerro/subscriptions",
"organizations_url": "https://api.github.com/users/BenoitDalFerro/orgs",
"repos_url": "https://api.github.com/users/BenoitDalFerro/repos",
"events_url": "https://api.github.com/users/BenoitDalFerro/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenoitDalFerro/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I don't see how you preprocessed your dataset. From the code you pasted, there is no point where you actually tokenize the elements of `french_squad_train` and `french_squad_dev` (assuming `load_dataset` comes from the nlp library?)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,603 | 1,603 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
transformers version: 3.0.2
Platform: Windows 10 Home v2004, Conda 4.8.3, Anaconda CLI 1.7.2
Python version: 3.7.1
PyTorch version (GPU?): 1.6.0 CUDA GPU
Tensorflow version (GPU?): none
Using GPU in script?: torch.set_default_tensor_type('torch.cuda.FloatTensor')
Using distributed or parallel set-up in script?: dunno
### Who can help
@fmikaelian @julien-c
## Information
Model I am using: FlauBERT
The problem arises when using the official example script.
The task I am working on is an official GLUE/SQuAD task: fine-tuning 'fmikaelian/flaubert-base-uncased-squad'
## To reproduce
```
# the import below is an assumption; the original report does not show where `load_dataset` comes from (presumably the `nlp` library)
from nlp import load_dataset

french_squad_train = load_dataset('json', data_files="""./French-Squad/SQuAD-v1.1-train_fr_ss999_awstart2_net.json""", field='data')
french_squad_dev = load_dataset('json', data_files="""./French-Squad/SQuAD-v1.1-dev_fr_ss999_awstart2_net.json""", field='data')
print (french_squad_train)
```
> {'train': Dataset(features: {'title': Value(dtype='string', id=None), 'paragraphs': [{'context': Value(dtype='string', id=None), 'qas': [{'id': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'answers': [{'answer_start': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None)}]}]}]}, num_rows: 442)}
```
from transformers import (AutoConfig, AutoModelForQuestionAnswering,
                          AutoTokenizer, Trainer, TrainingArguments)

model_name = 'fmikaelian/flaubert-base-uncased-squad' #flaubert/flaubert_base_cased #flaubert/flaubert_large_cased
config = AutoConfig.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name, config=config)
model.train()
model.to('cuda')
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1, # total # of training epochs
per_device_train_batch_size=16, # batch size per device during training
per_device_eval_batch_size=64, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=french_squad_train, # training dataset
eval_dataset=french_squad_dev # evaluation dataset
)
trainer.train()
```
> HBox(children=(FloatProgress(value=0.0, description='Epoch', max=1.0, style=ProgressStyle(description_width='i…
> HBox(children=(FloatProgress(value=0.0, description='Iteration', max=1.0, style=ProgressStyle(description_widt…
> ---------------------------------------------------------------------------
> KeyError Traceback (most recent call last)
> <ipython-input-13-3435b262f1ae> in <module>
> ----> 1 trainer.train()
>
> C:\Anaconda3\envs\xxx\lib\site-packages\transformers\trainer.py in train(self, model_path)
> 490 self._past = None
> 491
> --> 492 for step, inputs in enumerate(epoch_iterator):
> 493
> 494 # Skip past any already trained steps if resuming training
>
> C:\Anaconda3\envs\xxx\lib\site-packages\tqdm\notebook.py in __iter__(self, *args, **kwargs)
> 226 def __iter__(self, *args, **kwargs):
> 227 try:
> --> 228 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):
> 229 # return super(tqdm...) will not catch exception
> 230 yield obj
>
> C:\Anaconda3\envs\xxx\lib\site-packages\tqdm\std.py in __iter__(self)
> 1128
> 1129 try:
> -> 1130 for obj in iterable:
> 1131 yield obj
> 1132 # Update and possibly print the progressbar.
>
> C:\Anaconda3\envs\xxx\lib\site-packages\torch\utils\data\dataloader.py in __next__(self)
> 361
> 362 def __next__(self):
> --> 363 data = self._next_data()
> 364 self._num_yielded += 1
> 365 if self._dataset_kind == _DatasetKind.Iterable and \
>
> C:\Anaconda3\envs\xxx\lib\site-packages\torch\utils\data\dataloader.py in _next_data(self)
> 401 def _next_data(self):
> 402 index = self._next_index() # may raise StopIteration
> --> 403 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
> 404 if self._pin_memory:
> 405 data = _utils.pin_memory.pin_memory(data)
>
> C:\Anaconda3\envs\xxx\lib\site-packages\torch\utils\data\_utils\fetch.py in fetch(self, possibly_batched_index)
> 42 def fetch(self, possibly_batched_index):
> 43 if self.auto_collation:
> ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
> 45 else:
> 46 data = self.dataset[possibly_batched_index]
>
> C:\Anaconda3\envs\xxx\lib\site-packages\torch\utils\data\_utils\fetch.py in <listcomp>(.0)
> 42 def fetch(self, possibly_batched_index):
> 43 if self.auto_collation:
> ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
> 45 else:
> 46 data = self.dataset[possibly_batched_index]
>
> KeyError: 0
## Expected behavior
Ongoing training
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6507/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6506 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6506/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6506/comments | https://api.github.com/repos/huggingface/transformers/issues/6506/events | https://github.com/huggingface/transformers/issues/6506 | 679,637,520 | MDU6SXNzdWU2Nzk2Mzc1MjA= | 6,506 | Loading 'fmikaelian/flaubert-base-uncased-squad' throws unexpected, difficult to comprehend warning | {
"login": "BenoitDalFerro",
"id": 69694610,
"node_id": "MDQ6VXNlcjY5Njk0NjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/69694610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenoitDalFerro",
"html_url": "https://github.com/BenoitDalFerro",
"followers_url": "https://api.github.com/users/BenoitDalFerro/followers",
"following_url": "https://api.github.com/users/BenoitDalFerro/following{/other_user}",
"gists_url": "https://api.github.com/users/BenoitDalFerro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenoitDalFerro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenoitDalFerro/subscriptions",
"organizations_url": "https://api.github.com/users/BenoitDalFerro/orgs",
"repos_url": "https://api.github.com/users/BenoitDalFerro/repos",
"events_url": "https://api.github.com/users/BenoitDalFerro/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenoitDalFerro/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,603 | 1,603 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Windows 10 Home v2004, Conda 4.8.3, Anaconda CLI 1.7.2
- Python version: 3.7.1
- PyTorch version (GPU?): 1.6.0 CUDA GPU
- Tensorflow version (GPU?): none
- Using GPU in script?: torch.set_default_tensor_type('torch.cuda.FloatTensor')
- Using distributed or parallel set-up in script?: dunno
### Who can help
@fmikaelian @julien-c
## Information
Model I am using: FlauBERT
The problem arises when using the official example script.
The task I am working on is:
* [ ] an official GLUE/SQuAD task: loading 'fmikaelian/flaubert-base-uncased-squad'
## To reproduce
Steps to reproduce the behavior:
```
from transformers import AutoConfig, AutoModelForQuestionAnswering, AutoTokenizer

model_name = 'fmikaelian/flaubert-base-uncased-squad' #flaubert/flaubert_base_cased #flaubert/flaubert_large_cased
config = AutoConfig.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name, config=config)
```
> Some weights of the model checkpoint at fmikaelian/flaubert-base-uncased-squad were not used when initializing FlaubertForQuestionAnsweringSimple: ['qa_outputs.start_logits.dense.weight', 'qa_outputs.start_logits.dense.bias', 'qa_outputs.end_logits.dense_0.weight', 'qa_outputs.end_logits.dense_0.bias', 'qa_outputs.end_logits.LayerNorm.weight', 'qa_outputs.end_logits.LayerNorm.bias', 'qa_outputs.end_logits.dense_1.weight', 'qa_outputs.end_logits.dense_1.bias', 'qa_outputs.answer_class.dense_0.weight', 'qa_outputs.answer_class.dense_0.bias', 'qa_outputs.answer_class.dense_1.weight']
> - This IS expected if you are initializing FlaubertForQuestionAnsweringSimple from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
> - This IS NOT expected if you are initializing FlaubertForQuestionAnsweringSimple from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
> Some weights of FlaubertForQuestionAnsweringSimple were not initialized from the model checkpoint at fmikaelian/flaubert-base-uncased-squad and are newly initialized: ['qa_outputs.weight', 'qa_outputs.bias']
> You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
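The weight names in the warning (`qa_outputs.start_logits.*`, `qa_outputs.end_logits.*`, `qa_outputs.answer_class.*`) match the beam-search QA head of `FlaubertForQuestionAnswering`, while `AutoModelForQuestionAnswering` resolves Flaubert to `FlaubertForQuestionAnsweringSimple`, whose head is a single `qa_outputs` linear layer. A minimal sketch of loading the matching class instead (an inference from the names above, not a confirmed fix):

```python
from transformers import FlaubertForQuestionAnswering

# With the matching head, no QA parameters should be reported as
# unused or newly initialized when loading this checkpoint.
model = FlaubertForQuestionAnswering.from_pretrained('fmikaelian/flaubert-base-uncased-squad')
```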
## Expected behavior
Not throwing a warning. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6506/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6505 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6505/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6505/comments | https://api.github.com/repos/huggingface/transformers/issues/6505/events | https://github.com/huggingface/transformers/pull/6505 | 679,631,406 | MDExOlB1bGxSZXF1ZXN0NDY4MzU3NTk1 | 6,505 | [doc] typos | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6505?src=pr&el=h1) Report\n> Merging [#6505](https://codecov.io/gh/huggingface/transformers/pull/6505?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/895ed8f4511ce9f2d1475e7f11c776dab87461d1&el=desc) will **decrease** coverage by `0.44%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6505?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6505 +/- ##\n==========================================\n- Coverage 80.38% 79.93% -0.45% \n==========================================\n Files 156 156 \n Lines 28058 28058 \n==========================================\n- Hits 22554 22428 -126 \n- Misses 5504 5630 +126 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6505?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6505/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6505/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6505/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6505/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6505/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.58% <0.00%> (-7.19%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6505/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6505/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6505/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6505/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6505/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |\n| ... 
and [9 more](https://codecov.io/gh/huggingface/transformers/pull/6505/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6505?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6505?src=pr&el=footer). Last update [24107c2...be6a262](https://codecov.io/gh/huggingface/transformers/pull/6505?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6505/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6505/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6505",
"html_url": "https://github.com/huggingface/transformers/pull/6505",
"diff_url": "https://github.com/huggingface/transformers/pull/6505.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6505.patch",
"merged_at": 1597633056000
} |
https://api.github.com/repos/huggingface/transformers/issues/6504 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6504/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6504/comments | https://api.github.com/repos/huggingface/transformers/issues/6504/events | https://github.com/huggingface/transformers/pull/6504 | 679,630,533 | MDExOlB1bGxSZXF1ZXN0NDY4MzU2OTc0 | 6,504 | [doc] fix invalid env vars | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6504?src=pr&el=h1) Report\n> Merging [#6504](https://codecov.io/gh/huggingface/transformers/pull/6504?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/895ed8f4511ce9f2d1475e7f11c776dab87461d1&el=desc) will **decrease** coverage by `2.17%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6504?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6504 +/- ##\n==========================================\n- Coverage 80.38% 78.20% -2.18% \n==========================================\n Files 156 156 \n Lines 28058 28058 \n==========================================\n- Hits 22554 21943 -611 \n- Misses 5504 6115 +611 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6504?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6504/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <0.00%> (ø)` | |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6504/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6504/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.26% <0.00%> (-53.69%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6504/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6504/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6504/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6504/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.36% <0.00%> (-14.37%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6504/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-6.16%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6504/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `66.00% <0.00%> (-3.06%)` | :arrow_down: |\n| [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6504/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `82.71% <0.00%> (-2.47%)` | :arrow_down: |\n| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/6504/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6504?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6504?src=pr&el=footer). 
Last update [24107c2...8dda746](https://codecov.io/gh/huggingface/transformers/pull/6504?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | - remove invalid `ENV_` prefix.
- add a few `:` while at it
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6504/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6504/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6504",
"html_url": "https://github.com/huggingface/transformers/pull/6504",
"diff_url": "https://github.com/huggingface/transformers/pull/6504.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6504.patch",
"merged_at": 1597633900000
} |
https://api.github.com/repos/huggingface/transformers/issues/6503 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6503/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6503/comments | https://api.github.com/repos/huggingface/transformers/issues/6503/events | https://github.com/huggingface/transformers/issues/6503 | 679,613,314 | MDU6SXNzdWU2Nzk2MTMzMTQ= | 6,503 | tf BERT model produced by convert_graph_to_onnx has unclear or wrong input shapes | {
"login": "Zhen-hao",
"id": 10957195,
"node_id": "MDQ6VXNlcjEwOTU3MTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/10957195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zhen-hao",
"html_url": "https://github.com/Zhen-hao",
"followers_url": "https://api.github.com/users/Zhen-hao/followers",
"following_url": "https://api.github.com/users/Zhen-hao/following{/other_user}",
"gists_url": "https://api.github.com/users/Zhen-hao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zhen-hao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zhen-hao/subscriptions",
"organizations_url": "https://api.github.com/users/Zhen-hao/orgs",
"repos_url": "https://api.github.com/users/Zhen-hao/repos",
"events_url": "https://api.github.com/users/Zhen-hao/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zhen-hao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"it works when I use \"this is a test.\" as the input text because it results in 7 tokens. \r\nnow the question is why the model expects a fixed number, which is 7, of tokens as input? ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@Zhen-hao sees I meet the same problem. does your problem have been fixed? ",
"> @Zhen-hao sees I meet the same problem. does your problem have been fixed?\r\n\r\nI didn't fix it. also didn't check with the last released version. \r\n"
] | 1,597 | 1,607 | 1,604 | NONE | null | I tried to create a minimal notebook to reproduce [this example nb](https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb), but for TF.
When I run
```python
from transformers.convert_graph_to_onnx import convert

convert(framework="tf", model="bert-base-cased", output="onnx-test-tf/bert-base-cased.onnx", opset=11)
```
the output is
```
ONNX opset version set to: 11
Loading pipeline (model: bert-base-cased, tokenizer: bert-base-cased)
HBox(children=(FloatProgress(value=0.0, description='Downloading', max=433.0, style=ProgressStyle(description_…
HBox(children=(FloatProgress(value=0.0, description='Downloading', max=230.0, style=ProgressStyle(description_…
HBox(children=(FloatProgress(value=0.0, description='Downloading', max=526681800.0, style=ProgressStyle(descri…
Some weights of the model checkpoint at bert-base-cased were not used when initializing TFBertModel: ['mlm___cls', 'nsp___cls']
- This IS expected if you are initializing TFBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing TFBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
All the weights of TFBertModel were initialized from the model checkpoint at bert-base-cased.
If your task is similar to the task the model of the ckeckpoint was trained on, you can already use TFBertModel for predictions without further training.
Creating folder onnx-test-tf
/!\ Please note TensorFlow doesn't support exporting model > 2Gb /!\
Using framework TensorFlow: 2.1.0, keras2onnx: 1.7.0
Found input input_ids with shape: {0: 'batch', 1: 'sequence'}
Found input token_type_ids with shape: {0: 'batch', 1: 'sequence'}
Found input attention_mask with shape: {0: 'batch', 1: 'sequence'}
Found output output_0 with shape: {0: 'batch', 1: 'sequence'}
Found output output_1 with shape: {0: 'batch'}
WARNING:tensorflow:AutoGraph could not transform <bound method TFBertModel.call of <transformers.modeling_tf_bert.TFBertModel object at 0x7f3ddec46810>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING: AutoGraph could not transform <bound method TFBertModel.call of <transformers.modeling_tf_bert.TFBertModel object at 0x7f3ddec46810>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:AutoGraph could not transform <bound method TFBertMainLayer.call of <transformers.modeling_tf_bert.TFBertMainLayer object at 0x7f3f0427cb50>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: module 'gast' has no attribute 'Num'
WARNING: AutoGraph could not transform <bound method TFBertMainLayer.call of <transformers.modeling_tf_bert.TFBertMainLayer object at 0x7f3f0427cb50>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: module 'gast' has no attribute 'Num'
WARNING:tensorflow:AutoGraph could not transform <bound method TFBertSelfOutput.call of <transformers.modeling_tf_bert.TFBertSelfOutput object at 0x7f3dddcc8390>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING: AutoGraph could not transform <bound method TFBertSelfOutput.call of <transformers.modeling_tf_bert.TFBertSelfOutput object at 0x7f3dddcc8390>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:AutoGraph could not transform <bound method TFBertIntermediate.call of <transformers.modeling_tf_bert.TFBertIntermediate object at 0x7f3dddcc8a50>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING: AutoGraph could not transform <bound method TFBertIntermediate.call of <transformers.modeling_tf_bert.TFBertIntermediate object at 0x7f3dddcc8a50>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:AutoGraph could not transform <bound method TFBertOutput.call of <transformers.modeling_tf_bert.TFBertOutput object at 0x7f3dddcd08d0>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING: AutoGraph could not transform <bound method TFBertOutput.call of <transformers.modeling_tf_bert.TFBertOutput object at 0x7f3dddcd08d0>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
[... the identical pair of AutoGraph warnings, each ending in "Cause: Bad argument number for Name: 3, expecting 4", repeats for the TFBertSelfOutput, TFBertIntermediate, and TFBertOutput sublayers of the remaining encoder layers; only the final occurrences are kept below ...]
WARNING:tensorflow:AutoGraph could not transform <bound method TFBertOutput.call of <transformers.modeling_tf_bert.TFBertOutput object at 0x7f3ddd3f21d0>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING: AutoGraph could not transform <bound method TFBertOutput.call of <transformers.modeling_tf_bert.TFBertOutput object at 0x7f3ddd3f21d0>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:AutoGraph could not transform <bound method TFBertSelfOutput.call of <transformers.modeling_tf_bert.TFBertSelfOutput object at 0x7f3ddd3fde10>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING: AutoGraph could not transform <bound method TFBertSelfOutput.call of <transformers.modeling_tf_bert.TFBertSelfOutput object at 0x7f3ddd3fde10>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:AutoGraph could not transform <bound method TFBertIntermediate.call of <transformers.modeling_tf_bert.TFBertIntermediate object at 0x7f3ddd405a90>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING: AutoGraph could not transform <bound method TFBertIntermediate.call of <transformers.modeling_tf_bert.TFBertIntermediate object at 0x7f3ddd405a90>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:AutoGraph could not transform <bound method TFBertOutput.call of <transformers.modeling_tf_bert.TFBertOutput object at 0x7f3ddd40e050>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING: AutoGraph could not transform <bound method TFBertOutput.call of <transformers.modeling_tf_bert.TFBertOutput object at 0x7f3ddd40e050>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:AutoGraph could not transform <bound method TFBertSelfOutput.call of <transformers.modeling_tf_bert.TFBertSelfOutput object at 0x7f3ddd419c50>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING: AutoGraph could not transform <bound method TFBertSelfOutput.call of <transformers.modeling_tf_bert.TFBertSelfOutput object at 0x7f3ddd419c50>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:AutoGraph could not transform <bound method TFBertIntermediate.call of <transformers.modeling_tf_bert.TFBertIntermediate object at 0x7f3ddd3a48d0>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING: AutoGraph could not transform <bound method TFBertIntermediate.call of <transformers.modeling_tf_bert.TFBertIntermediate object at 0x7f3ddd3a48d0>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:AutoGraph could not transform <bound method TFBertOutput.call of <transformers.modeling_tf_bert.TFBertOutput object at 0x7f3ddd3a4e50>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING: AutoGraph could not transform <bound method TFBertOutput.call of <transformers.modeling_tf_bert.TFBertOutput object at 0x7f3ddd3a4e50>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:AutoGraph could not transform <bound method TFBertSelfOutput.call of <transformers.modeling_tf_bert.TFBertSelfOutput object at 0x7f3ddd3b5ad0>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING: AutoGraph could not transform <bound method TFBertSelfOutput.call of <transformers.modeling_tf_bert.TFBertSelfOutput object at 0x7f3ddd3b5ad0>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:AutoGraph could not transform <bound method TFBertIntermediate.call of <transformers.modeling_tf_bert.TFBertIntermediate object at 0x7f3ddd3be750>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING: AutoGraph could not transform <bound method TFBertIntermediate.call of <transformers.modeling_tf_bert.TFBertIntermediate object at 0x7f3ddd3be750>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:AutoGraph could not transform <bound method TFBertOutput.call of <transformers.modeling_tf_bert.TFBertOutput object at 0x7f3ddd3becd0>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING: AutoGraph could not transform <bound method TFBertOutput.call of <transformers.modeling_tf_bert.TFBertOutput object at 0x7f3ddd3becd0>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:AutoGraph could not transform <bound method TFBertSelfOutput.call of <transformers.modeling_tf_bert.TFBertSelfOutput object at 0x7f3ddd3cf950>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING: AutoGraph could not transform <bound method TFBertSelfOutput.call of <transformers.modeling_tf_bert.TFBertSelfOutput object at 0x7f3ddd3cf950>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:AutoGraph could not transform <bound method TFBertIntermediate.call of <transformers.modeling_tf_bert.TFBertIntermediate object at 0x7f3ddd35e5d0>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING: AutoGraph could not transform <bound method TFBertIntermediate.call of <transformers.modeling_tf_bert.TFBertIntermediate object at 0x7f3ddd35e5d0>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:AutoGraph could not transform <bound method TFBertOutput.call of <transformers.modeling_tf_bert.TFBertOutput object at 0x7f3ddd35eb50>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING: AutoGraph could not transform <bound method TFBertOutput.call of <transformers.modeling_tf_bert.TFBertOutput object at 0x7f3ddd35eb50>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:AutoGraph could not transform <bound method TFBertPooler.call of <transformers.modeling_tf_bert.TFBertPooler object at 0x7f3ddcddda90>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
WARNING: AutoGraph could not transform <bound method TFBertPooler.call of <transformers.modeling_tf_bert.TFBertPooler object at 0x7f3ddcddda90>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
tf executing eager_mode: True
tf.keras model eager_mode: False
The ONNX operator number change on the optimization: 2575 -> 1670
```
When I run inference on the following input
```
{'input_ids': [array([ 101, 146, 1169, 1631, 1103, 3974, 117, 1169, 1128, 136, 102],
dtype=int32)], 'token_type_ids': [array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32)], 'attention_mask': [array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int32)]}
```
I got this error:
```python
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/nix/store/xws61xnjc03fjiwfh7ci5cwgg1chmp3l-python3.7-onnxruntime-1.4.0/lib/python3.7/site-packages/onnxruntime/capi/session.py", line 110, in run
return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: token_type_ids for the following indices
index: 1 Got: 11 Expected: 7
Please fix either the inputs or the model.
```
When I run it via the C API of onnxruntime, I got this error message:
```
Error: OrtError(Status { error_code: InvalidArgument, error_msg: "Got invalid dimensions for input: attention_mask for the following indices\n index: 1 Got: 418 Expected: 7\n Please fix either the inputs or the model." })
```
When I print out the model input shapes, I see
```
input 0: "attention_mask" ["N", 7] Int32
input 1: "input_ids" ["N", 7] Int32
input 2: "token_type_ids" ["N", 7] Int32
```
It is not clear to me how I should prepare the input to run it with onnxruntime.
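For what it's worth, forcing every encoding to the exported length should make the shape error go away, at the cost of truncating longer texts; `max_length=7` below simply mirrors the shape baked into the exported graph (a sketch, not a proper fix: re-exporting with dynamic sequence axes would be the real solution):
```python
import numpy
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
# pad/truncate to the fixed sequence length the exported graph expects
encoded = tokenizer("I can feel the magic, can you?",
                    padding="max_length", truncation=True, max_length=7)
# shape (1, 7): explicit batch dimension, matching the ["N", 7] inputs above
inputs_onnx = {k: numpy.array([v], dtype=numpy.int32) for k, v in encoded.items()}
```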
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Linux-5.4.57-x86_64-with-glibc2.2.5
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
## Information
Model I am using (Bert, XLNet ...): BERT
The problem arises when using:
* [x] the official example scripts: (give details below)
https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb
## To reproduce
```python
from transformers.convert_graph_to_onnx import convert
output_model_path = "onnx-test-tf/bert-base-cased.onnx"
convert(framework="tf", model="bert-base-cased", output=output_model_path, opset=11)
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
encoded = tokenizer("I can feel the magic, can you?", add_special_tokens=True)
import numpy
# each value ends up as a list containing one 1-D array
inputs_onnx = {k_: [numpy.array(v_, dtype=numpy.int32)] for k_, v_ in encoded.items()}

from onnxruntime import InferenceSession, SessionOptions

sess_options = SessionOptions()
session = InferenceSession(output_model_path, sess_options, providers=['CPUExecutionProvider'])
results = session.run(None, inputs_onnx)
```
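For completeness, a variant that gives each input an explicit batch dimension instead of wrapping the arrays in Python lists (a small variation on the dict above; since the sequence length is still 11 rather than 7, the same dimension error would remain):
```python
# 2-D arrays of shape (1, seq_len) instead of lists of 1-D arrays
inputs_onnx = {k_: numpy.array([v_], dtype=numpy.int32) for k_, v_ in encoded.items()}
```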
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6503/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6503/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6502 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6502/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6502/comments | https://api.github.com/repos/huggingface/transformers/issues/6502/events | https://github.com/huggingface/transformers/issues/6502 | 679,602,939 | MDU6SXNzdWU2Nzk2MDI5Mzk= | 6,502 | Truncated last sentence after bart finetuning on custom dataset. | {
"login": "sajastu",
"id": 10419055,
"node_id": "MDQ6VXNlcjEwNDE5MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/10419055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sajastu",
"html_url": "https://github.com/sajastu",
"followers_url": "https://api.github.com/users/sajastu/followers",
"following_url": "https://api.github.com/users/sajastu/following{/other_user}",
"gists_url": "https://api.github.com/users/sajastu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sajastu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sajastu/subscriptions",
"organizations_url": "https://api.github.com/users/sajastu/orgs",
"repos_url": "https://api.github.com/users/sajastu/repos",
"events_url": "https://api.github.com/users/sajastu/events{/privacy}",
"received_events_url": "https://api.github.com/users/sajastu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"What was your training command?\r\n",
"I used the `finetune_tiny_bart.sh` script in the seq2seq examples. @sshleifer \r\n\r\nIf that helps to figure out the source of the problem, as I know the position_embeddings of bart-large-cnn model is 1026 (with addition of SOS, and EOS tokens). Since my task is long summarization, I changed it to 2050, and let the model learn the whole on my custom dataset; Additionally, as I mentioned earlier, I have also increased the `min_length` and `max_length` in the BART config class. But the problem still remains.",
"I've never really trained with such large length parameters, but we have been seeing similar problems for many models. I think these are the lines causing the issue, will try to get a fix soon.\r\nhttps://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.py#L144",
"I have been facing this issue in the new versions of finetune.sh as well... even for T5...\r\neg.\r\nIn only the first quarter century after the breakup of the Soviet Union, Azerbaijan has impressed the Caucasus region and the world with its progress. Although it still must work diligently to enhance citizen inputs into its governance structures, continue to expand its productive capacity beyond the energy sectors, and distribute its new wealth equitably among its entire population, the country has faced the complex challenges of independence with a mostly steady hand. Much has been achieved in rediscovery of a proud national identity, new resource abundance, sound transportation infrastructure, and a thriving capital city that is now a vibrant modern regional hub. Among the most important next steps for policy priority over the coming decades will be in sustaining the progress already made with continuing \"greener\" approaches to development, and increasing diversification of the economy beyond just the oil and natural gas sectors. Initiatives already in place have started along this road, but will need to be strengthened over",
"I would love to replicate (need data) or have one of you test on the branch with my proposed fix:\r\n\r\nhttps://github.com/huggingface/transformers/pull/6654\r\n\r\n```bash\r\ngit fetch\r\ngit checkout batch-parity-cleaner\r\n```",
"That branch is broken right now, I will comment when it's fixed.",
"Should work now!",
"Hey,\r\nWas trying out your branch, the earlier version atleast ran fine.\r\nAfter pulling the latest like you mentioned...getting back this:\r\n f\"Mbart is using sequence lengths {self.max_source_length}, {self.max_target_length}. \"\r\nValidation sanity check: 0it [00:00, ?it/s]Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized.\r\nKeyword arguments {'src_lang': None, 'tgt_lang': None} not recognized.\r\nKeyword arguments {'src_lang': None, 'tgt_lang': None} not recognized.\r\nKeyword arguments {'src_lang': None, 'tgt_lang': None} not recognized.\r\nKeyword arguments {'src_lang': None, 'tgt_lang': None} not recognized.\r\nKeyword arguments {'src_lang': None, 'tgt_lang': None} not recognized.\r\nKeyword arguments {'src_lang': None, 'tgt_lang': None} not recognized.\r\nKeyword arguments {'src_lang': None, 'tgt_lang': None} not recognized.\r\n\r\nI assume this has to do with the translation code? Any suggestions how to get around this?",
"Fixed on the branch, sorry about that!",
"So...should I try now?",
"Yah!",
"Still the same :(",
"OK. Which dataset are you using? I can't really debug without being able to see what a batch looks like when it goes into the model.",
"I am using a custom dataset but you can try with BillSum as well and you should be able to reproduce the issue.\r\nAnd btw here I was talking about this particular issue:\r\n> Hey,\r\n> Was trying out your branch, the earlier version atleast ran fine.\r\n> After pulling the latest like you mentioned...getting back this:\r\n> f\"Mbart is using sequence lengths {self.max_source_length}, {self.max_target_length}. \"\r\n> Validation sanity check: 0it [00:00, ?it/s]Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized.\r\n> Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized.\r\n> Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized.\r\n> Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized.\r\n> Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized.\r\n> Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized.\r\n> Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized.\r\n> Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized.\r\n> \r\n> I assume this has to do with the translation code? Any suggestions how to get around this?\r\n\r\nI am not able to start training itself with your branch.\r\nLet me know if you need anymore info.",
"Yes a training command I can paste into my terminal to run on billsum and reproduce your failure.",
"I just tried this and got the same:\r\n`./finetune.sh \\\r\n --data_dir \"../../../BillSum\"\r\n --train_batch_size=2 \r\n --eval_batch_size=8 \r\n --output_dir=\"/content/models/t5_narrative_512/\" \r\n --num_train_epochs 2 \r\n --model_name_or_path=\"t5-base\" --n_val 1000 \r\n --val_check_interval 0.5 \r\n --max_source_length=512 --max_target_length=150 --val_max_target_length=150 --test_max_target_length=150 `",
"@patil-suraj we think these should be fixed both for t5 and bart, right?",
"Yes, AFAIK these issues are fixed now. @amanpreet692 could you try this with the latest master branch ?",
"I used distilbart:\r\n\r\n`tokenizer_dbart = BartTokenizer.from_pretrained('sshleifer/distilbart-cnn-6-6')`\r\n`model_dbart = BartForConditionalGeneration.from_pretrained('sshleifer/distilbart-cnn-6-6')`\r\n\r\nThe last sentence of the summary obtained from the model is sometimes truncated.\r\n\r\nIs this expected? @sshleifer ",
"@patil-suraj Sorry I got back to this only now, I checked out the latest from repo today and ran finetune.sh for finetuning and could still see this issue,\r\neg. \r\nResearch on inventory risk management based on abc analysis. The traditional ABC analysis is a kind of management method from the ABC curve. ABC curve is also called Pareto (Pareto) curve. The basic idea is, \" vital few and the majority of the general \". In all the inventory, the cumulative percentage of species ranged from 5% to 15% and the average amount of funds occupied the cumulative percentages of 60% ~ 80% items identified as A class; the cumulative proportion of funds is 20% ~ 30% of the goods, identified as B class; and the rest as class C. The different objects use different management methods and means. In the China's enterprises,\r\n\r\nThe command I used is the same as above, only I removed the fp16 parameter from the script.",
"Hi @amanpreet692,\r\nCould you post the arguments you are passing to `generate` ? for ex. `num_beams, max_length, length_penalty` etc\r\n",
"Hey, I haven't tinker with the arguments to generate so I guess they should be the same as in config for distilbart:\r\n\"early_stopping\": true,\r\n\"length_penalty\": 2.0,\r\n\"max_length\": 142,\r\n\"min_length\": 56,\r\n\"no_repeat_ngram_size\": 3,\r\n\"num_beams\": 4\r\nLet me know if you need anything else.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,607 | 1,607 | NONE | null |
- `transformers` version: 3.0.2
- Platform:
- Python version: 3.6
- PyTorch version (GPU?): 1.4
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Single GPU
### Who can help
@sshleifer
## Information
Model I am using (Bert, XLNet ...): BART
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Fine-tune BART-large-xsum on my custom dataset with modified generation settings: min_length=590, max_length=620.
2. Run inference with the trained model.
3. The output BART produces is often (~90% of cases) incomplete: specifically, the last sentence is cut off.
## Expected behavior
I would expect complete, well-formed outputs without truncation. I should mention that when I run inference with the raw bart-large-cnn (or -xsum) checkpoint, I do not see this problem and all outputs are complete. It seems that BART fine-tuned on a custom dataset is not able to emit the <EOS> token.
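For reference, this is roughly how I run inference after fine-tuning (a sketch; `tokenizer`, `model`, and `article` stand in for my actual objects, `num_beams` is illustrative, and the lengths mirror my config):
```python
batch = tokenizer(article, return_tensors="pt", truncation=True, max_length=2048)
summary_ids = model.generate(
    batch["input_ids"],
    attention_mask=batch["attention_mask"],
    min_length=590,   # values from my modified generation config
    max_length=620,
    num_beams=4,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```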
I also checked this thread: https://github.com/huggingface/transformers/issues/5674, which describes the same problem, but I couldn't find an answer there. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6502/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6501 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6501/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6501/comments | https://api.github.com/repos/huggingface/transformers/issues/6501/events | https://github.com/huggingface/transformers/issues/6501 | 679,588,124 | MDU6SXNzdWU2Nzk1ODgxMjQ= | 6,501 | Longformer slow than Bert | {
"login": "Maybewuss",
"id": 38156589,
"node_id": "MDQ6VXNlcjM4MTU2NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/38156589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Maybewuss",
"html_url": "https://github.com/Maybewuss",
"followers_url": "https://api.github.com/users/Maybewuss/followers",
"following_url": "https://api.github.com/users/Maybewuss/following{/other_user}",
"gists_url": "https://api.github.com/users/Maybewuss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Maybewuss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Maybewuss/subscriptions",
"organizations_url": "https://api.github.com/users/Maybewuss/orgs",
"repos_url": "https://api.github.com/users/Maybewuss/repos",
"events_url": "https://api.github.com/users/Maybewuss/events{/privacy}",
"received_events_url": "https://api.github.com/users/Maybewuss/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"well BERT can only generate up to 512 tokens and both models are Autoencoding models and usually not used for causal language generation (Autoregressive models are used for this). You can check out the difference here: https://huggingface.co/transformers/model_summary.html."
] | 1,597 | 1,597 | 1,597 | NONE | null | When I set max_length = 2048, I found that Longformer is slower than regular BERT. Why is that? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6501/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6501/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6500 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6500/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6500/comments | https://api.github.com/repos/huggingface/transformers/issues/6500/events | https://github.com/huggingface/transformers/issues/6500 | 679,554,882 | MDU6SXNzdWU2Nzk1NTQ4ODI= | 6,500 | Always got RuntimeError while converting ALBERT model to TorchScript (.pt file) | {
"login": "xf05888",
"id": 33285394,
"node_id": "MDQ6VXNlcjMzMjg1Mzk0",
"avatar_url": "https://avatars.githubusercontent.com/u/33285394?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xf05888",
"html_url": "https://github.com/xf05888",
"followers_url": "https://api.github.com/users/xf05888/followers",
"following_url": "https://api.github.com/users/xf05888/following{/other_user}",
"gists_url": "https://api.github.com/users/xf05888/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xf05888/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xf05888/subscriptions",
"organizations_url": "https://api.github.com/users/xf05888/orgs",
"repos_url": "https://api.github.com/users/xf05888/repos",
"events_url": "https://api.github.com/users/xf05888/events{/privacy}",
"received_events_url": "https://api.github.com/users/xf05888/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,597 | 1,597 | 1,597 | NONE | null | I am trying to convert ALBERT to a `.pt` file from the original albert model from transformers.(I am not very familiar with TorchScript so I want the `.pt` to be clean)
The code I ran (following the tutorial from [https://huggingface.co/transformers/torchscript.html](https://huggingface.co/transformers/torchscript.html)):
```
from transformers import AlbertModel, AlbertTokenizer, AlbertConfig
import torch
enc = AlbertTokenizer.from_pretrained("albert-xxlarge-v2")
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = enc.tokenize(text)
masked_index = 8
tokenized_text[masked_index] = '[MASK]'
indexed_tokens = enc.convert_tokens_to_ids(tokenized_text)
segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
dummy_input = [tokens_tensor, segments_tensors]
config = AlbertConfig(vocab_size_or_config_json_file=73000, hidden_size=4096,
num_hidden_layers=12, num_attention_heads=64, intermediate_size=16384, torchscript=True)
model = AlbertModel(config)
model.eval()
model = AlbertModel.from_pretrained("albert-xxlarge-v2", torchscript=True)
traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors])
torch.jit.save(traced_model, "albert-xxlarge-v2.pt")
```
But the second-to-last line threw an error:
`RuntimeError: The size of tensor a (15) must match the size of tensor b (14) at non-singleton dimension 3`
From the tutorial:
```
The trace is created relatively to the inputs’ dimensions. It is therefore constrained by the dimensions of the dummy input, and will not work for any other sequence length or batch size. When trying with a different size, an error such as:
The expanded size of the tensor (3) must match the existing size (7) at non-singleton dimension 2
```
So I tried changing `vocab_size_or_config_json_file` to a larger value, but still got the same error.
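Looking at it again, `segments_ids` above has 14 entries while the ALBERT tokenizer produces 15 tokens for that text, which matches the 15-vs-14 mismatch in the error. A variant that keeps both tensors the same length by deriving everything from the tokenizer (a sketch, not something I have verified end-to-end):
```python
import torch
from transformers import AlbertModel, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-xxlarge-v2")
# input_ids and attention_mask come back with one consistent length
inputs = tokenizer("Who was Jim Henson? Jim Henson was a puppeteer",
                   return_tensors="pt")

model = AlbertModel.from_pretrained("albert-xxlarge-v2", torchscript=True)
model.eval()

traced = torch.jit.trace(model, (inputs["input_ids"], inputs["attention_mask"]))
torch.jit.save(traced, "albert-xxlarge-v2.pt")
```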
Am I doing something wrong? Thanks for any advice.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6500/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6500/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6499 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6499/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6499/comments | https://api.github.com/repos/huggingface/transformers/issues/6499/events | https://github.com/huggingface/transformers/pull/6499 | 679,552,055 | MDExOlB1bGxSZXF1ZXN0NDY4Mjk3NzE1 | 6,499 | Add examples/bert-loses-patience who can help | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6499/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6499/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6499",
"html_url": "https://github.com/huggingface/transformers/pull/6499",
"diff_url": "https://github.com/huggingface/transformers/pull/6499.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6499.patch",
"merged_at": 1597566617000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6498 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6498/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6498/comments | https://api.github.com/repos/huggingface/transformers/issues/6498/events | https://github.com/huggingface/transformers/issues/6498 | 679,532,290 | MDU6SXNzdWU2Nzk1MzIyOTA= | 6,498 | Could not output hidden states using TFBertModel | {
"login": "YLi999",
"id": 55957237,
"node_id": "MDQ6VXNlcjU1OTU3MjM3",
"avatar_url": "https://avatars.githubusercontent.com/u/55957237?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YLi999",
"html_url": "https://github.com/YLi999",
"followers_url": "https://api.github.com/users/YLi999/followers",
"following_url": "https://api.github.com/users/YLi999/following{/other_user}",
"gists_url": "https://api.github.com/users/YLi999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YLi999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YLi999/subscriptions",
"organizations_url": "https://api.github.com/users/YLi999/orgs",
"repos_url": "https://api.github.com/users/YLi999/repos",
"events_url": "https://api.github.com/users/YLi999/events{/privacy}",
"received_events_url": "https://api.github.com/users/YLi999/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Sorry for the format of the two codes. I have modified them and posted them here:\r\n\r\n1. \r\n```python\r\nfrom transformers import TFBertModel, BertConfig\r\nimport tensorflow as tf\r\ndef single_bert():\r\n id = Input((128,), dtype=tf.int32)\r\n mask = Input((128,), dtype=tf.int32)\r\n atn = Input((128,), dtype=tf.int32)\r\n bert_config = BertConfig.from_pretrained('bert-base-uncased', output_hidden_states=True)\r\n bert_model = TFBertModel.from_pretrained('bert-base-uncased', config = bert_config)\r\n embedding = bert_model(id, attention_mask=mask, token_type_ids=atn)[2]\r\n model = tf.keras.Model(inputs=[id, mask, atn], outputs=embedding)\r\n return model\r\nmodel = single_bert()\r\nmodel.summary()\r\n```\r\n2. \r\n```python\r\nfrom transformers import TFBertModel, BertConfig\r\nimport tensorflow as tf\r\ndef single_bert():\r\n id = Input((128,), dtype=tf.int32)\r\n mask = Input((128,), dtype=tf.int32)\r\n atn = Input((128,), dtype=tf.int32)\r\n bert_model = TFBertModel.from_pretrained('bert-base-uncased')\r\n embedding = bert_model(id, attention_mask=mask, token_type_ids=atn, output_hidden_states=True)[2]\r\n model = tf.keras.Model(inputs=[id, mask, atn], outputs=embedding)\r\n return model\r\nmodel = single_bert()\r\nmodel.summary()\r\n```",
"I have ran your code with minor edits:\r\n\r\nembedding = bert_model(id, attention_mask=mask, token_type_ids=atn, output_hidden_states=True)\r\n\r\nFor the variable embedding, it only output 2 element in a tuple\r\n(<tf.Tensor 'tf_bert_model/Identity:0' shape=(None, 128, 768) dtype=float32>,\r\n <tf.Tensor 'tf_bert_model/Identity_1:0' shape=(None, 768) dtype=float32>)\r\n\r\nSo I think you would want to extract the last embedding layer index -1 or just 1 instead of index 2 (non-existent). \r\nI have seen other people with index 2 (such as : [https://github.com/huggingface/transformers/issues/4048](url) ), but of course their tuple has length more than 2, you should investigate more in your code. I even tried to have your code structured like theirs \r\n\r\n` embedding = bert_model(input = [id, atn])`\r\n\r\nBut it gives out the same output, may be it's just because of the different pre-trained model itself that give out different tuple length, so try to investigate more \r\n\r\n",
"Hello!\r\n\r\nThere are three possibilities to use `TFBertModel` either with a list, a dic or positional argumentst:\r\n1) With list: you have to explicitely give a list of size 10 corresponding to `[input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict, training]`\r\n2) With a dict. This is the recommended way, as you can specify only the keys you need.\r\n3) With positional arguments (as proposed by @vuhluu)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,604 | 1,604 | NONE | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic (on Google Colab)
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
@jplu
## Information
1. When using TFBertModel, I tried to output the hidden states. First, I tried setting `config.output_hidden_states=True`, but it gave a "tuple index out of range" error. The code is:
```python
from transformers import TFBertModel, BertConfig
from tensorflow.keras.layers import Input
import tensorflow as tf

def single_bert():
    id = Input((128,), dtype=tf.int32)
    mask = Input((128,), dtype=tf.int32)
    atn = Input((128,), dtype=tf.int32)
    bert_config = BertConfig.from_pretrained('bert-base-uncased', output_hidden_states=True)
    bert_model = TFBertModel.from_pretrained('bert-base-uncased', config=bert_config)
    embedding = bert_model(id, attention_mask=mask, token_type_ids=atn)[2]
    model = tf.keras.Model(inputs=[id, mask, atn], outputs=embedding)
    return model

model = single_bert()
model.summary()
```
2. I also tried passing `output_hidden_states=True` at call time, but it still gave the same "tuple index out of range" error:
```python
from transformers import TFBertModel
from tensorflow.keras.layers import Input
import tensorflow as tf

def single_bert():
    id = Input((128,), dtype=tf.int32)
    mask = Input((128,), dtype=tf.int32)
    atn = Input((128,), dtype=tf.int32)
    bert_model = TFBertModel.from_pretrained('bert-base-uncased')
    embedding = bert_model(id, attention_mask=mask, token_type_ids=atn, output_hidden_states=True)[2]
    model = tf.keras.Model(inputs=[id, mask, atn], outputs=embedding)
    return model

model = single_bert()
model.summary()
```
## Expected behavior
I need to add some custom layers on top of the hidden states and fine-tune the whole model, so first I have to get the hidden states out of BERT.
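To illustrate what I am after once `[2]` actually returns the hidden states, something along these lines (purely a sketch; the layer choice and the Dense head are illustrative):
```python
# inside single_bert(), reusing id/mask/atn from the snippets above
hidden_states = bert_model(id, attention_mask=mask, token_type_ids=atn)[2]
stacked = tf.concat(hidden_states[-4:], axis=-1)   # concatenate the last four layers
cls_vector = stacked[:, 0, :]                      # representation at the [CLS] position
logits = tf.keras.layers.Dense(2, activation="softmax")(cls_vector)
model = tf.keras.Model(inputs=[id, mask, atn], outputs=logits)
```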
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6498/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/6498/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6497 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6497/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6497/comments | https://api.github.com/repos/huggingface/transformers/issues/6497/events | https://github.com/huggingface/transformers/issues/6497 | 679,512,068 | MDU6SXNzdWU2Nzk1MTIwNjg= | 6,497 | BERT and SpanBERT for Coreference Resolution | {
"login": "sayanb-7c6",
"id": 10998051,
"node_id": "MDQ6VXNlcjEwOTk4MDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/10998051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayanb-7c6",
"html_url": "https://github.com/sayanb-7c6",
"followers_url": "https://api.github.com/users/sayanb-7c6/followers",
"following_url": "https://api.github.com/users/sayanb-7c6/following{/other_user}",
"gists_url": "https://api.github.com/users/sayanb-7c6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayanb-7c6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayanb-7c6/subscriptions",
"organizations_url": "https://api.github.com/users/sayanb-7c6/orgs",
"repos_url": "https://api.github.com/users/sayanb-7c6/repos",
"events_url": "https://api.github.com/users/sayanb-7c6/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayanb-7c6/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"Commenting for visibility - is this available now ? can't seem to find it, would love to use this for a question-answering project i'm working on! ",
"I'd also like to see this model incorporated into the core list of supported models. I did note that you can download it from the community models here though: https://huggingface.co/SpanBERT/spanbert-base-cased",
"Are there any translations of the above repository (https://github.com/mandarjoshi90/coref) into the awesome HuggingFace API ? That would be very cool to test :D !",
"I would like to work on this...but will need some guidance"
] | 1,597 | 1,608 | null | NONE | null | # 🌟 New model addition
## Model description
This is a recent approach for co-reference resolution using BERT, implemented from the papers [BERT for Coreference Resolution: Baselines and Analysis](https://arxiv.org/abs/1908.09091) and [SpanBERT: Improving Pre-training by Representing and Predicting Spans](https://arxiv.org/abs/1907.10529), which is the current state of the art on OntoNotes (79.6 F1). It uses tensorflow 1.14.0.
The reason this is interesting is that it achieves strong improvements on the OntoNotes (+3.9 F1) and GAP (+11.5 F1) benchmarks. I also think it would be a nice addition to the huggingface library, which currently has only neuralcoref as a coreference resolution module.
## Open source status
* [x] the model implementation is available: (https://github.com/mandarjoshi90/coref)
* [x] the model weights are available: (https://github.com/facebookresearch/SpanBERT)
* [x] who are the authors: (@mandarjoshi90, @jkkummerfeld, @wenyudu)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6497/reactions",
"total_count": 19,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 5
} | https://api.github.com/repos/huggingface/transformers/issues/6497/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6496 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6496/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6496/comments | https://api.github.com/repos/huggingface/transformers/issues/6496/events | https://github.com/huggingface/transformers/pull/6496 | 679,504,491 | MDExOlB1bGxSZXF1ZXN0NDY4MjYzOTM3 | 6,496 | Add Model Card for electra-base-german-uncased | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6496?src=pr&el=h1) Report\n> Merging [#6496](https://codecov.io/gh/huggingface/transformers/pull/6496?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d9cbc0350deaa7e146a8c8dbb6ad4dc9bd6afc4f&el=desc) will **increase** coverage by `0.06%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6496?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6496 +/- ##\n==========================================\n+ Coverage 80.37% 80.44% +0.06% \n==========================================\n Files 156 156 \n Lines 28058 28058 \n==========================================\n+ Hits 22552 22571 +19 \n+ Misses 5506 5487 -19 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6496?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <ø> (ø)` | |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <ø> (+0.25%)` | :arrow_up: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <ø> (ø)` | |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <0.00%> (+29.31%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6496?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6496?src=pr&el=footer). Last update [24107c2...a9ce8ff](https://codecov.io/gh/huggingface/transformers/pull/6496?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | This adds the model card for electra-base-german-uncased.
Could you please also have a look into #6495 because something went wrong with the upload.
Thanks
Philip | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6496/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6496/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6496",
"html_url": "https://github.com/huggingface/transformers/pull/6496",
"diff_url": "https://github.com/huggingface/transformers/pull/6496.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6496.patch",
"merged_at": 1597633353000
} |
https://api.github.com/repos/huggingface/transformers/issues/6495 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6495/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6495/comments | https://api.github.com/repos/huggingface/transformers/issues/6495/events | https://github.com/huggingface/transformers/issues/6495 | 679,502,893 | MDU6SXNzdWU2Nzk1MDI4OTM= | 6,495 | Model Upload does not show up `german-nlp-group/electra-base-german-uncased` | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The files are there: https://cdn.huggingface.co/german-nlp-group/electra-base-german-uncased/tokenizer_config.json\r\n\r\nBut it simply does not show up...",
"Maybe related to #6478",
"Having the same problem here. Uploaded a new model (`salti/xlm-roberta-large-arabic_qa`) earlier this morning and it doesn't show up in the model hub, although I can download it and use it using the `from_pretrained` method.",
"@julien-c @Pierrci ",
"Maybe some sync service just needs a restart? :-)",
"Not a sync service, but a (uncaught) user error :)\r\n\r\nFixed: https://huggingface.co/german-nlp-group/electra-base-german-uncased#german-electra-uncased"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | Hi,
yesterday I uploaded a new model to `german-nlp-group/electra-base-german-uncased`:
```bash
$ transformers-cli s3 ls --organization german-nlp-group
Neither PyTorch nor TensorFlow >= 2.0 have been found.Models won't be available and only tokenizers, configurationand file/data utilities can be used.
Filename LastModified ETag Size
------------------------------------------------- ------------------------ ---------------------------------- ---------
electra-base-german-uncased/config.json 2020-08-14T17:13:01.000Z "10c75064301189f269b4898d4265cd61" 467
electra-base-german-uncased/pytorch_model.bin 2020-08-14T17:13:37.000Z "a621e1cb07af0a08aaa643af52f9f189" 444881731
electra-base-german-uncased/tokenizer_config.json 2020-08-14T17:43:33.000Z "7f6d7cb22bc6342b9c942da874754264" 86
electra-base-german-uncased/vocab.txt 2020-08-14T17:43:31.000Z "e9fa1e40c556fc02c62ebaa214a52dc4" 275501
```
But it does not show up. See here: https://huggingface.co/german-nlp-group
What happened here? Could you fix that?
Thanks
Philip | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6495/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6495/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6494 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6494/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6494/comments | https://api.github.com/repos/huggingface/transformers/issues/6494/events | https://github.com/huggingface/transformers/pull/6494 | 679,499,721 | MDExOlB1bGxSZXF1ZXN0NDY4MjYwMjAy | 6,494 | [testing] a new TestCasePlus subclass + get_auto_remove_tmp_dir() | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6494?src=pr&el=h1) Report\n> Merging [#6494](https://codecov.io/gh/huggingface/transformers/pull/6494?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/895ed8f4511ce9f2d1475e7f11c776dab87461d1&el=desc) will **increase** coverage by `0.17%`.\n> The diff coverage is `31.81%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6494?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6494 +/- ##\n==========================================\n+ Coverage 80.38% 80.55% +0.17% \n==========================================\n Files 156 156 \n Lines 28058 28079 +21 \n==========================================\n+ Hits 22554 22619 +65 \n+ Misses 5504 5460 -44 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6494?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6494/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <0.00%> (ø)` | |\n| [src/transformers/testing\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6494/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `48.80% <33.33%> (-3.13%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6494/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6494/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.69% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6494/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6494/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <0.00%> (+29.31%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6494?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6494?src=pr&el=footer). Last update [24107c2...f695e5f](https://codecov.io/gh/huggingface/transformers/pull/6494?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | I present to you a new `TestCasePlus` class, which is an extension of `testutil.TestCase`. Currently it only has one extra feature, but I'm sure there will be more in the future, hence the more generic name.
So the intention was to provide:
- an easy way to create unique temp dirs in test modules and get them automatically removed at the end of the test, regardless of whether a test succeeded or not.
- an easy way not to remove the temp dir for debug purposes
- provide a hardcoded temp dir for debug purposes (and make it secure, so that `rm -r /something` can't happen)
- optionally, clean up the temp dir right away if a hardcoded path is provided
Some ideas were discussed here: https://github.com/huggingface/transformers/issues/6471
So this PR implements this feature and uses it in 2 test modules that currently don't have a complete solution, removing a lot of code along the way.
Usage:
Feature 1: Flexible auto-removable temp dirs which are guaranteed to get removed at the end of test.
In all the following scenarios the temp dir will be auto-removed at the end of the test, unless `after=False`.
1. create a unique temp dir and delete it at the end, `tmp_dir` will contain the path to the created temp dir
```
def test_whatever(self):
tmp_dir = self.get_auto_remove_tmp_dir()
```
2. create a temp dir of my choice and delete it at the end - useful for debug when you want to monitor a specific directory
```
def test_whatever(self):
tmp_dir = self.get_auto_remove_tmp_dir(tmp_dir="./tmp/run/test")
```
or just:
```
tmp_dir = self.get_auto_remove_tmp_dir("./tmp/run/test")
```
3. create a temp dir of my choice and do not delete it at the end - useful for when you want to look at the temp results
```
def test_whatever(self):
tmp_dir = self.get_auto_remove_tmp_dir(tmp_dir="./tmp/run/test", after=False)
```
or just:
```
tmp_dir = self.get_auto_remove_tmp_dir("./tmp/run/test", False)
```
4. create a temp dir of my choice and ensure it is deleted right away - useful for when you disabled deletion in the previous test run and want to make sure that the tmp dir is empty before the new test is run
```
def test_whatever(self):
tmp_dir = self.get_auto_remove_tmp_dir(tmp_dir="./tmp/run/test", before=True)
```
Note 1: In order to run the equivalent of `rm -r` safely, only subdirs of the project repository checkout are allowed if an explicit `tmp_dir` is used, so that no `/tmp` or similarly important part of the filesystem gets nuked by mistake, i.e. please always pass paths that start with `./`
Note 2: Each test can register multiple temp dirs and they all will get auto-removed, unless requested otherwise.
So, as you can see from the 4 main scenarios, during debugging one only needs to tweak a single line of code.
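For illustration, here is a minimal sketch of how such a helper could be implemented (an assumption-laden sketch, not the code merged in this PR; the `./` safety check and the cleanup registration via `unittest`'s `addCleanup` are my own choices):
```python
import os
import shutil
import tempfile
import unittest


class TestCasePlus(unittest.TestCase):
    def get_auto_remove_tmp_dir(self, tmp_dir=None, before=False, after=True):
        if tmp_dir is not None:
            # safety check: only allow subdirs of the current checkout,
            # so a typo can never trigger `rm -r` on an important path
            if not tmp_dir.startswith("./"):
                raise ValueError(f"`tmp_dir` must start with `./`, got: {tmp_dir}")
            if before and os.path.isdir(tmp_dir):
                shutil.rmtree(tmp_dir)  # start from an empty dir
            os.makedirs(tmp_dir, exist_ok=True)
        else:
            # unique, race-free temp dir
            tmp_dir = tempfile.mkdtemp()
        if after:
            # runs at teardown whether the test passed or failed
            self.addCleanup(shutil.rmtree, tmp_dir, ignore_errors=True)
        return tmp_dir
```
Registering the removal with `addCleanup` makes it run at teardown whether the test passed or failed, which matches the behavior described above.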
There is only one small remaining deficiency: since the temp dir is pre-created, the tests will not be able to test things like `--output_dir` creation in examples - i.e. the dir will already be there. So if needed, the code can be extended with a flag to not create the dir, but only register it for deletion. It would be tricky, though, for when `tmp_dir` is not passed explicitly and we rely on `tempfile` - I guess it could create and immediately delete the temp dir and save and reuse its path - I don't know whether there might be a race condition here. But chances are that this is not really needed.
Thank you for reading. Ideas and suggestions for improvements are welcome.
@JetRunner, @LysandreJik, @sshleifer, @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6494/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6494/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6494",
"html_url": "https://github.com/huggingface/transformers/pull/6494",
"diff_url": "https://github.com/huggingface/transformers/pull/6494.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6494.patch",
"merged_at": 1597666339000
} |
https://api.github.com/repos/huggingface/transformers/issues/6493 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6493/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6493/comments | https://api.github.com/repos/huggingface/transformers/issues/6493/events | https://github.com/huggingface/transformers/pull/6493 | 679,353,219 | MDExOlB1bGxSZXF1ZXN0NDY4MTQzNjU5 | 6,493 | Fixes paths with spaces in seq2seq example | {
"login": "KylePiira",
"id": 17210104,
"node_id": "MDQ6VXNlcjE3MjEwMTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/17210104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KylePiira",
"html_url": "https://github.com/KylePiira",
"followers_url": "https://api.github.com/users/KylePiira/followers",
"following_url": "https://api.github.com/users/KylePiira/following{/other_user}",
"gists_url": "https://api.github.com/users/KylePiira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KylePiira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KylePiira/subscriptions",
"organizations_url": "https://api.github.com/users/KylePiira/orgs",
"repos_url": "https://api.github.com/users/KylePiira/repos",
"events_url": "https://api.github.com/users/KylePiira/events{/privacy}",
"received_events_url": "https://api.github.com/users/KylePiira/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6493?src=pr&el=h1) Report\n> Merging [#6493](https://codecov.io/gh/huggingface/transformers/pull/6493?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/895ed8f4511ce9f2d1475e7f11c776dab87461d1&el=desc) will **increase** coverage by `0.21%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6493?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6493 +/- ##\n==========================================\n+ Coverage 80.38% 80.59% +0.21% \n==========================================\n Files 156 156 \n Lines 28058 28058 \n==========================================\n+ Hits 22554 22613 +59 \n+ Misses 5504 5445 -59 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6493?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.69% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <0.00%> (+29.31%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6493?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6493?src=pr&el=footer). Last update [24107c2...057a225](https://codecov.io/gh/huggingface/transformers/pull/6493?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks!"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | Fixes https://github.com/huggingface/transformers/issues/6477 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6493/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6493",
"html_url": "https://github.com/huggingface/transformers/pull/6493",
"diff_url": "https://github.com/huggingface/transformers/pull/6493.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6493.patch",
"merged_at": 1597599399000
} |
https://api.github.com/repos/huggingface/transformers/issues/6492 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6492/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6492/comments | https://api.github.com/repos/huggingface/transformers/issues/6492/events | https://github.com/huggingface/transformers/pull/6492 | 679,325,543 | MDExOlB1bGxSZXF1ZXN0NDY4MTIxMjA4 | 6,492 | Fixed label datatype for STS-B | {
"login": "amodaresi",
"id": 15783079,
"node_id": "MDQ6VXNlcjE1NzgzMDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/15783079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amodaresi",
"html_url": "https://github.com/amodaresi",
"followers_url": "https://api.github.com/users/amodaresi/followers",
"following_url": "https://api.github.com/users/amodaresi/following{/other_user}",
"gists_url": "https://api.github.com/users/amodaresi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amodaresi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amodaresi/subscriptions",
"organizations_url": "https://api.github.com/users/amodaresi/orgs",
"repos_url": "https://api.github.com/users/amodaresi/repos",
"events_url": "https://api.github.com/users/amodaresi/events{/privacy}",
"received_events_url": "https://api.github.com/users/amodaresi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Also, the CI wants you to run `make style` :)",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6492?src=pr&el=h1) Report\n> Merging [#6492](https://codecov.io/gh/huggingface/transformers/pull/6492?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d9cbc0350deaa7e146a8c8dbb6ad4dc9bd6afc4f&el=desc) will **decrease** coverage by `1.18%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6492?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6492 +/- ##\n==========================================\n- Coverage 80.37% 79.19% -1.19% \n==========================================\n Files 156 156 \n Lines 28058 28059 +1 \n==========================================\n- Hits 22552 22221 -331 \n- Misses 5506 5838 +332 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6492?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.91% <ø> (-0.69%)` | :arrow_down: |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `48.91% <0.00%> (-0.18%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.20% <ø> (-3.26%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <ø> (ø)` | |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `66.66% <0.00%> (-32.50%)` | :arrow_down: |\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `87.50% <0.00%> (-9.73%)` | :arrow_down: |\n| ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6492/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6492?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6492?src=pr&el=footer). Last update [24107c2...29e9a98](https://codecov.io/gh/huggingface/transformers/pull/6492?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | The STS Benchmark has decimal labels instead of integers.
But inside the `glue_convert_examples_to_features` function, when you're using TensorFlow datasets, the label is cast to an integer in the returned TF data generator.
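For illustration, a minimal sketch of such a task-dependent cast (hypothetical names: the `label_dtype` helper and the toy `features` list are not from the PR):
```python
import tensorflow as tf

def label_dtype(task: str) -> tf.DType:
    # STS-B is a regression task with decimal labels; the other GLUE
    # tasks are classification tasks with integer labels
    return tf.float32 if task == "sts-b" else tf.int64

# toy stand-in for the (input_ids, label) pairs a converter would yield
features = [([101, 2023, 102], 3.8), ([101, 2003, 102], 1.2)]

dataset = tf.data.Dataset.from_generator(
    lambda: iter(features),
    output_types=(tf.int32, label_dtype("sts-b")),
)
```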
With this simple edit, the function chooses the cast datatype according to the selected task. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6492/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6492/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6492",
"html_url": "https://github.com/huggingface/transformers/pull/6492",
"diff_url": "https://github.com/huggingface/transformers/pull/6492.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6492.patch",
"merged_at": 1597752580000
} |
https://api.github.com/repos/huggingface/transformers/issues/6491 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6491/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6491/comments | https://api.github.com/repos/huggingface/transformers/issues/6491/events | https://github.com/huggingface/transformers/issues/6491 | 679,311,440 | MDU6SXNzdWU2NzkzMTE0NDA= | 6,491 | Whole Word Masking Implementation | {
"login": "luffycodes",
"id": 22951144,
"node_id": "MDQ6VXNlcjIyOTUxMTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/22951144?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luffycodes",
"html_url": "https://github.com/luffycodes",
"followers_url": "https://api.github.com/users/luffycodes/followers",
"following_url": "https://api.github.com/users/luffycodes/following{/other_user}",
"gists_url": "https://api.github.com/users/luffycodes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luffycodes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luffycodes/subscriptions",
"organizations_url": "https://api.github.com/users/luffycodes/orgs",
"repos_url": "https://api.github.com/users/luffycodes/repos",
"events_url": "https://api.github.com/users/luffycodes/events{/privacy}",
"received_events_url": "https://api.github.com/users/luffycodes/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Would be a great improvement :+1: \r\n\r\nHere's btw. the commit that introduced WWM in BERT:\r\n\r\nhttps://github.com/google-research/bert/commit/0fce551b55caabcfba52c61e18f34b541aef186a",
"BERT using wordpiece tokenizer, however, roberta uses byte-piece tokenizer. I think the implementations may be slightly different, if not starkly different (due to different start token indicators).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,603 | 1,603 | NONE | null | # 🚀 Feature request
Currently, training models such as RoBERTa from scratch does not support whole word masking (e.g., in the language modeling examples). Only pre-trained models are available. Is it possible to include whole word masking in the input layers?
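For illustration, grouping BERT-style wordpieces back into whole words before sampling mask positions could look roughly like this (a hedged sketch, not an existing API; the `##` continuation-prefix convention is assumed, and as noted in the comments on this issue, a byte-level BPE tokenizer like RoBERTa's would need a different word-boundary rule):
```python
import random

def whole_word_mask_indices(tokens, mask_prob=0.15):
    # group each "##" continuation wordpiece with the piece that starts the word
    words, current = [], []
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and current:
            current.append(i)
        else:
            if current:
                words.append(current)
            current = [i]
    if current:
        words.append(current)
    # decide masking per *word*, then mask all of its pieces together
    return [i for word in words if random.random() < mask_prob for i in word]

print(whole_word_mask_indices(["the", "phil", "##har", "##monic", "played"]))
```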
## Motivation
Whole word masking leads to performance boosts. So, adding this feature would be useful if someone wants to train the models from scratch. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6491/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6491/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6490 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6490/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6490/comments | https://api.github.com/repos/huggingface/transformers/issues/6490/events | https://github.com/huggingface/transformers/pull/6490 | 679,280,076 | MDExOlB1bGxSZXF1ZXN0NDY4MDgzNjAw | 6,490 | [Doc] add more MBart and other doc | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger do you think it would be a good idea to add more fine-tuning info for MBart, since it requires input processed in a different way than other models as it is multilingual model ?",
"@sshleifer ,@sgugger added DPR in readme. ",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6490?src=pr&el=h1) Report\n> Merging [#6490](https://codecov.io/gh/huggingface/transformers/pull/6490?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/895ed8f4511ce9f2d1475e7f11c776dab87461d1&el=desc) will **decrease** coverage by `0.46%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6490?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6490 +/- ##\n==========================================\n- Coverage 80.38% 79.91% -0.47% \n==========================================\n Files 156 156 \n Lines 28058 28058 \n==========================================\n- Hits 22554 22423 -131 \n- Misses 5504 5635 +131 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6490?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6490/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYmFydC5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6490/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6490/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6490/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6490/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.58% <0.00%> (-7.19%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6490/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6490/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6490/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6490/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6490/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.96% <0.00%> (-1.51%)` | :arrow_down: |\n| ... 
and [9 more](https://codecov.io/gh/huggingface/transformers/pull/6490/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6490?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6490?src=pr&el=footer). Last update [895ed8f...e1c522b](https://codecov.io/gh/huggingface/transformers/pull/6490?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great! Thanks for the PR."
] | 1,597 | 1,597 | 1,597 | MEMBER | null | This PR
1. adds an example for MBart
2. adds MBart to the pretrained models list and the README (Pegasus was missing from the README, so that was added as well).
@sshleifer , @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6490/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6490",
"html_url": "https://github.com/huggingface/transformers/pull/6490",
"diff_url": "https://github.com/huggingface/transformers/pull/6490.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6490.patch",
"merged_at": 1597681827000
} |
https://api.github.com/repos/huggingface/transformers/issues/6489 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6489/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6489/comments | https://api.github.com/repos/huggingface/transformers/issues/6489/events | https://github.com/huggingface/transformers/pull/6489 | 679,278,581 | MDExOlB1bGxSZXF1ZXN0NDY4MDgyNDU2 | 6,489 | GitHub Template: Tag @stefan-it for token classification related bug reports | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6489?src=pr&el=h1) Report\n> Merging [#6489](https://codecov.io/gh/huggingface/transformers/pull/6489?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fe61c05b85f98846779bb490a747875e7d54ec2a&el=desc) will **decrease** coverage by `1.47%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6489?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6489 +/- ##\n==========================================\n- Coverage 80.59% 79.11% -1.48% \n==========================================\n Files 156 156 \n Lines 28058 28058 \n==========================================\n- Hits 22612 22198 -414 \n- Misses 5446 5860 +414 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6489?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6489/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6489/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6489/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `48.80% <0.00%> (-46.43%)` | :arrow_down: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6489/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `56.25% <0.00%> (-39.07%)` | :arrow_down: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6489/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6489/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6489/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `90.24% <0.00%> (-3.53%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6489/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.99% <0.00%> (-1.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6489/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6489/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |\n| ... 
and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6489/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6489?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6489?src=pr&el=footer). Last update [fe61c05...26634da](https://codecov.io/gh/huggingface/transformers/pull/6489?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@julien-c 🤔"
] | 1,597 | 1,597 | 1,597 | COLLABORATOR | null | Hi,
this PR adds me as the person to tag for all token classification related bug reports :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6489/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/6489/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6489",
"html_url": "https://github.com/huggingface/transformers/pull/6489",
"diff_url": "https://github.com/huggingface/transformers/pull/6489.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6489.patch",
"merged_at": 1597754334000
} |
https://api.github.com/repos/huggingface/transformers/issues/6488 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6488/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6488/comments | https://api.github.com/repos/huggingface/transformers/issues/6488/events | https://github.com/huggingface/transformers/pull/6488 | 679,267,658 | MDExOlB1bGxSZXF1ZXN0NDY4MDczODc0 | 6,488 | Fix TPU Convergence bug introduced by PR#6151 | {
"login": "jysohn23",
"id": 19496130,
"node_id": "MDQ6VXNlcjE5NDk2MTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/19496130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jysohn23",
"html_url": "https://github.com/jysohn23",
"followers_url": "https://api.github.com/users/jysohn23/followers",
"following_url": "https://api.github.com/users/jysohn23/following{/other_user}",
"gists_url": "https://api.github.com/users/jysohn23/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jysohn23/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jysohn23/subscriptions",
"organizations_url": "https://api.github.com/users/jysohn23/orgs",
"repos_url": "https://api.github.com/users/jysohn23/repos",
"events_url": "https://api.github.com/users/jysohn23/events{/privacy}",
"received_events_url": "https://api.github.com/users/jysohn23/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,597 | 1,604 | 1,597 | COLLABORATOR | null | Currently with the bug introduced we're taking two optimizer steps per
batch: one global one, where `xm.optimizer_step` injects a CRS between
all cores in training, and one without. This has been affecting training
accuracy (for example, XLNet GLUE on MNLI is not converging, etc.). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6488/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6488/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6488",
"html_url": "https://github.com/huggingface/transformers/pull/6488",
"diff_url": "https://github.com/huggingface/transformers/pull/6488.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6488.patch",
"merged_at": 1597423658000
} |
https://api.github.com/repos/huggingface/transformers/issues/6487 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6487/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6487/comments | https://api.github.com/repos/huggingface/transformers/issues/6487/events | https://github.com/huggingface/transformers/issues/6487 | 679,253,414 | MDU6SXNzdWU2NzkyNTM0MTQ= | 6,487 | about encoder and decoder input when using seq2seq model | {
"login": "jungwhank",
"id": 53588015,
"node_id": "MDQ6VXNlcjUzNTg4MDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/53588015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jungwhank",
"html_url": "https://github.com/jungwhank",
"followers_url": "https://api.github.com/users/jungwhank/followers",
"following_url": "https://api.github.com/users/jungwhank/following{/other_user}",
"gists_url": "https://api.github.com/users/jungwhank/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jungwhank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jungwhank/subscriptions",
"organizations_url": "https://api.github.com/users/jungwhank/orgs",
"repos_url": "https://api.github.com/users/jungwhank/repos",
"events_url": "https://api.github.com/users/jungwhank/events{/privacy}",
"received_events_url": "https://api.github.com/users/jungwhank/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @jungwhank \r\nfor Bert2Bert, `pad_token` is used as `decoder_start_token_id` and the `input_ids` and `labels` begin with `cls_token_id` (`[CLS]` for bert ) and end with `sep_token_id` (`[SEP]` for bert).\r\n\r\nFor training all you need to do is \r\n```python3\r\ninput_text = \"some input text\"\r\ntarget_text = \"some target text\"\r\ninput_ids = tokenizer(input_text, add_special_tokens=True, return_tensors=\"pt\")[\"input_ids\"]\r\ntarget_ids = tokenizer(target_text, add_special_tokens=True, return_tensors=\"pt\")[\"input_ids\"]\r\nmodel(input_ids=input_ids, decoder_input_ids=target_ids, labels=target_ids)\r\n```\r\nThe EncoderDecoderModel class takes care adding `pad_token` to the `decoder_input_ids`.\r\n\r\nfor inference \r\n```python3\r\nmodel.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id)\r\n```\r\n\r\nHope this clarifies your question. Also pinging @patrickvonplaten for more info.",
"Hi, @patil-suraj \r\nThanks for answering.\r\nis it same for BartForConditionalGeneration?\r\nActually, I wanna do kind of translation task and is it same `decoder_inputs_ids` and `labels`?",
"@patil-suraj's answer is correct! For the `EncoderDecoder` framework, one should set `model.config.decoder_start_token_id` to the BOS token (which in BERT's case does not exist so that we simply use CLS token).\r\n\r\nBart is a bit different:\r\n- if you want to generate from a pretrained model, all you have to do is: `model.generate(input_ids)`. `input_ids` always refer to the encoder input tokens for Seq2Seq models and it depends on you if you want to add special tokens or not - this is not done automatically in the generate function.\r\n- if you want to have more control and just do one forward pass, you should define both `input_ids` and `decoder_input_ids` and in this case the `decoder_input_ids` should start with Bart's `decoder_start_token_id` `model.config.decoder_start_token_id`:\r\n\r\n`model(input_ids, decoder_input_ids=decoder_input_ids)`",
"@patrickvonplaten \r\nthanks for answering!\r\nBut I have a question that Is there `decoder_start_token_id` in BartConfig?\r\nShould I just make my `decoder_input_ids` start with Bart's `model.config.bos_token_id` or set `model.config.decoder_start_token_id` = token_id?",
"I think I solved the problem. Thanks\r\n",
"@jungwhank Great ! Consider joining the awesome[ HF forum ](https://discuss.huggingface.co/), if you haven't already :) It's the best place to ask such questions. The whole community is there to help you and your questions will also help the community."
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
Hello, I'm trying to use a seq2seq model (such as BART or EncoderDecoderModel (bert2bert)),
and I'm a little bit confused about `input_ids`, `decoder_input_ids`, and `tgt` in the model inputs.
As far as I know, in a seq2seq model the decoder input should have a special token (\<s> or similar) before the sentence and the target should have a special token (\</s> or similar) after the sentence. For example, `decoder_input = <s> A B C D E`, `target = A B C D E</s>`.
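(For reference, this shift-right convention for teacher forcing can be sketched as below; this is an illustration rather than library code, and `decoder_start_token_id` is a placeholder:)
```python
import torch

def shift_tokens_right(labels: torch.Tensor, decoder_start_token_id: int) -> torch.Tensor:
    # labels:            A B C D E </s>
    # decoder_input_ids: <start> A B C D E
    decoder_input_ids = labels.new_zeros(labels.shape)
    decoder_input_ids[:, 1:] = labels[:, :-1].clone()
    decoder_input_ids[:, 0] = decoder_start_token_id
    return decoder_input_ids
```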
So my question is:
1. Should I put these special tokens in `decoder_input_ids` and `tgt_ids` when using a seq2seq model in this library?
or can I just pass `decoder_input_ids` and `tgt_ids` without any special token ids?
2. Also, should I set `add_special_tokens=True` for the encoder `input_ids` and put a \</s> or \<eos> token after the target ids?
For example, `input = a b c d e`, `decoder_input = <s> A B C D E`, `target = A B C D E</s>`
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6487/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6486 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6486/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6486/comments | https://api.github.com/repos/huggingface/transformers/issues/6486/events | https://github.com/huggingface/transformers/issues/6486 | 679,154,589 | MDU6SXNzdWU2NzkxNTQ1ODk= | 6,486 | from_pretrained() never works | {
"login": "sadaszewski",
"id": 1378525,
"node_id": "MDQ6VXNlcjEzNzg1MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1378525?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sadaszewski",
"html_url": "https://github.com/sadaszewski",
"followers_url": "https://api.github.com/users/sadaszewski/followers",
"following_url": "https://api.github.com/users/sadaszewski/following{/other_user}",
"gists_url": "https://api.github.com/users/sadaszewski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sadaszewski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sadaszewski/subscriptions",
"organizations_url": "https://api.github.com/users/sadaszewski/orgs",
"repos_url": "https://api.github.com/users/sadaszewski/repos",
"events_url": "https://api.github.com/users/sadaszewski/events{/privacy}",
"received_events_url": "https://api.github.com/users/sadaszewski/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! This is probably an error with your network. Are you behind a firewall? Does it work on any other machine on the same network?",
"Many thanks for a quick response. It is possible. It works on another machine on another network. Is there any way to debug what it tries to download and why it fails? Any idea why the downloads work in pytorch_transformers but not in transformers?",
"Hi @sadaszewski ,\r\n\r\nI think you can use the following script for just making a get request to the xlnet configuration file:\r\n\r\n```python\r\nimport requests\r\n\r\nr = requests.get(\"https://s3.amazonaws.com/models.huggingface.co/bert/xlnet-base-cased-config.json\")\r\n\r\nprint(r.text)\r\n```\r\n\r\nwould be interesting to see the response then :)",
"Well but doesn't it seem like that's the only file it actually **manages** to get? As you can see in the printout it shows the config of the model... It fails loading weights I guess. How do I check those?",
"Oh, I can remember a recent location/CDN change. So the json configuration is loaded from the s3 link, but the model weight is located at `https://cdn.huggingface.co/xlnet-large-cased-pytorch_model.bin` -> could you check if you have access to this file?",
"And in `pytorch-transformers` the model was downloaded from:\r\n\r\n```bash\r\nhttps://s3.amazonaws.com/models.huggingface.co/bert/xlnet-base-cased-pytorch_model.bin\r\n```",
"I can confirm that it was a problem specific to my setup with trusted certificate for cdn.huggingface.co. Now fixed by specifying REQUESTS_CA_BUNDLE. Nevertheless it was nowhere to be found in the exception thrown by transformers that ultimately it has been caused by requests TLS handshake error. It would be very helpful if you considered adding exception chaining - https://www.python.org/dev/peps/pep-3134/ . Thanks for all your speedy replies!"
] | 1,597 | 1,597 | 1,597 | NONE | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux
- Python version: 3.6
- PyTorch version (GPU?): 1.5.1 (yes)
- Tensorflow version (GPU?):
- Using GPU in script?: not relevant
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik , @TevenLeScao , @mfuntowicz
## Information
Model I am using (Bert, XLNet ...): any
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. `import transformers as pt`
2. `pt.AutoModelForSequenceClassification.from_pretrained(<any_valid_model_id>)`
3. Observe the error below
```python
>>> pt.AutoModelForSequenceClassification.from_pretrained('xlnet-base-cased')
I0814 15:00:47.832349 46912496391360 configuration_utils.py:264] loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/xlnet-base-cased-config.json from cache at /xxx/torch/transformers/c9cc6e53904f7f3679a31ec4af244f4419e25ebc8e71ebf8c558a31cbcf07fc8.69e5e35e0b798cab5e473f253752f8bf4d280ee37682281a23eed80f6e2d09c6
I0814 15:00:47.832984 46912496391360 configuration_utils.py:300] Model config XLNetConfig {
"architectures": [
"XLNetLMHeadModel"
],
"attn_type": "bi",
"bi_data": false,
"bos_token_id": 1,
"clamp_len": -1,
"d_head": 64,
"d_inner": 3072,
"d_model": 768,
"dropout": 0.1,
"end_n_top": 5,
"eos_token_id": 2,
"ff_activation": "gelu",
"initializer_range": 0.02,
"layer_norm_eps": 1e-12,
"mem_len": null,
"model_type": "xlnet",
"n_head": 12,
"n_layer": 12,
"pad_token_id": 5,
"reuse_len": null,
"same_length": false,
"start_n_top": 5,
"summary_activation": "tanh",
"summary_last_dropout": 0.1,
"summary_type": "last",
"summary_use_proj": true,
"task_specific_params": {
"text-generation": {
"do_sample": true,
"max_length": 250
}
},
"untie_r": true,
"vocab_size": 32000
}
Traceback (most recent call last):
File "/xxx/.conda/envs/xxx/lib/python3.6/site-packages/transformers/modeling_utils.py", line 655, in from_pretrained
raise EnvironmentError
OSError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/xxx/.conda/envs/xxx/lib/python3.6/site-packages/transformers/modeling_auto.py", line 1363, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/xxx/.conda/envs/xxx/lib/python3.6/site-packages/transformers/modeling_utils.py", line 662, in from_pretrained
raise EnvironmentError(msg)
OSError: Can't load weights for 'xlnet-base-cased'. Make sure that:
- 'xlnet-base-cased' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'xlnet-base-cased' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.
```
## Expected behavior
A pretrained model should be loaded. This worked (and still works) great in `pytorch_transformers`. I switched to `transformers` because XLNet-based models stopped working in `pytorch_transformers`. But, surprise surprise, in `transformers` no model whatsoever works for me. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6486/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6485 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6485/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6485/comments | https://api.github.com/repos/huggingface/transformers/issues/6485/events | https://github.com/huggingface/transformers/pull/6485 | 679,128,141 | MDExOlB1bGxSZXF1ZXN0NDY3OTYwMDY3 | 6,485 | Add tests/test_tokenization_reformer.py | {
"login": "D-Roberts",
"id": 4791217,
"node_id": "MDQ6VXNlcjQ3OTEyMTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4791217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/D-Roberts",
"html_url": "https://github.com/D-Roberts",
"followers_url": "https://api.github.com/users/D-Roberts/followers",
"following_url": "https://api.github.com/users/D-Roberts/following{/other_user}",
"gists_url": "https://api.github.com/users/D-Roberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/D-Roberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/D-Roberts/subscriptions",
"organizations_url": "https://api.github.com/users/D-Roberts/orgs",
"repos_url": "https://api.github.com/users/D-Roberts/repos",
"events_url": "https://api.github.com/users/D-Roberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/D-Roberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6485?src=pr&el=h1) Report\n> Merging [#6485](https://codecov.io/gh/huggingface/transformers/pull/6485?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9a8c168f56fe3c0e21d554a577ac03beb004ef89&el=desc) will **increase** coverage by `0.58%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6485?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6485 +/- ##\n==========================================\n+ Coverage 80.03% 80.61% +0.58% \n==========================================\n Files 156 156 \n Lines 28058 28058 \n==========================================\n+ Hits 22456 22620 +164 \n+ Misses 5602 5438 -164 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6485?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6485/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6485/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6485/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.69% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6485/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6485/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+0.68%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6485/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.97%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6485/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6485/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: |\n| [src/transformers/tokenization\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6485/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `95.00% <0.00%> (+13.33%)` | :arrow_up: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6485/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `95.31% <0.00%> (+39.06%)` | :arrow_up: |\n| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6485/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6485?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6485?src=pr&el=footer). Last update [b5ba758...66f97dd](https://codecov.io/gh/huggingface/transformers/pull/6485?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,608 | 1,597 | CONTRIBUTOR | null | As titled. Attends to issue [#6333](https://github.com/huggingface/transformers/issues/6333). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6485/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6485/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6485",
"html_url": "https://github.com/huggingface/transformers/pull/6485",
"diff_url": "https://github.com/huggingface/transformers/pull/6485.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6485.patch",
"merged_at": 1597942724000
} |
https://api.github.com/repos/huggingface/transformers/issues/6484 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6484/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6484/comments | https://api.github.com/repos/huggingface/transformers/issues/6484/events | https://github.com/huggingface/transformers/issues/6484 | 679,127,783 | MDU6SXNzdWU2NzkxMjc3ODM= | 6,484 | Assertion error when training a new RoBERTa from scratch | {
"login": "erip",
"id": 2348806,
"node_id": "MDQ6VXNlcjIzNDg4MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2348806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erip",
"html_url": "https://github.com/erip",
"followers_url": "https://api.github.com/users/erip/followers",
"following_url": "https://api.github.com/users/erip/following{/other_user}",
"gists_url": "https://api.github.com/users/erip/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erip/subscriptions",
"organizations_url": "https://api.github.com/users/erip/orgs",
"repos_url": "https://api.github.com/users/erip/repos",
"events_url": "https://api.github.com/users/erip/events{/privacy}",
"received_events_url": "https://api.github.com/users/erip/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This may be due to an embedding dimension issue, but may also be due to a CUDA OOM error earlier that has been misreported in my experience. To verify that it is an embedding dimension issue, can you try using the `--no_cuda` flag?",
"Sure - let me give it a shot. The one issue is that the data is large so featurizing the inputs takes a long time (and isn't cached), so it may take several hours to report back.",
"@LysandreJik I confirmed it was indeed an embedding issue:\r\n\r\n```log\r\nTraceback (most recent call last):\r\n File \"run_language_modeling.py\", line 281, in <module>\r\n main()\r\n File \"run_language_modeling.py\", line 245, in main\r\n trainer.train(model_path=model_path)\r\n File \"/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/trainer.py\", line 499, in train\r\n tr_loss += self._training_step(model, inputs, optimizer)\r\n File \"/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/trainer.py\", line 622, in _training_step\r\n outputs = model(**inputs)\r\n File \"/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 722, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/modeling_roberta.py\", line 239, in forward\r\n output_hidden_states=output_hidden_states,\r\n File \"/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 722, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/modeling_bert.py\", line 753, in forward\r\n input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds\r\n File \"/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 722, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/modeling_roberta.py\", line 68, in forward\r\n input_ids, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds\r\n File \"/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/modeling_bert.py\", line 179, in forward\r\n position_embeddings = self.position_embeddings(position_ids)\r\n File \"/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 722, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/torch/nn/modules/sparse.py\", line 126, in forward\r\n self.norm_type, self.scale_grad_by_freq, self.sparse)\r\n File \"/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/torch/nn/functional.py\", line 1814, in embedding\r\n return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\nIndexError: index out of range in self\r\n```\r\n\r\nWhat's not immediately clear is _why_ it's happening. My understanding is the process goes...\r\n\r\n1. Load the tokenizer.\r\n2. Encode each line (forcing the indices to necessarily fall in the range of |vocab|)\r\n3. Train",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,603 | 1,603 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-3.10.0-862.14.4.el7.x86_64-x86_64-with-centos-7.5.1804-Core
- Python version: 3.6.10
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No.
### Who can help
Maybe @LysandreJik ? :-)
## Information
Model I am using: RoBERTa
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
The dataset is a simple line-by-line dataset.
## To reproduce
Steps to reproduce the behavior:
1. Train a tokenizer according to [this](https://huggingface.co/blog/how-to-train#2-train-a-tokenizer)
2. Split line-by-line dataset into train and eval
3. Run below:
```sh
python run_language_modeling.py \
--output_dir $MODEL_DIR/myBERT-small-v1 \
--model_type roberta \
--mlm \
--config_name $MODEL_DIR/myBERT-small \
--tokenizer_name $MODEL_DIR/myBERT-small \
--do_train \
--do_eval \
--per_device_train_batch_size 8 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--save_total_limit 2 \
--save_steps 2000 \
--per_gpu_train_batch_size 16 \
--evaluate_during_training \
--line_by_line \
--train_data_file $HOME/myBERT/train.txt \
--eval_data_file $HOME/myBERT/eval.txt \
--seed 42
```
```log
08/13/2020 14:23:20 - INFO - transformers.configuration_utils - loading configuration file /home/erippeth/myBERT/model/myBERT-small/config.json
08/13/2020 14:23:20 - INFO - transformers.configuration_utils - Model config RobertaConfig {
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"type_vocab_size": 1,
"vocab_size": 52000
}
08/13/2020 14:23:20 - INFO - transformers.tokenization_utils_base - Model name '/home/erippeth/myBERT/model/myBERT-small' not found in model shortcut name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). Assuming '/home/erippeth/myBERT/model/myBERT-small' is a path, a model identifier, or url to a directory containing tokenizer files.
08/13/2020 14:23:20 - INFO - transformers.tokenization_utils_base - Didn't find file /home/erippeth/myBERT/model/myBERT-small/added_tokens.json. We won't load it.
08/13/2020 14:23:20 - INFO - transformers.tokenization_utils_base - Didn't find file /home/erippeth/myBERT/model/myBERT-small/special_tokens_map.json. We won't load it.
08/13/2020 14:23:20 - INFO - transformers.tokenization_utils_base - Didn't find file /home/erippeth/myBERT/model/myBERT-small/tokenizer_config.json. We won't load it.
08/13/2020 14:23:20 - INFO - transformers.tokenization_utils_base - Didn't find file /home/erippeth/myBERT/model/myBERT-small/tokenizer.json. We won't load it.
08/13/2020 14:23:20 - INFO - transformers.tokenization_utils_base - loading file /home/erippeth/myBERT/model/myBERT-small/vocab.json
08/13/2020 14:23:20 - INFO - transformers.tokenization_utils_base - loading file /home/erippeth/myBERT/model/myBERT-small/merges.txt
08/13/2020 14:23:20 - INFO - transformers.tokenization_utils_base - loading file None
08/13/2020 14:23:20 - INFO - transformers.tokenization_utils_base - loading file None
08/13/2020 14:23:20 - INFO - transformers.tokenization_utils_base - loading file None
08/13/2020 14:23:20 - INFO - transformers.tokenization_utils_base - loading file None
08/13/2020 14:23:21 - INFO - __main__ - Training new model from scratch
/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/modeling_auto.py:709: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.
FutureWarning,
08/13/2020 14:23:27 - INFO - transformers.data.datasets.language_modeling - Creating features from dataset file at /home/erippeth/myBERT/train.txt
08/13/2020 17:40:20 - INFO - transformers.data.datasets.language_modeling - Creating features from dataset file at /home/erippeth/myBERT/eval.txt
08/13/2020 18:56:31 - WARNING - transformers.trainer - You are instantiating a Trainer but Tensorboard is not installed. You should consider installing it.
08/13/2020 18:56:31 - INFO - transformers.trainer - You are instantiating a Trainer but W&B is not installed. To use wandb logging, run `pip install wandb; wandb login` see https://docs.wandb.com/huggingface.
08/13/2020 18:56:31 - WARNING - transformers.training_args - Using deprecated `--per_gpu_train_batch_size` argument which will be removed in a future version. Using `--per_device_train_batch_size` is preferred.
08/13/2020 18:56:31 - WARNING - transformers.training_args - Using deprecated `--per_gpu_train_batch_size` argument which will be removed in a future version. Using `--per_device_train_batch_size` is preferred.
08/13/2020 18:56:31 - INFO - transformers.trainer - ***** Running training *****
08/13/2020 18:56:31 - INFO - transformers.trainer - Num examples = 16661098
08/13/2020 18:56:31 - INFO - transformers.trainer - Num Epochs = 5
08/13/2020 18:56:31 - INFO - transformers.trainer - Instantaneous batch size per device = 8
08/13/2020 18:56:31 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 16
08/13/2020 18:56:31 - INFO - transformers.trainer - Gradient Accumulation steps = 1
08/13/2020 18:56:31 - INFO - transformers.trainer - Total optimization steps = 5206595
Epoch: 0%| | 0/5 [00:00<?, ?it/s]
Iteration: 0%| | 0/1041319 [00:00<?, ?it/s]
Iteration: 0%| | 1/1041319 [00:01<508:20:24, 1.76s/it]
Iteration: 0%| | 2/1041319 [00:02<395:24:33, 1.37s/it]
Iteration: 0%| | 3/1041319 [00:02<306:50:22, 1.06s/it]
/opt/conda/conda-bld/pytorch_1595629416375/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [229,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1595629416375/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [229,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
...
/opt/conda/conda-bld/pytorch_1595629416375/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [275,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
Iteration: 0%| | 3/1041319 [00:03<332:03:04, 1.15s/it]
Epoch: 0%| | 0/5 [00:03<?, ?it/s]
Traceback (most recent call last):
File "run_language_modeling.py", line 281, in <module>
main()
File "run_language_modeling.py", line 245, in main
trainer.train(model_path=model_path)
File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/trainer.py", line 499, in train
tr_loss += self._training_step(model, inputs, optimizer)
File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/trainer.py", line 622, in _training_step
outputs = model(**inputs)
File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/modeling_roberta.py", line 239, in forward
output_hidden_states=output_hidden_states,
File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/modeling_bert.py", line 762, in forward
output_hidden_states=output_hidden_states,
File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/modeling_bert.py", line 439, in forward
output_attentions,
File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/modeling_bert.py", line 371, in forward
hidden_states, attention_mask, head_mask, output_attentions=output_attentions,
File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/modeling_bert.py", line 315, in forward
hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, output_attentions,
File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/modeling_bert.py", line 258, in forward
context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
RuntimeError: CUDA error: device-side assert triggered
```
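For debugging, here is a minimal sanity check that the tokenizer cannot emit ids outside the model's embedding tables. The paths are placeholders for the directories used above, and loading the fast tokenizer from the same directory is an assumption:

```python
from transformers import RobertaConfig, RobertaTokenizerFast

# Placeholder paths -- substitute the model directory and data file from above.
config = RobertaConfig.from_pretrained("/path/to/myBERT-small")
tokenizer = RobertaTokenizerFast.from_pretrained("/path/to/myBERT-small")

assert len(tokenizer) <= config.vocab_size, "tokenizer vocab exceeds config.vocab_size"

with open("/path/to/train.txt") as f:
    for line in f:
        if not line.strip():
            continue
        ids = tokenizer(line.strip())["input_ids"]
        assert max(ids) < config.vocab_size, f"token id out of range: {max(ids)}"
        # RoBERTa offsets position ids by padding_idx + 1, so every sequence
        # must satisfy len(ids) + 2 <= config.max_position_embeddings (514 here).
        assert len(ids) + 2 <= config.max_position_embeddings, f"sequence too long: {len(ids)}"
```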
## Expected behavior
The model should train without failure, but instead it fails with an assertion error. I believe this is related to an embedding dimension issue, yet the config's `vocab_size` matches the length of the newly trained tokenizer, and that is the embedding dimension set in the training script. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6484/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6483 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6483/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6483/comments | https://api.github.com/repos/huggingface/transformers/issues/6483/events | https://github.com/huggingface/transformers/issues/6483 | 679,093,615 | MDU6SXNzdWU2NzkwOTM2MTU= | 6,483 | Regarding GPU use for LM | {
"login": "shubhujf",
"id": 10284584,
"node_id": "MDQ6VXNlcjEwMjg0NTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/10284584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shubhujf",
"html_url": "https://github.com/shubhujf",
"followers_url": "https://api.github.com/users/shubhujf/followers",
"following_url": "https://api.github.com/users/shubhujf/following{/other_user}",
"gists_url": "https://api.github.com/users/shubhujf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shubhujf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shubhujf/subscriptions",
"organizations_url": "https://api.github.com/users/shubhujf/orgs",
"repos_url": "https://api.github.com/users/shubhujf/repos",
"events_url": "https://api.github.com/users/shubhujf/events{/privacy}",
"received_events_url": "https://api.github.com/users/shubhujf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
" Hi,\r\nI am running example given in README.md of language_modeling using following command:\r\nexport TRAIN_FILE=/path/to/dataset/wiki.train.raw\r\nexport TEST_FILE=/path/to/dataset/wiki.test.raw\r\n\r\npython run_language_modeling.py \\\r\n --output_dir=output \\\r\n --model_type=gpt2 \\\r\n --model_name_or_path=gpt2 \\\r\n --do_train \\\r\n --train_data_file=$TRAIN_FILE \\\r\n --do_eval \\\r\n --eval_data_file=$TEST_FILE\r\n\r\nIt has started training but it is not using GPU (TITAN X) at all, when I see throug nvidia-smi command \r\nI am new to this So Can you please let me know if I'm missing anything here.\r\n\r\nThanks.",
"This is probably because torch doesn't detect that you have a GPU. Can you try launching a python console and running the following?\r\n\r\n```py\r\nimport torch\r\nprint(torch.cuda.is_available())\r\n```",
"Yeah, I also found out about it later after posting this issue.\r\nI had to install cuda 9.1 and reboot the server then it worked.\r\nThank You for your reply :)"
] | 1,597 | 1,597 | 1,597 | NONE | null | # ❓ Questions & Help
## Details
Hi,
I am running example given in README.md of language_modeling using following command:
export TRAIN_FILE=/path/to/dataset/wiki.train.raw
export TEST_FILE=/path/to/dataset/wiki.test.raw
python run_language_modeling.py \
--output_dir=output \
--model_type=gpt2 \
--model_name_or_path=gpt2 \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE
It has started training, but it is not using the GPU (TITAN X) at all when I check with the nvidia-smi command.
I am new to this, so can you please let me know if I'm missing anything here?
Thanks.
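For reference, the check suggested in the comments above can be run first as a quick sanity test (minimal sketch):

```python
import torch

# If this prints False, PyTorch cannot see the GPU (CPU-only build or missing
# CUDA driver/toolkit) and the training script will silently run on CPU.
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```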
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6483/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6482 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6482/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6482/comments | https://api.github.com/repos/huggingface/transformers/issues/6482/events | https://github.com/huggingface/transformers/issues/6482 | 679,073,144 | MDU6SXNzdWU2NzkwNzMxNDQ= | 6,482 | Longformer Memory Consumption query | {
"login": "PrudhviRaj12",
"id": 12591606,
"node_id": "MDQ6VXNlcjEyNTkxNjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/12591606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PrudhviRaj12",
"html_url": "https://github.com/PrudhviRaj12",
"followers_url": "https://api.github.com/users/PrudhviRaj12/followers",
"following_url": "https://api.github.com/users/PrudhviRaj12/following{/other_user}",
"gists_url": "https://api.github.com/users/PrudhviRaj12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PrudhviRaj12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PrudhviRaj12/subscriptions",
"organizations_url": "https://api.github.com/users/PrudhviRaj12/orgs",
"repos_url": "https://api.github.com/users/PrudhviRaj12/repos",
"events_url": "https://api.github.com/users/PrudhviRaj12/events{/privacy}",
"received_events_url": "https://api.github.com/users/PrudhviRaj12/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"A question that might be of interest to @patrickvonplaten :)",
"Hey @PrudhviRaj12, \r\nmy best answer would be to try it out :-) If can definitely work if you have enough GPU RAM. \r\nMy best guess would be that in the scenario described by you above the Longformer version would require ca. `num_chunks` * `required_gpu_ram_for_roberta` and `num_chunks` in your case would be 4096 / 256 = 16. So you would need a lot of RAM to run `batch_size=64` and `max_length=4096` with Longformer, most likely not enough for one GPU (even if fp16).",
"I would also suggest adding gradient_checkpointing=True when you load your model with from_pretrained. This recent addition to the HF code base allowed me to go from using BERT with a max sequence length of 128-256 before running out of memory to now being able to use Longformer with a max seq length of up to 4096 on the same GPU setup!\r\n\r\nThis thread helped me and may also help you:\r\nhttps://github.com/allenai/longformer/issues/80",
"Thanks @patrickvonplaten - I misunderstood the paper then. @HugToDebug thanks for your suggestion - I tried that but I am getting this warning \r\n\r\n```\r\nNone of the inputs have requires_grad=True. Gradients will be None\r\nwarnings.warn(\"None of the inputs have requires_grad=True. Gradients will be None\"\r\n```\r\n\r\nwhen calling model.forward(sequence inputs, attention masks) with any model (be it longformer or bert or roberta) and the performance of the model is completely off of the same batch experimental setting with and without gradient checkpointing. Probably I am doing something wrong, I'll check that thread. \r\n\r\nI am only training the last N layers of the bert/roberta for my task, and I am setting requires grad = False for all the other layers and I am getting that warning. When I remove that condition of setting requires grad = False for some layers and leaving them true for all, I am not getting that warning. Any idea how to get around that issue?",
"Update: I was able to get rid of that warning by making one of the embedding weight matrices trainable (in my case - Roberta, token type embedding). It was only adding 768 more trainable parameters, but I am getting OOM. I had to cut down the batch size 4x to get it running on one gpu without OOM. Not sure why adding just 768 trainable params had that of an impact.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,603 | 1,603 | NONE | null | Hello,
Apologies if I am misunderstanding, but if I can run RoBERTa with a max sequence length of 256 at, say, a batch size of 64 on one GPU for a task, can I use the same batch size with Longformer, using an attention window of 256 and a max sequence length of 4096? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6482/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6482/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6481 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6481/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6481/comments | https://api.github.com/repos/huggingface/transformers/issues/6481/events | https://github.com/huggingface/transformers/issues/6481 | 679,071,647 | MDU6SXNzdWU2NzkwNzE2NDc= | 6,481 | what's the difference between TFBertOutput and TFBertSelfOutput? | {
"login": "xiongma",
"id": 30991932,
"node_id": "MDQ6VXNlcjMwOTkxOTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/30991932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiongma",
"html_url": "https://github.com/xiongma",
"followers_url": "https://api.github.com/users/xiongma/followers",
"following_url": "https://api.github.com/users/xiongma/following{/other_user}",
"gists_url": "https://api.github.com/users/xiongma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiongma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiongma/subscriptions",
"organizations_url": "https://api.github.com/users/xiongma/orgs",
"repos_url": "https://api.github.com/users/xiongma/repos",
"events_url": "https://api.github.com/users/xiongma/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiongma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @policeme, fair point they seem to be exactly the same. There is a logical difference though `TFBertOutput` is the output of a `TFBertLayer` while `TFBertSelfOutput` is the output a `TFBertAtteniton` (Self-attention -> thus \"SelfOutput\"). \r\nBut yeah this might seem a bit confusing at first."
] | 1,597 | 1,597 | 1,597 | NONE | null | # ❓ Questions & Help
## Details
`TFBertOutput` and `TFBertSelfOutput` appear to be identical in their code. Why are there two copies of the same layer? Is there a reason for this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6481/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6481/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6480 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6480/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6480/comments | https://api.github.com/repos/huggingface/transformers/issues/6480/events | https://github.com/huggingface/transformers/pull/6480 | 679,071,062 | MDExOlB1bGxSZXF1ZXN0NDY3OTEyMTgx | 6,480 | Import accuracy_score | {
"login": "gijswijnholds",
"id": 10464259,
"node_id": "MDQ6VXNlcjEwNDY0MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10464259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gijswijnholds",
"html_url": "https://github.com/gijswijnholds",
"followers_url": "https://api.github.com/users/gijswijnholds/followers",
"following_url": "https://api.github.com/users/gijswijnholds/following{/other_user}",
"gists_url": "https://api.github.com/users/gijswijnholds/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gijswijnholds/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gijswijnholds/subscriptions",
"organizations_url": "https://api.github.com/users/gijswijnholds/orgs",
"repos_url": "https://api.github.com/users/gijswijnholds/repos",
"events_url": "https://api.github.com/users/gijswijnholds/events{/privacy}",
"received_events_url": "https://api.github.com/users/gijswijnholds/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6480?src=pr&el=h1) Report\n> Merging [#6480](https://codecov.io/gh/huggingface/transformers/pull/6480?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9a8c168f56fe3c0e21d554a577ac03beb004ef89&el=desc) will **decrease** coverage by `0.06%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6480?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6480 +/- ##\n==========================================\n- Coverage 80.03% 79.96% -0.07% \n==========================================\n Files 156 156 \n Lines 28058 28058 \n==========================================\n- Hits 22456 22437 -19 \n- Misses 5602 5621 +19 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6480?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6480/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.06% <0.00%> (-29.32%)` | :arrow_down: |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6480/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `71.83% <0.00%> (-23.95%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6480/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6480/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `60.53% <0.00%> (-22.78%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6480/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.26% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6480/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6480/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+0.68%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6480/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.97%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6480/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6480/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: |\n| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/6480/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6480?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6480?src=pr&el=footer). Last update [9a8c168...ab9eb7f](https://codecov.io/gh/huggingface/transformers/pull/6480?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6480/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6480/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6480",
"html_url": "https://github.com/huggingface/transformers/pull/6480",
"diff_url": "https://github.com/huggingface/transformers/pull/6480.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6480.patch",
"merged_at": 1597407376000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6479 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6479/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6479/comments | https://api.github.com/repos/huggingface/transformers/issues/6479/events | https://github.com/huggingface/transformers/issues/6479 | 679,060,569 | MDU6SXNzdWU2NzkwNjA1Njk= | 6,479 | [TFTrainer] gradient accumulation error | {
"login": "maurice-g",
"id": 2892585,
"node_id": "MDQ6VXNlcjI4OTI1ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2892585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maurice-g",
"html_url": "https://github.com/maurice-g",
"followers_url": "https://api.github.com/users/maurice-g/followers",
"following_url": "https://api.github.com/users/maurice-g/following{/other_user}",
"gists_url": "https://api.github.com/users/maurice-g/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maurice-g/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maurice-g/subscriptions",
"organizations_url": "https://api.github.com/users/maurice-g/orgs",
"repos_url": "https://api.github.com/users/maurice-g/repos",
"events_url": "https://api.github.com/users/maurice-g/events{/privacy}",
"received_events_url": "https://api.github.com/users/maurice-g/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"We should wait for @jplu to come back from holiday for this, since he wrote that part of the code.",
"Good catch @maurice-g!!\r\n\r\nIt is my fault, I did that part in a hury and I should have been more careful. This will be fixed in a next PR (currently doing it)\r\n\r\n`n_replicas` is important here because we have to get the number of tuple (features, labels) corresponding to the batch size per GPU and `self.args.train_batch_size` gives the total batch size (batch size per GPU * number of replicas).",
"Should be fixed in https://github.com/huggingface/transformers/pull/6713 :+1: ",
"thanks for looking into this @jplu\r\n\r\nOne further remark on your PR #6713: The code now works _iff_ the features are a dict, but does not anymore if the features are a raw tensor (which worked before). IMO this should work for both, therefore there needs to be a conditional check on the type and then both situations should be handled. Or do you think that's not a relevant case?",
"Keeping open until @jplu answers your question @maurice-g ",
"You are right, it is not working anymore with list/tuple/raw tensors. This is on purpose because I'm gonna push the usage of dictionaries only in TF at some point. Is it a big issue for you to use dictionaries?",
"Ok, works for me, just wanted to point it out."
] | 1,597 | 1,598 | 1,598 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: master (#9a8c168)
- Tensorflow version: 2.3.0
### Who can help
Trainer: @sgugger
tensorflow: @jplu
## Information
When using `gradient_accumulation_steps` > 1 with TFTrainer and model inputs that are *not* plain tensors (for example dicts), the trainer fails. It also looks to me like there are logic issues in the way the `reduced_features` are computed for gradient accumulation (not sure though).
Issue is here: https://github.com/huggingface/transformers/blob/9a8c168f56fe3c0e21d554a577ac03beb004ef89/src/transformers/trainer_tf.py#L602
## To reproduce
```python
import tensorflow as tf
from transformers import TFT5ForConditionalGeneration, TFTrainer, TFTrainingArguments
input_ids = [[1, 2, 3], [1, 2, 3]]
labels = [[1, 2, 3], [1, 2, 3]]  # one label sequence per input example
dataset = tf.data.Dataset.from_tensor_slices(({'input_ids': input_ids}, labels))
training_args = TFTrainingArguments(
output_dir='./results', # output directory
logging_steps=100,
max_steps=2,
save_steps=2000,
per_device_train_batch_size=2, # batch size per device during training
per_device_eval_batch_size=8, # batch size for evaluation
warmup_steps=0, # number of warmup steps for learning rate scheduler
weight_decay=0.0, # strength of weight decay
learning_rate=5e-5,
gradient_accumulation_steps=2
)
with training_args.strategy.scope():
model = TFT5ForConditionalGeneration.from_pretrained("t5-base")
trainer = TFTrainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=dataset, # training dataset
)
trainer.train()
```
## Issues
### Error
This produces the error `TypeError: unhashable type: 'slice'`. Also, `/` produces floats on Python 3, which I guess is not intended here; integer division `//` is presumably what was meant.
A solution in the same spirit could be the conditional use of:
```
reduced_features = {
ft: features[ft][:self.args.train_batch_size // self.args.n_replicas]
for ft in features
}
```
Already mentioned here https://github.com/huggingface/transformers/pull/6038#issuecomment-664706046
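A slightly more general sketch of the fix above that also keeps plain-tensor inputs working (untested against the trainer itself; the helper name is hypothetical):

```python
import tensorflow as tf

def reduce_features(features, train_batch_size, n_replicas):
    """Slice out the per-replica batch, working for dict or plain-tensor inputs."""
    per_replica_bs = train_batch_size // n_replicas  # integer division, no float index
    if isinstance(features, dict):
        return {k: v[:per_replica_bs] for k, v in features.items()}
    return features[:per_replica_bs]

# Toy check with both input shapes:
dict_feats = {"input_ids": tf.constant([[1, 2], [3, 4], [5, 6], [7, 8]])}
print(reduce_features(dict_feats, train_batch_size=4, n_replicas=2))
print(reduce_features(tf.constant([[1, 2], [3, 4], [5, 6], [7, 8]]), 4, 2))
```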
### Logic issue
I don't understand what `n_replicas` has to do with gradient accumulation here. Shouldn't the denominator rather be `gradient_accumulation_steps`? And shouldn't it actually use the different slices of the features, rather than always the "first" slice?
Might be totally misunderstanding this.
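To make the question concrete, the slicing I would have expected looks roughly like this (purely hypothetical, not a verified fix):

```python
import tensorflow as tf

# Hypothetical illustration: take a different micro-batch slice for each
# accumulation step, dividing by gradient_accumulation_steps, not n_replicas.
features = {"input_ids": tf.constant([[1, 2], [3, 4], [5, 6], [7, 8]])}
train_batch_size, gradient_accumulation_steps = 4, 2
micro_bs = train_batch_size // gradient_accumulation_steps

for step in range(gradient_accumulation_steps):
    chunk = slice(step * micro_bs, (step + 1) * micro_bs)
    micro_features = {k: v[chunk] for k, v in features.items()}
    print(step, micro_features)  # gradients would be accumulated on each slice
```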
Also this line doesn't seem to have any purpose: https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_tf.py#L607
Happy to provide a PR if someone can give me a hint on the logic issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6479/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6479/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6478 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6478/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6478/comments | https://api.github.com/repos/huggingface/transformers/issues/6478/events | https://github.com/huggingface/transformers/issues/6478 | 679,008,435 | MDU6SXNzdWU2NzkwMDg0MzU= | 6,478 | Uploaded model is not indexed | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The model is also not listed on your [page ](https://huggingface.co/mrm8488). Can you try re-uploading ?",
"If you load it in your code ```mrm8488/t5-base-finetuned-boolq``` it works! Maybe a problem indexing it.",
"cc @julien-c ",
"Hi everyone, has there been a way fix this? I also uploaded a model (t5-podcast-summarisation) that hasn't shown up on the model hub. I am able to load it in my code using `paulowoicho/t5-podcast-summarisation` though",
"Fixed:\r\n- https://huggingface.co/mrm8488/t5-base-finetuned-boolq\r\n- https://huggingface.co/paulowoicho/t5-podcast-summarisation"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | # ❓ Questions & Help
Hi guys, I uploaded a model several hours ago (t5-base-finetuned-boolq) and it is not indexed in the model hub search engine yet!
Thanks, Manu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6478/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6478/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6477 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6477/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6477/comments | https://api.github.com/repos/huggingface/transformers/issues/6477/events | https://github.com/huggingface/transformers/issues/6477 | 678,909,675 | MDU6SXNzdWU2Nzg5MDk2NzU= | 6,477 | finetune.py: error: unrecognized arguments | {
"login": "KylePiira",
"id": 17210104,
"node_id": "MDQ6VXNlcjE3MjEwMTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/17210104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KylePiira",
"html_url": "https://github.com/KylePiira",
"followers_url": "https://api.github.com/users/KylePiira/followers",
"following_url": "https://api.github.com/users/KylePiira/following{/other_user}",
"gists_url": "https://api.github.com/users/KylePiira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KylePiira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KylePiira/subscriptions",
"organizations_url": "https://api.github.com/users/KylePiira/orgs",
"repos_url": "https://api.github.com/users/KylePiira/repos",
"events_url": "https://api.github.com/users/KylePiira/events{/privacy}",
"received_events_url": "https://api.github.com/users/KylePiira/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | ### Who can help
examples/distillation: @VictorSanh
examples/seq2seq: @sshleifer
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Try running seq2seq/finetune.sh with data_dir or output_dir with escaped spaces in it
2. You'll get a `finetune.py: error: unrecognized arguments`
This is bad because Google Drive mounts at `/content/drive/My Drive/` in Colab and thus the example scripts won't work if saving or reading from Drive.
I've created a [Colab Notebook](https://colab.research.google.com/drive/1N-8m9FC9GbAywVJZAgSBkLqe24SPRfl8?usp=sharing) with repro.
The fix I've found is to change:
```
python finetune.py \
--learning_rate=3e-5 \
--fp16 \
--gpus 1 \
--do_train \
--do_predict \
--n_val 1000 \
--val_check_interval 0.1 \
$@
```
to
```
python finetune.py \
--learning_rate=3e-5 \
--fp16 \
--gpus 1 \
--do_train \
--do_predict \
--n_val 1000 \
--val_check_interval 0.1 \
"$@"
```
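To see why the quoting matters, here is a small Python illustration of what `finetune.py`'s argparse-based parser receives in each case (the path is a stand-in for a Drive mount):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--output_dir")

# With "$@", the shell passes the path through as a single argv entry:
print(parser.parse_args(["--output_dir", "/content/drive/My Drive/out"]))

# With bare $@, the shell word-splits the path into two entries, and the
# stray token makes argparse exit with "unrecognized arguments: Drive/out":
parser.parse_args(["--output_dir", "/content/drive/My", "Drive/out"])
```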
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6477/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6476 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6476/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6476/comments | https://api.github.com/repos/huggingface/transformers/issues/6476/events | https://github.com/huggingface/transformers/issues/6476 | 678,871,475 | MDU6SXNzdWU2Nzg4NzE0NzU= | 6,476 | Question about loss computing in BartForConditionalGeneration | {
"login": "JamesHujy",
"id": 48405323,
"node_id": "MDQ6VXNlcjQ4NDA1MzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/48405323?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JamesHujy",
"html_url": "https://github.com/JamesHujy",
"followers_url": "https://api.github.com/users/JamesHujy/followers",
"following_url": "https://api.github.com/users/JamesHujy/following{/other_user}",
"gists_url": "https://api.github.com/users/JamesHujy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JamesHujy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JamesHujy/subscriptions",
"organizations_url": "https://api.github.com/users/JamesHujy/orgs",
"repos_url": "https://api.github.com/users/JamesHujy/repos",
"events_url": "https://api.github.com/users/JamesHujy/events{/privacy}",
"received_events_url": "https://api.github.com/users/JamesHujy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @JamesHujy , yes when training BART you need to shift `labels` and `decoder_input_ids`.\r\n\r\n```python3\r\ntarget_text = \"some target text\"\r\nenc = tokenizer(target_text , return_tensors=\"pt\")\r\ntarget_ids = enc[\"input_ids\"]\r\ndecoder_input_ids = target_ids[:, :-1].contiguous()\r\nlabels = target_ids[:, 1:].clone() \r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,603 | 1,603 | NONE | null | I notice that in [BartForConditionalGeneration](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bart.py#L1043), the labels and logits are not shifted when computing cross-entropy loss. Should I provide a pre-possessed shifted labels to the model for training?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6476/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6476/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6475 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6475/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6475/comments | https://api.github.com/repos/huggingface/transformers/issues/6475/events | https://github.com/huggingface/transformers/pull/6475 | 678,857,696 | MDExOlB1bGxSZXF1ZXN0NDY3NzM1OTU2 | 6,475 | Use hash to clean the test dirs | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6475?src=pr&el=h1) Report\n> Merging [#6475](https://codecov.io/gh/huggingface/transformers/pull/6475?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/05810cd80a5ca83065e0dbe5335c030c4a435ddb&el=desc) will **decrease** coverage by `1.12%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6475?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6475 +/- ##\n==========================================\n- Coverage 80.55% 79.42% -1.13% \n==========================================\n Files 153 153 \n Lines 28001 28001 \n==========================================\n- Hits 22556 22241 -315 \n- Misses 5445 5760 +315 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6475?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6475/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6475/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6475/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.31% <0.00%> (-0.98%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6475/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.94% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6475/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.01% <0.00%> (+23.16%)` | :arrow_up: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6475/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `87.50% <0.00%> (+58.65%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6475?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6475?src=pr&el=footer). Last update [05810cd...646a7dc](https://codecov.io/gh/huggingface/transformers/pull/6475?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Agreed, thanks for the fix!"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | This one solves it once for all. What do you think? @sgugger @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6475/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6475/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6475",
"html_url": "https://github.com/huggingface/transformers/pull/6475",
"diff_url": "https://github.com/huggingface/transformers/pull/6475.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6475.patch",
"merged_at": 1597390480000
} |
https://api.github.com/repos/huggingface/transformers/issues/6474 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6474/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6474/comments | https://api.github.com/repos/huggingface/transformers/issues/6474/events | https://github.com/huggingface/transformers/issues/6474 | 678,854,141 | MDU6SXNzdWU2Nzg4NTQxNDE= | 6,474 | Training Data of xlm-roberta-large-finetuned-conll03-* models | {
"login": "wangxinyu0922",
"id": 17926734,
"node_id": "MDQ6VXNlcjE3OTI2NzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/17926734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangxinyu0922",
"html_url": "https://github.com/wangxinyu0922",
"followers_url": "https://api.github.com/users/wangxinyu0922/followers",
"following_url": "https://api.github.com/users/wangxinyu0922/following{/other_user}",
"gists_url": "https://api.github.com/users/wangxinyu0922/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wangxinyu0922/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wangxinyu0922/subscriptions",
"organizations_url": "https://api.github.com/users/wangxinyu0922/orgs",
"repos_url": "https://api.github.com/users/wangxinyu0922/repos",
"events_url": "https://api.github.com/users/wangxinyu0922/events{/privacy}",
"received_events_url": "https://api.github.com/users/wangxinyu0922/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
}
] | [
"Pinging @stefan-it ",
"Hi @wangxinyu0922 ,\r\n\r\nthe models are only trained on the corresponding training data sets, that means development data was not used for training :)",
"That's great! Thank you!",
"> \r\n> \r\n> Hi @wangxinyu0922 ,\r\n> \r\n> the models are only trained on the corresponding training data sets, that means development data was not used for training :)\r\n\r\n@stefan-it \r\nBy the way, what is the accuracy of the model on the four datasets? The models are trained on document context or sentence context? I believe different context will affect the performance."
] | 1,597 | 1,602 | 1,597 | NONE | null | Hi, I'm curious about the training data of xlm-r models finetuned on conll ner datasets (e.g. xlm-roberta-large-finetuned-conll03-german, xlm-roberta-large-finetuned-conll03-english), are the models trained on train+dev sets? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6474/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6474/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6473 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6473/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6473/comments | https://api.github.com/repos/huggingface/transformers/issues/6473/events | https://github.com/huggingface/transformers/pull/6473 | 678,846,547 | MDExOlB1bGxSZXF1ZXN0NDY3NzI2OTk4 | 6,473 | [sched] polynomial_decay_schedule use default power=1.0 | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | As discussed in https://github.com/huggingface/transformers/pull/6361 we weren't sure why fairseq's `polynomial_decay_schedule` `power` default was `1.0`, and decided to go with `2.0` as the latter does something polynomial.
I got the devs at fairseq to answer this question: https://github.com/pytorch/fairseq/issues/2466#issuecomment-673146603
> myleott wrote:
> This is based on the original BERT code, which implemented a linear decay via a polynomial schedule with power=1.0: https://github.com/google-research/bert/blob/f39e881b169b9d53bea03d2d341b31707a6c052b/optimization.py#L37
So, perhaps we do the same, or we don't.
If we don't, then the doc needs to be fixed to say the default is `power=2.0`, since it currently says `1.0` - my mistake. If we do (this PR), then the doc is already correct.
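For intuition, here is a tiny sketch (not the library implementation - just the standard polynomial-decay formula) showing that `power=1.0` is exactly linear decay:
```python
# Sketch only: standard polynomial decay,
# lr(t) = lr_end + (lr_init - lr_end) * (1 - t/T)**power
def poly_decay(step, total_steps, lr_init=1e-3, lr_end=0.0, power=1.0):
    remaining = 1.0 - step / total_steps
    return lr_end + (lr_init - lr_end) * remaining ** power

print([poly_decay(s, 4, power=1.0) for s in range(5)])  # linear: 1e-3, 7.5e-4, 5e-4, 2.5e-4, 0.0
print([poly_decay(s, 4, power=2.0) for s in range(5)])  # quadratic curve instead
```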
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6473/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6473",
"html_url": "https://github.com/huggingface/transformers/pull/6473",
"diff_url": "https://github.com/huggingface/transformers/pull/6473.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6473.patch",
"merged_at": 1597667593000
} |
https://api.github.com/repos/huggingface/transformers/issues/6472 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6472/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6472/comments | https://api.github.com/repos/huggingface/transformers/issues/6472/events | https://github.com/huggingface/transformers/issues/6472 | 678,839,647 | MDU6SXNzdWU2Nzg4Mzk2NDc= | 6,472 | "BertEncoder' object has no attribute 'output_hidden_states" | {
"login": "thanish",
"id": 4056145,
"node_id": "MDQ6VXNlcjQwNTYxNDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4056145?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thanish",
"html_url": "https://github.com/thanish",
"followers_url": "https://api.github.com/users/thanish/followers",
"following_url": "https://api.github.com/users/thanish/following{/other_user}",
"gists_url": "https://api.github.com/users/thanish/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thanish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thanish/subscriptions",
"organizations_url": "https://api.github.com/users/thanish/orgs",
"repos_url": "https://api.github.com/users/thanish/repos",
"events_url": "https://api.github.com/users/thanish/events{/privacy}",
"received_events_url": "https://api.github.com/users/thanish/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"I think you will have to tweak the model here a bit to make it work. Before you pass arguments to the model's call function, can you add this line:\r\n```python\r\nmodel.output_hidden_states = False\r\n```\r\n\r\nand see whether the error persists",
"Same issue here. Problem is not solved after setting\r\n'''\r\nmodel.output_hidden_states = False\r\n'''",
"Solved by upgrading transformers"
] | 1,597 | 1,597 | 1,597 | NONE | null | Hi, I have trained a BERT token-classification model for the Italian language using "dbmdz/bert-base-italian-uncased". I trained the model on a machine running PyTorch 1.4.0 and transformers 3.0.2, which was the latest version when I installed it a few days back.
I copied the saved best model to a server that runs PyTorch 1.4.0 and transformers version 2.3.0. When I sent a request to the model to get predictions, I got the following warnings.
# Inference code
```
tokenizer = transformers.BertTokenizer.from_pretrained("dbmdz/bert-base-italian-uncased", do_lower_case=False)
# Assuming I have tokenized the requested text into the variable "tokens"
indexed_tokens = tokenizer.convert_tokens_to_ids(tokens)
segments_ids = [0] * len(tokens)
tokens_tensor = torch.tensor([indexed_tokens]).to(device)
segments_tensors = torch.tensor([segments_ids]).to(device)
logit = model(tokens_tensor, token_type_ids=None, attention_mask=segments_tensors)
```
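(For completeness: the model itself is restored from a full pickle via `torch.load`, which is what produces the SourceChangeWarning messages below - a simplified sketch with a hypothetical path:)
```python
# Hypothetical path; torch.load unpickles the whole module, including its
# class definition, which triggers SourceChangeWarning when the library
# source on this machine differs from the one the model was saved with.
model = torch.load("best_model.pt", map_location=device)
model.eval()
```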
# Warnings
```
Model name 'dbmdz/bert-base-italian-uncased' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1). Assuming 'dbmdz/bert-base-italian-uncased' is a path or url to a directory containing tokenizer files.
Didn't find file dbmdz/bert-base-italian-uncased/added_tokens.json. We won't load it.
Didn't find file dbmdz/bert-base-italian-uncased/special_tokens_map.json. We won't load it.
Didn't find file dbmdz/bert-base-italian-uncased/tokenizer_config.json. We won't load it.
loading file https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/bert-base-italian-uncased/vocab.txt from cache at /root/.cache/torch/transformers/02b5ab8ef6a3a1d4af18c318bb4c53155a59a3893dd557b922d2467b269cd405.5cbaac66fdfadbe363aad01956dac0be9bf700f2c8c87012dc078b87e2fa4181
loading file None
loading file None
loading file None
```
```
./torch/serialization.py:593: SourceChangeWarning: source code of class 'transformers.modeling_bert.BertForTokenClassification' has changed. Saved a reverse patch to BertForTokenClassification.patch. Run `patch -p0 < BertForTokenClassification.patch` to revert your changes.
warnings.warn(msg, SourceChangeWarning)
./torch/serialization.py:593: SourceChangeWarning: source code of class 'transformers.modeling_bert.BertModel' has changed. Saved a reverse patch to BertModel.patch. Run `patch -p0 < BertModel.patch` to revert your changes.
warnings.warn(msg, SourceChangeWarning)
./torch/serialization.py:593: SourceChangeWarning: source code of class 'transformers.modeling_bert.BertEmbeddings' has changed. Saved a reverse patch to BertEmbeddings.patch. Run `patch -p0 < BertEmbeddings.patch` to revert your changes. warnings.warn(msg, SourceChangeWarning)
./torch/serialization.py:593: SourceChangeWarning: source code of class 'torch.nn.modules.normalization.LayerNorm' has changed. Saved a reverse patch to LayerNorm.patch. Run `patch -p0 < LayerNorm.patch` to revert your changes.
warnings.warn(msg, SourceChangeWarning)
./torch/serialization.py:593: SourceChangeWarning: source code of class 'transformers.modeling_bert.BertEncoder' has changed. Saved a reverse patch to BertEncoder.patch. Run `patch -p0 < BertEncoder.patch` to revert your changes.
warnings.warn(msg, SourceChangeWarning)
./torch/serialization.py:593: SourceChangeWarning: source code of class 'torch.nn.modules.container.ModuleList' has changed. Saved a reverse patch to ModuleList.patch. Run `patch -p0 < ModuleList.patch` to revert your changes.
warnings.warn(msg, SourceChangeWarning)
./torch/serialization.py:593: SourceChangeWarning: source code of class 'transformers.modeling_bert.BertLayer' has changed. Saved a reverse patch to BertLayer.patch. Run `patch -p0 < BertLayer.patch` to revert your changes. warnings.warn(msg, SourceChangeWarning)
./torch/serialization.py:593: SourceChangeWarning: source code of class 'transformers.modeling_bert.BertAttention' has changed. Saved a reverse patch to BertAttention.patch. Run `patch -p0 < BertAttention.patch` to revert your changes.
warnings.warn(msg, SourceChangeWarning)
./torch/serialization.py:593: SourceChangeWarning: source code of class 'transformers.modeling_bert.BertSelfAttention' has changed. Saved a reverse patch to BertSelfAttention.patch. Run `patch -p0 < BertSelfAttention.patch` to revert your changes.
warnings.warn(msg, SourceChangeWarning)
./torch/serialization.py:593: SourceChangeWarning: source code of class 'torch.nn.modules.linear.Linear' has changed. Saved a reverse patch to Linear.patch. Run `patch -p0 < Linear.patch` to revert your changes.
warnings.warn(msg, SourceChangeWarning)
./torch/serialization.py:593: SourceChangeWarning: source code of class 'transformers.modeling_bert.BertSelfOutput' has changed. Saved a reverse patch to BertSelfOutput.patch. Run `patch -p0 < BertSelfOutput.patch` to revert your changes.
warnings.warn(msg, SourceChangeWarning)
./torch/serialization.py:593: SourceChangeWarning: source code of class 'transformers.modeling_bert.BertIntermediate' has changed. Saved a reverse patch to BertIntermediate.patch. Run `patch -p0 < BertIntermediate.patch` to revert your changes.
warnings.warn(msg, SourceChangeWarning)
./torch/serialization.py:593: SourceChangeWarning: source code of class 'transformers.modeling_bert.BertOutput' has changed. Saved a reverse patch to BertOutput.patch. Run `patch -p0 < BertOutput.patch` to revert your changes.
warnings.warn(msg, SourceChangeWarning)
./torch/serialization.py:593: SourceChangeWarning: source code of class 'transformers.modeling_bert.BertPooler' has changed. Saved a reverse patch to BertPooler.patch. Run `patch -p0 < BertPooler.patch` to revert your changes.
warnings.warn(msg, SourceChangeWarning)
./torch/serialization.py:593: SourceChangeWarning: source code of class 'torch.nn.modules.activation.Tanh' has changed. Saved a reverse patch to Tanh.patch. Run `patch -p0 < Tanh.patch` to revert your changes.
warnings.warn(msg, SourceChangeWarning)
```
and finally it ended with the below error.
```
"BertEncoder' object has no attribute 'output_hidden_states".
```
Can someone help me understand: is this because of the PyTorch/transformers version mismatch between the machine the model was trained on and the inference server? Is "dbmdz/bert-base-italian-uncased" unavailable in version 2.3.0? Or is there any other way I can make this work, instead of retraining the model on the lower version to match the server?
Assume that changing the versions on the server is not really possible as of now.
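Also, would saving just the weights and config with `save_pretrained` (instead of pickling the whole module) and rebuilding the model on the server with `from_pretrained` sidestep the class-pickling problem? A sketch of what I mean, assuming the model class is importable on both machines:
```python
# On the training machine - writes only config.json + pytorch_model.bin:
model.save_pretrained("best_model_dir")
tokenizer.save_pretrained("best_model_dir")

# On the inference server - rebuilds the module from the locally installed
# classes, so no pickled source code is involved:
from transformers import BertForTokenClassification, BertTokenizer
model = BertForTokenClassification.from_pretrained("best_model_dir")
tokenizer = BertTokenizer.from_pretrained("best_model_dir")
```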
Appreciate your help. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6472/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6471 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6471/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6471/comments | https://api.github.com/repos/huggingface/transformers/issues/6471/events | https://github.com/huggingface/transformers/issues/6471 | 678,812,527 | MDU6SXNzdWU2Nzg4MTI1Mjc= | 6,471 | [testing] automatically clean up temp dirs during teardown | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Our @JetRunner has already gone and proposed a fix using hashes: https://github.com/huggingface/transformers/pull/6475\r\n\r\nI think #6475 makes sense but your proposals also resonate with me, especially the 1.5. Using `tempfile.TemporaryDirectory` seems cleaner to me than manually removing the folder afterwards. The hardcoded paths are already set-up thanks to @JetRunner's hashes, but it does make it harder to debug as hashes are not understandable from a human point of view.\r\n\r\n@JetRunner, would love your input on @stas00's proposals!",
"Thanks @stas00 and @LysandreJik!\r\nBoth idea 1 and 1.5 look good to me! Idea 2 is not flexible enough and I am worried about using the same temp dir for all test cases (if my understanding is right). Maybe idea 1 is good enough and idea 1.5 seems to be a little over complicated since people can just quickly change the directory name from `tmp_dir.name` to their local path for debugging and then do some cleaning themselves.\r\nYes, I agree `temporary_dir` looks much better than `rmtree`. `rmtree` looks super scary. \r\nAlso I wonder how can you trace the temp dir if the test is interrupted? Will it still be cleaned?",
"Thank you for looking at my suggestions!\r\n\r\n> Our @JetRunner has already gone and proposed a fix using hashes: #6475\r\n\r\nNeat! A few notes:\r\n- it hasn't solved the problem of guaranteed cleanup. if the test asserts half way the clean up will not be run.\r\n- I like that it ends up with the same dir name for a given test all the time\r\n- it doesn't tell me what that `output_dir` is, have to take extra steps to figure it out - e.g. `ls -lt` \r\n- it's a bit too much copy-n-paste - the scaffolding is starting to dominate the test. It can be made into a function in testing_utils and there is no need to manually push `--output_dir` into `testargs`, could just use f\" .... {output_dir}\" into the existing list of `testargs`\r\n\r\n> it does make it harder to debug as hashes are not understandable from a human point of view.\r\n\r\nI concur. Though `tempfile`'s output is cryptic too: `/tmp/tmp0vpwv7ok`\r\n",
"> Idea 2 is not flexible enough and I am worried about using the same temp dir for all test cases (if my understanding is right).\r\n\r\nRight. That approach is problematic if you have concurrent tests running with `pytest -n 2+`. Good observation! It could be easily fixed though by for example using the test name as a unique string or a hash of it.\r\n\r\nWhile idea 2 is super-smooth - no changes to the test! It's too far removed from where things happen from the perspective of the developer working on the test.\r\n\r\n> Maybe idea 1 is good enough and idea 1.5 seems to be a little over complicated since people can just quickly change the directory name from tmp_dir.name to their local path for debugging and then do some cleaning themselves.\r\n\r\nYou will have to comment out the `with ` line and re-indent the rest of the code (or replace with `if 1:`) if you want to switch to local path, since `tempfile` doesn't support such override - it's not debug-needs-friendly.\r\n\r\n> Yes, I agree temporary_dir looks much better than rmtree. rmtree looks super scary.\r\n\r\nI'm glad we both find it scary\r\n\r\n> Also I wonder how can you trace the temp dir if the test is interrupted? Will it still be cleaned?\r\n\r\nI'm not sure what you mean by 'trace'.\r\n\r\nIt does the right thing wrt guaranteed cleanup. Testing In ipython:\r\n```\r\nimport tempfile\r\ntry:\r\n with tempfile.TemporaryDirectory() as tmp_dir:\r\n print(f\"{tmp_dir} will be removed at the end of the test\")\r\n !ls -l $tmp_dir\r\n assert False\r\nexcept: \r\n pass \r\nfinally:\r\n !ls -l $tmp_dir\r\n```\r\n```\r\n/tmp/tmp0vpwv7ok will be removed at the end of the test\r\ntotal 0\r\nls: cannot access '/tmp/tmp0vpwv7ok': No such file or directory\r\n```\r\n\r\nit looks like it stringified `tmp_dir` and didn't need `tmp_dir.name`.\r\n\r\nWhat I don't like the most about idea 1, is that it'll constantly change the path, so you have to print it out all the time - and it's not an easy feat to find out that print out with the huge dump of std streams and then you have to copy-n-paste the unique string - very inefficient debug-wise. I'd say quite terrible. but as we said replacing it with:\r\n\r\n```\r\n- with tempfile.TemporaryDirectory() as tmp_dir:\r\n+ if 1:\r\n tmp_dir=\"./local/path\"\r\n```\r\nwill do the trick. hence the idea 1.5, which will do this for you. plus let you control whether to delete or not.\r\n\r\n-----\r\n\r\nOne more cons of pre-creating a temp dir, regardless of how it is done is that it'll lead to not testing script's capacity to correctly create a non-existing dir for its outputs.\r\n",
"> > Yes, I agree temporary_dir looks much better than rmtree. rmtree looks super scary.\r\n> \r\n> I'm glad we both find it scary\r\n\r\nIf we end up using it in a context manager I wonder whether it'd be a smart idea to protect the developer from wiping out parts of their system, by refusing to delete that dir unless it was created by the context manager - i.e. it'll assert if the dir already exists. And, of course, provide a flag `i_know_what_I_am_doing_dammit=True` which will bypass the baby-gate.\r\n\r\nI don't know. This isn't great either - it will interfere with testing - I just don't like `rm -r` happening anywhere where I don't explicitly see it, including what it's deleting.\r\n",
"I am okay with all these solutions and they have their own pros and cons! \n\nFor Idea 1.5, I still think if the user (i.e., developer in this case) wants to use their own directory, we should not handle the cleaning part. On one hand, cleaning may have a risk of deleting parts of the user's file system by mistake; on the other hand, I don't think it's a good idea to make this function too complicated.\n\nIdea 2 LGTM too as long as you solve the contradiction in directory and rmtree is considered acceptable.",
"If we aren't cleaning up automatically the hardcoded path, then it defeats the purpose of 1.5 completely, i.e. best then to use 1.0 - i.e. use generic ` tempfile.TemporaryDirectory`.\r\n\r\nSo we start using:\r\n```\r\nfrom tempfile import TemporaryDirectory\r\n[...]\r\n with TemporaryDirectory() as tmp_dir:\r\n print(f\"{tmp_dir} will be removed at the end of the test\")\r\n```\r\nand the developer working on the test and wanting a fixed path, will have to re-write this with:\r\n```\r\nfrom tempfile import TemporaryDirectory\r\n[...]\r\n\r\n # with TemporaryDirectory() as tmp_dir:\r\n if 1:\r\n tmp_dir=\"./local/path\"\r\n print(f\"{tmp_dir} will be removed at the end of the test\")\r\n import shutil\r\n shutil.rmtree(tmp_dir, ignore_errors=True)\r\n```\r\nThat's a lot to type :(\r\n",
"But with 1.5 we don't have to bother to reindent, right?",
"You mean, as in:\r\n```\r\n with temp_dir_ctx() as tmp_dir:\r\n do_something_with(tmp_dir.name)\r\n```\r\nvs:\r\n```\r\n with temp_dir_ctx(path=\"/use/my/path\") as tmp_dir:\r\n do_something_with(tmp_dir.name)\r\n```\r\nno need to reindent indeed, but it'll be very confusing as it will behave differently if `path` is passed (no clean up)",
"Moreover, if we do tell the dev to use `shutil.rmtree(tmp_dir, ignore_errors=True)` we are back at square one - it won't be run if assert will happen before it, so the next test run will be \"contaminated\".\r\n\r\nI was thinking that in this particular situation, we actually need to wipe the dir out **before** the test is run. i.e. this is the real need. It's much easier to ensure it happens, because we can do it first things first, so no assert to expect.\r\n\r\nThe after test clean up is a different need.",
"Fair enough! I don't really have a preference here so let's go with what you think makes the most sense!",
"It's clear that I want to have the cake and eat it too. I want a super-safe solution, yet, with minimal coding inside the test. I think that perhaps I have to choose one or the other. I just feel uncomfortable to take responsibility for creating a gun that someone might shoot their foot with (could be mine). If I were a robot my positronic brain would have melted right now.",
"Haha don't worry. All these solutions are better than what we have right now (#6475)",
"OK, I thought of something.\r\n\r\nWe use 1.5 as originally proposed in the first comment, but in addition we require that the hardcoded path is a subdir of the cwd dir . Assert if it is not. Then in the worst case scenario something unwanted will get wiped under the cloned git dir, but the system is safe.",
"I agree. And we can listen to the community when the PR is done.",
"# Idea 2.5\r\n```\r\nclass ExamplesTests(TestsWithTempDir):\r\n[...]\r\n def test_whatever(self):\r\n tmp_dir = self.remove_at_teardown(\"./tmp/dir\")\r\n # code whatever, and nothing else to write, no extra indent/scope needed\r\n```\r\n\r\nThis will require subclassing `unittest.TestCase`, to facilitate registry of one or more dirs to clean up via a new method `remove_at_teardown`, and the clean up of those dirs will get run automatically via its `def tearDown(self)`method which will do all the work (needs to be written).\r\n\r\nThis is even simpler and solves most of the deficiencies of the previous ideas.\r\n\r\n- we still require a sub-dir for safety, will be validated at registry time.\r\n- this idea drops the use of temp dir as it's not user-friendly debug wise. So we go back to hardcoded paths.\r\n- it's flexible, you can add several tmp dirs to remove.\r\n- if you want to keep the dir, just comment out the registry call\r\n\r\nif we want to ensure the dir is clean from the get-go, we can use another method that will attempt to delete at addition time and during teardown. `self.remove_now_and_at_teardown` or a flag `remove_at_teardown(now=True)`.\r\n\r\nThoughts?",
"Cool. However, it is not practical to prevent others from copying and pasting the code fragment and the same path will be a problem for parallel testing (as we discussed). In this case, I believe you can use a hash (like #6475). However, temporary dir is still a cool idea that I don't want to give up. Good to hear from @sshleifer @sgugger ",
"I agree. I will code something that will support both, so by default we will use a unique tmp dir but for debug it'll allow for a hardcoded path. I will send a PR soonish.\r\n\r\nThank you for the wonderful practical feedback, @JetRunner ",
"Done: https://github.com/huggingface/transformers/pull/6494",
"Thanks for taking care of it! Closing this as resolved."
] | 1,597 | 1,598 | 1,598 | CONTRIBUTOR | null | Recently, a crucial fix was applied to several tests (https://github.com/huggingface/transformers/pull/6453/files): the temp dir wasn't getting cleaned, so subsequent tests were unreliable, tapping into stale data from earlier runs. The remaining issue is that the added fix is not guaranteed to run (an assert earlier in the test skips it), and the same cleanup code is repeated many times.
I thought of several ways to ensure the removal of the temp dir and to make it easier to use in the tests. Here are some ideas I came up with:
## Idea 1
Using a simple `tempfile.TemporaryDirectory` context manager:
```
from tempfile import TemporaryDirectory
class ExamplesTests(unittest.TestCase):
def test_run_pl_glue(self):
with TemporaryDirectory() as tmp_dir:
testargs = f"""
run_pl_glue.py
--output_dir {tmp_dir.name}
[...]
```
Pros:
- generic code
- very localized
- can be done multiple times in the same test and having a fresh temp dir
Cons:
- can't pass a specific fixed dir to make debugging easier - could write a custom context manager that supports both random and fixed paths
- tricky to debug - if one wants the temp dir to be kept while developing the test - could write a custom version that takes an argument to skip the removal
- have to reach into the object to get the actual path with `obj.name`
## Idea 1.5
This one solves the cons of Idea 1.
Write a custom context manager, built on top of `tempfile.TemporaryDirectory`, that takes a hard-coded path and a flag controlling whether to clean up, to make debugging easy.
I haven't written a full version yet, but the core should be similar to `TestsWithTempDir` shown in the next idea, plus a context manager; a rough sketch follows the usage examples below.
But here is how it would be used:
```
from transformers.test_utils import temp_dir_ctx
class ExamplesTests(unittest.TestCase):
def test_run_pl_glue(self):
with temp_dir_ctx(cleanup=True, path=None) as tmp_dir:
testargs = f"""
run_pl_glue.py
--output_dir {tmp_dir.name}
[...]
```
So that we could have:
Most of the time, minimal extra code, using a random path and auto-deletion:
```
with temp_dir_ctx() as tmp_dir:
do_something_with(tmp_dir.name)
```
If we want a specific tmp path:
```
with temp_dir_ctx(path="/use/my/path") as tmp_dir:
do_something_with(tmp_dir.name)
```
if we are debugging and don't want the auto-deletion
```
with temp_dir_ctx(cleanup=False) as tmp_dir:
do_something_with(tmp_dir.name)
```
the only remaining cons:
- have to reach into the object to get the actual path with `obj.name` - can fix with `def __str__(self): return self.name`
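For concreteness, a rough, untested sketch of what `temp_dir_ctx` could look like (note it yields a plain string, which also removes the `obj.name` annoyance):
```python
import shutil
import tempfile
from contextlib import contextmanager
from pathlib import Path

@contextmanager
def temp_dir_ctx(path=None, cleanup=True):
    # Unique random dir unless a hardcoded `path` is supplied.
    tmp_dir = tempfile.mkdtemp() if path is None else path
    Path(tmp_dir).mkdir(parents=True, exist_ok=True)
    try:
        yield tmp_dir
    finally:
        # Skipping cleanup keeps the dir around for debugging.
        if cleanup:
            shutil.rmtree(tmp_dir, ignore_errors=True)
```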
## Idea 2
Solve the problem on the test class level, so that the tests don't need to do anything at all to clean up the temp dir. This solution uses `unittest.TestCase`'s `setUp`/`tearDown` fixtures.
```
from pathlib import Path
import tempfile
import shutil
class TestsWithTempDir(unittest.TestCase):
"""
This class is for tests that need to automatically remove a temp dir at the end of the test
regardless of its success or failure.
If no `tmp_dir` is passed, a unique temp dir is created; if one is passed, that dir is used instead.
In either case that path is created and `self.tmp_dir` is set to the path that was used.
Example 1: Let the system choose the path
class ExamplesTests(TestsWithTempDir):
def test_run_something(self):
print(f"{self.tmp_dir} will be removed at the end of the test")
Example 2: Use the path I supply
class ExamplesTests(TestsWithTempDir):
def __init__(self, *args, **kwargs):
super().__init__(*args, tmp_dir="./foo/bar", **kwargs)
def test_run_something(self):
print(f"{self.tmp_dir} will be removed at the end of the test")
"""
def __init__(self, *args, tmp_dir=None, **kwargs):
super().__init__(*args, **kwargs)
self.tmp_dir = tmp_dir
self.tmp_dir_obj = None
def setUp(self):
if self.tmp_dir:
Path(self.tmp_dir).mkdir(parents=True, exist_ok=True)
else:
self.tmp_dir_obj = tempfile.TemporaryDirectory()
self.tmp_dir = self.tmp_dir_obj.name
def tearDown(self):
if self.tmp_dir_obj:
self.tmp_dir_obj.cleanup()  # explicit, deterministic removal (del only drops the reference)
else:
shutil.rmtree(self.tmp_dir, ignore_errors=True)
```
Pros:
- moves the cleaning up responsibility away from the test, leaving the test focused to just what it tests
- very flexible - can handle custom and random paths
- debug should be relatively easy - just need to add another option or a method to not tear-down (I haven't implemented it yet)
Cons:
- only supports one tmp dir per test - won't work if multiple executions happen in the same test
- the action is far removed from the code and could be hard to see - I'm especially concerned about running `shutil.rmtree` at a distance; it'd be easy to make the mistake of passing `/tmp/foo` instead of `./tmp/foo`, or worse. I'd rather not use `shutil.rmtree` at all unless it's right there where the developer can see what they are removing.
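If we do end up running `shutil.rmtree` at a distance, a small guard along these lines (rough sketch, name hypothetical) would at least confine deletions to the current checkout:
```python
from pathlib import Path

def assert_removable(path):
    # Refuse anything that isn't strictly inside the current working dir,
    # so a typo like `/tmp/foo` instead of `./tmp/foo` can't touch the system.
    resolved = Path(path).resolve()
    cwd = Path.cwd().resolve()
    if cwd not in resolved.parents:
        raise ValueError(f"refusing to remove {resolved}: not under {cwd}")
    return resolved
```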
-----
After contemplating these different solutions, I feel that locality is more important than behind-the-scenes magic, so the best solution seems to be Idea 1.5 - i.e. a custom context manager that makes debugging easy, built on top of `tempfile.TemporaryDirectory`, and that also supports a hardcoded tmp path.
Please, let me know if any of these resonate with you and then I can code a PR that can be seen in action.
Thank you!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6471/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6470 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6470/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6470/comments | https://api.github.com/repos/huggingface/transformers/issues/6470/events | https://github.com/huggingface/transformers/pull/6470 | 678,712,392 | MDExOlB1bGxSZXF1ZXN0NDY3NjE1MTUx | 6,470 | Generation doc | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6470?src=pr&el=h1) Report\n> Merging [#6470](https://codecov.io/gh/huggingface/transformers/pull/6470?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/05810cd80a5ca83065e0dbe5335c030c4a435ddb&el=desc) will **decrease** coverage by `0.17%`.\n> The diff coverage is `96.73%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6470?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6470 +/- ##\n==========================================\n- Coverage 80.55% 80.37% -0.18% \n==========================================\n Files 153 156 +3 \n Lines 28001 28058 +57 \n==========================================\n- Hits 22556 22552 -4 \n- Misses 5445 5506 +61 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6470?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6470/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.87% <ø> (-0.36%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6470/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <ø> (ø)` | |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6470/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <ø> (ø)` | |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6470/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100.00% <ø> (+4.05%)` | :arrow_up: |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6470/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `91.66% <87.50%> (+0.64%)` | :arrow_up: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6470/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `95.31% <95.31%> (ø)` | |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6470/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.68% <97.87%> (+0.71%)` | :arrow_up: |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6470/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.28% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6470/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.47% <100.00%> (+0.14%)` | :arrow_up: |\n| [src/transformers/configuration\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6470/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX21iYXJ0LnB5) | `100.00% <100.00%> (ø)` | |\n| ... and [11 more](https://codecov.io/gh/huggingface/transformers/pull/6470/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6470?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6470?src=pr&el=footer). Last update [05810cd...d9cbc03](https://codecov.io/gh/huggingface/transformers/pull/6470?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | COLLABORATOR | null | Add documentation (and clean docstrings) of `GenerationMixin` and `TFGenerationMixin`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6470/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6470",
"html_url": "https://github.com/huggingface/transformers/pull/6470",
"diff_url": "https://github.com/huggingface/transformers/pull/6470.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6470.patch",
"merged_at": 1597412800000
} |
https://api.github.com/repos/huggingface/transformers/issues/6469 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6469/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6469/comments | https://api.github.com/repos/huggingface/transformers/issues/6469/events | https://github.com/huggingface/transformers/pull/6469 | 678,663,856 | MDExOlB1bGxSZXF1ZXN0NDY3NTc0Mzky | 6,469 | Fix typo | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6469?src=pr&el=h1) Report\n> Merging [#6469](https://codecov.io/gh/huggingface/transformers/pull/6469?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6e0b1dc8954b87c18f77a82000e81e02683b8eb1&el=desc) will **increase** coverage by `0.76%`.\n> The diff coverage is `87.43%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6469?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6469 +/- ##\n==========================================\n+ Coverage 79.77% 80.53% +0.76% \n==========================================\n Files 148 153 +5 \n Lines 27214 28001 +787 \n==========================================\n+ Hits 21710 22552 +842 \n+ Misses 5504 5449 -55 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6469?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | |\n| [src/transformers/data/test\\_generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Rlc3RfZ2VuZXJhdGlvbl91dGlscy5weQ==) | `0.00% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `90.00% <ø> (-0.91%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.35% <ø> (ø)` | |\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.25% <0.00%> (-0.13%)` | :arrow_down: |\n| [src/transformers/testing\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `51.92% <28.57%> (-20.81%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <37.50%> (-0.18%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <50.00%> (+1.79%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `90.90% <52.94%> (-5.68%)` | :arrow_down: |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `91.02% <66.66%> (-1.19%)` | :arrow_down: |\n| ... and [61 more](https://codecov.io/gh/huggingface/transformers/pull/6469/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6469?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6469?src=pr&el=footer). Last update [7bc0056...1e75c22](https://codecov.io/gh/huggingface/transformers/pull/6469?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks!"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6469/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6469",
"html_url": "https://github.com/huggingface/transformers/pull/6469",
"diff_url": "https://github.com/huggingface/transformers/pull/6469.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6469.patch",
"merged_at": 1597345268000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6468 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6468/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6468/comments | https://api.github.com/repos/huggingface/transformers/issues/6468/events | https://github.com/huggingface/transformers/issues/6468 | 678,647,132 | MDU6SXNzdWU2Nzg2NDcxMzI= | 6,468 | convert_graph_to_onnx not working as expected. | {
"login": "Zhen-hao",
"id": 10957195,
"node_id": "MDQ6VXNlcjEwOTU3MTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/10957195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zhen-hao",
"html_url": "https://github.com/Zhen-hao",
"followers_url": "https://api.github.com/users/Zhen-hao/followers",
"following_url": "https://api.github.com/users/Zhen-hao/following{/other_user}",
"gists_url": "https://api.github.com/users/Zhen-hao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zhen-hao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zhen-hao/subscriptions",
"organizations_url": "https://api.github.com/users/Zhen-hao/orgs",
"repos_url": "https://api.github.com/users/Zhen-hao/repos",
"events_url": "https://api.github.com/users/Zhen-hao/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zhen-hao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"my model is \r\n```python\r\nclass TFBertForMultiClassification(TFBertPreTrainedModel):\r\n '''BERT Model class for multi-label classification using a softmax output layer '''\r\n\r\n def __init__(self, config, *inputs, **kwargs):\r\n super(TFBertForMultiClassification, self).__init__(config, *inputs, **kwargs)\r\n self.num_labels = config.num_labels\r\n self.bert = TFBertMainLayer(config, name='bert')\r\n self.bert.trainable = False\r\n self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)\r\n self.classifier = tf.keras.layers.Dense(config.num_labels,\r\n kernel_initializer=get_initializer(config.initializer_range),\r\n name='classifier',\r\n activation='sigmoid')\r\n self.config = config\r\n \r\n def get_config(self):\r\n return self.config\r\n\r\n def call(self, inputs, **kwargs):\r\n outputs = self.bert(inputs, **kwargs)\r\n pooled_output = outputs[1]\r\n pooled_output = self.dropout(pooled_output, training=kwargs.get('training', False))\r\n logits = self.classifier(pooled_output)\r\n logits = tf.keras.backend.expand_dims(logits, axis=-1)\r\n outputs = (logits,) + outputs[2:] # add hidden states and attention if they are here\r\n return outputs # logits, (hidden_states), (attentions)\r\n```",
"created a smaller example to reproduce the problem.\r\nhttps://github.com/huggingface/transformers/issues/6503"
] | 1,597 | 1,597 | 1,597 | NONE | null | # ❓ Questions & Help
I'm not sure if this is a bug.
when running
```python
from transformers.convert_graph_to_onnx import convert
convert(framework="tf", model = my_fine_tuned_bert_model, output="onnx-fine-tuned/model.onnx", opset=11, tokenizer=tokenizer)
```
I got the following log/output
```
ONNX opset version set to: 11
Loading pipeline (model: <__main__.TFBertForMultiClassification object at 0x7f2c37ba9b50>, tokenizer: <transformers.tokenization_bert.BertTokenizerFast object at 0x7f2c37ba9ad0>)
Creating folder onnx-fine-tuned
/!\ Please note TensorFlow doesn't support exporting model > 2Gb /!\
Using framework TensorFlow: 2.1.0, keras2onnx: 1.7.0
Found input input_ids with shape: {0: 'batch', 1: 'sequence'}
Found input token_type_ids with shape: {0: 'batch', 1: 'sequence'}
Found input attention_mask with shape: {0: 'batch', 1: 'sequence'}
Found output output_0 with shape: {0: 'batch'}
WARNING:tensorflow:AutoGraph could not transform <bound method TFBertForMultiClassification.call of <__main__.TFBertForMultiClassification object at 0x7f2c37ba9b50>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: module 'gast' has no attribute 'Num'
WARNING: AutoGraph could not transform <bound method TFBertForMultiClassification.call of <__main__.TFBertForMultiClassification object at 0x7f2c37ba9b50>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: module 'gast' has no attribute 'Num'
WARNING:tensorflow:AutoGraph could not transform <bound method TFBertMainLayer.call of <transformers.modeling_tf_bert.TFBertMainLayer object at 0x7f2c3dc34910>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: module 'gast' has no attribute 'Num'
WARNING: AutoGraph could not transform <bound method TFBertMainLayer.call of <transformers.modeling_tf_bert.TFBertMainLayer object at 0x7f2c3dc34910>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: module 'gast' has no attribute 'Num'
WARNING:tensorflow:AutoGraph could not transform <bound method TFBertSelfOutput.call of <transformers.modeling_tf_bert.TFBertSelfOutput object at 0x7f2c371ffe90>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Bad argument number for Name: 3, expecting 4
[... the same warning repeats, each time printed twice (once as `WARNING:tensorflow:` and once as plain `WARNING:`), for every TFBertSelfOutput, TFBertIntermediate, TFBertOutput and TFBertPooler layer in the model ...]
tf executing eager_mode: True
tf.keras model eager_mode: False
The ONNX operator number change on the optimization: 2579 -> 1674
```
Should I ignore these warnings?
The shape of the exported ONNX model is:
```
graph_name: tf_bert_for_multi_classification
domain: onnxmltools
description:
input 0: "attention_mask" ["N", 7] Int32
input 1: "input_ids" ["N", 7] Int32
input 2: "token_type_ids" ["N", 7] Int32
output 0: "output_1" ["N", 4404, 1] Float
```
I don't think that's correct. Where do "N" and 7 come from?
When I try to run the model on this input:
```
{'input_ids': array([ 101, 146, 1169, 1631, 1103, 3974, 117, 1169, 1128, 136, 102]),
'token_type_ids': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
'attention_mask': array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])}
```
I get this error:
```
>>> results = session.run(None, inputs_onnx)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/nix/store/xws61xnjc03fjiwfh7ci5cwgg1chmp3l-python3.7-onnxruntime-1.4.0/lib/python3.7/site-packages/onnxruntime/capi/session.py", line 110, in run
return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Unexpected input data type. Actual: (N11onnxruntime17PrimitiveDataTypeIlEE) , expected: (N11onnxruntime17PrimitiveDataTypeIiEE)
```
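If I read the error right, the runtime presumably wants `int32` tensors (the `np.array(...)` calls above default to `int64`) and rank-2 inputs, since the graph declares shapes like `["N", 7]`. A hedged, untested sketch of that cast follows; note the exported graph also seems to have frozen the sequence length to 7 (presumably the length of the sample input used at export time), so the 11-token input above would likely still fail the shape check even after casting:
```python
import numpy as np

# Assumption based on the error message: ONNX Runtime expects int32 while the
# arrays are int64, and the graph inputs are rank-2 ("N" x 7), so add a batch
# dimension with atleast_2d before running the session.
inputs_onnx = {name: np.atleast_2d(arr).astype(np.int32) for name, arr in inputs_onnx.items()}
results = session.run(None, inputs_onnx)
```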
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6468/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6467 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6467/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6467/comments | https://api.github.com/repos/huggingface/transformers/issues/6467/events | https://github.com/huggingface/transformers/issues/6467 | 678,615,996 | MDU6SXNzdWU2Nzg2MTU5OTY= | 6,467 | Error: 'GPT2Model' object has no attribute '_step' when converting tf-based checkpoint into pytorch | {
"login": "publicstaticvo",
"id": 42710459,
"node_id": "MDQ6VXNlcjQyNzEwNDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/42710459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/publicstaticvo",
"html_url": "https://github.com/publicstaticvo",
"followers_url": "https://api.github.com/users/publicstaticvo/followers",
"following_url": "https://api.github.com/users/publicstaticvo/following{/other_user}",
"gists_url": "https://api.github.com/users/publicstaticvo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/publicstaticvo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/publicstaticvo/subscriptions",
"organizations_url": "https://api.github.com/users/publicstaticvo/orgs",
"repos_url": "https://api.github.com/users/publicstaticvo/repos",
"events_url": "https://api.github.com/users/publicstaticvo/events{/privacy}",
"received_events_url": "https://api.github.com/users/publicstaticvo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"same problem",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,604 | 1,604 | NONE | null | I'm trying to convert a TensorFlow-based GPT-2 checkpoint into PyTorch using `convert_gpt2_checkpoint_to_pytorch`, and I get errors like:
```
INFO:transformers.modeling_gpt2:Converting TensorFlow checkpoint from /content/model.ckpt-220000
INFO:transformers.modeling_gpt2:Loading TF weight global_step with shape []
INFO:transformers.modeling_gpt2:Loading TF weight newslm/embeddings/LayerNorm_embed_norm/beta with shape [1536]
INFO:transformers.modeling_gpt2:Loading TF weight newslm/embeddings/LayerNorm_embed_norm/beta/adafactor_v with shape [1536]
INFO:transformers.modeling_gpt2:Loading TF weight newslm/embeddings/LayerNorm_embed_norm/gamma with shape [1536]
INFO:transformers.modeling_gpt2:Loading TF weight newslm/embeddings/LayerNorm_embed_norm/gamma/adafactor_v with shape [1536]
INFO:transformers.modeling_gpt2:Loading TF weight newslm/embeddings/pos_embed with shape [1024, 1536]
INFO:transformers.modeling_gpt2:Loading TF weight newslm/embeddings/pos_embed/adafactor_vc with shape [1536]
INFO:transformers.modeling_gpt2:Loading TF weight newslm/embeddings/pos_embed/adafactor_vr with shape [1024]
INFO:transformers.modeling_gpt2:Loading TF weight newslm/embeddings/word_embed with shape [8021, 1536]
INFO:transformers.modeling_gpt2:Loading TF weight newslm/embeddings/word_embed/adafactor_vc with shape [1536]
INFO:transformers.modeling_gpt2:Loading TF weight newslm/embeddings/word_embed/adafactor_vr with shape [8021]
INFO:transformers.modeling_gpt2:Loading TF weight newslm/layer00/LayerNorm_mlp_ln0/beta with shape [1536]
INFO:transformers.modeling_gpt2:Loading TF weight newslm/layer00/LayerNorm_mlp_ln0/beta/adafactor_v with shape [1536]
INFO:transformers.modeling_gpt2:Loading TF weight newslm/layer00/LayerNorm_mlp_ln0/gamma with shape [1536]
INFO:transformers.modeling_gpt2:Loading TF weight newslm/layer00/LayerNorm_mlp_ln0/gamma/adafactor_v with shape [1536]
...
INFO:transformers.modeling_gpt2:Loading TF weight newslm/layer47/value_layer/bias/adafactor_v with shape [1536]
INFO:transformers.modeling_gpt2:Loading TF weight newslm/layer47/value_layer/kernel with shape [1536, 1536]
INFO:transformers.modeling_gpt2:Loading TF weight newslm/layer47/value_layer/kernel/adafactor_vc with shape [1536]
INFO:transformers.modeling_gpt2:Loading TF weight newslm/layer47/value_layer/kernel/adafactor_vr with shape [1536]
---------------------------------------------------------------------------
ModuleAttributeError Traceback (most recent call last)
<ipython-input-38-45b704eacf86> in <module>()
1 from transformers.convert_gpt2_original_tf_checkpoint_to_pytorch import convert_gpt2_checkpoint_to_pytorch
----> 2 convert_gpt2_checkpoint_to_pytorch('./model.ckpt-220000', '', 'pytorch')
2 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
770 return modules[name]
771 raise ModuleAttributeError("'{}' object has no attribute '{}'".format(
--> 772 type(self).__name__, name))
773
774 def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:
ModuleAttributeError: 'GPT2Model' object has no attribute '_step'
```
It seems that the program cannot convert the `global_step` variable into PyTorch. Is there any solution to this?
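A guess at a workaround (not verified): `global_step` has shape `[]` in the log above and the `adafactor_*` tensors are optimizer slots, so they look like training bookkeeping rather than model weights; filtering them out of the parallel `names`/`arrays` lists that the conversion function builds might get past this error. Note, though, that the remaining variables are scoped `newslm/...`, so the stock GPT-2 name mapping may not match them even then.
```python
# Hypothetical filter, assuming `names` and `arrays` are the parallel lists
# load_tf_weights_in_gpt2 builds from tf.train.list_variables: drop optimizer
# bookkeeping variables that have no counterpart in the PyTorch model.
skip_markers = ("global_step", "adafactor")
kept = [(n, a) for n, a in zip(names, arrays) if not any(m in n for m in skip_markers)]
names, arrays = (list(t) for t in zip(*kept))
```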
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6467/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6466 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6466/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6466/comments | https://api.github.com/repos/huggingface/transformers/issues/6466/events | https://github.com/huggingface/transformers/pull/6466 | 678,611,430 | MDExOlB1bGxSZXF1ZXN0NDY3NTMwNzQy | 6,466 | add custom datasets tutorial | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6466?src=pr&el=h1) Report\n> Merging [#6466](https://codecov.io/gh/huggingface/transformers/pull/6466?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6e0b1dc8954b87c18f77a82000e81e02683b8eb1&el=desc) will **decrease** coverage by `1.34%`.\n> The diff coverage is `83.53%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6466?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6466 +/- ##\n==========================================\n- Coverage 79.77% 78.42% -1.35% \n==========================================\n Files 148 153 +5 \n Lines 27214 28001 +787 \n==========================================\n+ Hits 21710 21960 +250 \n- Misses 5504 6041 +537 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6466?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6466/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | |\n| [src/transformers/data/test\\_generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6466/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Rlc3RfZ2VuZXJhdGlvbl91dGlscy5weQ==) | `0.00% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6466/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `90.00% <ø> (-0.91%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6466/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.15% <ø> (-0.20%)` | :arrow_down: |\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6466/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.25% <0.00%> (-0.13%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6466/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.63% <4.00%> (-54.16%)` | :arrow_down: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6466/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <7.14%> (-70.00%)` | :arrow_down: |\n| [src/transformers/testing\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6466/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `51.92% <28.57%> (-20.81%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6466/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <37.50%> (-0.18%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6466/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <50.00%> (+1.79%)` | :arrow_up: |\n| ... and [72 more](https://codecov.io/gh/huggingface/transformers/pull/6466/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6466?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6466?src=pr&el=footer). 
Last update [7bc0056...31ea640](https://codecov.io/gh/huggingface/transformers/pull/6466?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Don't mind the failing test, it's been fixed on `master`."
] | 1,597 | 1,598 | 1,597 | CONTRIBUTOR | null | A tutorial showing examples for working with custom datasets on several tasks. Goals:
1. Keep it general. The point is to show people how to use their own datasets, so don't use any processors or utilities that are dataset-specific.
2. Show several tasks with different data formats. I include sequence classification with IMDb, token classification with W-NUT NER, and question answering with SQuAD 2.0. Also link to the "How to train a language model" blog post.
3. Prepare the data in a way that works with Trainer, TFTrainer, native PyTorch, and native TensorFlow with Keras's `fit` method. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6466/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6466/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6466",
"html_url": "https://github.com/huggingface/transformers/pull/6466",
"diff_url": "https://github.com/huggingface/transformers/pull/6466.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6466.patch",
"merged_at": 1597670135000
} |
https://api.github.com/repos/huggingface/transformers/issues/6465 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6465/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6465/comments | https://api.github.com/repos/huggingface/transformers/issues/6465/events | https://github.com/huggingface/transformers/issues/6465 | 678,545,911 | MDU6SXNzdWU2Nzg1NDU5MTE= | 6,465 | Longformer convert error | {
"login": "Maybewuss",
"id": 38156589,
"node_id": "MDQ6VXNlcjM4MTU2NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/38156589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Maybewuss",
"html_url": "https://github.com/Maybewuss",
"followers_url": "https://api.github.com/users/Maybewuss/followers",
"following_url": "https://api.github.com/users/Maybewuss/following{/other_user}",
"gists_url": "https://api.github.com/users/Maybewuss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Maybewuss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Maybewuss/subscriptions",
"organizations_url": "https://api.github.com/users/Maybewuss/orgs",
"repos_url": "https://api.github.com/users/Maybewuss/repos",
"events_url": "https://api.github.com/users/Maybewuss/events{/privacy}",
"received_events_url": "https://api.github.com/users/Maybewuss/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Error(s) in loading state_dict for RobertaLongForMaskedLM:\r\nsize mismatch for embeddings.position_ids: copying a param with shape torch.Size([1, 512]) from checkpoint, the shape in current model is torch.Size([1, 4096]).",
"Hey @Maybewuss,\r\n\r\nThis is a community notebook, so we don't really plan on maintaining this notebook with current library changes. \r\nRegarding your question I would suggest to post it on https://discuss.huggingface.co/ and/or to contact the author @ibeltagy - maybe he can help you.\r\n\r\nBefore that it would be nice if you can create a notebook which can be used to re-create your error (replacing RoBERTA with BERT in the above notebook)",
"@patrickvonplaten Is there a way of converting existing 'short' models to Longformer? The notebook above (from allennlp) seem not to be useful since you can't automatically convert their 'long' model to Longformer Huggingface's class. The only way I see is to manually remap nodes.",
"Yeah, it is not straight-forward to convert *any* HF model to its \"long\" version. You will need to write some special code for this yourself I think. The notebook should work more as an example for how it can be done with a model like Roberta",
"I faced the same error with roberta. Size mismatch was in the position embedding and position ids. Adding the following lines to `create_long_model` helped:\r\n```{python}\r\nmodel.roberta.embeddings.position_embeddings.weight.data = new_pos_embed # add after this line\r\nmodel.roberta.embeddings.position_embeddings.num_embeddings = len(new_pos_embed.data)\r\n# first, check that model.roberta.embeddings.position_embeddings.weight.data.shape is correct — has to be 4096 (default) of your desired length\r\nmodel.roberta.embeddings.position_ids = torch.arange(\r\n 0, model.roberta.embeddings.position_embeddings.num_embeddings\r\n)[None]\r\n```\r\n\r\nFor some reason number of embeddings didn't change after adding new weight tensor, so we fix it and also add new position ids.\r\nI use torch==1.6.0 and transformers==3.4.0",
"@NadiaRom Been trying this implementation, but the forward pass in `RobertaLongSelfAttention` gets too many inputs in the forward pass. \r\n\r\n```python\r\nclass RobertaLongSelfAttention(LongformerSelfAttention):\r\n def forward(\r\n self,\r\n hidden_states,\r\n attention_mask=None,\r\n head_mask=None,\r\n encoder_hidden_states=None,\r\n encoder_attention_mask=None,\r\n output_attentions=False,\r\n ):\r\n return super().forward(hidden_states, attention_mask=attention_mask, output_attentions=output_attentions)\r\n\r\n```\r\n\r\nAnd doesnt work with the current implementation in the transformer library [of the forward pass](https://github.com/huggingface/transformers/blob/c89bdfbe720bc8f41c7dc6db5473a2cb0955f224/src/transformers/models/longformer/modeling_longformer.py#L415)\r\n\r\nAny thought on how to solve this and use the conversion script in the current transformers release (3.5.1)?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"@MarkusSagen, were you able to solve the `forward()` issue? ",
"@versae I only looked at it for a couple of hours and decided it was easier to roll back to an earlier version of transformers. If anyone implements a fix, I would be very interested to hear 😊👌",
"@MarkusSagen, [this PR makes it work for 4.2.0](https://github.com/allenai/longformer/pull/166/), and with a couple of changes it also works for 4.9.0."
] | 1,597 | 1,628 | 1,614 | NONE | null | When i install transformers from source and convert bert to "long vesion", [failed.](https://colab.research.google.com/github/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6465/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6465/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6464 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6464/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6464/comments | https://api.github.com/repos/huggingface/transformers/issues/6464/events | https://github.com/huggingface/transformers/pull/6464 | 678,517,773 | MDExOlB1bGxSZXF1ZXN0NDY3NDUxNzc5 | 6,464 | [BartTokenizerFast] add BartTokenizerFast in AutoTokenizer | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6464?src=pr&el=h1) Report\n> Merging [#6464](https://codecov.io/gh/huggingface/transformers/pull/6464?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/54c687e97c92efe6eba9e537bd98b47d9005a279&el=desc) will **decrease** coverage by `2.57%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6464?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6464 +/- ##\n==========================================\n- Coverage 79.91% 77.33% -2.58% \n==========================================\n Files 153 153 \n Lines 28005 28005 \n==========================================\n- Hits 22379 21657 -722 \n- Misses 5626 6348 +722 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6464?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.72% <100.00%> (ø)` | |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `34.11% <0.00%> (-63.30%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.26% <0.00%> (-53.69%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.16% <0.00%> (-14.46%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-6.16%)` | :arrow_down: |\n| [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `82.71% <0.00%> (-2.47%)` | :arrow_down: |\n| ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/6464/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6464?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6464?src=pr&el=footer). Last update [a442f87...c1b241e](https://codecov.io/gh/huggingface/transformers/pull/6464?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | MEMBER | null | This PR adds BartTokenizerFast in AutoTokenizer.
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6464/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6464",
"html_url": "https://github.com/huggingface/transformers/pull/6464",
"diff_url": "https://github.com/huggingface/transformers/pull/6464.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6464.patch",
"merged_at": 1597334891000
} |
https://api.github.com/repos/huggingface/transformers/issues/6463 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6463/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6463/comments | https://api.github.com/repos/huggingface/transformers/issues/6463/events | https://github.com/huggingface/transformers/pull/6463 | 678,513,372 | MDExOlB1bGxSZXF1ZXN0NDY3NDQ3OTkz | 6,463 | [LongformerTokenizerFast] add LongformerTokenizerFast in AutoTokenizer | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6463?src=pr&el=h1) Report\n> Merging [#6463](https://codecov.io/gh/huggingface/transformers/pull/6463?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/54c687e97c92efe6eba9e537bd98b47d9005a279&el=desc) will **increase** coverage by `0.18%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6463?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6463 +/- ##\n==========================================\n+ Coverage 79.91% 80.09% +0.18% \n==========================================\n Files 153 153 \n Lines 28005 28005 \n==========================================\n+ Hits 22379 22431 +52 \n+ Misses 5626 5574 -52 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6463?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6463/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.72% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6463/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.20%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6463/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6463/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6463/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.58% <0.00%> (+27.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6463?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6463?src=pr&el=footer). Last update [54c687e...7f1278b](https://codecov.io/gh/huggingface/transformers/pull/6463?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | MEMBER | null | This PR adds LongformerTokenizerFast in AutoTokenizer. Fixes #6459
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6463/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6463",
"html_url": "https://github.com/huggingface/transformers/pull/6463",
"diff_url": "https://github.com/huggingface/transformers/pull/6463.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6463.patch",
"merged_at": 1597334804000
} |
https://api.github.com/repos/huggingface/transformers/issues/6462 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6462/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6462/comments | https://api.github.com/repos/huggingface/transformers/issues/6462/events | https://github.com/huggingface/transformers/pull/6462 | 678,408,339 | MDExOlB1bGxSZXF1ZXN0NDY3MzYxMjcw | 6,462 | minor typo fix in modeling_utils | {
"login": "prajjwal1",
"id": 24690051,
"node_id": "MDQ6VXNlcjI0NjkwMDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/24690051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prajjwal1",
"html_url": "https://github.com/prajjwal1",
"followers_url": "https://api.github.com/users/prajjwal1/followers",
"following_url": "https://api.github.com/users/prajjwal1/following{/other_user}",
"gists_url": "https://api.github.com/users/prajjwal1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prajjwal1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prajjwal1/subscriptions",
"organizations_url": "https://api.github.com/users/prajjwal1/orgs",
"repos_url": "https://api.github.com/users/prajjwal1/repos",
"events_url": "https://api.github.com/users/prajjwal1/events{/privacy}",
"received_events_url": "https://api.github.com/users/prajjwal1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6462/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6462",
"html_url": "https://github.com/huggingface/transformers/pull/6462",
"diff_url": "https://github.com/huggingface/transformers/pull/6462.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6462.patch",
"merged_at": 1597325809000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6461 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6461/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6461/comments | https://api.github.com/repos/huggingface/transformers/issues/6461/events | https://github.com/huggingface/transformers/pull/6461 | 678,405,951 | MDExOlB1bGxSZXF1ZXN0NDY3MzU5MzEz | 6,461 | Sort unique_no_split_tokens to make it deterministic | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6461?src=pr&el=h1) Report\n> Merging [#6461](https://codecov.io/gh/huggingface/transformers/pull/6461?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9d94aecd516c7540a94b9d781ef28d7375a796bc&el=desc) will **decrease** coverage by `0.47%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6461?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6461 +/- ##\n==========================================\n- Coverage 80.09% 79.62% -0.48% \n==========================================\n Files 153 153 \n Lines 28005 28005 \n==========================================\n- Hits 22430 22298 -132 \n- Misses 5575 5707 +132 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6461?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6461/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6461/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-70.95%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6461/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6461/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6461/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.72% <0.00%> (+22.87%)` | :arrow_up: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6461/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `87.50% <0.00%> (+58.65%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6461?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6461?src=pr&el=footer). Last update [9d94aec...dfb7549](https://codecov.io/gh/huggingface/transformers/pull/6461?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I've actually switched in the last version of transformers from a `set` to a `list` for the same reason (not deterministic for `nlp`). Are you sure this really solve the problem @lhoestq ?\r\nAlso regarding backward compatibility, I'm fine with changing this from a list to a set @sgugger ",
"Maybe we should rather have a sorted list?",
"`sorted` should solves the issue.\r\n\r\nI just tested and a `set` doesn't solve it actually. I'll change to `sorted`, thanks @thomwolf ",
"This is such an important use-case (and potential source of regression) for us that we may want to add a test on that in `nlp` or `transformers` in a not too far future.",
"Yes definitely. Not sure how to test consistency across sessions in the CI though.\r\nI guess we could have tests with hardcoded hashes for some tokenizers but I'm not sure that's ideal.\r\n\r\nOr maybe there's a way to do two CI jobs in a row: one to generate the hashes in a first session, and one to verify that they're the same in another session."
] | 1,597 | 1,597 | 1,597 | MEMBER | null | The `unique_no_split_tokens` attribute of tokenizers is not deterministic, and it makes the hashing in the `nlp` lib return different hashes for the same tokenizer over different sessions.
To fix that I changed its type to a `set` instead of a `list`.
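A minimal sketch of the ordering behavior at stake (per the review discussion below, the final fix uses `sorted` rather than a raw `set`; this snippet is my illustration, not the PR diff):

```python
tokens = {"[SEP]", "[CLS]", "[MASK]", "[PAD]", "[UNK]"}

# A set (or a list built from one) has no stable order across sessions;
# sorting gives a deterministic, hash-friendly ordering.
unique_no_split_tokens = sorted(tokens)
print(unique_no_split_tokens)  # always ['[CLS]', '[MASK]', '[PAD]', '[SEP]', '[UNK]']
```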
Fix #6460 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6461/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6461/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6461",
"html_url": "https://github.com/huggingface/transformers/pull/6461",
"diff_url": "https://github.com/huggingface/transformers/pull/6461.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6461.patch",
"merged_at": 1597394218000
} |
https://api.github.com/repos/huggingface/transformers/issues/6460 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6460/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6460/comments | https://api.github.com/repos/huggingface/transformers/issues/6460/events | https://github.com/huggingface/transformers/issues/6460 | 678,405,042 | MDU6SXNzdWU2Nzg0MDUwNDI= | 6,460 | Hashing a tokenizer using the 🤗 nlp lib is not deterministic | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
},
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,597 | 1,597 | 1,597 | MEMBER | null | In the `nlp` library it is common to use a tokenizer on a dataset.
The library takes care of caching the results, so that if you run the tokenization twice, it will reuse the previous results.
To make the caching work, we compute a hash of the tokenizer.
However the `unique_no_split_tokens` attribute of tokenizers is not deterministic, and it makes the hashing return different hashes for the same tokenizer over different sessions.
`unique_no_split_tokens` can be a list like `['[CLS]', '[MASK]', '[PAD]', '[SEP]', '[UNK]']` for example. But it happens that re-loading a tokenizer in another session shuffles the tokens in the list.
For example this code doesn't always return the same output over different sessions:
```python
from transformers import AutoTokenizer
model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
print(tokenizer.unique_no_split_tokens)
```
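A minimal caller-side workaround sketch (my assumption, not from the issue): sort the attribute before feeding it into any hash:

```python
from transformers import AutoTokenizer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# A sorted copy is session-independent, so a hash computed from it is stable.
stable_view = sorted(tokenizer.unique_no_split_tokens)
print(stable_view)
```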
Reproduce on google colab: https://colab.research.google.com/drive/1nyskaLavcTCkXibZBlYX71bkG476uSzz?usp=sharing | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6460/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6459 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6459/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6459/comments | https://api.github.com/repos/huggingface/transformers/issues/6459/events | https://github.com/huggingface/transformers/issues/6459 | 678,340,367 | MDU6SXNzdWU2NzgzNDAzNjc= | 6,459 | Autotokenizer not returning instance of LongformerTokenizerFast | {
"login": "pratikdk",
"id": 20542313,
"node_id": "MDQ6VXNlcjIwNTQyMzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/20542313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pratikdk",
"html_url": "https://github.com/pratikdk",
"followers_url": "https://api.github.com/users/pratikdk/followers",
"following_url": "https://api.github.com/users/pratikdk/following{/other_user}",
"gists_url": "https://api.github.com/users/pratikdk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pratikdk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pratikdk/subscriptions",
"organizations_url": "https://api.github.com/users/pratikdk/orgs",
"repos_url": "https://api.github.com/users/pratikdk/repos",
"events_url": "https://api.github.com/users/pratikdk/events{/privacy}",
"received_events_url": "https://api.github.com/users/pratikdk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @pratikdk thank you for reporting this! Just made a PR, will be fixed soon. Till then you can use `LongformerTokenizerFast` class"
] | 1,597 | 1,597 | 1,597 | NONE | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Google Colab
## Information
Model I am using: Longformer
Path: **'allenai/longformer-base-4096'** and **'allenai/longformer-large-4096'**
The problem arises when trying to load the 'Fast' version of the Longformer tokenizer using AutoTokenizer: the returned tokenizer instance is an object of LongformerTokenizer, not LongformerTokenizerFast.

I require the offset mappings for a sub task of extracting word embeddings.
## To reproduce
Just as in the screenshot, I am adding the code below to instantiate the tokenizer object:
```
longformer_tokenizer = AutoTokenizer.from_pretrained(
pretrained_model_name_or_path = 'allenai/longformer-base-4096', use_fast=True)
print(longformer_tokenizer.is_fast)
print(longformer_tokenizer)
```
Since it is not an instance of transformers.LongformerTokenizerFast, I cannot set `return_offsets_mapping=True`;
the code below throws a `NotImplementedError`:
```
longformer_encoded_dict = longformer_tokenizer.encode_plus(text=sequence_3,
add_special_tokens = True,
max_length = 75,
truncation = True,
pad_to_max_length = False,
return_token_type_ids = False,
return_attention_mask = True,
return_overflowing_tokens = False,
return_special_tokens_mask = False,
return_offsets_mapping=True)
```
**Error**
```
NotImplementedError: return_offsets_mapping is not available when using Python tokenizers. To use this feature, change your tokenizer to one deriving from transformers.PreTrainedTokenizerFast.
```
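A workaround until the fix lands, following the suggestion in the comments below: instantiate the fast class directly.

```python
from transformers import LongformerTokenizerFast

longformer_tokenizer = LongformerTokenizerFast.from_pretrained("allenai/longformer-base-4096")
enc = longformer_tokenizer.encode_plus(
    "Transformers are taking over NLP.",
    return_offsets_mapping=True,  # supported because this is a fast tokenizer
)
print(enc["offset_mapping"][:5])
```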
@mfuntowicz
@patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6459/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6458 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6458/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6458/comments | https://api.github.com/repos/huggingface/transformers/issues/6458/events | https://github.com/huggingface/transformers/issues/6458 | 678,318,571 | MDU6SXNzdWU2NzgzMTg1NzE= | 6,458 | Unknown task zero-shot-classification | {
"login": "amarlearning",
"id": 9383897,
"node_id": "MDQ6VXNlcjkzODM4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9383897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amarlearning",
"html_url": "https://github.com/amarlearning",
"followers_url": "https://api.github.com/users/amarlearning/followers",
"following_url": "https://api.github.com/users/amarlearning/following{/other_user}",
"gists_url": "https://api.github.com/users/amarlearning/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amarlearning/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amarlearning/subscriptions",
"organizations_url": "https://api.github.com/users/amarlearning/orgs",
"repos_url": "https://api.github.com/users/amarlearning/repos",
"events_url": "https://api.github.com/users/amarlearning/events{/privacy}",
"received_events_url": "https://api.github.com/users/amarlearning/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! this task is only available on the `master` branch as of now. You can install it as such: `pip install git+https://github.com/huggingface/transformers`.\r\n\r\nIt will be in the next release!",
"This is still happening on Databricks even though I re-installed the package several times today. Any thoughts?",
"@Tolga28A Can you document which exact command(s) you run on Databricks (and how)?",
"pip install git+https://github.com/huggingface/transformers\r\nfrom transformers import pipeline\r\nclassifier = pipeline('zero-shot-classification')\r\n\r\nand the output is:\r\n\r\n\r\nKeyError: \"Unknown task zero-shot-classification, available tasks are ['feature-extraction', 'sentiment-analysis', 'ner', 'question-answering', 'fill-mask', 'summarization', 'translation_en_to_fr', 'translation_en_to_de', 'translation_en_to_ro', 'text-generation']\"\r\n---------------------------------------------------------------------------\r\nKeyError Traceback (most recent call last)\r\n<command-2362828626522668> in <module>\r\n----> 1 classifier = pipeline('zero-shot-classification')\r\n\r\n/local_disk0/.ephemeral_nfs/envs/pythonEnv-a6d3a5c1-2f0b-495b-828f-f792f8695d17/lib/python3.7/site-packages/transformers/pipelines.py in pipeline(task, model, config, tokenizer, framework, **kwargs)\r\n 1819 # Retrieve the task\r\n 1820 if task not in SUPPORTED_TASKS:\r\n-> 1821 raise KeyError(\"Unknown task {}, available tasks are {}\".format(task, list(SUPPORTED_TASKS.keys())))\r\n 1822 \r\n 1823 framework = framework or get_framework(model)\r\n\r\nKeyError: \"Unknown task zero-shot-classification, available tasks are ['feature-extraction', 'sentiment-analysis', 'ner', 'question-answering', 'fill-mask', 'summarization', 'translation_en_to_fr', 'translation_en_to_de', 'translation_en_to_ro', 'text-generation']\"",
"Start from a brand new venv or uninstall transformers before re-installing?"
] | 1,597 | 1,597 | 1,597 | NONE | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Ubuntu 18
- Python version: 3.7
## To reproduce
Steps to reproduce the behavior:
1. I downloaded transformer version 3.0.2
2. From transformer, I imported pipeline
3. And from the pipeline, I was trying to load this task `zero-shot-classification` and then I got the error.
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-12-1f0825594ce1> in <module>
----> 1 classifier = pipeline("zero-shot-classification")
~/anaconda3/lib/python3.7/site-packages/transformers/pipelines.py in pipeline(task, model, config, tokenizer, framework, **kwargs)
1819 # Retrieve the task
1820 if task not in SUPPORTED_TASKS:
-> 1821 raise KeyError("Unknown task {}, available tasks are {}".format(task, list(SUPPORTED_TASKS.keys())))
1822
1823 framework = framework or get_framework(model)
KeyError: "Unknown task zero-shot-classification, available tasks are ['feature-extraction', 'sentiment-analysis', 'ner', 'question-answering', 'fill-mask', 'summarization', 'translation_en_to_fr', 'translation_en_to_de', 'translation_en_to_ro', 'text-generation']"
```
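For reference, the fix from the comments below: the task only exists on `master` at this point, so installing from source makes it available (install command shown as a comment):

```python
# pip install git+https://github.com/huggingface/transformers
from transformers import pipeline

classifier = pipeline("zero-shot-classification")
print(classifier("I love this movie", candidate_labels=["positive", "negative"]))
```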
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6458/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6458/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6457 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6457/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6457/comments | https://api.github.com/repos/huggingface/transformers/issues/6457/events | https://github.com/huggingface/transformers/pull/6457 | 678,286,517 | MDExOlB1bGxSZXF1ZXN0NDY3MjU5MzE1 | 6,457 | Add POS tagging and Phrase chunking token classification examples | {
"login": "vblagoje",
"id": 458335,
"node_id": "MDQ6VXNlcjQ1ODMzNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vblagoje",
"html_url": "https://github.com/vblagoje",
"followers_url": "https://api.github.com/users/vblagoje/followers",
"following_url": "https://api.github.com/users/vblagoje/following{/other_user}",
"gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions",
"organizations_url": "https://api.github.com/users/vblagoje/orgs",
"repos_url": "https://api.github.com/users/vblagoje/repos",
"events_url": "https://api.github.com/users/vblagoje/events{/privacy}",
"received_events_url": "https://api.github.com/users/vblagoje/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @vblagoje , thanks for adding this :+1: \r\n\r\nGermEval dataset is currently not available - it seems that they've relaunched the shared task website. This dataset removal will also affect libraries such as Flair or `nlp` so I will try to find another mirror, thanks for reporting it!\r\n\r\nFor PoS tagging it would be awesome if you could also report/output accuracy after training - just import `accuracy_score` from the `seqeval` package :)",
"Thanks for the review @stefan-it Let me know if there are any additional suggestions. Perhaps we can add appropriate URLs for the GermEval dataset and remove the chunking example if needed. ",
"This looks great, thanks! Note that there is a big rework of the examples to use the nlp library and Trainer in the pipeline. We're polishing the APIs before we start converting every script. I'll tag you when we get to this one to make sure we don't break anything.\r\n\r\nIn the meantime, could you take care of the styling issue so we can merge?",
"Ok @sgugger please do ping me and I'll make sure that all token classification examples work as expected, perhpas I can help with the transition. I am not sure why CI fails for styling, more specifically isort `ERROR: examples/token-classification/tasks.py Imports are incorrectly sorted.` It passes both on my working laptop and training machine. Could you please tell me how imports are incorrectly sorted in [tasks.py](https://github.com/vblagoje/transformers/blob/token_classification_examples/examples/token-classification/tasks.py) ?",
"It may be because of the dep you're adding to examples. It should probably be added in the `known_third_party` list [here](https://github.com/huggingface/transformers/blob/master/setup.cfg).",
"Ok @sgugger `check_code_quality` passes now, but there are other new failures. On a first look, they seem transient/unrelated to this PR? ",
"Looks flaky, re-triggered the CI"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | This PR adds POS tagging and Phrase chunking examples to token classification examples. The current example (NER) is minimally adjusted to allow users to experiment with their token classification model training easily. Although experimenting with token classification tasks other than NER is already possible for skilled developers, this PR lowers the barrier to entry even further and demonstrates HF extensibility.
The adjustments made consist of:
- extracting [TokenClassificationTask](https://github.com/vblagoje/transformers/blob/6caa8c1946fb9e3fb76fad081833805b25b182df/examples/token-classification/utils_ner.py#L69) superclass
- implementing the specific task particulars (reading of InputExample etc.) in task [subclasses](https://github.com/vblagoje/transformers/blob/6caa8c1946fb9e3fb76fad081833805b25b182df/examples/token-classification/tasks.py)
- "dynamic loading" of a task [subclass](https://github.com/vblagoje/transformers/blob/6caa8c1946fb9e3fb76fad081833805b25b182df/examples/token-classification/run_ner.py#L118) depending on the token classification task trained
I also noticed that:
- [NER dataset](https://github.com/vblagoje/transformers/blob/6caa8c1946fb9e3fb76fad081833805b25b182df/examples/token-classification/run.sh#L1) used is unavailable and should be replaced. I didn't replace it in this PR
- PL training needs to be slightly retrofitted to adjust for the latest PL's BaseTransformer master changes. I made the change to make sure my changes work for these new examples
If you think adding one rather than two token classification examples is enough (say, POS tagging), let me know and I'll remove the other. Also, please let me know if any additional adjustments are needed.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6457/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6457/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6457",
"html_url": "https://github.com/huggingface/transformers/pull/6457",
"diff_url": "https://github.com/huggingface/transformers/pull/6457.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6457.patch",
"merged_at": 1597334991000
} |
https://api.github.com/repos/huggingface/transformers/issues/6456 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6456/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6456/comments | https://api.github.com/repos/huggingface/transformers/issues/6456/events | https://github.com/huggingface/transformers/issues/6456 | 678,277,298 | MDU6SXNzdWU2NzgyNzcyOTg= | 6,456 | Open-Retrieval Question Answering (ORQA) | {
"login": "antoniolanza1996",
"id": 40452030,
"node_id": "MDQ6VXNlcjQwNDUyMDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/40452030?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antoniolanza1996",
"html_url": "https://github.com/antoniolanza1996",
"followers_url": "https://api.github.com/users/antoniolanza1996/followers",
"following_url": "https://api.github.com/users/antoniolanza1996/following{/other_user}",
"gists_url": "https://api.github.com/users/antoniolanza1996/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antoniolanza1996/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antoniolanza1996/subscriptions",
"organizations_url": "https://api.github.com/users/antoniolanza1996/orgs",
"repos_url": "https://api.github.com/users/antoniolanza1996/repos",
"events_url": "https://api.github.com/users/antoniolanza1996/events{/privacy}",
"received_events_url": "https://api.github.com/users/antoniolanza1996/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,603 | 1,603 | CONTRIBUTOR | null | # 🌟 New model addition
The Open-Retrieval Question Answering (ORQA) system was introduced in the paper https://arxiv.org/abs/1906.00300. This approach is very useful for those working on open-domain question answering.
## Open source status
* [x] the model implementation is available: All the implementation code has been released in https://github.com/google-research/language/tree/master/language/orqa
* [x] the model weights are available: `gs://orqa-data/`
* [x] who are the authors: @kentonl et al. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6456/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6456/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6455 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6455/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6455/comments | https://api.github.com/repos/huggingface/transformers/issues/6455/events | https://github.com/huggingface/transformers/issues/6455 | 678,231,060 | MDU6SXNzdWU2NzgyMzEwNjA= | 6,455 | MASS : A generalization of BERT and GPT | {
"login": "Jeevesh8",
"id": 48825663,
"node_id": "MDQ6VXNlcjQ4ODI1NjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/48825663?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jeevesh8",
"html_url": "https://github.com/Jeevesh8",
"followers_url": "https://api.github.com/users/Jeevesh8/followers",
"following_url": "https://api.github.com/users/Jeevesh8/following{/other_user}",
"gists_url": "https://api.github.com/users/Jeevesh8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jeevesh8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jeevesh8/subscriptions",
"organizations_url": "https://api.github.com/users/Jeevesh8/orgs",
"repos_url": "https://api.github.com/users/Jeevesh8/repos",
"events_url": "https://api.github.com/users/Jeevesh8/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jeevesh8/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Can also try MP-Net of theirs next . ",
"Sorry, just saw the request for MP-Net [here](https://github.com/huggingface/transformers/issues/4308) . Seems I was behind. So, shall I close this issue, or does anyone still want separate MASS model here ? @RyanHuangNLP",
"@RyanHuangNLP @StillKeepTry , @tobyoup , @xutaatmicrosoftdotcom ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,606 | 1,606 | CONTRIBUTOR | null | # 🌟 New model addition
## Model description
MASS is a novel pre-training method for sequence to sequence based language generation tasks. It randomly masks a sentence fragment in the encoder, and then predicts it in the decoder. In this way, MASS can jointly train the encoder and decoder to develop the capability of representation extraction and language modeling. This pre-training is very helpful when the encoder and decoder are shared between multiple languages.
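A minimal sketch of the masking scheme described above (my illustration, not the authors' code): a contiguous fragment is masked on the encoder side and becomes the decoder's prediction target.

```python
import random

def mass_mask(tokens, mask_token="[MASK]", ratio=0.5):
    """Mask a contiguous fragment; the encoder sees the masked input,
    the decoder is trained to predict the fragment."""
    span = max(1, int(len(tokens) * ratio))
    start = random.randrange(len(tokens) - span + 1)
    target = tokens[start:start + span]
    enc_input = tokens[:start] + [mask_token] * span + tokens[start + span:]
    return enc_input, target

enc_input, target = mass_mask("the cat sat on the mat".split())
print(enc_input, target)
```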
## Open source status
- [x] the model implementation is available : The model is implemented upon fair-seq [here.](https://github.com/microsoft/MASS)
- [x] the model weights are available: Pre-trained model on various language pairs, for unsupervised translation, supervised translation and abstractive summarization are provided on the GitHub repo itself.
- [x] Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu are the authors: ( @StillKeepTry , @tobyoup , @xutaatmicrosoftdotcom )
This is my first time contributing to this repository, so forgive me for any mistakes. Please let me know whether I should do it or not. Also, if anyone wants to come along and help, please let me know that too! 😀 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6455/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6455/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6454 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6454/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6454/comments | https://api.github.com/repos/huggingface/transformers/issues/6454/events | https://github.com/huggingface/transformers/issues/6454 | 678,222,948 | MDU6SXNzdWU2NzgyMjI5NDg= | 6,454 | Memory Issue while following LM tutorial | {
"login": "raceee",
"id": 43013378,
"node_id": "MDQ6VXNlcjQzMDEzMzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/43013378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raceee",
"html_url": "https://github.com/raceee",
"followers_url": "https://api.github.com/users/raceee/followers",
"following_url": "https://api.github.com/users/raceee/following{/other_user}",
"gists_url": "https://api.github.com/users/raceee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raceee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raceee/subscriptions",
"organizations_url": "https://api.github.com/users/raceee/orgs",
"repos_url": "https://api.github.com/users/raceee/repos",
"events_url": "https://api.github.com/users/raceee/events{/privacy}",
"received_events_url": "https://api.github.com/users/raceee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @raceee ,\r\nGPU: RTX 2060 6G VRAM (x2) is GB GPU, so I don't think you will be able to use `batch_size` 64 with it. Try lowering your batch_size if your are running into OOM.\r\n\r\nAS for big dataset take a look at the [nlp](https://github.com/huggingface/nlp) package, it will allow you to load and process data lazily, so you won't face the RAM issue. ",
"I just shrunk my train data set. Per advice #4668",
"> Hi @raceee ,\r\n> GPU: RTX 2060 6G VRAM (x2) is GB GPU, so I don't think you will be able to use `batch_size` 64 with it. Try lowering your batch_size if your are running into OOM.\r\n> \r\n> AS for big dataset take a look at the [nlp](https://github.com/huggingface/nlp) package, it will allow you to load and process data lazily, so you won't face the RAM issue.\r\n\r\nHI @patil-suraj .. is there a code snippet that I could refer to. LineByLineTextDataset doesnt crash for me but takes forever."
] | 1,597 | 1,602 | 1,597 | NONE | null | (Didn't get an answer)
https://stackoverflow.com/questions/63387831/memory-issue-while-following-lm-tutorial
SPECS:
torch==1.5.0
transformers==3.0.2
OS: Windows 10
CUDA: 10.1
GPU: RTX 2060 6G VRAM (x2)
RAM: 32GB
tutorial: https://huggingface.co/blog/how-to-train
Hello, I am trying to train my own language model and I have had some memory issues. I have tried to run some of this code in PyCharm on my computer and then tried to replicate it in my Colab Pro notebook.
## First, my code
```
from transformers import RobertaConfig, RobertaTokenizerFast, RobertaForMaskedLM, LineByLineTextDataset
from transformers import DataCollatorForLanguageModeling, Trainer, TrainingArguments
config = RobertaConfig(vocab_size=60000, max_position_embeddings=514, num_attention_heads=12, num_hidden_layers=6,
type_vocab_size=1)
tokenizer = RobertaTokenizerFast.from_pretrained("./MODEL DIRECTORY", max_len=512)
model = RobertaForMaskedLM(config=config)
print("making dataset")
dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="./total_text.txt", block_size=128)
print("making c")
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
training_args = TrainingArguments(output_dir="./MODEL DIRECTORY", overwrite_output_dir=True, num_train_epochs=1,
per_gpu_train_batch_size=64, save_steps=10000, save_total_limit=2)
print("Building trainer")
trainer = Trainer(model=model, args=training_args, data_collator=data_collator, train_dataset=dataset,
prediction_loss_only=True)
trainer.train()
trainer.save_model("./MODEL DIRECTORY")
```
`"./total_text.txt"` being a 1.7GB text file.
## PyCharm Attempt
On PyCharm, this code builds the dataset and then throws an error saying that my preferred GPU is running out of memory and that Torch is already using 3.7 GiB of memory.
I tried:
* importing gc and running a collection to try to flush whatever was held on my GPU
* Decreasing my batch size for my GPU (training only ran with a batch size of 8, resulting in 200,000+ steps that each took 1.17 seconds)
* Setting `os.environ["CUDA_VISIBLE_OBJECTS"] = ""` so that Torch would have to use my CPU and not my GPU. It still threw the same GPU memory error (see the note after this list)...
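One note on the last bullet (my observation, not from the thread): the standard environment variable is `CUDA_VISIBLE_DEVICES`, and it must be set before `torch` initializes CUDA. A minimal sketch:

```python
import os

# Hide all GPUs from CUDA; this must run before torch touches the devices.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import torch
print(torch.cuda.is_available())  # expected: False
```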
So, succumbing to the fact that Torch was, for the time being, forcing itself onto my GPU, I decided to go to Colab.
## Colab Attempt
Colab has different issues with my code: it does not have the memory to build the dataset and crashes due to RAM shortages. I purchased a Pro account and increased the usable RAM to 25 GB, but still hit memory shortages.
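As suggested in the comments below, a lazily processed dataset avoids loading the whole 1.7 GB file into RAM at once. A rough sketch with the `nlp` package (argument names here are my assumption; `tokenizer` is the one defined above):

```python
import nlp  # the huggingface nlp package referenced in the comments

# Streams the file through map() with on-disk caching instead of
# holding every tokenized line in memory at once.
dataset = nlp.load_dataset("text", data_files="./total_text.txt")["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
)
```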
Cheers! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6454/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6454/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6453 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6453/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6453/comments | https://api.github.com/repos/huggingface/transformers/issues/6453/events | https://github.com/huggingface/transformers/pull/6453 | 678,205,738 | MDExOlB1bGxSZXF1ZXN0NDY3MTkyNDE4 | 6,453 | Clean directory after script testing | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6453?src=pr&el=h1) Report\n> Merging [#6453](https://codecov.io/gh/huggingface/transformers/pull/6453?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ffea5ce2f4d154a3696b8fe2fb116fa09235700&el=desc) will **decrease** coverage by `2.51%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6453?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6453 +/- ##\n==========================================\n- Coverage 79.89% 77.37% -2.52% \n==========================================\n Files 153 153 \n Lines 27902 27902 \n==========================================\n- Hits 22291 21588 -703 \n- Misses 5611 6314 +703 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6453?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `34.11% <0.00%> (-63.30%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.98% <0.00%> (-52.81%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.16% <0.00%> (-14.46%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-6.16%)` | :arrow_down: |\n| [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `82.71% <0.00%> (-2.47%)` | :arrow_down: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `96.19% <0.00%> (-1.64%)` | :arrow_down: |\n| ... and [17 more](https://codecov.io/gh/huggingface/transformers/pull/6453/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6453?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6453?src=pr&el=footer). Last update [4ffea5c...05950f1](https://codecov.io/gh/huggingface/transformers/pull/6453?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I think cleaning the output dir seems to be a better solution than modifying `output_dir`, since people may still copy those `output_dir` in the future. What do you think? @LysandreJik "
] | 1,597 | 1,620 | 1,597 | CONTRIBUTOR | null | #6421 #6433
This PR cleans the directory after each script testing to prevent bugs like this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6453/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6453",
"html_url": "https://github.com/huggingface/transformers/pull/6453",
"diff_url": "https://github.com/huggingface/transformers/pull/6453.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6453.patch",
"merged_at": 1597336444000
} |
https://api.github.com/repos/huggingface/transformers/issues/6452 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6452/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6452/comments | https://api.github.com/repos/huggingface/transformers/issues/6452/events | https://github.com/huggingface/transformers/issues/6452 | 678,154,775 | MDU6SXNzdWU2NzgxNTQ3NzU= | 6,452 | getting error while training bert language model. "ValueError: Expected input batch_size (8) to match target batch_size (1024)." | {
"login": "bharathrajcl",
"id": 46653822,
"node_id": "MDQ6VXNlcjQ2NjUzODIy",
"avatar_url": "https://avatars.githubusercontent.com/u/46653822?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bharathrajcl",
"html_url": "https://github.com/bharathrajcl",
"followers_url": "https://api.github.com/users/bharathrajcl/followers",
"following_url": "https://api.github.com/users/bharathrajcl/following{/other_user}",
"gists_url": "https://api.github.com/users/bharathrajcl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bharathrajcl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bharathrajcl/subscriptions",
"organizations_url": "https://api.github.com/users/bharathrajcl/orgs",
"repos_url": "https://api.github.com/users/bharathrajcl/repos",
"events_url": "https://api.github.com/users/bharathrajcl/events{/privacy}",
"received_events_url": "https://api.github.com/users/bharathrajcl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"the same probleme ??\r\n\r\nExpected input batch_size (15) to match target batch_size (0).",
"Same problem",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I have the same problem.",
"Same here! Anyone resolved it yet? @bharathrajcl @chaima-ai @gborodin @RufusGladiuz "
] | 1,597 | 1,646 | 1,607 | NONE | null | ```python
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
from transformers import BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
%%time
from transformers import LineByLineTextDataset,TextDataset
paths = '/content/drive/My Drive/MyFile.txt'
dataset = TextDataset(
tokenizer=tokenizer,
file_path=paths,
block_size=128,
)
from transformers import DataCollatorForLanguageModeling
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir="./EsperBERTo",
overwrite_output_dir=True,
num_train_epochs=1,
save_total_limit=2,
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=dataset,
prediction_loss_only=True,
)
%%time
trainer.train()
```

**Error after executing `trainer.train()`:**

```
ValueError Traceback (most recent call last)
<ipython-input-12-0c647bc3a8b8> in <module>()
----> 1 get_ipython().run_cell_magic('time', '', 'trainer.train()')
10 frames
<decorator-gen-60> in time(self, line, cell, local_ns)
<timed eval> in <module>()
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
2214 if input.size(0) != target.size(0):
2215 raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).'
-> 2216 .format(input.size(0), target.size(0)))
2217 if dim == 2:
2218 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
ValueError: Expected input batch_size (8) to match target batch_size (1024).
```
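My reading of the likely cause (an assumption, not confirmed in the thread): `DataCollatorForLanguageModeling` emits token-level labels of shape `(batch, seq_len)`, while `BertForSequenceClassification` expects one label per sequence, hence 8 vs. 8 × 128 = 1024. If masked-LM training is the goal, a masked-LM head matches the collator:

```python
from transformers import BertForMaskedLM

# Swap the sequence-classification head for a masked-LM head so the
# model's loss consumes the (batch, seq_len) labels the collator produces.
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
```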
Please help me resolve this issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6452/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6452/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6451 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6451/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6451/comments | https://api.github.com/repos/huggingface/transformers/issues/6451/events | https://github.com/huggingface/transformers/issues/6451 | 678,111,323 | MDU6SXNzdWU2NzgxMTEzMjM= | 6,451 | ERROR: No matching distribution found for tokenizers==0.8.1.rc1 (from transformers) | {
"login": "ggaemo",
"id": 8081512,
"node_id": "MDQ6VXNlcjgwODE1MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8081512?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ggaemo",
"html_url": "https://github.com/ggaemo",
"followers_url": "https://api.github.com/users/ggaemo/followers",
"following_url": "https://api.github.com/users/ggaemo/following{/other_user}",
"gists_url": "https://api.github.com/users/ggaemo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ggaemo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ggaemo/subscriptions",
"organizations_url": "https://api.github.com/users/ggaemo/orgs",
"repos_url": "https://api.github.com/users/ggaemo/repos",
"events_url": "https://api.github.com/users/ggaemo/events{/privacy}",
"received_events_url": "https://api.github.com/users/ggaemo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,603 | 1,603 | NONE | null | ```
ERROR: Could not find a version that satisfies the requirement tokenizers==0.8.1.rc1 (from transformers) (from versions: 0.0.2, 0.0.3, 0.0.4, 0.0.5, 0.0.6, 0.0.7, 0.0.8, 0.0.9, 0.0.10, 0.0.11, 0.0.12, 0.0.13, 0.1.0, 0.1.1, 0.2.0, 0.2.1, 0.3.0, 0.4.0, 0.4.1, 0.4.2, 0.5.0, 0.5.1, 0.5.2, 0.6.0, 0.7.0, 0.8.0, 0.8.1)
ERROR: No matching distribution found for tokenizers==0.8.1.rc1 (from transformers)
```
The error above occurs when I pip install transformers from the anaconda environment shown below:

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6451/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6450 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6450/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6450/comments | https://api.github.com/repos/huggingface/transformers/issues/6450/events | https://github.com/huggingface/transformers/issues/6450 | 678,065,558 | MDU6SXNzdWU2NzgwNjU1NTg= | 6,450 | Error in PyTorch Trainer when used with TPU | {
"login": "M-Salti",
"id": 9285264,
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/M-Salti",
"html_url": "https://github.com/M-Salti",
"followers_url": "https://api.github.com/users/M-Salti/followers",
"following_url": "https://api.github.com/users/M-Salti/following{/other_user}",
"gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions",
"organizations_url": "https://api.github.com/users/M-Salti/orgs",
"repos_url": "https://api.github.com/users/M-Salti/repos",
"events_url": "https://api.github.com/users/M-Salti/events{/privacy}",
"received_events_url": "https://api.github.com/users/M-Salti/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I am receiving the same error. Even without using TPU. \r\n\r\npython run_glue.py --model_name_or_path bert-base-cased --task_name MRPC --do_train --do_eval --data_dir $GLUE_DIR/MRPC/ --max_seq_length 128 --per_device_train_batch_size --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/mrpc_output/ \r\n",
"Try with the following:\r\n\r\n```bash\r\n!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py\r\n!python pytorch-xla-env-setup.py --version \"nightly\"\r\n\r\n!pip install git+https://github.com/huggingface/transformers.git\r\n\r\n!git clone https://github.com/huggingface/transformers.git\r\n\r\n!python transformers/examples/xla_spawn.py --num_cores 1 \\\r\n question-answering/run_squad_trainer.py \\\r\n --model_name_or_path bert-base-multilingual-cased \\\r\n --model_type bert \\\r\n --data_dir $DATA_DIR \\\r\n --do_train \\\r\n --per_device_train_batch_size 64 \\\r\n --learning_rate 3e-5 \\\r\n --num_train_epochs 2.0 \\\r\n --max_seq_length 384 \\\r\n --doc_stride 128 \\\r\n --output_dir $OUT_DIR \\\r\n --overwrite_output_dir\r\n```\r\n\r\nTo run it with all `8 TPU` cores, you most likely need the `35GB RAM` runtime from Google Colab. You can find it in this [notebook](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb#scrollTo=QLGiFCDqvuil).",
"Thanks @AliOsm, it works!"
] | 1,597 | 1,598 | 1,598 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0a0+d6149a7 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger
## Information
Model I am using (Bert, XLNet ...): BERT
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SQUaD
* [ ] my own task or dataset: (give details below)
The following error arises when using the `run_squad_trainer.py` script with TPU:
```python
Epoch: 0% 0/2 [00:00<?, ?it/s]
Iteration: 0it [00:00, ?it/s]Exception in device=TPU:0: 'NoneType' object cannot be interpreted as an integer
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 119, in _start_fn
fn(gindex, *args)
File "/content/transformers/examples/question-answering/run_squad_trainer.py", line 156, in _mp_fn
main()
File "/content/transformers/examples/question-answering/run_squad_trainer.py", line 145, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 584, in train
self.epoch = epoch + (step + 1) / len(epoch_iterator)
TypeError: 'NoneType' object cannot be interpreted as an integer
```
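The `TypeError` matches what Python raises when an object's `__len__` returns `None`. A minimal reproduction of just that mechanism (the stub below is an assumption about the old `torch_xla` per-device loader's behavior, not its actual code):
```python
# len() requires __len__ to return a non-negative int; returning None
# produces exactly the TypeError shown in the traceback above.
class LoaderStub:
    def __len__(self):
        return None  # stand-in for what the old per-device loader seems to do

len(LoaderStub())  # TypeError: 'NoneType' object cannot be interpreted as an integer
```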
## To reproduce
Steps to reproduce the behavior:
1. install transformers from the master branch
2. install pytorch-xla using the following command:
```shell
VERSION = "20200325"
curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
python pytorch-xla-env-setup.py --version $VERSION
```
3. run the training script (I'm using 1 TPU core merely to simplify the logs; the error is the same for each core when using 8 cores):
```shell
cd transformers/examples/
python ./xla_spawn.py --num_cores 1 \
question-answering/run_squad_trainer.py \
--model_name_or_path bert-base-multilingual-cased \
--model_type bert \
--data_dir $DATA_DIR \
--do_train \
--per_device_train_batch_size 64 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir $OUT_DIR \
--overwrite_output_dir
```
## Expected behavior
The script runs and trains the model
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6450/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6449 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6449/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6449/comments | https://api.github.com/repos/huggingface/transformers/issues/6449/events | https://github.com/huggingface/transformers/pull/6449 | 677,931,960 | MDExOlB1bGxSZXF1ZXN0NDY2OTY5NTU2 | 6,449 | Trainer automatically drops unused columns in nlp datasets | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6449?src=pr&el=h1) Report\n> Merging [#6449](https://codecov.io/gh/huggingface/transformers/pull/6449?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bc820476a5c72060f810f825298befd5ec85da4d?el=desc) will **decrease** coverage by `2.12%`.\n> The diff coverage is `37.03%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6449?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6449 +/- ##\n==========================================\n- Coverage 79.98% 77.86% -2.13% \n==========================================\n Files 153 153 \n Lines 28005 28031 +26 \n==========================================\n- Hits 22401 21827 -574 \n- Misses 5604 6204 +600 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6449?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.27% <ø> (ø)` | |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.28% <27.27%> (-0.57%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.16% <80.00%> (-0.03%)` | :arrow_down: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.26% <0.00%> (-53.69%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.16% <0.00%> (-14.46%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-6.16%)` | :arrow_down: |\n| ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6449/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6449?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6449?src=pr&el=footer). 
Last update [bc82047...69d3ec5](https://codecov.io/gh/huggingface/transformers/pull/6449?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This is sweet!",
"Removed all the changes linked to metrics and moved the column dropping to anywhere we pass a Dataset (init, evaluate and predict). As discussed, we'll propose an API for the metrics once we have changed all examples to use `Trainer` and `nlp`, so we know exactly what the API has to support."
] | 1,597 | 1,597 | 1,597 | COLLABORATOR | null | Here is a basic example of use for evaluation on SST-2:
```
from nlp import load_dataset, load_metric
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments
dataset = load_dataset('glue', 'sst2')
metric = load_metric('glue', 'sst2')
model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
encoded_dataset = dataset.map(lambda examples: tokenizer(examples['sentence'], padding=True), batched=True)
args = TrainingArguments(output_dir="test")

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    return metric.compute(predictions.argmax(axis=-1), labels)

trainer = Trainer(
    model,
    args,
    train_dataset=encoded_dataset["train"],
    eval_dataset=encoded_dataset["validation"],
    compute_metrics=compute_metrics,
)
trainer.evaluate()
```
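For reference, a rough sketch of the column-dropping step this PR adds to `Trainer` (paraphrased; the exact implementation may differ, and `column_names`/`set_format` are the `nlp.Dataset` API):
```python
import inspect

# Keep only the columns the model's forward() accepts, plus label columns.
signature = inspect.signature(model.forward)
accepted = set(signature.parameters.keys()) | {"label", "label_ids"}
kept = [c for c in encoded_dataset["train"].column_names if c in accepted]
encoded_dataset["train"].set_format(columns=kept)
```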
The goal is to then refine this new API by trying to use it in all examples. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6449/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6449",
"html_url": "https://github.com/huggingface/transformers/pull/6449",
"diff_url": "https://github.com/huggingface/transformers/pull/6449.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6449.patch",
"merged_at": 1597955355000
} |
https://api.github.com/repos/huggingface/transformers/issues/6448 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6448/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6448/comments | https://api.github.com/repos/huggingface/transformers/issues/6448/events | https://github.com/huggingface/transformers/pull/6448 | 677,868,917 | MDExOlB1bGxSZXF1ZXN0NDY2OTE3MDg3 | 6,448 | [DO NOT SUBMIT] Run TPU examples for PR commits. | {
"login": "zcain117",
"id": 14796584,
"node_id": "MDQ6VXNlcjE0Nzk2NTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/14796584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zcain117",
"html_url": "https://github.com/zcain117",
"followers_url": "https://api.github.com/users/zcain117/followers",
"following_url": "https://api.github.com/users/zcain117/following{/other_user}",
"gists_url": "https://api.github.com/users/zcain117/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zcain117/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zcain117/subscriptions",
"organizations_url": "https://api.github.com/users/zcain117/orgs",
"repos_url": "https://api.github.com/users/zcain117/repos",
"events_url": "https://api.github.com/users/zcain117/events{/privacy}",
"received_events_url": "https://api.github.com/users/zcain117/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6448?src=pr&el=h1) Report\n> Merging [#6448](https://codecov.io/gh/huggingface/transformers/pull/6448?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bc820476a5c72060f810f825298befd5ec85da4d&el=desc) will **increase** coverage by `0.09%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6448?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6448 +/- ##\n==========================================\n+ Coverage 79.98% 80.08% +0.09% \n==========================================\n Files 153 153 \n Lines 28005 28005 \n==========================================\n+ Hits 22401 22429 +28 \n+ Misses 5604 5576 -28 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6448?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+7.26%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.72% <0.00%> (+22.87%)` | :arrow_up: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `87.50% <0.00%> (+58.65%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6448?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6448?src=pr&el=footer). Last update [bc82047...6567722](https://codecov.io/gh/huggingface/transformers/pull/6448?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"The TPU test succeeded: https://app.circleci.com/pipelines/github/huggingface/transformers/10480/workflows/c669aea7-b861-4b1b-b90c-d8c8b50e60dc/jobs/72547\r\n\r\nI'll delete this PR now"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | Trying out the CircleCI flow. I'll delete this PR after testing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6448/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6448/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6448",
"html_url": "https://github.com/huggingface/transformers/pull/6448",
"diff_url": "https://github.com/huggingface/transformers/pull/6448.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6448.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6447 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6447/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6447/comments | https://api.github.com/repos/huggingface/transformers/issues/6447/events | https://github.com/huggingface/transformers/pull/6447 | 677,832,280 | MDExOlB1bGxSZXF1ZXN0NDY2ODg2Nzg0 | 6,447 | [TF Longformer] Improve Speed for TF Longformer | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6447?src=pr&el=h1) Report\n> Merging [#6447](https://codecov.io/gh/huggingface/transformers/pull/6447?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a75c64d80c76c3dc71f735d9197a4a601847e0cd?el=desc) will **increase** coverage by `0.84%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6447?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6447 +/- ##\n==========================================\n+ Coverage 78.96% 79.81% +0.84% \n==========================================\n Files 157 157 \n Lines 28486 28479 -7 \n==========================================\n+ Hits 22495 22730 +235 \n+ Misses 5991 5749 -242 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6447?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6447/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6447/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <100.00%> (+73.82%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6447/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `98.67% <100.00%> (-0.03%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6447/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.94% <0.00%> (-74.32%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6447/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6447/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6447/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6447/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6447/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+2.60%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6447/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.71% <0.00%> (+2.75%)` | :arrow_up: |\n| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/6447/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6447?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6447?src=pr&el=footer). Last update [a75c64d...ae3bbe2](https://codecov.io/gh/huggingface/transformers/pull/6447?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"### Speed Benchmarking:\r\n\r\nRunning this command on the master branch: \r\n```\r\npython examples/benchmarking/run_benchmark.py --models allenai/longformer-base-4096 --no_memory --sequence_length 512 1024\r\n```\r\n\r\non this env:\r\n```\r\n- transformers_version: 3.0.2\r\n- framework: TensorFlow\r\n- eager_mode: False\r\n- use_xla: False\r\n- framework_version: 2.2.0\r\n- python_version: 3.8.5\r\n- system: Linux\r\n- cpu: x86_64\r\n- architecture: 64bit\r\n- date: 2020-08-14\r\n- time: 10:32:09.525696\r\n- fp16: False\r\n- use_multiprocessing: True\r\n- only_pretrain_model: False\r\n- cpu_ram_mb: N/A\r\n- use_gpu: True\r\n- num_gpus: 1\r\n- gpu: TITAN RTX\r\n- gpu_ram_mb: 24217\r\n- gpu_power_watts: 280.0\r\n- gpu_performance_state: 0\r\n- use_tpu: False\r\n```\r\n\r\ngives:\r\n```\r\n==================== INFERENCE - SPEED - RESULT ==================== \r\n-------------------------------------------------------------------------------- \r\n Model Name Batch Size Seq Length Time in s \r\n-------------------------------------------------------------------------------- \r\n allenai/longformer-base-4096 8 512 0.229 \r\n allenai/longformer-base-4096 8 1024 0.463 \r\n--------------------------------------------------------------------------------\r\n```\r\n\r\n\r\nOn this branch the speed is improved to:\r\n```\r\n==================== INFERENCE - SPEED - RESULT ==================== \r\n-------------------------------------------------------------------------------- \r\n Model Name Batch Size Seq Length Time in s \r\n-------------------------------------------------------------------------------- \r\n allenai/longformer-base-4096 8 512 0.223 \r\n allenai/longformer-base-4096 8 1024 0.447 \r\n--------------------------------------------------------------------------------\r\n```\r\nSo we can see an improvement of ca. 3%, which is not that much actually... I guess it's interesting to see what effect removing some unnecessary `tf.transpose()` has in TF, but it might not be worth to go through all `modeling_tf_...` files trying to remove `tf.transpose()` and similar functions."
] | 1,597 | 1,598 | 1,598 | MEMBER | null | This PR:
- adds a simple test for all TF models to verify that the forward function can be used in graph mode
- optimizes TF Longformer by removing unnecessary computation, such as `tf.transpose()` calls (in contrast to PyTorch, `tf.transpose()` allocates a new tensor and should therefore be avoided). This also cleans up the code IMO.
=> These changes lead to a speed-up of roughly 1.03x, which is actually not that much... more details in the benchmark below; a toy illustration of the pattern follows.
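A toy illustration of the rewrite (shapes are made up; the point is only that `tf.transpose()` materializes a new tensor, so picking layouts that avoid it saves an allocation per call):
```python
import tensorflow as tf

x = tf.random.uniform((8, 4, 16, 64))      # (batch, heads, seq, head_dim)
y_slow = tf.reshape(tf.transpose(x, (0, 2, 1, 3)), (8, 16, 256))

x_alt = tf.random.uniform((8, 16, 4, 64))  # already (batch, seq, heads, head_dim)
y_fast = tf.reshape(x_alt, (8, 16, 256))   # same output layout, no transpose
```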
After a lot of digging, TF XLA support will not be easy to add, as several kernels heavily used in this model, such as `tf.where`, are not implemented for XLA (yet). So TF Longformer will sadly not work on TPU for the moment @ibeltagy
### Conclusion
For me the PR was also a good exercise to see whether TF can be significantly sped up by removing unnecessary tensor allocations. It seems it's not really worth going through all the TF models if the speed improvement is only around 2-3%.
"url": "https://api.github.com/repos/huggingface/transformers/issues/6447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6447/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6447",
"html_url": "https://github.com/huggingface/transformers/pull/6447",
"diff_url": "https://github.com/huggingface/transformers/pull/6447.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6447.patch",
"merged_at": 1598468142000
} |
https://api.github.com/repos/huggingface/transformers/issues/6446 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6446/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6446/comments | https://api.github.com/repos/huggingface/transformers/issues/6446/events | https://github.com/huggingface/transformers/pull/6446 | 677,795,916 | MDExOlB1bGxSZXF1ZXN0NDY2ODU2NjU5 | 6,446 | Get GKE logs via kubectl logs instead of gcloud logging read. | {
"login": "zcain117",
"id": 14796584,
"node_id": "MDQ6VXNlcjE0Nzk2NTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/14796584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zcain117",
"html_url": "https://github.com/zcain117",
"followers_url": "https://api.github.com/users/zcain117/followers",
"following_url": "https://api.github.com/users/zcain117/following{/other_user}",
"gists_url": "https://api.github.com/users/zcain117/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zcain117/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zcain117/subscriptions",
"organizations_url": "https://api.github.com/users/zcain117/orgs",
"repos_url": "https://api.github.com/users/zcain117/repos",
"events_url": "https://api.github.com/users/zcain117/events{/privacy}",
"received_events_url": "https://api.github.com/users/zcain117/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | This should be a much faster method of getting logs from GKE back to the CircleCI machine. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6446/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6446",
"html_url": "https://github.com/huggingface/transformers/pull/6446",
"diff_url": "https://github.com/huggingface/transformers/pull/6446.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6446.patch",
"merged_at": 1597247185000
} |
https://api.github.com/repos/huggingface/transformers/issues/6445 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6445/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6445/comments | https://api.github.com/repos/huggingface/transformers/issues/6445/events | https://github.com/huggingface/transformers/pull/6445 | 677,795,383 | MDExOlB1bGxSZXF1ZXN0NDY2ODU2MjE0 | 6,445 | Test model outputs equivalence | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6445?src=pr&el=h1) Report\n> Merging [#6445](https://codecov.io/gh/huggingface/transformers/pull/6445?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/96c3329f19f28e47eab7f9f20ed3504619e16722&el=desc) will **increase** coverage by `0.38%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6445?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6445 +/- ##\n==========================================\n+ Coverage 79.95% 80.33% +0.38% \n==========================================\n Files 153 153 \n Lines 27932 27928 -4 \n==========================================\n+ Hits 22332 22437 +105 \n+ Misses 5600 5491 -109 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6445?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6445/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `98.69% <100.00%> (+0.63%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6445/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.06% <0.00%> (-27.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6445/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.26% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6445/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `91.23% <0.00%> (+0.21%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6445/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.71% <0.00%> (+0.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6445/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `96.09% <0.00%> (+0.41%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6445/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `88.13% <0.00%> (+0.48%)` | :arrow_up: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6445/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `79.69% <0.00%> (+0.56%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6445/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+0.61%)` | :arrow_up: |\n| ... and [14 more](https://codecov.io/gh/huggingface/transformers/pull/6445/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6445?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6445?src=pr&el=footer). Last update [96c3329...400c5ad](https://codecov.io/gh/huggingface/transformers/pull/6445?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Awesome that we can remove the `cast_to_bool` hack here. Maybe we can remove it in `t5_modeling_tf_` as well",
"Side note, you should double-check the slow tests `test_saved_model_with_attentions_output` and `test_saved_model_with_hidden_states_output` still pass with the changes for the longformer model, as they are the ones that fail for t5 when we remove the `cast_to_bool` thingy.",
"> Side note, you should double-check the slow tests `test_saved_model_with_attentions_output` and `test_saved_model_with_hidden_states_output` still pass with the changes for the longformer model, as they are the ones that fail for t5 when we remove the `cast_to_bool` thingy.\r\n\r\nThey did not pass with Longformer before as discussed with @jplu on the PR: https://github.com/huggingface/transformers/pull/5764#issuecomment-670002430, they should actually pass now I think :-) "
] | 1,597 | 1,597 | 1,597 | MEMBER | null | Adds a test to check that the model outputs keep the same values and order as the tuple output. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6445/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6445",
"html_url": "https://github.com/huggingface/transformers/pull/6445",
"diff_url": "https://github.com/huggingface/transformers/pull/6445.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6445.patch",
"merged_at": 1597334376000
} |
https://api.github.com/repos/huggingface/transformers/issues/6444 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6444/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6444/comments | https://api.github.com/repos/huggingface/transformers/issues/6444/events | https://github.com/huggingface/transformers/issues/6444 | 677,793,073 | MDU6SXNzdWU2Nzc3OTMwNzM= | 6,444 | Can't download 'Helsinki-NLP/opus-mt-hye-eng' model | {
"login": "sonja-lo",
"id": 58326920,
"node_id": "MDQ6VXNlcjU4MzI2OTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/58326920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sonja-lo",
"html_url": "https://github.com/sonja-lo",
"followers_url": "https://api.github.com/users/sonja-lo/followers",
"following_url": "https://api.github.com/users/sonja-lo/following{/other_user}",
"gists_url": "https://api.github.com/users/sonja-lo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sonja-lo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sonja-lo/subscriptions",
"organizations_url": "https://api.github.com/users/sonja-lo/orgs",
"repos_url": "https://api.github.com/users/sonja-lo/repos",
"events_url": "https://api.github.com/users/sonja-lo/events{/privacy}",
"received_events_url": "https://api.github.com/users/sonja-lo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2039044877,
"node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/marian",
"name": "marian",
"color": "30cc95",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Replicated, will fix.",
"use \r\n```python\r\nAutoModelForSeq2SeqLM.from_pretrained('Helsinki-NLP/opus-mt-hy-en')\r\n```\r\n\r\nIt performs better than the later hye-eng version for armenian-english.\r\nI removed hye-eng.",
"Thank you! It works "
] | 1,597 | 1,597 | 1,597 | NONE | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-5.3.0-51-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.3.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: not sure
## Information
Model I am using: MarianMTModel, AutoModelWithLMHead
The problem arises when using the official example scripts (https://huggingface.co/Helsinki-NLP/opus-mt-hye-eng):
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-hye-eng")
model = AutoModelWithLMHead.from_pretrained("Helsinki-NLP/opus-mt-hye-eng")
```
Gives this error:
```
/home/sonja/.local/lib/python3.6/site-packages/transformers/modeling_auto.py:798: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.
FutureWarning,
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
~/.local/lib/python3.6/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
654 if resolved_archive_file is None:
--> 655 raise EnvironmentError
656 except EnvironmentError:
OSError:
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-2-04055899a280> in <module>
1 from transformers import AutoTokenizer, AutoModelWithLMHead
2 tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-hye-eng")
----> 3 model = AutoModelWithLMHead.from_pretrained("Helsinki-NLP/opus-mt-hye-eng")
~/.local/lib/python3.6/site-packages/transformers/modeling_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
804 for config_class, model_class in MODEL_WITH_LM_HEAD_MAPPING.items():
805 if isinstance(config, config_class):
--> 806 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
807 raise ValueError(
808 "Unrecognized configuration class {} for this kind of AutoModel: {}.\n"
~/.local/lib/python3.6/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
660 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a file named one of {WEIGHTS_NAME}, {TF2_WEIGHTS_NAME}, {TF_WEIGHTS_NAME}.\n\n"
661 )
--> 662 raise EnvironmentError(msg)
663
664 if resolved_archive_file == archive_file:
OSError: Can't load weights for 'Helsinki-NLP/opus-mt-hye-eng'. Make sure that:
- 'Helsinki-NLP/opus-mt-hye-eng' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'Helsinki-NLP/opus-mt-hye-eng' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.
```
Tried to download the model manually from the link I got while debugging (https://cdn.huggingface.co/Helsinki-NLP/opus-mt-hye-eng/pytorch_model.bin), but it doesn't return anything usable. For the 'hye-rus' model (https://cdn.huggingface.co/Helsinki-NLP/opus-mt-hye-rus/pytorch_model.bin), however, I can easily download the file. It works fine for "eng-hye" and "rus-hye" too.
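A hedged sketch of that manual check, reusing the CDN URLs quoted above (`requests` is just one way to probe; a browser works too):
```python
import requests

for pair in ("hye-eng", "hye-rus"):
    url = f"https://cdn.huggingface.co/Helsinki-NLP/opus-mt-{pair}/pytorch_model.bin"
    # Expect 200 for hye-rus; hye-eng is the one that fails to resolve here.
    print(pair, requests.head(url, allow_redirects=True).status_code)
```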
Help, @sshleifer (sorry if mistagged) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6444/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6444/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6443 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6443/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6443/comments | https://api.github.com/repos/huggingface/transformers/issues/6443/events | https://github.com/huggingface/transformers/issues/6443 | 677,784,457 | MDU6SXNzdWU2Nzc3ODQ0NTc= | 6,443 | Simple train from the start for translation transformer | {
"login": "felipeboffnunes",
"id": 51033921,
"node_id": "MDQ6VXNlcjUxMDMzOTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/51033921?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/felipeboffnunes",
"html_url": "https://github.com/felipeboffnunes",
"followers_url": "https://api.github.com/users/felipeboffnunes/followers",
"following_url": "https://api.github.com/users/felipeboffnunes/following{/other_user}",
"gists_url": "https://api.github.com/users/felipeboffnunes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/felipeboffnunes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/felipeboffnunes/subscriptions",
"organizations_url": "https://api.github.com/users/felipeboffnunes/orgs",
"repos_url": "https://api.github.com/users/felipeboffnunes/repos",
"events_url": "https://api.github.com/users/felipeboffnunes/events{/privacy}",
"received_events_url": "https://api.github.com/users/felipeboffnunes/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I found the repo simpletransformers, which uses this marvelous repo of yours. I got to run a transformer through there, so I'll be closing the issue now. Thanks anyway!"
] | 1,597 | 1,597 | 1,597 | NONE | null | Hi, sorry to bother. I am trying to train a translation transformer, I have seen the documentation but I am still really lost.
I have two datasets, the original message and the translated message.
Example:
dataset_x.txt
This is the message.
This is another message.
Another one.
dataset_y.txt
This<&>is the message<^>.
This is another<&> message.
Another one<%>.
I want a simple script that can tokenize these datasets and train any suitable model from scratch.
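For the data side, a minimal sketch assuming the two files are line-aligned (the tokenizer/model choice and training loop are left out; everything here is illustrative):
```python
# Pair source and target files line by line (file names from this issue).
with open("dataset_x.txt") as src, open("dataset_y.txt") as tgt:
    pairs = [(s.strip(), t.strip()) for s, t in zip(src, tgt)]
print(pairs[0])  # ('This is the message.', 'This<&>is the message<^>.')
```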
Could anyone help me? Thanks a bunch!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6443/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6442 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6442/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6442/comments | https://api.github.com/repos/huggingface/transformers/issues/6442/events | https://github.com/huggingface/transformers/pull/6442 | 677,780,301 | MDExOlB1bGxSZXF1ZXN0NDY2ODQzNzUy | 6,442 | Adding PaddingDataCollator | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6442?src=pr&el=h1) Report\n> Merging [#6442](https://codecov.io/gh/huggingface/transformers/pull/6442?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/96c3329f19f28e47eab7f9f20ed3504619e16722&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `52.94%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6442?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6442 +/- ##\n==========================================\n- Coverage 79.95% 79.93% -0.02% \n==========================================\n Files 153 153 \n Lines 27932 27947 +15 \n==========================================\n+ Hits 22332 22339 +7 \n- Misses 5600 5608 +8 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6442?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.27% <ø> (ø)` | |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `90.90% <52.94%> (-5.68%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6442?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6442?src=pr&el=footer). Last update [96c3329...a153ed4](https://codecov.io/gh/huggingface/transformers/pull/6442?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | COLLABORATOR | null | New version of #6398 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6442/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6442",
"html_url": "https://github.com/huggingface/transformers/pull/6442",
"diff_url": "https://github.com/huggingface/transformers/pull/6442.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6442.patch",
"merged_at": 1597246348000
} |
https://api.github.com/repos/huggingface/transformers/issues/6441 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6441/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6441/comments | https://api.github.com/repos/huggingface/transformers/issues/6441/events | https://github.com/huggingface/transformers/pull/6441 | 677,753,637 | MDExOlB1bGxSZXF1ZXN0NDY2ODIxMzI1 | 6,441 | MBartForConditionalGeneration | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The failure is coming from `test_modeling_tf_electra.py`",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6441?src=pr&el=h1) Report\n> Merging [#6441](https://codecov.io/gh/huggingface/transformers/pull/6441?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6e0b1dc8954b87c18f77a82000e81e02683b8eb1&el=desc) will **increase** coverage by `0.29%`.\n> The diff coverage is `88.22%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6441?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6441 +/- ##\n==========================================\n+ Coverage 79.77% 80.06% +0.29% \n==========================================\n Files 148 156 +8 \n Lines 27214 28024 +810 \n==========================================\n+ Hits 21710 22438 +728 \n- Misses 5504 5586 +82 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6441?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | |\n| [src/transformers/data/test\\_generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Rlc3RfZ2VuZXJhdGlvbl91dGlscy5weQ==) | `0.00% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `90.00% <ø> (-0.91%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.35% <ø> (ø)` | |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100.00% <ø> (+4.22%)` | :arrow_up: |\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.25% <0.00%> (-0.13%)` | :arrow_down: |\n| [src/transformers/testing\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `51.92% <28.57%> (-20.81%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <37.50%> (-0.18%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <50.00%> (+1.79%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `90.90% <52.94%> (-5.68%)` | :arrow_down: |\n| ... and [69 more](https://codecov.io/gh/huggingface/transformers/pull/6441/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6441?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6441?src=pr&el=footer). Last update [e92efcf...49f74a5](https://codecov.io/gh/huggingface/transformers/pull/6441?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks @sgugger , I've applied the suggestions."
] | 1,597 | 1,597 | 1,597 | MEMBER | null | This PR adds MBartForConditionalGeneration. Regarding #6416
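A hedged usage sketch for the new class once this PR is in (the checkpoint id is an assumption, not taken from this PR):
```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")
```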
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6441/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6441",
"html_url": "https://github.com/huggingface/transformers/pull/6441",
"diff_url": "https://github.com/huggingface/transformers/pull/6441.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6441.patch",
"merged_at": 1597389676000
} |
https://api.github.com/repos/huggingface/transformers/issues/6440 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6440/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6440/comments | https://api.github.com/repos/huggingface/transformers/issues/6440/events | https://github.com/huggingface/transformers/issues/6440 | 677,740,367 | MDU6SXNzdWU2Nzc3NDAzNjc= | 6,440 | Getting Error from Default Data Collator while training Bert on SQUAD 2.0 | {
"login": "yanchao-yu",
"id": 5929774,
"node_id": "MDQ6VXNlcjU5Mjk3NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5929774?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanchao-yu",
"html_url": "https://github.com/yanchao-yu",
"followers_url": "https://api.github.com/users/yanchao-yu/followers",
"following_url": "https://api.github.com/users/yanchao-yu/following{/other_user}",
"gists_url": "https://api.github.com/users/yanchao-yu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yanchao-yu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanchao-yu/subscriptions",
"organizations_url": "https://api.github.com/users/yanchao-yu/orgs",
"repos_url": "https://api.github.com/users/yanchao-yu/repos",
"events_url": "https://api.github.com/users/yanchao-yu/events{/privacy}",
"received_events_url": "https://api.github.com/users/yanchao-yu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"First things first, please use ``` when copy-pasting stack traces or code (I've edited your post to use that) otherwise it's not really readable.\r\n\r\nIt's hard to know what's going on without knowing how you built your `train_dataset`. The data collator seems to have problems with it. The items should be dictionaries of list of ints/tensors. It seems there is some nested dictionary here.",
"Sorry about that I will remember to use it :-) And regarding the `train_dataset` I used `SquadV2Processor` to get examples for `train` and `eval`, and then convert them using `squad_convert_examples_to_features`: \r\n\r\n```\r\n_processor = SquadV2Processor()\r\ntrain_examples = _processor.get_train_examples(squad_dir, filename='SQuAD-v2.0-train.json')\r\ntrain_dataset = squad_convert_examples_to_features(train_examples, self._tokenizer, max_seq_length=384, doc_stride=128, threads=2,max_query_length=64, is_training=True)\r\n```\r\n\r\n\r\nIs this correct? ",
"Could you print the result of `self.train_dataset[0]`? It would be helpful to see what the items look like.",
"i can't find any clues from the `self.train_dataset[0]`. It is a `SquadFeature` object, like: \r\n\r\n```\r\n2020-08-12 20:48:57,663 -- [__main__:57][INFO]: train_dataset[0] input_ids: [101, 1706, 2292, 1225, 1103, 6567, 2090, 9273, 2845, 1107, 8109, 1107, 10111, 20500, 1699, 136, 102, 22182, 1193, 117, 1103, 1278, 1144, 170, 2336, 1959, 119, 1335, 4184, 1103, 4304, 4334, 112, 188, 2284, 10945, 1110, 170, 5404, 5921, 1104, 1103, 6567, 2090, 119, 13301, 1107, 1524, 1104, 1103, 4304, 4334, 1105, 4749, 1122, 117, 1110, 170, 7335, 5921, 1104, 4028, 1114, 1739, 1146, 14089, 5591, 1114, 1103, 7051, 107, 159, 21462, 1566, 24930, 2508, 152, 1306, 3965, 107, 119, 5893, 1106, 1103, 4304, 4334, 1110, 1103, 19349, 1104, 1103, 11373, 4641, 119, 13301, 1481, 1103, 171, 17506, 9538, 1110, 1103, 144, 10595, 2430, 117, 170, 14789, 1282, 1104, 8070, 1105, 9284, 119, 1135, 1110, 170, 16498, 1104, 1103, 176, 10595, 2430, 1120, 10111, 20500, 117, 1699, 1187, 1103, 6567, 2090, 25153, 1193, 1691, 1106, 2216, 17666, 6397, 3786, 1573, 25422, 13149, 1107, 8109, 119, 1335, 1103, 1322, 1104, 1103, 1514, 2797, 113, 1105, 1107, 170, 2904, 1413, 1115, 8200, 1194, 124, 11739, 1105, 1103, 3487, 17917, 114, 117, 1110, 170, 3014, 117, 2030, 2576, 5921, 1104, 2090, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\r\n2020-08-12 20:48:57,663 -- [__main__:58][INFO]: train_dataset[0] attention_mask: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\r\n2020-08-12 20:48:57,663 -- [__main__:59][INFO]: train_dataset[0] token_type_ids: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\r\n2020-08-12 20:48:57,663 -- [__main__:60][INFO]: train_dataset[0] cls_index: 0\r\n2020-08-12 20:48:57,663 -- [__main__:61][INFO]: train_dataset[0] p_mask: [0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\r\n2020-08-12 20:48:57,663 -- [__main__:62][INFO]: train_dataset[0] example_index: 0\r\n2020-08-12 20:48:57,663 -- [__main__:63][INFO]: train_dataset[0] unique_id: 1000000000\r\n2020-08-12 20:48:57,663 -- [__main__:64][INFO]: train_dataset[0] paragraph_len: 163\r\n2020-08-12 20:48:57,663 -- [__main__:65][INFO]: train_dataset[0] token_is_max_context: {17: True, 18: True, 19: True, 20: True, 21: True, 22: True, 23: True, 24: True, 25: True, 26: True, 27: True, 28: True, 29: True, 30: True, 31: True, 32: True, 33: True, 34: True, 35: True, 36: True, 37: True, 38: True, 39: True, 40: True, 41: True, 42: True, 43: True, 44: True, 45: True, 46: True, 47: True, 48: True, 49: True, 50: True, 51: True, 52: True, 53: True, 54: True, 55: True, 56: True, 57: True, 58: True, 59: True, 60: True, 61: True, 62: True, 63: True, 64: True, 65: True, 66: True, 67: True, 68: True, 69: True, 70: True, 71: True, 72: True, 73: True, 74: True, 75: True, 76: True, 77: True, 78: True, 79: True, 80: True, 81: True, 82: True, 83: True, 84: True, 85: True, 86: True, 87: True, 88: True, 89: True, 90: True, 91: True, 92: True, 93: True, 94: True, 95: True, 96: True, 97: True, 98: True, 99: True, 100: True, 101: True, 102: True, 103: True, 104: True, 105: True, 106: True, 107: True, 108: True, 109: True, 110: True, 111: True, 112: True, 113: True, 114: True, 115: True, 116: True, 117: True, 118: True, 119: True, 120: True, 121: True, 122: True, 123: True, 124: 
True, 125: True, 126: True, 127: True, 128: True, 129: True, 130: True, 131: True, 132: True, 133: True, 134: True, 135: True, 136: True, 137: True, 138: True, 139: True, 140: True, 141: True, 142: True, 143: True, 144: True, 145: True, 146: True, 147: True, 148: True, 149: True, 150: True, 151: True, 152: True, 153: True, 154: True, 155: True, 156: True, 157: True, 158: True, 159: True, 160: True, 161: True, 162: True, 163: True, 164: True, 165: True, 166: True, 167: True, 168: True, 169: True, 170: True, 171: True, 172: True, 173: True, 174: True, 175: True, 176: True, 177: True, 178: True, 179: True}\r\n2020-08-12 20:48:57,663 -- [__main__:66][INFO]: train_dataset[0] tokens: ['[CLS]', 'To', 'whom', 'did', 'the', 'Virgin', 'Mary', 'allegedly', 'appear', 'in', '1858', 'in', 'Lou', '##rdes', 'France', '?', '[SEP]', 'Architectural', '##ly', ',', 'the', 'school', 'has', 'a', 'Catholic', 'character', '.', 'At', '##op', 'the', 'Main', 'Building', \"'\", 's', 'gold', 'dome', 'is', 'a', 'golden', 'statue', 'of', 'the', 'Virgin', 'Mary', '.', 'Immediately', 'in', 'front', 'of', 'the', 'Main', 'Building', 'and', 'facing', 'it', ',', 'is', 'a', 'copper', 'statue', 'of', 'Christ', 'with', 'arms', 'up', '##rai', '##sed', 'with', 'the', 'legend', '\"', 'V', '##eni', '##te', 'Ad', 'Me', 'O', '##m', '##nes', '\"', '.', 'Next', 'to', 'the', 'Main', 'Building', 'is', 'the', 'Basilica', 'of', 'the', 'Sacred', 'Heart', '.', 'Immediately', 'behind', 'the', 'b', '##asi', '##lica', 'is', 'the', 'G', '##rot', '##to', ',', 'a', 'Marian', 'place', 'of', 'prayer', 'and', 'reflection', '.', 'It', 'is', 'a', 'replica', 'of', 'the', 'g', '##rot', '##to', 'at', 'Lou', '##rdes', ',', 'France', 'where', 'the', 'Virgin', 'Mary', 'reputed', '##ly', 'appeared', 'to', 'Saint', 'Bern', '##ade', '##tte', 'So', '##ubi', '##rous', 'in', '1858', '.', 'At', 'the', 'end', 'of', 'the', 'main', 'drive', '(', 'and', 'in', 'a', 'direct', 'line', 'that', 'connects', 'through', '3', 'statues', 'and', 'the', 'Gold', 'Dome', ')', ',', 'is', 'a', 'simple', ',', 'modern', 'stone', 'statue', 'of', 'Mary', '.', '[SEP]']\r\n2020-08-12 20:48:57,663 -- [__main__:67][INFO]: train_dataset[0] token_to_orig_map: {17: 0, 18: 0, 19: 0, 20: 1, 21: 2, 22: 3, 23: 4, 24: 5, 25: 6, 26: 6, 27: 7, 28: 7, 29: 8, 30: 9, 31: 10, 32: 10, 33: 10, 34: 11, 35: 12, 36: 13, 37: 14, 38: 15, 39: 16, 40: 17, 41: 18, 42: 19, 43: 20, 44: 20, 45: 21, 46: 22, 47: 23, 48: 24, 49: 25, 50: 26, 51: 27, 52: 28, 53: 29, 54: 30, 55: 30, 56: 31, 57: 32, 58: 33, 59: 34, 60: 35, 61: 36, 62: 37, 63: 38, 64: 39, 65: 39, 66: 39, 67: 40, 68: 41, 69: 42, 70: 43, 71: 43, 72: 43, 73: 43, 74: 44, 75: 45, 76: 46, 77: 46, 78: 46, 79: 46, 80: 46, 81: 47, 82: 48, 83: 49, 84: 50, 85: 51, 86: 52, 87: 53, 88: 54, 89: 55, 90: 56, 91: 57, 92: 58, 93: 58, 94: 59, 95: 60, 96: 61, 97: 62, 98: 62, 99: 62, 100: 63, 101: 64, 102: 65, 103: 65, 104: 65, 105: 65, 106: 66, 107: 67, 108: 68, 109: 69, 110: 70, 111: 71, 112: 72, 113: 72, 114: 73, 115: 74, 116: 75, 117: 76, 118: 77, 119: 78, 120: 79, 121: 79, 122: 79, 123: 80, 124: 81, 125: 81, 126: 81, 127: 82, 128: 83, 129: 84, 130: 85, 131: 86, 132: 87, 133: 87, 134: 88, 135: 89, 136: 90, 137: 91, 138: 91, 139: 91, 140: 92, 141: 92, 142: 92, 143: 93, 144: 94, 145: 94, 146: 95, 147: 96, 148: 97, 149: 98, 150: 99, 151: 100, 152: 101, 153: 102, 154: 102, 155: 103, 156: 104, 157: 105, 158: 106, 159: 107, 160: 108, 161: 109, 162: 110, 163: 111, 164: 112, 165: 113, 166: 114, 167: 115, 168: 115, 169: 115, 170: 116, 171: 117, 172: 118, 173: 118, 174: 119, 175: 120, 
176: 121, 177: 122, 178: 123, 179: 123}\r\n```\r\n",
"Ok. You need to remove some keys from it as it has way too many attributes (the failure comes from the fact the data collator is trying to build a tensor from the `token_is_max_context` fields).\r\n\r\nThe easiest way is probably to use the `SquadDataset` in `data.datasets.squad`, or you can just copy its `__getitem__` method and put it on your own dataset class.",
"Thanks Sylvain. But I found that my transformers library doesn't contain `data.datasets.squad` at all. I'm using `transformers==3.0.2`. It only contains `glue` and `language_model` two classes. ",
"Would you mind to explain how the error comes from the data collator? The `token_is_max_context` only contains a list of `True` flags. Is there a certain order (keys) of these features?",
"The data collator should receive a dictionary string to list of ints/tensors. The value associated to `token_is_max_context` is neither a list of int or a tensor, hence the error.\r\nNote that you dataset should have items that are dictionaries with keys that are argument names your model will accept, which another reason why `token_is_max_context` needs to be removed.",
"Okay, it seems to make sense. There would be another question: does `transformers` process the original version of SQUAD dataset? I download the data from the SQUAD website, which shouldn't have any errors if it is the exact same as the one `transformers` used. Can I simply remove `token_is_max_context` from the `SquadFeatures` to solve this error? \r\n\r\nAlso which version of `transformers` contains data.datasets.squad? I can't find it in 3.0.0, 2.9.0 and even 2.5.0. ",
"You need to only extract : input_ids, attention_mask and token_type_ids as the rest is probably not helpful to your model. AFAICT the file [data.datasets.squad](https://github.com/huggingface/transformers/blob/master/src/transformers/data/datasets/squad.py) has been there for a while, so you should have it in those versions.",
"Thanks for help, Sylvain. Let me give a try today. Hopefully, it works fine. ",
"Hi Sylvain,\r\n\r\nSorry for the interruption again. I've created a new class that only contains `input_ids`, `attention_mask` and `token_type_ids`, then the system gives an error: \r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/scratch/yyu/codebase/odqa/odqa/reader/reader_trainer.py\", line 190, in <module>\r\n trainer.train()\r\n File \"/scratch/yyu/codebase/odqa/odqa/reader/reader_trainer.py\", line 168, in train\r\n self._trainer.train()\r\n File \"/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/transformers/trainer.py\", line 375, in train\r\n for step, inputs in enumerate(epoch_iterator):\r\n File \"/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/tqdm/std.py\", line 1130, in __iter__\r\n for obj in iterable:\r\n File \"/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/torch/utils/data/dataloader.py\", line 363, in __next__\r\n data = self._next_data()\r\n File \"/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/torch/utils/data/dataloader.py\", line 403, in _next_data\r\n data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n File \"/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py\", line 47, in fetch\r\n return self.collate_fn(data)\r\n File \"/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/transformers/data/data_collator.py\", line 91, in collate_batch\r\n batch = self._tensorize_batch(examples)\r\n File \"/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/transformers/data/data_collator.py\", line 99, in _tensorize_batch\r\n length_of_first = examples[0].size(0)\r\nAttributeError: 'SimpleSquadFeature' object has no attribute 'size'\r\n```\r\n\r\nBut I didn't find the `size()` function in the original class (SquadFeature) either. Do you know why it happens like that? \r\n\r\nAnd also I've installed transformers from 2.1.0 to 3.0.2 but can't import a class called `SquadDataset`, there is not a path [data.datasets.squad](https://github.com/huggingface/transformers/blob/master/src/transformers/data/datasets/squad.py), but `data.processor.squad`. When did transformers delete the `SquadDataset`? \r\n\r\nThanks for help. \r\nYanchao",
"Don't create a special class, just return a dictionary with those fields and it should work.",
"Hi,\r\n\r\nI've had nearly the same problem as yy147 (the TypeError related to the default_data_collator), even though my train_dataset had items that were precisely a dictionary mapping the keys: 'input_ids', 'attention_mask', 'token_type_ids', to lists of ints, and 'label' to a torch.LongTensor.\r\n\r\nStrangely I managed to solve the problem by copying the transformers.data.data_collator.default_data_collator into my own code and letting the Trainer use that.\r\n\r\nPython version 3.6.8\r\ntorch version 1.6.0\r\ntransformers version 3.0.2\r\n\r\nHope it helps,\r\nGijs\r\n",
"> Don't create a special class, just return a dictionary with those fields and it should work.\r\n\r\nThanks Sylvain. I've found the `SquadDateset`, which is surprisingly not included in any transformers versions if you run `pip install transfermers==3.0.2`. I can only find it if I install `transformers` from the source. It seems something needs to be fixed.\r\n\r\nBest,\r\nYanchao ",
"Thanks for help, Gijs. I will try to copy and paste it later. It is a wired situation. ",
"Hi @yy147 , I am also getting a similar error:\r\n``` \r\nlength_of_first = examples[0].size(0)\r\nAttributeError: 'dict' object has no attribute 'size\r\n```\r\n\r\nHave you managed to fix your error?",
"Hi @gungor2, I found another source code extended from transformers example `https://github.com/kamalkraj/BERT-SQuAD/blob/master/utils.py`. It gives a great example to solve the problem of `squad_examples_to_features`. It works well for me. Hope it is helpful for you. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi, I have a related issue I am trying to train the [**TrOCR**](https://huggingface.co/microsoft/trocr-large-handwritten) model on my own data what I tried: \r\n```\r\n# the class for loading data has function :\r\ndef __getitem__(self, idx):\r\n file_name = self.df['file_name'][idx]\r\n text = self.df['text'][idx]\r\n # prepare image (i.e. resize + normalize)\r\n image = Image.open(self.root_dir + file_name).convert(\"RGB\")\r\n pixel_values = self.processor(image, return_tensors=\"pt\").pixel_values\r\n labels = self.processor.tokenizer(text, padding=\"max_length\",\r\n max_length=self.max_target_length).input_ids \r\n labels = [label if label != self.processor.tokenizer.pad_token_id else -100 for label in\r\n labels]\r\n\r\n encoding = {\"pixel_values\": pixel_values.squeeze(), \"labels\": torch.tensor(labels)}\r\n return encoding\r\ntraining_args = Seq2SeqTrainingArguments(\r\n num_train_epochs=25, \r\n learning_rate=5e-5,\r\n predict_with_generate=True,\r\n evaluation_strategy=\"steps\",\r\n per_device_train_batch_size=64,\r\n per_device_eval_batch_size=64,\r\n fp16=True, \r\n output_dir=\"/1/large/\", \r\n logging_steps=100,\r\n save_steps=2000,\r\n eval_steps=5000,\r\n )\r\n\r\n trainer = Seq2SeqTrainer( model=model, tokenizer=processor.feature_extractor,\r\n args=training_args, compute_metrics=compute_metrics, train_dataset=train_dataset,\r\n eval_dataset=eval_dataset, data_collator=default_data_collator, )\r\n```\r\nthe feed input image to the model has a `height of 64 `fixed for all and different `width` \r\nThe issue I see is: where the training stops after a few hours\r\n```\r\nTraceback (most recent call last):\r\n File \"train.py\", line 191, in <module>\r\n main()\r\n File \"train.py\", line 173, in main\r\n trainer.train()\r\n File \"/home/user/venv/lib/python3.8/site-packages/transformers/trainer.py\", line 1521, in train\r\n return inner_training_loop(\r\n File \"/home/user/venv/lib/python3.8/site-packages/transformers/trainer.py\", line 1737, in _inner_training_loop\r\n for step, inputs in enumerate(epoch_iterator):\r\n File \"/home/user/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py\", line 435, in __next__\r\n data = self._next_data()\r\n File \"/home/user/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py\", line 475, in _next_data\r\n data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n File \"/home/user/venv/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py\", line 47, in fetch\r\n return self.collate_fn(data)\r\n File \"/home/user/venv/lib/python3.8/site-packages/transformers/trainer_utils.py\", line 696, in __call__\r\n return self.data_collator(features)\r\n File \"/home/user/venv/lib/python3.8/site-packages/transformers/data/data_collator.py\", line 67, in default_data_collator\r\n return torch_default_data_collator(features)\r\n File \"/home/user/venv/lib/python3.8/site-packages/transformers/data/data_collator.py\", line 129, in torch_default_data_collator\r\n batch[k] = torch.stack([f[k] for f in features])\r\nRuntimeError: stack expects each tensor to be equal size, but got [128] at entry 0 and [139] at entry 19\r\n 1%|▊ | 1356/166025 [40:59<82:58:15, 1.81s/it]\r\n```\r\nTransformer version: 4.22.2\r\n@sgugger @NielsRogge @\r\n\r\n",
"Hi,\r\n\r\nIt looks like your target texts aren't having the same length. You need to not only pad but also set `truncation=True` to make sure all texts have 128 tokens."
] | 1,597 | 1,679 | 1,606 | NONE | null | Hello, I'm a newcomer playing with transformers. I am trying to train the BERT model `bert-base-cased` on SQuAD 2.0, but I am getting an error in `data_collator`: `TypeError: an integer is required`
Here is the detail:
```
Epoch: 0%| | 0/2 [00:00<?, ?it/s]
Iteration: 0%| | 0/4135 [00:00<?, ?it/s]
Epoch: 0%| | 0/2 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/scratch/yyu/codebase/odqa/odqa/reader/reader_trainer.py", line 138, in <module>
trainer.train()
File "/scratch/yyu/codebase/odqa/odqa/reader/reader_trainer.py", line 119, in train
self._trainer.train()
File "/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/transformers/trainer.py", line 456, in train
for step, inputs in enumerate(epoch_iterator):
File "/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/tqdm/std.py", line 1130, in __iter__
for obj in iterable:
File "/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 363, in __next__
data = self._next_data()
File "/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 403, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/transformers/data/data_collator.py", line 62, in default_data_collator
batch[k] = torch.tensor([f[k] for f in features], dtype=torch.long)
TypeError: an integer is required (got type dict)
```
Here is my trainer configuration:
```
training_args = TrainingArguments(
output_dir=self.output_dir,
overwrite_output_dir=True,
num_train_epochs=2,
per_gpu_train_batch_size=32,
# per_device_eval_batch_size=64,
warmup_steps=500,
weight_decay=0.01,
# evaluate_during_training=True,
save_steps=10_000,
logging_dir='./logs',
)
self._trainer = Trainer(
model=self._model,
args=training_args,
compute_metrics=self.compute_metrics,
train_dataset=self.train_dataset,
eval_dataset=self.test_dataset
)
```
Does anyone know why this happens, and how I can fix this error? I don't know whether the mistake is in my code or in transformers :-( Thanks for any help.
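For reference, here is a minimal sketch of what I understand the default collator expects: a dataset whose items are plain dicts mapping model argument names to tensors/ints. The wrapper class below is my own assumption (not an official recipe), built on the `SquadFeatures` fields:

```python
import torch
from torch.utils.data import Dataset

class SimpleSquadDataset(Dataset):
    """Hypothetical wrapper that keeps only the fields the model's forward() accepts."""

    def __init__(self, features):
        self.features = features

    def __len__(self):
        return len(self.features)

    def __getitem__(self, idx):
        f = self.features[idx]
        return {
            "input_ids": torch.tensor(f.input_ids, dtype=torch.long),
            "attention_mask": torch.tensor(f.attention_mask, dtype=torch.long),
            "token_type_ids": torch.tensor(f.token_type_ids, dtype=torch.long),
            "start_positions": torch.tensor(f.start_position, dtype=torch.long),
            "end_positions": torch.tensor(f.end_position, dtype=torch.long),
        }
```
Passing an instance of such a dataset as `train_dataset` should let the default data collator batch the items without trying to tensorize fields like `token_is_max_context`.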
Best | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6440/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6439 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6439/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6439/comments | https://api.github.com/repos/huggingface/transformers/issues/6439/events | https://github.com/huggingface/transformers/issues/6439 | 677,699,403 | MDU6SXNzdWU2Nzc2OTk0MDM= | 6,439 | TrainingArguments are ignored?! | {
"login": "NebelAI",
"id": 7240417,
"node_id": "MDQ6VXNlcjcyNDA0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7240417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NebelAI",
"html_url": "https://github.com/NebelAI",
"followers_url": "https://api.github.com/users/NebelAI/followers",
"following_url": "https://api.github.com/users/NebelAI/following{/other_user}",
"gists_url": "https://api.github.com/users/NebelAI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NebelAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NebelAI/subscriptions",
"organizations_url": "https://api.github.com/users/NebelAI/orgs",
"repos_url": "https://api.github.com/users/NebelAI/repos",
"events_url": "https://api.github.com/users/NebelAI/events{/privacy}",
"received_events_url": "https://api.github.com/users/NebelAI/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I don't see what's wrong? There are 422,095 iterations in one of your epochs, so to get to 500,000 you'll do one full epoch and the beginning of the second epoch. Training will stop after it has reached 500,000.\r\n\r\nYou can't have just one epoch since you need more iterations to reach the `max_steps` you have given. That argument overrides the number of epochs. ",
"Then I can definitely say: I misinterpreted the logs. There was something like \"global steps\" when using BERT's pretrain script and the value was identical with the previously set \"max_steps\" parameter. Now I get it ... Thanks for clearing it up."
] | 1,597 | 1,597 | 1,597 | NONE | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
Trainer: @sgugger
## Information
Hey, I'm using `01_how-to-train.ipynb` to get a feeling for the object-oriented way of training a BERT model from scratch. Until now I've been using the scripts offered by the official BERT repository. My goal is to train all of my future Transformer models with your Hugging Face interface (from scratch and, of course, fine-tuning too).
I used `max_steps = 500_000` but it gets completely ignored. After training is started the output says:
```
Iteration: 11639/422095 [1:52:03<70:16:42, 1.62it/s]
Epoch 0/2 [00:00<?, ?it/s]
```
**Two epochs and 422,095 iterations seem wrong!?** The official docs say _"max_steps = the total number of training steps to perform"_. Am I misinterpreting something?
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset:
* line by line dataset
* training a bert language model from scratch (generating vocab, setting a config, ...)
## To reproduce
Use the Colab notebook "01_how-to-train.ipynb" (https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb) and change TrainingArguments to the following:
```
training_args = TrainingArguments(
output_dir="./smallBERTa",
overwrite_output_dir=True,
do_train=True,
warmup_steps=5000,
max_steps=500000,
per_gpu_train_batch_size=64,
save_steps=10_000,
save_total_limit=2,
)
```
Yes, I am passing the `training_args` to the `Trainer()` object.
## Expected behavior
I'm expecting to get 500,000 global training steps and just one epoch.
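For reference, a quick sanity check of these numbers (assuming the 422,095 figure from the progress bar above is the number of steps in one epoch) shows how two epochs could be needed if `max_steps` takes precedence:

```python
import math

steps_per_epoch = 422_095  # taken from the progress bar above
max_steps = 500_000
epochs_needed = math.ceil(max_steps / steps_per_epoch)  # -> 2: one full epoch plus part of a second
```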
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6439/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6438 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6438/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6438/comments | https://api.github.com/repos/huggingface/transformers/issues/6438/events | https://github.com/huggingface/transformers/issues/6438 | 677,674,267 | MDU6SXNzdWU2Nzc2NzQyNjc= | 6,438 | Training GPT2 and Reformer from scratch. | {
"login": "VikasRajashekar",
"id": 52132904,
"node_id": "MDQ6VXNlcjUyMTMyOTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/52132904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VikasRajashekar",
"html_url": "https://github.com/VikasRajashekar",
"followers_url": "https://api.github.com/users/VikasRajashekar/followers",
"following_url": "https://api.github.com/users/VikasRajashekar/following{/other_user}",
"gists_url": "https://api.github.com/users/VikasRajashekar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VikasRajashekar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VikasRajashekar/subscriptions",
"organizations_url": "https://api.github.com/users/VikasRajashekar/orgs",
"repos_url": "https://api.github.com/users/VikasRajashekar/repos",
"events_url": "https://api.github.com/users/VikasRajashekar/events{/privacy}",
"received_events_url": "https://api.github.com/users/VikasRajashekar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @VikasRajashekar, \r\n\r\nWe are trying to move \"non-bug\" related questions to https://discuss.huggingface.co/ - could you post your question there again? :-) ",
"Btw, for Reformer, you can check out these notebooks: https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb and https://github.com/patrickvonplaten/notebooks/blob/master/Reformer_For_Masked_LM.ipynb"
] | 1,597 | 1,597 | 1,597 | NONE | null | Hello, I am looking for an example script/notebook to train GPT-2 and Reformer models from scratch in German.
Something similar to:
https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb
I am trying to modify the same notebook, but GPT-2 doesn't seem to accept `LineByLineTextDataset` or padding out of the box.
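For anyone else hitting this, here is a minimal sketch of the workaround I am assuming is standard (GPT-2 ships without a pad token, so one is typically borrowed from the EOS token; the file path and block size below are placeholders):

```python
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2TokenizerFast,
    LineByLineTextDataset,
)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

# "train.txt" is a placeholder for the German line-by-line corpus
dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="train.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)  # causal LM, no masking
```
 | {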
"url": "https://api.github.com/repos/huggingface/transformers/issues/6438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6438/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6437 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6437/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6437/comments | https://api.github.com/repos/huggingface/transformers/issues/6437/events | https://github.com/huggingface/transformers/pull/6437 | 677,665,813 | MDExOlB1bGxSZXF1ZXN0NDY2NzQ4MTk2 | 6,437 | Fix #6428 | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"👍"
] | 1,597 | 1,597 | 1,597 | COLLABORATOR | null | The `HfArgumentParser` was failing on arguments type-annotated with `Optional[bool]`. This fixes that (and issue #6428 in the process) so we don't have to remember not to put `Optional` around bools. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6437/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6437",
"html_url": "https://github.com/huggingface/transformers/pull/6437",
"diff_url": "https://github.com/huggingface/transformers/pull/6437.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6437.patch",
"merged_at": 1597236450000
} |
https://api.github.com/repos/huggingface/transformers/issues/6436 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6436/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6436/comments | https://api.github.com/repos/huggingface/transformers/issues/6436/events | https://github.com/huggingface/transformers/issues/6436 | 677,648,037 | MDU6SXNzdWU2Nzc2NDgwMzc= | 6,436 | Epoch iterator for run_pl_ner.py | {
"login": "YojanaGadiya",
"id": 45199062,
"node_id": "MDQ6VXNlcjQ1MTk5MDYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45199062?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YojanaGadiya",
"html_url": "https://github.com/YojanaGadiya",
"followers_url": "https://api.github.com/users/YojanaGadiya/followers",
"following_url": "https://api.github.com/users/YojanaGadiya/following{/other_user}",
"gists_url": "https://api.github.com/users/YojanaGadiya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YojanaGadiya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YojanaGadiya/subscriptions",
"organizations_url": "https://api.github.com/users/YojanaGadiya/orgs",
"repos_url": "https://api.github.com/users/YojanaGadiya/repos",
"events_url": "https://api.github.com/users/YojanaGadiya/events{/privacy}",
"received_events_url": "https://api.github.com/users/YojanaGadiya/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,603 | 1,603 | NONE | null | Dear all,
While training with the run_pl_ner.py script, the epoch iterator's total increases as more documents are added. With this increase, it also prints a new line, making the epoch progress bar span multiple lines, as shown below. I was wondering if there is a way to restrict this multi-line progress bar to a single line?
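In case it helps, the workaround I'm currently trying (my own assumption, not an official fix) is to let tqdm resize with the terminal instead of printing fixed-width lines; `dataloader` below is a placeholder for the training loader:

```python
from tqdm import tqdm

# dynamic_ncols lets the bar adapt to the terminal width instead of wrapping onto new lines
for batch in tqdm(dataloader, desc="Epoch", dynamic_ncols=True, leave=False):
    pass  # training step goes here
```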
Thank You.

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6436/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6436/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6435 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6435/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6435/comments | https://api.github.com/repos/huggingface/transformers/issues/6435/events | https://github.com/huggingface/transformers/pull/6435 | 677,576,188 | MDExOlB1bGxSZXF1ZXN0NDY2NjcxNDg2 | 6,435 | Update README.md | {
"login": "cedspam",
"id": 7693193,
"node_id": "MDQ6VXNlcjc2OTMxOTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7693193?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cedspam",
"html_url": "https://github.com/cedspam",
"followers_url": "https://api.github.com/users/cedspam/followers",
"following_url": "https://api.github.com/users/cedspam/following{/other_user}",
"gists_url": "https://api.github.com/users/cedspam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cedspam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cedspam/subscriptions",
"organizations_url": "https://api.github.com/users/cedspam/orgs",
"repos_url": "https://api.github.com/users/cedspam/repos",
"events_url": "https://api.github.com/users/cedspam/events{/privacy}",
"received_events_url": "https://api.github.com/users/cedspam/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6435?src=pr&el=h1) Report\n> Merging [#6435](https://codecov.io/gh/huggingface/transformers/pull/6435?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ffea5ce2f4d154a3696b8fe2fb116fa09235700&el=desc) will **increase** coverage by `0.05%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6435?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6435 +/- ##\n==========================================\n+ Coverage 79.89% 79.94% +0.05% \n==========================================\n Files 153 153 \n Lines 27902 27902 \n==========================================\n+ Hits 22291 22306 +15 \n+ Misses 5611 5596 -15 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6435?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+0.68%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.72% <0.00%> (+2.27%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+4.76%)` | :arrow_up: |\n| ... 
and [3 more](https://codecov.io/gh/huggingface/transformers/pull/6435/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6435?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6435?src=pr&el=footer). Last update [4ffea5c...f0507f3](https://codecov.io/gh/huggingface/transformers/pull/6435?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6435/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6435",
"html_url": "https://github.com/huggingface/transformers/pull/6435",
"diff_url": "https://github.com/huggingface/transformers/pull/6435.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6435.patch",
"merged_at": 1597309277000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6434 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6434/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6434/comments | https://api.github.com/repos/huggingface/transformers/issues/6434/events | https://github.com/huggingface/transformers/pull/6434 | 677,544,801 | MDExOlB1bGxSZXF1ZXN0NDY2NjQ1MTc3 | 6,434 | Centralize logging | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6434?src=pr&el=h1) Report\n> Merging [#6434](https://codecov.io/gh/huggingface/transformers/pull/6434?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/461ae86812f9d75762bbdae2ac5776f9a5d702ea?el=desc) will **increase** coverage by `0.46%`.\n> The diff coverage is `91.58%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6434?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6434 +/- ##\n==========================================\n+ Coverage 79.63% 80.09% +0.46% \n==========================================\n Files 156 157 +1 \n Lines 28420 28471 +51 \n==========================================\n+ Hits 22631 22805 +174 \n+ Misses 5789 5666 -123 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6434?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/commands/convert.py](https://codecov.io/gh/huggingface/transformers/pull/6434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9jb252ZXJ0LnB5) | `0.00% <0.00%> (ø)` | |\n| [src/transformers/commands/run.py](https://codecov.io/gh/huggingface/transformers/pull/6434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9ydW4ucHk=) | `0.00% <0.00%> (ø)` | |\n| [src/transformers/commands/serving.py](https://codecov.io/gh/huggingface/transformers/pull/6434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9zZXJ2aW5nLnB5) | `0.00% <0.00%> (ø)` | |\n| [src/transformers/commands/train.py](https://codecov.io/gh/huggingface/transformers/pull/6434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy90cmFpbi5weQ==) | `0.00% <0.00%> (ø)` | |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.18% <ø> (-0.30%)` | :arrow_down: |\n| [src/transformers/data/metrics/squad\\_metrics.py](https://codecov.io/gh/huggingface/transformers/pull/6434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL21ldHJpY3Mvc3F1YWRfbWV0cmljcy5weQ==) | `0.00% <0.00%> (ø)` | |\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.50% <66.66%> (ø)` | |\n| [src/transformers/utils/logging.py](https://codecov.io/gh/huggingface/transformers/pull/6434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy91dGlscy9sb2dnaW5nLnB5) | `75.00% <75.00%> (ø)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.28% <100.00%> (ø)` | |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `90.00% <100.00%> (ø)` | |\n| ... and [132 more](https://codecov.io/gh/huggingface/transformers/pull/6434/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6434?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6434?src=pr&el=footer). 
Last update [461ae86...c81c035](https://codecov.io/gh/huggingface/transformers/pull/6434?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Looks good to me as well! I'm thinking that it might be a good idea to create one helper function for each verbosity level:\r\n```\r\nhf_logger.set_info_verbosity()\r\nhf_logger.set_warning_verbosity()\r\nhf_logger.set_debug_verbosity()\r\n```\r\n\r\nThese functions might be easier to remember...what do you think @LysandreJik ?\r\n",
"> Looks good to me as well! I'm thinking that it might be a good idea to create one helper function for each verbosity level:\r\n> \r\n> ```\r\n> hf_logger.set_info_verbosity()\r\n> hf_logger.set_warning_verbosity()\r\n> hf_logger.set_debug_verbosity()\r\n> ```\r\n> \r\n> These functions might be easier to remember...what do you think @LysandreJik ?\r\n\r\nFor a simpler completion, I would rather call these:\r\n```\r\nhf_logger.set_verbosity_info()\r\nhf_logger.set_verbosity_warning()\r\nhf_logger.set_verbosity_debug()\r\nhf_logger.set_verbosity_error() # This one is important as well, to basically disactivate all infos/warnings\r\n```\r\n",
"h"
] | 1,597 | 1,598 | 1,598 | MEMBER | null | The goal of this PR is to offer a better way to manage logging to the HuggingFace/transformers users. It's a very simple proposal: implement a single logger that is shared across all files, and implement three helper methods that can be used across the library and by users:
```py
def get_logger():
    '''
    Returns the logger instance for the library, that can be managed as a traditional `logging` logger.
    '''

def get_verbosity():
    '''
    Returns the logger instance verbosity level. Used to manage what is printed, for example with tqdm loading bars.

    Same as doing:
        hf_logging.get_logger().getEffectiveLevel()
    '''

def set_verbosity(level: int):
    '''
    Sets the logger instance verbosity level. Used to set the desired verbosity level across the library.

    Same as doing:
        hf_logging.get_logger().setLevel(level)
    '''
```
Users can use these methods as follows:
```py
from transformers import hf_logging
logger = hf_logging.get_logger()
hf_logging.set_verbosity(hf_logging.INFO)
# same as doing
logger.setLevel(hf_logging.INFO)
```
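To illustrate the per-level helpers suggested during review, here is a rough sketch of how they could wrap `set_verbosity` (the names mirror the suggestion and are not confirmed API; this assumes the module re-exports the standard `logging` levels as above):

```py
def set_verbosity_info():
    set_verbosity(INFO)

def set_verbosity_warning():
    set_verbosity(WARNING)

def set_verbosity_debug():
    set_verbosity(DEBUG)

def set_verbosity_error():
    # effectively deactivates all infos and warnings
    set_verbosity(ERROR)
```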
The noteworthy additions/changes are shown below. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6434/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6434",
"html_url": "https://github.com/huggingface/transformers/pull/6434",
"diff_url": "https://github.com/huggingface/transformers/pull/6434.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6434.patch",
"merged_at": 1598454636000
} |
https://api.github.com/repos/huggingface/transformers/issues/6433 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6433/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6433/comments | https://api.github.com/repos/huggingface/transformers/issues/6433/events | https://github.com/huggingface/transformers/pull/6433 | 677,531,000 | MDExOlB1bGxSZXF1ZXN0NDY2NjMzNjQ0 | 6,433 | Fix PABEE & PL CI failure | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6433?src=pr&el=h1) Report\n> Merging [#6433](https://codecov.io/gh/huggingface/transformers/pull/6433?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ffea5ce2f4d154a3696b8fe2fb116fa09235700&el=desc) will **decrease** coverage by `2.51%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6433?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6433 +/- ##\n==========================================\n- Coverage 79.89% 77.37% -2.52% \n==========================================\n Files 153 153 \n Lines 27902 27902 \n==========================================\n- Hits 22291 21588 -703 \n- Misses 5611 6314 +703 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6433?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `34.11% <0.00%> (-63.30%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.98% <0.00%> (-52.81%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.16% <0.00%> (-14.46%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-6.16%)` | :arrow_down: |\n| [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `82.71% <0.00%> (-2.47%)` | :arrow_down: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `96.19% <0.00%> (-1.64%)` | :arrow_down: |\n| ... and [17 more](https://codecov.io/gh/huggingface/transformers/pull/6433/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6433?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6433?src=pr&el=footer). Last update [4ffea5c...cd1ca4c](https://codecov.io/gh/huggingface/transformers/pull/6433?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Ah, PL test still failing!",
"@LysandreJik Don't worry. It seems I mistype some parameter name",
"Oops! @sshleifer could you have a look at the PL example? I've tried tweaking the parameters but it doesn't seem to work. ",
"@stas00 Can you take a look? @sshleifer is on a vacation. Lots of thanks!",
"Yes, of course, I will be able to investigate in a few hours.",
"(we are talking about `examples/test_examples.py::ExamplesTests::test_run_pl_glue`)\r\n\r\nI'm able to reproduce the problem of low `acc` with those changes proposed in this PR. This PR I get:\r\n```\r\nacc = 0.5\r\nf1 = 0.666\r\n```\r\n\r\nThe original pre-PR changes gives acc/f1=1.0 on my machine. \r\n\r\nIf you have a look at https://github.com/huggingface/transformers/pull/6034 I tried various hparams to no avail, it was working fine on my machine, but CI kept on failing. It was just very counterproductive trying to experiment w/o being able to reproduce it locally, so after some time I gave up. So the test is not ideal, but at least it's testing that it runs.\r\n\r\n@sshleifer said he was able to match the CI's low accuracy on his hardware (pre this PR).\r\n\r\n\r\n",
"@stas00 Yes I've already found the problem in #6034 (output_dir) and fixed that in our PR. However the accuracy is still too low compared to the trainer version of run_glue. Since you can now reproduce the low acc, please give it a look! ",
"Thank you for explaining what is happening, @JetRunner \r\n\r\nI have no perms to push, so try to use this:\r\n```\r\n testargs = \"\"\"\r\n run_pl_glue.py\r\n --model_name_or_path bert-base-cased\r\n --data_dir ./tests/fixtures/tests_samples/MRPC/\r\n --task mrpc\r\n --do_train\r\n --do_predict\r\n --output_dir ./tests/fixtures/tests_samples/pl_temp_dir\r\n --train_batch_size=32\r\n --learning_rate=1e-4\r\n --num_train_epochs=4\r\n --warmup_steps=3\r\n --seed=42\r\n --max_seq_length=128\r\n \"\"\".split()`\r\n```\r\nI get acc/f1 of 1.0 with this config, the key was more `--num_train_epochs` and some warm-up.\r\n\r\nSo you uncovered that these tests are very unreliable as they don't clean up after themselves and re-runs give invalid results. It's enough to get one run that succeeded, all the subsequent test re-runs will succeed at the moment. At the very least pl_glue needs to support `--overwrite_output_dir`.\r\n\r\nThat explains why I couldn't get CI to work, as mine probably wasn't working all along, other than succeeding once and then always reporting the old success. So I was getting false positives.\r\n\r\nShould transformers warn a user when a pre-existing dir filled with outdated data is found or plainly refuse to run?\r\n",
"@stas00 this perm also outputs `0.5`, sadly. I feel maybe there's another bug here in the PL example? \r\ncc @LysandreJik @sshleifer ",
"PABEE's bug is fixed in #6453. The reproducible low acc is still existing for PL.\r\ncc @LysandreJik @sshleifer "
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | #6421 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6433/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6433/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6433",
"html_url": "https://github.com/huggingface/transformers/pull/6433",
"diff_url": "https://github.com/huggingface/transformers/pull/6433.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6433.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6432 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6432/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6432/comments | https://api.github.com/repos/huggingface/transformers/issues/6432/events | https://github.com/huggingface/transformers/issues/6432 | 677,495,146 | MDU6SXNzdWU2Nzc0OTUxNDY= | 6,432 | TF2 implementation of LineByLineTextDataset? | {
"login": "simran-khanuja",
"id": 24687672,
"node_id": "MDQ6VXNlcjI0Njg3Njcy",
"avatar_url": "https://avatars.githubusercontent.com/u/24687672?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simran-khanuja",
"html_url": "https://github.com/simran-khanuja",
"followers_url": "https://api.github.com/users/simran-khanuja/followers",
"following_url": "https://api.github.com/users/simran-khanuja/following{/other_user}",
"gists_url": "https://api.github.com/users/simran-khanuja/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simran-khanuja/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simran-khanuja/subscriptions",
"organizations_url": "https://api.github.com/users/simran-khanuja/orgs",
"repos_url": "https://api.github.com/users/simran-khanuja/repos",
"events_url": "https://api.github.com/users/simran-khanuja/events{/privacy}",
"received_events_url": "https://api.github.com/users/simran-khanuja/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,603 | 1,603 | NONE | null | Hi, I have a text file which I want to use as input for trainer_tf.py. Since it requires a dataset object as input, is there any implementation of something like the LineByLineTextDataset module in TF2 as well? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6432/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/6432/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6431 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6431/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6431/comments | https://api.github.com/repos/huggingface/transformers/issues/6431/events | https://github.com/huggingface/transformers/pull/6431 | 677,439,043 | MDExOlB1bGxSZXF1ZXN0NDY2NTU2NjE0 | 6,431 | Disabled pabee test | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,597 | 1,597 | 1,597 | MEMBER | null | @JetRunner | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6431/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6431",
"html_url": "https://github.com/huggingface/transformers/pull/6431",
"diff_url": "https://github.com/huggingface/transformers/pull/6431.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6431.patch",
"merged_at": 1597215171000
} |
https://api.github.com/repos/huggingface/transformers/issues/6430 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6430/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6430/comments | https://api.github.com/repos/huggingface/transformers/issues/6430/events | https://github.com/huggingface/transformers/pull/6430 | 677,434,976 | MDExOlB1bGxSZXF1ZXN0NDY2NTUzMTUw | 6,430 | [WIP] QA Loss refactoring | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6430?src=pr&el=h1) Report\n> Merging [#6430](https://codecov.io/gh/huggingface/transformers/pull/6430?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ffea5ce2f4d154a3696b8fe2fb116fa09235700&el=desc) will **increase** coverage by `0.17%`.\n> The diff coverage is `93.10%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6430?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6430 +/- ##\n==========================================\n+ Coverage 79.89% 80.06% +0.17% \n==========================================\n Files 153 153 \n Lines 27902 27761 -141 \n==========================================\n- Hits 22291 22228 -63 \n+ Misses 5611 5533 -78 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6430?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6430/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.42% <89.47%> (+0.07%)` | :arrow_up: |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6430/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `81.85% <100.00%> (-0.19%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6430/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.99% <100.00%> (+0.23%)` | :arrow_up: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6430/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.44% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6430/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `81.87% <100.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6430/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `90.96% <100.00%> (+0.10%)` | :arrow_up: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6430/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.50% <100.00%> (+0.04%)` | :arrow_up: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6430/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `95.82% <100.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6430/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `96.38% <100.00%> (+0.59%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6430/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `91.15% <100.00%> (+0.12%)` | :arrow_up: |\n| ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6430/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6430?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6430?src=pr&el=footer). Last update [4ffea5c...4e08790](https://codecov.io/gh/huggingface/transformers/pull/6430?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I am unsure about this as it goes against the \"all in one files\" policy transformers has for the models and has been highlighted as one of the main reason people like transformers in a recent survey.\r\nYes the alternative is duplicate code in several files, yes this is officially considered as bad computer science practice and yes it is harder for us to maintain, but the point is to have everything available in one file a researcher can easily tweak.\r\n\r\nA similar attempt in #4944 has been ignored for the same reason.\r\n\r\nTagging @thomwolf and @julien-c for their thoughts.",
"I'd hold off then with completing this until if and when you give a green light to do so.\r\n\r\nMy idea of killing both rabbits with one shot would be writing an easy to maintain refactored code and have a tool that will unfold it for those who want it unfolded (and control the levels of how deep the unfolding goes). Does such a tool exist in python land?",
"> an easy to maintain refactored code and have a tool that will unfold it for those who want it unfolded (and control the levels of how deep the unfolding goes). Does such a tool exist in python land?\r\n\r\nWe explored such tools with @aaugustin a few months ago and the conclusion then was to try and build a lightweight, home-built system for this.",
"We could add a simple script that copies the code from somewhere into the modeling files if necessary and another to check the consistency. The first could be called during `make style` and the second during `make quality`. I was thinking of doing something similar for the `TrainingArguments` and the examples this week (adding a tweakable training arguments file for each example using Trainer), so let's see how it goes for those and then continue with model refactoring the same way?",
"(probably the same script with a different flag, like `black`, but yes, I like this idea)",
"this proved to be a failed experiment, closing this down."
] | 1,597 | 1,599 | 1,599 | CONTRIBUTOR | null | This is a refactoring experiment as suggested at https://github.com/huggingface/transformers/issues/6204
10 models and 1 template have been refactored - I will check for more if this looks promising (the doc and new function's signature are incomplete). Let me know whether to continue or not.
The refactoring was done with:
```
perl -0777 -pi -e '
$in = <<END;
logits = self.qa_outputs(sequence_output)
start_logits, end_logits = logits.split(1, dim=-1)
start_logits = start_logits.squeeze(-1)
end_logits = end_logits.squeeze(-1)
total_loss = None
if start_positions is not None and end_positions is not None:
# If we are on multi-GPU, split add a dimension
if len(start_positions.size()) > 1:
start_positions = start_positions.squeeze(-1)
if len(end_positions.size()) > 1:
end_positions = end_positions.squeeze(-1)
# sometimes the start/end positions are outside our model inputs, we ignore these terms
ignored_index = start_logits.size(1)
start_positions.clamp_(0, ignored_index)
end_positions.clamp_(0, ignored_index)
loss_fct = CrossEntropyLoss(ignore_index=ignored_index)
start_loss = loss_fct(start_logits, start_positions)
end_loss = loss_fct(end_logits, end_positions)
total_loss = (start_loss + end_loss) / 2
END
s/\Q$in\E/start_logits, end_logits, total_loss = self.calc_qa_loss(sequence_output, start_positions, end_positions)\n/msg
' \
./templates/adding_a_new_model/modeling_* ./src/transformers/modeling_*
```
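For reference, here is a minimal sketch of what the extracted helper might look like. The method name `calc_qa_loss` comes from the replacement string in the perl one-liner above, but its exact signature and placement (e.g., on `PreTrainedModel`) are assumptions, not the final design:
```python
from torch.nn import CrossEntropyLoss

def calc_qa_loss(self, sequence_output, start_positions=None, end_positions=None):
    # Mirrors the duplicated block the perl one-liner above removes.
    logits = self.qa_outputs(sequence_output)
    start_logits, end_logits = logits.split(1, dim=-1)
    start_logits = start_logits.squeeze(-1)
    end_logits = end_logits.squeeze(-1)
    total_loss = None
    if start_positions is not None and end_positions is not None:
        # If we are on multi-GPU, squeeze away the extra dimension
        if len(start_positions.size()) > 1:
            start_positions = start_positions.squeeze(-1)
        if len(end_positions.size()) > 1:
            end_positions = end_positions.squeeze(-1)
        # positions outside the model inputs are clamped to ignored_index
        ignored_index = start_logits.size(1)
        start_positions.clamp_(0, ignored_index)
        end_positions.clamp_(0, ignored_index)
        loss_fct = CrossEntropyLoss(ignore_index=ignored_index)
        start_loss = loss_fct(start_logits, start_positions)
        end_loss = loss_fct(end_logits, end_positions)
        total_loss = (start_loss + end_loss) / 2
    return start_logits, end_logits, total_loss
```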
@sshleifer, I'm not sure how you're going to judge the coverage change, as the coverage data is unreliable at the moment: https://github.com/huggingface/transformers/issues/6317 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6430/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6430",
"html_url": "https://github.com/huggingface/transformers/pull/6430",
"diff_url": "https://github.com/huggingface/transformers/pull/6430.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6430.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6429 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6429/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6429/comments | https://api.github.com/repos/huggingface/transformers/issues/6429/events | https://github.com/huggingface/transformers/pull/6429 | 677,357,185 | MDExOlB1bGxSZXF1ZXN0NDY2NDg2NzQy | 6,429 | [test schedulers] adjust to test the first step's reading | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6429?src=pr&el=h1) Report\n> Merging [#6429](https://codecov.io/gh/huggingface/transformers/pull/6429?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ffea5ce2f4d154a3696b8fe2fb116fa09235700&el=desc) will **increase** coverage by `0.05%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6429?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6429 +/- ##\n==========================================\n+ Coverage 79.89% 79.94% +0.05% \n==========================================\n Files 153 153 \n Lines 27902 27902 \n==========================================\n+ Hits 22291 22307 +16 \n+ Misses 5611 5595 -16 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6429?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+0.68%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.72% <0.00%> (+2.27%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+5.01%)` | :arrow_up: |\n| ... 
and [3 more](https://codecov.io/gh/huggingface/transformers/pull/6429/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6429?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6429?src=pr&el=footer). Last update [4ffea5c...324dd60](https://codecov.io/gh/huggingface/transformers/pull/6429?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,598 | 1,598 | CONTRIBUTOR | null | As I was working on a new scheduler, it was difficult to match numbers since the first step's reading was dropped in `unwrap_schedule` wrappers (they were taking the measurement after stepping). This PR adjusts the wrappers to first take a reading and then step.
This PR also makes a small refactor that moves all the unwrapping into the test script, so the test just compares two lists (avoiding multiple `[l[0] for l in lrs_1]` comprehensions).
The updated table is:
```
scheds = {
get_constant_schedule: ({}, [10.0] * self.num_steps),
get_constant_schedule_with_warmup: (
{"num_warmup_steps": 4},
[0.0, 2.5, 5.0, 7.5, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0],
),
get_linear_schedule_with_warmup: (
{**common_kwargs},
[0.0, 5.0, 10.0, 8.75, 7.5, 6.25, 5.0, 3.75, 2.5, 1.25],
),
get_cosine_schedule_with_warmup: (
{**common_kwargs},
[0.0, 5.0, 10.0, 9.61, 8.53, 6.91, 5.0, 3.08, 1.46, 0.38],
),
get_cosine_with_hard_restarts_schedule_with_warmup: (
{**common_kwargs, "num_cycles": 2},
[0.0, 5.0, 10.0, 8.53, 5.0, 1.46, 10.0, 8.53, 5.0, 1.46],
),
get_polynomial_decay_schedule_with_warmup: (
{**common_kwargs, "power": 2.0, "lr_end": 1e-7},
[0.0, 5.0, 10.0, 7.656, 5.625, 3.906, 2.5, 1.406, 0.625, 0.156],
),
}
```
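A minimal sketch of the adjusted wrapper described above, which takes a reading first and then steps; the helper name follows the description, the exact test code may differ, and the LR accessor (`get_lr` vs `get_last_lr`) depends on the torch version:
```python
def unwrap_schedule(scheduler, num_steps=10):
    # Read the LR *before* stepping so the first step's value is recorded too.
    lrs = []
    for _ in range(num_steps):
        lrs.append(scheduler.get_lr())
        scheduler.step()
    return lrs
```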
Unrelated to the changes suggested in this PR, this work exposes 2 minor issues:
1. We definitely have an off-by-one problem there, as the last step's reading is one reading too early (which this change exposes) - it doesn't complete the intended cycle. This is probably unimportant for 100s of steps, but it definitely stands out when developing a new scheduler.
To illustrate, see this change in the reported numbers for `get_polynomial_decay_schedule_with_warmup`:
```
- [5.0, 10.0, 7.656, 5.625, 3.906, 2.5, 1.406, 0.625, 0.156, 1e-07],
+ [0.0, 5.0, 10.0, 7.656, 5.625, 3.906, 2.5, 1.406, 0.625, 0.156],
```
the expected last step of `1e-07` is not there. It never was.
2. Also, the first step's reading is `0.0` in all schedulers except `get_constant_schedule`, so the first step does nothing. This could potentially be fixed by adding `min_lr=1e-7` to all schedulers, as suggested by @sshleifer in one of the recent scheduler-related PRs.
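To make issue 2 concrete, here is a sketch of what such a floor could look like inside, e.g., `get_linear_schedule_with_warmup`. Note that `min_lr_ratio` is a hypothetical argument, not an existing parameter, and that the floor applies to the multiplicative factor, so the effective minimum LR is `min_lr_ratio * base_lr`:
```python
def lr_lambda(current_step, num_warmup_steps=2, num_training_steps=10, min_lr_ratio=1e-7):
    # Hypothetical floor so the first/last readings are never exactly 0.0
    if current_step < num_warmup_steps:
        factor = float(current_step) / float(max(1, num_warmup_steps))
    else:
        factor = float(num_training_steps - current_step) / float(
            max(1, num_training_steps - num_warmup_steps)
        )
    return max(min_lr_ratio, factor)
```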
Let me know if this better fits into its own issue, as these issues have nothing to do with the PR itself. Or perhaps the 2 issues are just unimportant... | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6429/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6429",
"html_url": "https://github.com/huggingface/transformers/pull/6429",
"diff_url": "https://github.com/huggingface/transformers/pull/6429.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6429.patch",
"merged_at": 1598545408000
} |
https://api.github.com/repos/huggingface/transformers/issues/6428 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6428/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6428/comments | https://api.github.com/repos/huggingface/transformers/issues/6428/events | https://github.com/huggingface/transformers/issues/6428 | 677,274,110 | MDU6SXNzdWU2NzcyNzQxMTA= | 6,428 | Error in run_tf_squad.py script | {
"login": "M-Salti",
"id": 9285264,
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/M-Salti",
"html_url": "https://github.com/M-Salti",
"followers_url": "https://api.github.com/users/M-Salti/followers",
"following_url": "https://api.github.com/users/M-Salti/following{/other_user}",
"gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions",
"organizations_url": "https://api.github.com/users/M-Salti/orgs",
"repos_url": "https://api.github.com/users/M-Salti/repos",
"events_url": "https://api.github.com/users/M-Salti/events{/privacy}",
"received_events_url": "https://api.github.com/users/M-Salti/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The error seems to be caused by the field `use_tfds` from the `DataTrainingArguments` class.\r\nChanging its type from `Optional[bool]` to `bool` and changing the default value to `False`, seem to resolve the issue, however, I don't really understand why and I'm not sure whether this is the right way to fix the issue.\r\n",
"Can reproduce, will investigate today."
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
tensorflow: @jplu
documentation: @sgugger
--> @sgugger
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SQUaD
* [ ] my own task or dataset: (give details below)
I'm simply trying to train a new question answering model using the TF trainer script, and I get the following error:
```python
Traceback (most recent call last):
File "run_tf_squad.py", line 244, in <module>
main()
File "run_tf_squad.py", line 123, in main
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TFTrainingArguments))
File "/usr/local/lib/python3.6/dist-packages/transformers/hf_argparser.py", line 40, in __init__
self._add_dataclass_arguments(dtype)
File "/usr/local/lib/python3.6/dist-packages/transformers/hf_argparser.py", line 72, in _add_dataclass_arguments
elif hasattr(field.type, "__origin__") and issubclass(field.type.__origin__, List):
File "/usr/lib/python3.6/typing.py", line 1154, in __subclasscheck__
return super().__subclasscheck__(cls)
File "/usr/lib/python3.6/abc.py", line 209, in __subclasscheck__
ok = cls.__subclasshook__(subclass)
File "/usr/lib/python3.6/typing.py", line 890, in __extrahook__
if cls.__extra__ and issubclass(subclass, cls.__extra__):
TypeError: issubclass() arg 1 must be a class
```
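For context, the crash appears to happen because `hf_argparser.py` probes `field.type.__origin__` with `issubclass`, which Python 3.6's `typing` module cannot handle for `Optional[bool]` (its `__origin__` is `typing.Union`, not a class). A minimal sketch of the workaround described in the comments on this issue; the field name is real, but the help text and the rest of the dataclass are abridged/illustrative:
```python
from dataclasses import dataclass, field

@dataclass
class DataTrainingArguments:
    # Annotating as plain `bool` (instead of `Optional[bool]`) with a concrete
    # default avoids the failing issubclass() check on Python 3.6.
    use_tfds: bool = field(
        default=False,
        metadata={"help": "Whether to load the dataset via tfds (illustrative help text)."},
    )
```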
## To reproduce
Steps to reproduce the behavior:
1. Install transformers from the master branch
2. Run the example script in question-answering:
```
python run_tf_squad.py \
--model_name_or_path bert-base-uncased \
--output_dir model \
--max_seq_length 384 \
--num_train_epochs 2 \
--per_gpu_train_batch_size 8 \
--per_gpu_eval_batch_size 16 \
--do_train \
--logging_dir logs \
--logging_steps 10 \
--learning_rate 3e-5 \
--doc_stride 128
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The script should run normally and train the model
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6428/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6427 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6427/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6427/comments | https://api.github.com/repos/huggingface/transformers/issues/6427/events | https://github.com/huggingface/transformers/pull/6427 | 677,198,923 | MDExOlB1bGxSZXF1ZXN0NDY2MzU2NTI4 | 6,427 | Activate check on the CI | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6427?src=pr&el=h1) Report\n> Merging [#6427](https://codecov.io/gh/huggingface/transformers/pull/6427?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/34fabe1697f653dc0f54ac8f510d6ba5578a1a53&el=desc) will **increase** coverage by `2.57%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6427?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6427 +/- ##\n==========================================\n+ Coverage 77.38% 79.95% +2.57% \n==========================================\n Files 153 153 \n Lines 27932 27932 \n==========================================\n+ Hits 21614 22332 +718 \n+ Misses 6318 5600 -718 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6427?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6427/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6427/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.35% <0.00%> (+0.19%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6427/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.58% <0.00%> (+0.35%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6427/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6427/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (+0.83%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6427/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.33% <0.00%> (+0.94%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6427/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6427/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `97.87% <0.00%> (+1.06%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6427/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.58% <0.00%> (+1.20%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6427/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+1.36%)` | :arrow_up: |\n| ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/6427/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6427?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6427?src=pr&el=footer). Last update [34fabe1...333b476](https://codecov.io/gh/huggingface/transformers/pull/6427?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | COLLABORATOR | null | The check of modules documented and tested was only in `make quality`, not circleCI | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6427/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6427",
"html_url": "https://github.com/huggingface/transformers/pull/6427",
"diff_url": "https://github.com/huggingface/transformers/pull/6427.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6427.patch",
"merged_at": 1597236135000
} |
https://api.github.com/repos/huggingface/transformers/issues/6426 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6426/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6426/comments | https://api.github.com/repos/huggingface/transformers/issues/6426/events | https://github.com/huggingface/transformers/pull/6426 | 677,167,388 | MDExOlB1bGxSZXF1ZXN0NDY2MzMxOTk3 | 6,426 | Move prediction_loss_only to TrainingArguments | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6426?src=pr&el=h1) Report\n> Merging [#6426](https://codecov.io/gh/huggingface/transformers/pull/6426?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/66fa8ceaeaa6fe12f1bd4a5e6b0a924f59f715d9&el=desc) will **decrease** coverage by `2.62%`.\n> The diff coverage is `36.36%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6426?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6426 +/- ##\n==========================================\n- Coverage 79.90% 77.28% -2.63% \n==========================================\n Files 153 153 \n Lines 27877 27884 +7 \n==========================================\n- Hits 22276 21549 -727 \n- Misses 5601 6335 +734 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6426?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.25% <0.00%> (-0.13%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <60.00%> (-0.03%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `80.58% <100.00%> (+0.19%)` | :arrow_up: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `28.94% <0.00%> (-67.11%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.98% <0.00%> (-52.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `34.11% <0.00%> (-30.36%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.16% <0.00%> (-14.46%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6426/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6426?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6426?src=pr&el=footer). Last update [66fa8ce...34cca14](https://codecov.io/gh/huggingface/transformers/pull/6426?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"no strong opinion on this (but as usual consider the BC/cleanliness ratio carefully)"
] | 1,597 | 1,597 | 1,597 | COLLABORATOR | null | It didn't make sense to me to have that boolean flag in the init of `Trainer` when all the other ones are in `TrainingArguments`, so I deprecated it and moved it.
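For reference, a minimal sketch of the deprecation pattern this implies; illustrative only, not the exact diff:
```python
import warnings

class Trainer:
    def __init__(self, args, prediction_loss_only=None, **kwargs):
        # Deprecated init flag: warn and forward the value to the args object.
        if prediction_loss_only is not None:
            warnings.warn(
                "Passing `prediction_loss_only` to `Trainer.__init__` is deprecated; "
                "set it on `TrainingArguments` instead.",
                FutureWarning,
            )
            args.prediction_loss_only = prediction_loss_only
        self.args = args
```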
Let me know if you think it's a wrong move.
Unrelated changes: I had to fix `make quality`, which was complaining about undocumented or untested models; it was easier to fix them than to change my setup (which doesn't let me push if `make quality` fails). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6426/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6426/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6426",
"html_url": "https://github.com/huggingface/transformers/pull/6426",
"diff_url": "https://github.com/huggingface/transformers/pull/6426.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6426.patch",
"merged_at": 1597233826000
} |
https://api.github.com/repos/huggingface/transformers/issues/6425 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6425/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6425/comments | https://api.github.com/repos/huggingface/transformers/issues/6425/events | https://github.com/huggingface/transformers/pull/6425 | 677,166,825 | MDExOlB1bGxSZXF1ZXN0NDY2MzMxNTYx | 6,425 | [examples] add pytest dependency | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | Not really a new dependency, since it is already installed by `pip install -e .[testing]`, but some examples users just run:
```
pip install -r examples/requirements.txt
```
so they don't have it, and tests break.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6425/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6425",
"html_url": "https://github.com/huggingface/transformers/pull/6425",
"diff_url": "https://github.com/huggingface/transformers/pull/6425.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6425.patch",
"merged_at": 1597183090000
} |
https://api.github.com/repos/huggingface/transformers/issues/6424 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6424/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6424/comments | https://api.github.com/repos/huggingface/transformers/issues/6424/events | https://github.com/huggingface/transformers/issues/6424 | 677,129,877 | MDU6SXNzdWU2NzcxMjk4Nzc= | 6,424 | actions CI self-scheduled: run_examples torch even if run_torch_tests fails | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,602 | 1,602 | CONTRIBUTOR | null |

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6424/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6423 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6423/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6423/comments | https://api.github.com/repos/huggingface/transformers/issues/6423/events | https://github.com/huggingface/transformers/pull/6423 | 677,112,028 | MDExOlB1bGxSZXF1ZXN0NDY2Mjg4ODUz | 6,423 | Fixes to make life easier with the nlp library | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6423?src=pr&el=h1) Report\n> Merging [#6423](https://codecov.io/gh/huggingface/transformers/pull/6423?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f6cb0f806efecb64df40c946dacaad0adad33d53&el=desc) will **increase** coverage by `2.27%`.\n> The diff coverage is `95.45%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6423?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6423 +/- ##\n==========================================\n+ Coverage 77.51% 79.79% +2.27% \n==========================================\n Files 150 150 \n Lines 27789 27807 +18 \n==========================================\n+ Hits 21542 22188 +646 \n+ Misses 6247 5619 -628 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6423?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6423/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.79% <ø> (+52.80%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6423/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.16% <95.45%> (+0.28%)` | :arrow_up: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6423/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6423/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.35% <0.00%> (+0.19%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6423/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.58% <0.00%> (+0.35%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6423/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6423/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.75%)` | :arrow_up: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6423/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (+0.83%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6423/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.33% <0.00%> (+0.94%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6423/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `97.87% <0.00%> (+1.06%)` | :arrow_up: |\n| ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/6423/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6423?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6423?src=pr&el=footer). Last update [f6cb0f8...8edc948](https://codecov.io/gh/huggingface/transformers/pull/6423?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Merging then, we can follow up next week when @thomwolf is back if he has more comments."
] | 1,597 | 1,597 | 1,597 | COLLABORATOR | null | This PR adds two things to make the interface easier with the `nlp` library:
- `BatchEncoding` stops enforcing 2 dims for every tensor, which caused problems for labels (which should be one vector of shape `[batch_size]`).
- `PreTrainedTokenizerBase.pad` accepts tensors as inputs, which makes it easy to use this function for data collation.
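For example, a sketch of using `pad` as a collate function with already-tokenized examples; the `tokenizer` and `encoded_dataset` names are assumed for illustration, not taken from the PR:
```python
from torch.utils.data import DataLoader

def collate_fn(examples):
    # `pad` now accepts tensors/lists directly, so a raw batch of tokenized
    # examples can be padded and converted to tensors in one call.
    return tokenizer.pad(examples, padding=True, return_tensors="pt")

loader = DataLoader(encoded_dataset, batch_size=8, collate_fn=collate_fn)
```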
Added proper documentation and tests based on @thomwolf's initial work. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6423/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6423",
"html_url": "https://github.com/huggingface/transformers/pull/6423",
"diff_url": "https://github.com/huggingface/transformers/pull/6423.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6423.patch",
"merged_at": 1597233656000
} |
https://api.github.com/repos/huggingface/transformers/issues/6422 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6422/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6422/comments | https://api.github.com/repos/huggingface/transformers/issues/6422/events | https://github.com/huggingface/transformers/pull/6422 | 677,104,769 | MDExOlB1bGxSZXF1ZXN0NDY2MjgyOTg5 | 6,422 | [test] replace capsys with the more refined CaptureStderr/CaptureStdout | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6422?src=pr&el=h1) Report\n> Merging [#6422](https://codecov.io/gh/huggingface/transformers/pull/6422?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ffea5ce2f4d154a3696b8fe2fb116fa09235700&el=desc) will **decrease** coverage by `2.51%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6422?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6422 +/- ##\n==========================================\n- Coverage 79.89% 77.37% -2.52% \n==========================================\n Files 153 153 \n Lines 27902 27902 \n==========================================\n- Hits 22291 21588 -703 \n- Misses 5611 6314 +703 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6422?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `34.11% <0.00%> (-63.30%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.98% <0.00%> (-52.81%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.16% <0.00%> (-14.46%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-6.16%)` | :arrow_down: |\n| [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `82.71% <0.00%> (-2.47%)` | :arrow_down: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `96.19% <0.00%> (-1.64%)` | :arrow_down: |\n| ... and [17 more](https://codecov.io/gh/huggingface/transformers/pull/6422/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6422?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6422?src=pr&el=footer). Last update [4ffea5c...bece6ba](https://codecov.io/gh/huggingface/transformers/pull/6422?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | Now that https://github.com/huggingface/transformers/pull/6231 has been merged, we can do more refined, more tightly scoped std stream captures, as shown [here](https://github.com/huggingface/transformers/pull/6231#issuecomment-671789424); a minimal sketch of these helpers follows this record.
Otherwise there is no change to test functionality.
Any CI failures are unrelated.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6422/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6422",
"html_url": "https://github.com/huggingface/transformers/pull/6422",
"diff_url": "https://github.com/huggingface/transformers/pull/6422.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6422.patch",
"merged_at": 1597233269000
} |
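For readers unfamiliar with the helpers this PR switches the tests to, here is a minimal sketch, assuming `CaptureStdout`/`CaptureStderr` are exposed by `transformers.testing_utils` as in releases from this period; the printed strings are made up:

```python
# Minimal sketch of the tightly scoped capture helpers, assuming they are
# importable from transformers.testing_utils; the printed strings are made up.
import sys

from transformers.testing_utils import CaptureStderr, CaptureStdout

with CaptureStdout() as cs:
    print("hello from stdout")
# The captured text is available on the context manager once the block exits,
# and only output produced inside the block is captured.
assert "hello from stdout" in cs.out

with CaptureStderr() as ce:
    print("hello from stderr", file=sys.stderr)
assert "hello from stderr" in ce.err
```

Unlike pytest's `capsys` fixture, which captures output for the whole test, these context managers scope the capture to exactly the lines under test, which is the refinement the PR body refers to.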