Dataset schema (one column per field of each GitHub issue/PR record):

| column | dtype | lengths / values |
|---|---|---|
| url | string | lengths 62–66 |
| repository_url | string | 1 value |
| labels_url | string | lengths 76–80 |
| comments_url | string | lengths 71–75 |
| events_url | string | lengths 69–73 |
| html_url | string | lengths 50–56 |
| id | int64 | 377M–2.15B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–29.2k |
| title | string | lengths 1–487 |
| user | dict | — |
| labels | list | — |
| state | string | 2 values |
| locked | bool | 2 classes |
| assignee | dict | — |
| assignees | list | — |
| comments | sequence | — |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k |
| author_association | string | 4 values |
| active_lock_reason | string | 2 values |
| body | string | lengths 0–234k |
| reactions | dict | — |
| timeline_url | string | lengths 71–75 |
| state_reason | string | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | — |
https://api.github.com/repos/huggingface/transformers/issues/6621
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6621/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6621/comments
https://api.github.com/repos/huggingface/transformers/issues/6621/events
https://github.com/huggingface/transformers/pull/6621
682,892,892
MDExOlB1bGxSZXF1ZXN0NDcxMDYzMjEy
6,621
[Tests] fix attention masks in Tests
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6621?src=pr&el=h1) Report\n> Merging [#6621](https://codecov.io/gh/huggingface/transformers/pull/6621?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/573bdb0a5d2897ff6c7520ebb38693c7acfbf17e?el=desc) will **increase** coverage by `0.85%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6621/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6621?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6621 +/- ##\n==========================================\n+ Coverage 79.16% 80.02% +0.85% \n==========================================\n Files 156 156 \n Lines 28217 28217 \n==========================================\n+ Hits 22339 22581 +242 \n+ Misses 5878 5636 -242 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6621?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.94% <0.00%> (-74.32%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.18% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+2.60%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+2.75%)` | :arrow_up: |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <0.00%> (+10.71%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.83% <0.00%> (+12.21%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `90.93% <0.00%> (+64.09%)` | :arrow_up: |\n| ... 
and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6621/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6621?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6621?src=pr&el=footer). Last update [573bdb0...d61cbf8](https://codecov.io/gh/huggingface/transformers/pull/6621?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,597
1,597
1,597
MEMBER
null
This PR should fix the flaky test failures of `test_modeling_output_equivalence` and `test_feed_forward_chunking`. I added a new random `attention_mask` generation function that ensures at least one token is attended to.
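A minimal sketch of the idea described in the PR body above; the helper name, shapes, and which position gets forced are illustrative assumptions, not the PR's actual code:

```python
import torch

def random_attention_mask(batch_size, seq_len):
    # Sample a random 0/1 mask, then force one position in every row to 1 so
    # that no row is fully masked, i.e. at least one token is always attended to.
    mask = torch.randint(0, 2, (batch_size, seq_len))
    mask[:, -1] = 1
    return mask
```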
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6621/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6621/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6621", "html_url": "https://github.com/huggingface/transformers/pull/6621", "diff_url": "https://github.com/huggingface/transformers/pull/6621.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6621.patch", "merged_at": 1597944228000 }
https://api.github.com/repos/huggingface/transformers/issues/6620
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6620/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6620/comments
https://api.github.com/repos/huggingface/transformers/issues/6620/events
https://github.com/huggingface/transformers/issues/6620
682,873,627
MDU6SXNzdWU2ODI4NzM2Mjc=
6,620
Pegasus: OSError: Unable to load weights from pytorch checkpoint file.
{ "login": "yxyzzz", "id": 5890954, "node_id": "MDQ6VXNlcjU4OTA5NTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5890954?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yxyzzz", "html_url": "https://github.com/yxyzzz", "followers_url": "https://api.github.com/users/yxyzzz/followers", "following_url": "https://api.github.com/users/yxyzzz/following{/other_user}", "gists_url": "https://api.github.com/users/yxyzzz/gists{/gist_id}", "starred_url": "https://api.github.com/users/yxyzzz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yxyzzz/subscriptions", "organizations_url": "https://api.github.com/users/yxyzzz/orgs", "repos_url": "https://api.github.com/users/yxyzzz/repos", "events_url": "https://api.github.com/users/yxyzzz/events{/privacy}", "received_events_url": "https://api.github.com/users/yxyzzz/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "works for me in in torch 1.5.1. and torch 1.6.\r\nMaybe this is a one off s3 failure?\r\nCan anybody else replicate?\r\n\r\n```\r\nfrom transformers import PegasusForConditionalGeneration\r\nmodel = PegasusForConditionalGeneration.from_pretrained(model_name)\r\n```", "I set ```force_download=True``` and it worked. Thanks!", "> I set `force_download=True` and it worked. Thanks!\r\n\r\ncan you describe in detail how did you solved the problem\r\n\r\n", "Just upgrading the PyTorch and TensorFlow version solved the problem for me. ", "torch==1.6.0\r\ntensorflow==2.3.1\r\ntransformers==3.5.1\r\n\r\nAnd I'm trying to load my model trained on gpt2-small named train-on-test1. But I get the an OSERROR: \r\n\r\n Unable to load weights from pytorch checkpoint file for '{pretrained_model_name_or_path}' OSError: Unable to load weights \r\n from pytorch checkpoint file for '/mounted/models/train-on-test1/' at '/mounted/models/train-on-test1/pytorch_model.bin' If you \r\n tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.\r\n", "I'm getting the same using torch == 1.7.0 and transformers == 4.1.1 and a xlnet localy downloaded model :\r\n\r\n```\r\nfrom transformers import XLNetForSequenceClassification\r\nmodel = XLNetForSequenceClassification.from_pretrained('/../../models/xlnet/', num_labels = 3)\r\n\r\nOSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. \r\n```", "I keep getting the same error too. I have pretrained a distilbertmodel named amazon-distilbert. When I am trying to load it using from pretrained it is throwing the same error.\r\n\r\n```\r\nfrom transformers import DistilBertTokenizerFast, DistilBertForSequenceClassification\r\n\r\ntokenizer = DistilBertTokenizerFast.from_pretrained(\"distilbert-base-cased\");\r\nmodel = DistilBertForSequenceClassification.from_pretrained(\"../models/amazon-distilbert\")\r\n```\r\n\r\nAnd the error\r\n```\r\n f\"Unable to load weights from pytorch checkpoint file for '{pretrained_model_name_or_path}' \"\r\nOSError: Unable to load weights from pytorch checkpoint file for '/GitHub/TextSentimentAnalysis/models/amazon-distilbert' at '//GitHub/TextSentimentAnalysis/models/amazon-distilbert/pytorch_model.bin'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. \r\n```", "same error with torch 1.5.0, TensorFlow 2.4.1, and transformers 4.2.2. Does anyone know how to solve such a problem?", "Same error with torch 1.8.1, transformers 4.5.1 when trying to call \r\n```\r\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM\r\n \r\nAutoModelForSeq2SeqLM.from_pretrained('Helsinki-NLP/opus-mt-ja-en').save_pretrained('pretrained_models/opus-mt-ja-en')\r\n```", "can anyone share how to solve this error to me? plz , i have met this problem too", "Same issue here ..", "> > I set `force_download=True` and it worked. Thanks!\r\n> \r\n> can you describe in detail how did you solved the problem\r\n\r\nIn the **from_pretrained** function, set one of the parameters as **force_download=True**\r\nEg. - model = LayoutLMForTokenClassification.from_pretrained(\"microsoft/layoutlm-base-uncased\", num_labels=num_labels, **force_download=True**)", "In my case, I was running on a cpu only compute, and this issue got solved when installing a cpu version of PyTorch. For example: http://download.pytorch.org/whl/cpu/torch-1.13.0%2Bcpu-cp39-cp39-linux_x86_64.whl\r\n" ]
1,597
1,677
1,597
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: macOS-10.14.6-x86_64-i386-64bit - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @sshleifer ## Information Model I am using (Bert, XLNet ...): google/pegasus-cnn_dailymail The problem arises when using: ``` import torch from transformers import PegasusForConditionalGeneration, PegasusTokenizer torch_device = 'cuda' if torch.cuda.is_available() else 'cpu' model_name = 'google/pegasus-cnn_dailymail' tokenizer = PegasusTokenizer.from_pretrained(model_name) model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device) ``` Traceback: ``` RuntimeError Traceback (most recent call last) ~/projects/transformers/src/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 854 try: --> 855 state_dict = torch.load(resolved_archive_file, map_location="cpu") 856 except Exception: ~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args) 584 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args) --> 585 return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args) 586 ~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/serialization.py in _legacy_load(f, map_location, pickle_module, **pickle_load_args) 771 assert key in deserialized_objects --> 772 deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly) 773 if offset is not None: RuntimeError: unexpected EOF, expected 10498989 more bytes. The file might be corrupted. During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) <ipython-input-1-1ae6eb884edd> in <module> 7 model_name = 'google/pegasus-cnn_dailymail' 8 tokenizer = PegasusTokenizer.from_pretrained(model_name) ----> 9 model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device) ~/projects/transformers/src/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 855 state_dict = torch.load(resolved_archive_file, map_location="cpu") 856 except Exception: --> 857 raise OSError( 858 "Unable to load weights from pytorch checkpoint file. " 859 "If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. " OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. ```
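The "unexpected EOF" in the traceback above indicates a truncated or corrupted cached download, and the workaround reported in the comments on this issue is to re-fetch the checkpoint. A minimal sketch; `force_download` is an existing `from_pretrained` argument:

```python
from transformers import PegasusForConditionalGeneration

# Re-download the checkpoint instead of reusing the (possibly corrupted) cache.
model = PegasusForConditionalGeneration.from_pretrained(
    "google/pegasus-cnn_dailymail", force_download=True
)
```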
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6620/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6620/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6619
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6619/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6619/comments
https://api.github.com/repos/huggingface/transformers/issues/6619/events
https://github.com/huggingface/transformers/issues/6619
682,855,128
MDU6SXNzdWU2ODI4NTUxMjg=
6,619
[DistilBert] Flaky tests
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Will investigate now @sgugger @VictorSanh . My first guess is that it because of \"inf\" values because of masking because this error does not happen when the attention mask is not passed to forward. " ]
1,597
1,597
1,597
MEMBER
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Linux-5.3.0-61-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.5 - PyTorch version (GPU?): 1.6.0+cpu (False) - Tensorflow version (GPU?): 2.1.1 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Reproducing the error: ```python #!/usr/bin/env python3 import torch from transformers import DistilBertModel, DistilBertConfig input_ids = torch.tensor([[55, 40, 88, 37, 12, 6, 20], [33, 87, 56, 6, 34, 92, 2], [ 4, 25, 95, 19, 9, 14, 80], [96, 45, 71, 10, 78, 33, 68], [72, 40, 59, 90, 5, 78, 44], [36, 15, 11, 18, 74, 40, 30], [84, 25, 5, 61, 18, 77, 35], [70, 87, 9, 42, 24, 65, 11], [28, 0, 28, 45, 92, 83, 96], [75, 41, 69, 61, 83, 31, 81], [94, 93, 79, 48, 24, 17, 9], [97, 5, 38, 94, 75, 8, 59], [31, 71, 87, 39, 97, 10, 22]]) attention_mask = torch.tensor([[1, 1, 1, 1, 0, 1, 0], [0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1], [0, 1, 1, 1, 0, 0, 1], [1, 1, 1, 0, 1, 1, 0], [1, 1, 1, 1, 1, 1, 1], [0, 0, 1, 0, 0, 1, 1], [1, 0, 1, 0, 0, 0, 1], [0, 1, 0, 1, 1, 1, 0], [0, 1, 1, 0, 0, 0, 0], [0, 1, 0, 1, 1, 0, 1], [0, 1, 1, 1, 1, 1, 1], [0, 1, 0, 1, 0, 0, 0]]) distil_bert_config = { "activation": "gelu", "attention_dropout": 0.1, "dim": 32, "dropout": 0.1, "hidden_act": "gelu", "hidden_dim": 37, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 4, "n_layers": 5, "pad_token_id": 0, "qa_dropout": 0.1, "return_dict": True, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": False, "vocab_size": 99 } config = DistilBertConfig(**distil_bert_config) torch.manual_seed(0) model = DistilBertModel(config).eval() last_hidden_state = model(input_ids, attention_mask=attention_mask)[0] if torch.isnan(last_hidden_state).any().item(): print("Error with DistilBert") ``` This code example yields nan values.
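A standalone reduction of the suspected cause, consistent with the author's guess in the comment above: the second row of `attention_mask` in the snippet is all zeros, and a softmax over a row of `-inf` attention scores produces NaN. This is an illustration, not the library's code:

```python
import torch

# Every position masked to -inf makes the softmax denominator zero, so the
# attention weights for that row come out as NaN (0/0).
scores = torch.full((1, 7), float("-inf"))
print(torch.softmax(scores, dim=-1))  # tensor([[nan, nan, nan, nan, nan, nan, nan]])
```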
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6619/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6619/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6618
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6618/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6618/comments
https://api.github.com/repos/huggingface/transformers/issues/6618/events
https://github.com/huggingface/transformers/pull/6618
682,800,693
MDExOlB1bGxSZXF1ZXN0NDcwOTgwMTQ4
6,618
TFTrainer dataset doc & fix evaluation bug
{ "login": "joeddav", "id": 9353833, "node_id": "MDQ6VXNlcjkzNTM4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeddav", "html_url": "https://github.com/joeddav", "followers_url": "https://api.github.com/users/joeddav/followers", "following_url": "https://api.github.com/users/joeddav/following{/other_user}", "gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}", "starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joeddav/subscriptions", "organizations_url": "https://api.github.com/users/joeddav/orgs", "repos_url": "https://api.github.com/users/joeddav/repos", "events_url": "https://api.github.com/users/joeddav/events{/privacy}", "received_events_url": "https://api.github.com/users/joeddav/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6618?src=pr&el=h1) Report\n> Merging [#6618](https://codecov.io/gh/huggingface/transformers/pull/6618?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/039d8d65fc19ac74a8c7917233eb2828c46c0fa7?el=desc) will **decrease** coverage by `0.93%`.\n> The diff coverage is `0.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6618/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6618?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6618 +/- ##\n==========================================\n- Coverage 79.79% 78.86% -0.94% \n==========================================\n Files 156 156 \n Lines 28213 28213 \n==========================================\n- Hits 22513 22250 -263 \n- Misses 5700 5963 +263 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6618?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.25% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.99% <0.00%> (-1.31%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+63.80%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `90.90% <0.00%> (+69.43%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6618?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6618?src=pr&el=footer). Last update [039d8d6...76afd14](https://codecov.io/gh/huggingface/transformers/pull/6618?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,597
1,598
1,597
CONTRIBUTOR
null
Discussed in #6551.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6618/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6618/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6618", "html_url": "https://github.com/huggingface/transformers/pull/6618", "diff_url": "https://github.com/huggingface/transformers/pull/6618.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6618.patch", "merged_at": 1597939897000 }
https://api.github.com/repos/huggingface/transformers/issues/6617
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6617/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6617/comments
https://api.github.com/repos/huggingface/transformers/issues/6617/events
https://github.com/huggingface/transformers/issues/6617
682,762,171
MDU6SXNzdWU2ODI3NjIxNzE=
6,617
unk handling in v3.0 different than v2.0?
{ "login": "BCWang93", "id": 31853251, "node_id": "MDQ6VXNlcjMxODUzMjUx", "avatar_url": "https://avatars.githubusercontent.com/u/31853251?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BCWang93", "html_url": "https://github.com/BCWang93", "followers_url": "https://api.github.com/users/BCWang93/followers", "following_url": "https://api.github.com/users/BCWang93/following{/other_user}", "gists_url": "https://api.github.com/users/BCWang93/gists{/gist_id}", "starred_url": "https://api.github.com/users/BCWang93/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BCWang93/subscriptions", "organizations_url": "https://api.github.com/users/BCWang93/orgs", "repos_url": "https://api.github.com/users/BCWang93/repos", "events_url": "https://api.github.com/users/BCWang93/events{/privacy}", "received_events_url": "https://api.github.com/users/BCWang93/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834056635, "node_id": "MDU6TGFiZWwxODM0MDU2NjM1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization", "name": "Core: Tokenization", "color": "FF4446", "default": false, "description": "Internals of the library; Tokenization." } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Hey @BCWang93, can you post a reproducible code snippet? It would be great if we could just copy paste your code and see the same error you post here :-) ", "> Hey @BCWang93, can you post a reproducible code snippet? It would be great if we could just copy paste your code and see the same error you post here :-)\r\n\r\nBecause this is an entire project, I can't paste the full code. But I found the difference between transformers 3 and 2. Transformers2 maps ID to \"[unk]\" when dealing with characters like '\\n','\\r' et.al, but transformers3 discards all such characters when dealing with such cases. Like this:\r\n![image](https://user-images.githubusercontent.com/31853251/90902807-9db0d980-e3ff-11ea-8ebf-4abf3d97b732.png)\r\nThanks very much!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,597
1,604
1,604
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiasts can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> I hit this problem when running my code on transformers 3.0.2: Trainable parameters: 102274668 248 250 252 252 248 250 252 252 Traceback (most recent call last): File "train.py", line 92, in <module> run('./configs/base_config.json') File "train.py", line 88, in run main(config) File "train.py", line 66, in main trainer.train() File "/DATA2/disk1/wangbingchen/project/ccks2020-task8-pytorch/base/base_trainer.py", line 67, in train result = self._train_epoch(epoch) File "/DATA2/disk1/wangbingchen/project/ccks2020-task8-pytorch/trainer/trainer.py", line 59, in _train_epoch for batch_idx, batch_data in enumerate(self.train_iter): File "/home/wangbingchen/wangbingchen/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__ data = self._next_data() File "/home/wangbingchen/wangbingchen/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/home/wangbingchen/wangbingchen/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch return self.collate_fn(data) File "/DATA2/disk1/wangbingchen/project/ccks2020-task8-pytorch/data_process/military_data_process.py", line 179, in collate_fn text_token_ids = torch.LongTensor(np.array(text_token_ids)) TypeError: can't convert np.ndarray of type numpy.object_. The only supported types are: float64, float32, float16, int64, int32, int16, int8, uint8, and bool. This looks like a problem with embedding lengths, but when I run this code on transformers 2.11 everything works, so I want to ask what differs between transformers 2.11 and transformers 3.0.2. Thanks very much!
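The `numpy.object_` error above is what NumPy raises when the sequences in a batch have different lengths, so `np.array()` builds a ragged object array instead of a rectangular int array. A hedged sketch of a fix; the function name `collate_fn` comes from the traceback, but the body and the pad id are assumptions, not the project's code:

```python
import numpy as np
import torch

PAD_ID = 0  # assumed padding token id

def collate_fn(batch_token_ids, max_len=128):
    # Truncate/pad every sequence to one common length so np.array() produces
    # a rectangular integer array that torch.LongTensor can convert.
    padded = [ids[:max_len] + [PAD_ID] * (max_len - len(ids[:max_len]))
              for ids in batch_token_ids]
    return torch.LongTensor(np.array(padded))
```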
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6617/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6617/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6616
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6616/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6616/comments
https://api.github.com/repos/huggingface/transformers/issues/6616/events
https://github.com/huggingface/transformers/issues/6616
682,739,963
MDU6SXNzdWU2ODI3Mzk5NjM=
6,616
Fine tune masked language model on custom dataset 'index out of range in self'
{ "login": "dragonlee97", "id": 34571516, "node_id": "MDQ6VXNlcjM0NTcxNTE2", "avatar_url": "https://avatars.githubusercontent.com/u/34571516?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dragonlee97", "html_url": "https://github.com/dragonlee97", "followers_url": "https://api.github.com/users/dragonlee97/followers", "following_url": "https://api.github.com/users/dragonlee97/following{/other_user}", "gists_url": "https://api.github.com/users/dragonlee97/gists{/gist_id}", "starred_url": "https://api.github.com/users/dragonlee97/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dragonlee97/subscriptions", "organizations_url": "https://api.github.com/users/dragonlee97/orgs", "repos_url": "https://api.github.com/users/dragonlee97/repos", "events_url": "https://api.github.com/users/dragonlee97/events{/privacy}", "received_events_url": "https://api.github.com/users/dragonlee97/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "same question", "same issues coming for me as well", "same issue", "Does this still happen in the latest transformers version? Could you put the output of `transformers-cli` env here? Thanks.", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.", "Reopening. Still having same issue?" ]
1,597
1,645
1,614
NONE
null
# ❓ Questions & Help Hi, I am training an MLM model following this tutorial: https://huggingface.co/blog/how-to-train. However, I got the error 'index out of range in self' even though I already set max_length as well as block_size in my code. I am also not clear on how to load and prepare the data: should I mask certain words myself and pass the masked labels during training myself? Because here https://huggingface.co/transformers/model_doc/bert.html#bertformaskedlm I see '– Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size]' I am not sure whether using Trainer and DataCollatorForLanguageModeling solves this. ## Details <!-- Description of your issue --> To reproduce: ``` tokenizer = BertTokenizer.from_pretrained('./bert-large-cased',truncation = True, padding=True, max_length=100) model = BertForMaskedLM.from_pretrained( "./bert-large-cased", output_attentions = False, output_hidden_states = True ) device = 'cuda' if torch.cuda.is_available() else 'cpu' # Tell pytorch to run this model on the GPU. model = model.to(device) model.train() from transformers import DataCollatorForLanguageModeling data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer,mlm=True, mlm_probability=0.15 # I don't know if this is the only way to set up masking for the MLM task ) from transformers import LineByLineTextDataset train_dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path="./na_en_train.txt", # I reorganized the data into line-by-line form as in the tutorial; I also tried TensorDataset, but it raised errors block_size=100, ) eval_dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path="./na_en_test.txt", block_size=100, ) from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir="./results", overwrite_output_dir=True, num_train_epochs=3, per_device_train_batch_size=16, # batch size per device during training per_device_eval_batch_size=16, save_steps=10000, save_total_limit=2 ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset=eval_dataset, prediction_loss_only=True, ) trainer.train() ``` # Error message: --------------------------------------------------------------------------- > IndexError Traceback (most recent call last) > <ipython-input-116-3435b262f1ae> in <module> > ----> 1 trainer.train() > > /opt/conda/envs/rapids/lib/python3.6/site-packages/transformers/trainer.py in train(self, model_path) > 497 continue > 498 > --> 499 tr_loss += self._training_step(model, inputs, optimizer) > 500 > 501 if (step + 1) % self.args.gradient_accumulation_steps == 0 or ( > > /opt/conda/envs/rapids/lib/python3.6/site-packages/transformers/trainer.py in _training_step(self, model, inputs, optimizer) > 620 inputs["mems"] = self._past > 621 > --> 622 outputs = model(**inputs) > 623 loss = outputs[0] # model outputs are always tuple in transformers (see doc) > 624 > > /opt/conda/envs/rapids/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) > 548 .. note:: > 549 This method modifies the module in-place. 
> --> 550 > 551 Args: > 552 device (:class:`torch.device`): the desired device of the parameters > > /opt/conda/envs/rapids/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, encoder_hidden_states, encoder_attention_mask, output_attentions, output_hidden_states, **kwargs) > 1081 encoder_attention_mask=encoder_attention_mask, > 1082 output_attentions=output_attentions, > -> 1083 output_hidden_states=output_hidden_states, > 1084 ) > 1085 > > /opt/conda/envs/rapids/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) > 548 .. note:: > 549 This method modifies the module in-place. > --> 550 > 551 Args: > 552 device (:class:`torch.device`): the desired device of the parameters > > /opt/conda/envs/rapids/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, output_attentions, output_hidden_states) > 751 > 752 embedding_output = self.embeddings( > --> 753 input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds > 754 ) > 755 encoder_outputs = self.encoder( > > /opt/conda/envs/rapids/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) > 548 .. note:: > 549 This method modifies the module in-place. > --> 550 > 551 Args: > 552 device (:class:`torch.device`): the desired device of the parameters > > /opt/conda/envs/rapids/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds) > 176 > 177 if inputs_embeds is None: > --> 178 inputs_embeds = self.word_embeddings(input_ids) > 179 position_embeddings = self.position_embeddings(position_ids) > 180 token_type_embeddings = self.token_type_embeddings(token_type_ids) > > /opt/conda/envs/rapids/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) > 548 .. note:: > 549 This method modifies the module in-place. > --> 550 > 551 Args: > 552 device (:class:`torch.device`): the desired device of the parameters > > /opt/conda/envs/rapids/lib/python3.6/site-packages/torch/nn/modules/sparse.py in forward(self, input) > 112 assert list(_weight.shape) == [num_embeddings, embedding_dim], \ > 113 'Shape of weight does not match num_embeddings and embedding_dim' > --> 114 self.weight = Parameter(_weight) > 115 self.sparse = sparse > 116 > > /opt/conda/envs/rapids/lib/python3.6/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) > 1722 if not torch.jit.is_scripting(): > 1723 if type(input) is not Tensor and has_torch_function((input,)): > -> 1724 return handle_torch_function(hardswish, (input,), input, inplace=inplace) > 1725 if inplace: > 1726 return torch._C._nn.hardswish_(input) > > IndexError: index out of range in self > **A link to original question on the forum/Stack Overflow**: https://discuss.huggingface.co/t/fine-tune-masked-language-model-on-custom-dataset/747
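One common cause of `index out of range in self` at `word_embeddings` is an input id greater than or equal to the model's vocabulary size, e.g. when the tokenizer and model vocabularies disagree. A hedged sanity check reusing `model` and `tokenizer` from the snippet above; `resize_token_embeddings` is an existing method on transformers models:

```python
# If the tokenizer knows more tokens than the embedding table holds, some
# input ids will index past the end of the table and trigger this error.
if len(tokenizer) > model.config.vocab_size:
    model.resize_token_embeddings(len(tokenizer))
```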
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6616/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6616/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6615
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6615/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6615/comments
https://api.github.com/repos/huggingface/transformers/issues/6615/events
https://github.com/huggingface/transformers/issues/6615
682,680,060
MDU6SXNzdWU2ODI2ODAwNjA=
6,615
I can't reproduce the results of tf-xlm-r-ner-40-lang model
{ "login": "isl-m", "id": 69969954, "node_id": "MDQ6VXNlcjY5OTY5OTU0", "avatar_url": "https://avatars.githubusercontent.com/u/69969954?v=4", "gravatar_id": "", "url": "https://api.github.com/users/isl-m", "html_url": "https://github.com/isl-m", "followers_url": "https://api.github.com/users/isl-m/followers", "following_url": "https://api.github.com/users/isl-m/following{/other_user}", "gists_url": "https://api.github.com/users/isl-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/isl-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/isl-m/subscriptions", "organizations_url": "https://api.github.com/users/isl-m/orgs", "repos_url": "https://api.github.com/users/isl-m/repos", "events_url": "https://api.github.com/users/isl-m/events{/privacy}", "received_events_url": "https://api.github.com/users/isl-m/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hello!\r\n\r\nWhat is the command line you use to train the model?", "Hello @jplu \r\nthat's the command line I tried to use to train the model (you added that command to the model description on huggingface community models)\r\n\r\n\r\n```\r\ncd examples/ner\r\npython run_tf_ner.py \\\r\n--data_dir . \\\r\n--labels ./labels.txt \\\r\n--model_name_or_path jplu/tf-xlm-roberta-base \\\r\n--output_dir model \\\r\n--max-seq-length 128 \\\r\n--num_train_epochs 2 \\\r\n--per_gpu_train_batch_size 16 \\\r\n--per_gpu_eval_batch_size 32 \\\r\n--do_train \\\r\n--do_eval \\\r\n--logging_dir logs \\\r\n--mode token-classification \\\r\n--evaluate_during_training \\\r\n--optimizer_name adamw\r\n```", "Can you use the last version of the trainer, with th ecommand line I used:\r\n\r\n```\r\npython run_tf_ner.py \\\r\n--data_dir . \\\r\n--labels ./labels.txt \\\r\n--model_name_or_path jplu/tf-xlm-roberta-base \\\r\n--output_dir model \\\r\n--num_train_epochs 8 \\\r\n--per_gpu_train_batch_size 32 \\\r\n--per_gpu_eval_batch_size 64 \\\r\n--do_train \\\r\n--do_eval \\\r\n--do_predict \\\r\n--logging_steps 10 \\\r\n--evaluate_during_training \\\r\n--save_steps 100 \\\r\n--overwrite_output_dir\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,597
1,604
1,604
NONE
null
I tried to reproduce the results of the tf-xlm-r-ner-40-lang model, but there were compatibility issues in token-classification/run_tf_ner.py. I fixed some of these issues, but it's still not running as expected. @jplu, could you please share the transformers/tf versions used to produce the results of the tf-xlm-r-ner-40-lang model? I'm using: transformers = 3.0.2 tensorflow = 2.3.0 Thanks in advance.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6615/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6615/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6614
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6614/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6614/comments
https://api.github.com/repos/huggingface/transformers/issues/6614/events
https://github.com/huggingface/transformers/pull/6614
682,551,963
MDExOlB1bGxSZXF1ZXN0NDcwNzc0MDEw
6,614
removed redundant arg in prepare_inputs
{ "login": "prajjwal1", "id": 24690051, "node_id": "MDQ6VXNlcjI0NjkwMDUx", "avatar_url": "https://avatars.githubusercontent.com/u/24690051?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prajjwal1", "html_url": "https://github.com/prajjwal1", "followers_url": "https://api.github.com/users/prajjwal1/followers", "following_url": "https://api.github.com/users/prajjwal1/following{/other_user}", "gists_url": "https://api.github.com/users/prajjwal1/gists{/gist_id}", "starred_url": "https://api.github.com/users/prajjwal1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prajjwal1/subscriptions", "organizations_url": "https://api.github.com/users/prajjwal1/orgs", "repos_url": "https://api.github.com/users/prajjwal1/repos", "events_url": "https://api.github.com/users/prajjwal1/events{/privacy}", "received_events_url": "https://api.github.com/users/prajjwal1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You may be right but maybe there is a reason? @sgugger ", "I'm guessing it was used at some point and then I forgot to remove it when it wasn't used anymore. Thanks for fixing!" ]
1,597
1,597
1,597
CONTRIBUTOR
null
I am not sure why `model` was being passed in `_prepare_inputs`. It seemed redundant.
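An illustrative before/after of the change this PR describes; the signatures below are a sketch based on the description, not the actual diff:

```python
# before: `model` was accepted but never used inside the method
def _prepare_inputs(self, inputs, model):
    ...

# after: the unused parameter is dropped
def _prepare_inputs(self, inputs):
    ...
```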
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6614/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6614/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6614", "html_url": "https://github.com/huggingface/transformers/pull/6614", "diff_url": "https://github.com/huggingface/transformers/pull/6614.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6614.patch", "merged_at": 1597926216000 }
https://api.github.com/repos/huggingface/transformers/issues/6613
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6613/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6613/comments
https://api.github.com/repos/huggingface/transformers/issues/6613/events
https://github.com/huggingface/transformers/issues/6613
682,488,652
MDU6SXNzdWU2ODI0ODg2NTI=
6,613
FillMaskPipeline returns special tokens, i.e. <mask>, as prediction
{ "login": "hungluumfc", "id": 69781878, "node_id": "MDQ6VXNlcjY5NzgxODc4", "avatar_url": "https://avatars.githubusercontent.com/u/69781878?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hungluumfc", "html_url": "https://github.com/hungluumfc", "followers_url": "https://api.github.com/users/hungluumfc/followers", "following_url": "https://api.github.com/users/hungluumfc/following{/other_user}", "gists_url": "https://api.github.com/users/hungluumfc/gists{/gist_id}", "starred_url": "https://api.github.com/users/hungluumfc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hungluumfc/subscriptions", "organizations_url": "https://api.github.com/users/hungluumfc/orgs", "repos_url": "https://api.github.com/users/hungluumfc/repos", "events_url": "https://api.github.com/users/hungluumfc/events{/privacy}", "received_events_url": "https://api.github.com/users/hungluumfc/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,597
1,604
1,604
NONE
null
# ❓ FillMaskPipeline returns special tokens, i.e. \<mask\>, as predictions ## Details I'm training a new language model from scratch, using ByteLevelBPETokenizer as the tokenizer and RobertaForMaskedLM as the self-supervised language model. The config is as follows: Tokenizer: ``` tokenizer = ByteLevelBPETokenizer() tokenizer.train(files=paths, vocab_size=100000, min_frequency=2, special_tokens=[ "<s>", "<pad>", "</s>", "<unk>", "<mask>", ]) ``` After the tokenizer was trained, it was loaded using `tokenizer = RobertaTokenizerFast.from_pretrained("./BERTmese/Tokenizer", max_length=512)` And the RobertaForMaskedLM is configured as follows: ``` config = RobertaConfig( vocab_size=100000, max_position_embeddings=514, num_attention_heads=12, num_hidden_layers=6, ) model = RobertaForMaskedLM(config=config) ``` Then I train the RobertaForMaskedLM model using a custom dataset and the Adam optimizer as follows: ``` class CustomTextDataset(Dataset): def __init__(self, tokenizer, dataset, max_length): self.tokenizer = tokenizer self.dataset = dataset self.max_length = max_length def __len__(self): return len(self.dataset) def __getitem__(self, i) -> torch.Tensor: encoding = self.tokenizer(self.dataset[i]['text'], add_special_tokens=True, max_length=self.max_length, padding='max_length', truncation=True) return torch.tensor(encoding["input_ids"], dtype=torch.long) ``` ``` for step in data_loader: input_ids = next(iter(train_loader)) input_ids = input_ids.to(device) outputs = model(input_ids, labels=input_ids) loss, _ = outputs[:2] loss.backward() ... ``` Every 100 steps, I test the trained model using this code snippet: ``` model.eval() # test on a sample text fill_mask = FillMaskPipeline( model=model, tokenizer=tokenizer, topk=1 ) mask = fill_mask("ယခုလတွင်ပျားရည်နှင့်ပျားဖယောင်းများကိုစုဆောင်း<mask>သည်ဟုခန့်မှန်းနိုင်သည်။")[0] print(mask['token'], decoder.decode([mask['token_str']]), mask['score']) # set model back to train mode model.train() ``` In the first hundred steps, the predicted token is fine (it predicts some Burmese word). However, after that, the model starts to predict the '\<mask\>' token as the result. I'm not sure whether predicting the \<mask\> token is normal for a masked language model.
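One thing that stands out in the training loop above is `model(input_ids, labels=input_ids)`: the inputs are never masked, so the `DataCollatorForLanguageModeling` set up earlier is never applied. A hedged sketch of routing batches through the collator instead; the exact collator call signature varies across transformers versions, so treat this as an assumption rather than a drop-in fix:

```python
# Let the collator build a batch whose input_ids contain <mask> tokens and
# whose labels are -100 everywhere except at the masked positions.
batch = data_collator([train_dataset[i] for i in range(16)])
outputs = model(batch["input_ids"].to(device), labels=batch["labels"].to(device))
loss = outputs[0]
loss.backward()
```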
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6613/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6613/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6612
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6612/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6612/comments
https://api.github.com/repos/huggingface/transformers/issues/6612/events
https://github.com/huggingface/transformers/pull/6612
682,329,996
MDExOlB1bGxSZXF1ZXN0NDcwNTg0MjU1
6,612
wip/mbart: make batches that are identical to fairseq
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6612?src=pr&el=h1) Report\n> Merging [#6612](https://codecov.io/gh/huggingface/transformers/pull/6612?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/18ca0e91402d17950b870d7c9f67ddb7fd573817&el=desc) will **decrease** coverage by `0.61%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6612/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6612?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6612 +/- ##\n==========================================\n- Coverage 79.89% 79.28% -0.62% \n==========================================\n Files 156 156 \n Lines 28213 28219 +6 \n==========================================\n- Hits 22542 22374 -168 \n- Misses 5671 5845 +174 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6612?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6612/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `97.14% <100.00%> (+1.83%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6612/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6612/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6612/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6612/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6612/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6612/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.83% <0.00%> (+6.20%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6612/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.22% <0.00%> (+47.80%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6612/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> (+73.82%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6612?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6612?src=pr&el=footer). 
Last update [18ca0e9...2207e5d](https://codecov.io/gh/huggingface/transformers/pull/6612?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,597
1,598
1,598
CONTRIBUTOR
null
This seems to add +0.2 BLEU.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6612/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6612/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6612", "html_url": "https://github.com/huggingface/transformers/pull/6612", "diff_url": "https://github.com/huggingface/transformers/pull/6612.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6612.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6611
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6611/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6611/comments
https://api.github.com/repos/huggingface/transformers/issues/6611/events
https://github.com/huggingface/transformers/issues/6611
682,267,651
MDU6SXNzdWU2ODIyNjc2NTE=
6,611
TextGenerationPipeline giving FutureWarning about AutoModelWithLMHead
{ "login": "pranavpsv", "id": 30323565, "node_id": "MDQ6VXNlcjMwMzIzNTY1", "avatar_url": "https://avatars.githubusercontent.com/u/30323565?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pranavpsv", "html_url": "https://github.com/pranavpsv", "followers_url": "https://api.github.com/users/pranavpsv/followers", "following_url": "https://api.github.com/users/pranavpsv/following{/other_user}", "gists_url": "https://api.github.com/users/pranavpsv/gists{/gist_id}", "starred_url": "https://api.github.com/users/pranavpsv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pranavpsv/subscriptions", "organizations_url": "https://api.github.com/users/pranavpsv/orgs", "repos_url": "https://api.github.com/users/pranavpsv/repos", "events_url": "https://api.github.com/users/pranavpsv/events{/privacy}", "received_events_url": "https://api.github.com/users/pranavpsv/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @pranavpsv, \r\n\r\nYes, I think we can switch the `TextGenerationPipeline` to `AutoModelForCausalLM` and `AutoModelForSeq2SeqLM`. So check if the model is in one of the above and then use it, instead of using `AutoModelWithLMHead`. Also once we have a `ConditionalTextGenerationPipeline` we can remove the `AutoModelForSeq2SeqLM` dependency. Feel free to open a PR about this :-) ", "@patrickvonplaten Got it, thank you for the info :)\r\n\r\nMy model is just a GPT2 model, so I believe it should use AutoModelForCausalLM.\r\n\r\nI just checked on the transformers master branch in the pipelines.py source code [file](https://github.com/huggingface/transformers/blob/d0e42a7bed3de9271ae39c575d7eeb54cf985921/src/transformers/pipelines.py#L2432). \r\nIt shows that the pipelines.py file from master branch doesn't use AutoModelWithLMHead. \r\n\r\nHowever, for some reason, when pip installing the latest version of transformers, the Pipeline object (with task as text-generation) still gives the AutoModelWithLMHead deprecated warning (indicating that it might be importing AutoModelWithLMHead). \r\n\r\n To confirm, I found that the installed transformers (3.0.2) pipelines.py file imports AutoModelWithLMHead (which is what could be causing this warning):\r\n\r\n```\r\n# The pipelines.py file\r\nif is_torch_available():\r\n import torch\r\n from .modeling_auto import (\r\n AutoModel,\r\n AutoModelForSequenceClassification,\r\n AutoModelForQuestionAnswering,\r\n AutoModelForTokenClassification,\r\n AutoModelWithLMHead,\r\n AutoModelForSeq2SeqLM,\r\n )\r\n\r\n```\r\nThere seems to be a discrepancy between master branch and the 3.0.2 release pipelines object.\r\n\r\n\r\nFor now, I'm doing the following to avoid the warning.\r\n\r\n\r\n```\r\nmodel = GPT2LMHeadModel.from_pretrained(checkpoint)\r\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\ntext_generator = TextGenerationPipeline(model=model, tokenizer=tokenizer)\r\n\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,597
1,603
1,603
CONTRIBUTOR
null
@TevenLeScao ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ## Information Model I am using (Bert, XLNet ...): GPT2LMHeadModel. The TextGenerationPipeline object causes a FutureWarning about the class `AutoModelWithLMHead` being deprecated: ``` from transformers import pipeline text_generator = pipeline("text-generation", model="pranavpsv/gpt2-genre-story-generator") ``` ## To reproduce Steps to reproduce the behavior: Run the above script. This is the warning: ``` /usr/local/lib/python3.6/dist-packages/transformers/modeling_auto.py:798: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models. FutureWarning, ``` ## Expected behavior I expected no warning, since I thought the TextGenerationPipeline would use the AutoModelForCausalLM or GPT2LMHeadModel class for the model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6611/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6611/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6610
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6610/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6610/comments
https://api.github.com/repos/huggingface/transformers/issues/6610/events
https://github.com/huggingface/transformers/pull/6610
682,255,933
MDExOlB1bGxSZXF1ZXN0NDcwNTIwNjIy
6,610
[seq2seq Example] Convert tensor to List[int] for decoding
{ "login": "setu4993", "id": 1833708, "node_id": "MDQ6VXNlcjE4MzM3MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/1833708?v=4", "gravatar_id": "", "url": "https://api.github.com/users/setu4993", "html_url": "https://github.com/setu4993", "followers_url": "https://api.github.com/users/setu4993/followers", "following_url": "https://api.github.com/users/setu4993/following{/other_user}", "gists_url": "https://api.github.com/users/setu4993/gists{/gist_id}", "starred_url": "https://api.github.com/users/setu4993/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/setu4993/subscriptions", "organizations_url": "https://api.github.com/users/setu4993/orgs", "repos_url": "https://api.github.com/users/setu4993/repos", "events_url": "https://api.github.com/users/setu4993/events{/privacy}", "received_events_url": "https://api.github.com/users/setu4993/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "what are your versions? This doesn't break for me.\r\nTry running \r\n```\r\ntransformers-cli env\r\n```", "The latest version from PyPI, v3.0.2. It doesn't happen while running locally but does break on remote executions, though.\r\n\r\nLooking through the documentation, it made sense to me that an error would pop up since [`.generate()`](https://huggingface.co/transformers/model_doc/bart.html#transformers.BartForConditionalGeneration.generate) outputs a `torch.Tensor`, but [`tokenizer.decode()`](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.decode) (and batch_decode) expect `List[int]`", "PyTorch 1.6.0 and Python 3.6, if that helps.", "Can I see your traceback?\r\n\r\nYou can loop over tensors just like lists.", "I have a custom `summarization_trainer.py` in there that calls the `main` from `finetune.py`.\r\n\r\nTraceback:\r\n```\r\nFile \"summarization_trainer.py\", line 157, in main\r\n return finetune_main(args, model)\r\n File \"/opt/ml/code/finetune.py\", line 460, in main\r\n logger=logger,\r\n File \"/opt/ml/code/lightning_base.py\", line 448, in generic_train\r\n trainer.fit(model)\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py\", line 1044, in fit\r\n results = self.run_pretrain_routine(model)\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py\", line 1196, in run_pretrain_routine\r\n False)\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py\", line 293, in _evaluate\r\n output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py\", line 470, in evaluation_forward\r\n output = model.validation_step(*args)\r\n File \"/opt/ml/code/finetune.py\", line 211, in validation_step\r\n return self._generative_step(batch)\r\n File \"/opt/ml/code/finetune.py\", line 255, in _generative_step\r\n preds: List[str] = self.ids_to_clean_text(generated_ids)\r\n File \"/opt/ml/code/finetune.py\", line 153, in ids_to_clean_text\r\n generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True\r\n File \"/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_base.py\", line 2254, in batch_decode\r\n return [self.decode(seq, **kwargs) for seq in sequences]\r\n File \"/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_base.py\", line 2254, in <listcomp>\r\n return [self.decode(seq, **kwargs) for seq in sequences]\r\n File \"/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_fast.py\", line 439, in decode\r\n text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)\r\n File \"/opt/conda/lib/python3.6/site-packages/tokenizers/implementations/base_tokenizer.py\", line 267, in decode\r\n return self._tokenizer.decode(ids, skip_special_tokens=skip_special_tokens)\r\nTypeError\r\n```", "is there anything after TypeError?\r\nI guess this is a torch 1.6 issue.", "No, it doesn't say anything after `TypeError`. Could be a 1.6 issue, let me try to downgrade and report back.", "Hmm, something's off. 
Now I'm seeing `TypeError` when trying to save the tokenizer during a checkpoint.\r\n```\r\nTraceback (most recent call last):\r\n File \"/root/ds-sandbox/projects/abstractive_summarization/finetuning-bart/finetuning_bart/summarization_trainer.py\", line 214, in <module>\r\n _ = slack_wrapper(args)\r\n File \"/opt/conda/lib/python3.6/site-packages/knockknock/slack_sender.py\", line 105, in wrapper_sender\r\n raise ex\r\n File \"/opt/conda/lib/python3.6/site-packages/knockknock/slack_sender.py\", line 63, in wrapper_sender\r\n value = func(*args, **kwargs)\r\n File \"/root/ds-sandbox/projects/abstractive_summarization/finetuning-bart/finetuning_bart/summarization_trainer.py\", line 159, in main\r\n return finetune_main(args, model)\r\n File \"/root/ds-sandbox/projects/abstractive_summarization/finetuning-bart/finetuning_bart/finetune.py\", line 463, in main\r\n logger=logger,\r\n File \"/root/ds-sandbox/projects/abstractive_summarization/finetuning-bart/finetuning_bart/lightning_base.py\", line 448, in generic_train\r\n trainer.fit(model)\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py\", line 1003, in fit\r\n results = self.single_gpu_train(model)\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_parts.py\", line 186, in single_gpu_train\r\n results = self.run_pretrain_routine(model)\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py\", line 1213, in run_pretrain_routine\r\n self.train()\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py\", line 370, in train\r\n self.run_training_epoch()\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py\", line 470, in run_training_epoch\r\n self.run_evaluation(test_mode=False)\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py\", line 430, in run_evaluation\r\n self.on_validation_end()\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/callback_hook.py\", line 112, in on_validation_end\r\n callback.on_validation_end(self, self.get_model())\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py\", line 12, in wrapped_fn\r\n return fn(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/callbacks/model_checkpoint.py\", line 309, in on_validation_end\r\n self._do_check_save(filepath, current, epoch)\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/callbacks/model_checkpoint.py\", line 346, in _do_check_save\r\n self._save_model(filepath)\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/callbacks/model_checkpoint.py\", line 168, in _save_model\r\n self.save_function(filepath, self.save_weights_owandb: Program failed with code 1. 
Press ctrl-c to abort syncing.\r\nnly)\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_io.py\", line 268, in save_checkpoint\r\n checkpoint = self.dump_checkpoint(weights_only)\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_io.py\", line 379, in dump_checkpoint\r\n model.on_save_checkpoint(checkpoint)\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py\", line 12, in wrapped_fn\r\n return fn(*args, **kwargs)\r\n File \"/root/ds-sandbox/projects/abstractive_summarization/finetuning-bart/finetuning_bart/lightning_base.py\", line 223, in on_save_checkpoint\r\n self.tokenizer.save_pretrained(save_path)\r\n File \"/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_base.py\", line 1379, in save_pretrained\r\n vocab_files = self.save_vocabulary(save_directory)\r\n File \"/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_fast.py\", line 449, in save_vocabulary\r\n files = self._tokenizer.save_model(save_directory)\r\n File \"/opt/conda/lib/python3.6/site-packages/tokenizers/implementations/base_tokenizer.py\", line 323, in save_model\r\n return self._tokenizer.model.save(directory, name=name)\r\nTypeError\r\n```\r\n\r\nI have the changes proposed in this PR in my code so it is clearly not the same.", "^ is occurring on torch 1.5.1 and 1.6.0 both.", "Update: I was creating a Fast tokenizer and passing it in during the creation of `SummarizationModule`. Switching to the one automatically created by the `BaseTransformer` avoids the last error.", "The change in the PR is also related to the above error and creating a new tokenizer instead of an automatic initialization. Letting `SummarizationModule` and `BaseTransformer` deal with the creation of tokenizer instead doesn't raise those errors, however, that also means I can't use the Rust-based tokenizers.", "Great catch! Would you mind making a new issue with a broken snippet that doesn't use PL? and tag @sshleifer \r\nE.g.\r\n```\r\nfrom transformers import BartTokenizerFast, BartForConditionalGeneration\r\n...\r\n\r\n```\r\n?\r\nThen we can try to fix the bug on the proper level of abstraction.", "Thanks! Yes, agree that the right place for this is an issue. I'll try to create an issue that doesn't use PL later in the day." ]
1,597
1,597
1,597
CONTRIBUTOR
null
Ran into an error today while using `finetune.py`: decoding kept failing because `generate()` returned a `torch.Tensor` while the tokenizer's decode expects a `List[int]`. This PR fixes that; a minimal sketch of the conversion is shown below.
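A minimal, self-contained sketch of the conversion this PR applies (the checkpoint and input text are illustrative assumptions, not the ones from `finetune.py`):

```python
# Hedged sketch: convert generate()'s torch.Tensor output to List[List[int]]
# before decoding, since decode()/batch_decode() document List[int] inputs.
from transformers import BartForConditionalGeneration, BartTokenizerFast

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

batch = tokenizer(["Some long article to summarize."], return_tensors="pt")
generated_ids = model.generate(batch["input_ids"])  # returns a torch.Tensor

preds = tokenizer.batch_decode(
    generated_ids.tolist(),  # Tensor -> List[List[int]] sidesteps the TypeError
    skip_special_tokens=True,
    clean_up_tokenization_spaces=True,
)
print(preds)
```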
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6610/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6610/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6610", "html_url": "https://github.com/huggingface/transformers/pull/6610", "diff_url": "https://github.com/huggingface/transformers/pull/6610.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6610.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6609
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6609/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6609/comments
https://api.github.com/repos/huggingface/transformers/issues/6609/events
https://github.com/huggingface/transformers/issues/6609
682,247,720
MDU6SXNzdWU2ODIyNDc3MjA=
6,609
PegasusForConditionalGeneration - Error in loading state dictionary
{ "login": "suchig", "id": 37094536, "node_id": "MDQ6VXNlcjM3MDk0NTM2", "avatar_url": "https://avatars.githubusercontent.com/u/37094536?v=4", "gravatar_id": "", "url": "https://api.github.com/users/suchig", "html_url": "https://github.com/suchig", "followers_url": "https://api.github.com/users/suchig/followers", "following_url": "https://api.github.com/users/suchig/following{/other_user}", "gists_url": "https://api.github.com/users/suchig/gists{/gist_id}", "starred_url": "https://api.github.com/users/suchig/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/suchig/subscriptions", "organizations_url": "https://api.github.com/users/suchig/orgs", "repos_url": "https://api.github.com/users/suchig/repos", "events_url": "https://api.github.com/users/suchig/events{/privacy}", "received_events_url": "https://api.github.com/users/suchig/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "I just fixed. Can you try again. Should produce a warning but no error.", "> I just fixed. Can you try again. Should produce a warning but no error.\r\n\r\nI am still getting the below error. It does not seem to even go to the tokenizer. It throws the error right when we acquire the pretrained model. It seems as though something about the checkpoint of the pre-trained model has changed\r\n\r\nRuntimeError: Error(s) in loading state_dict for PegasusForConditionalGeneration:\r\n\tsize mismatch for model.encoder.embed_positions.weight: copying a param with shape torch.Size([512, 1024]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).\r\n\tsize mismatch for model.decoder.embed_positions.weight: copying a param with shape torch.Size([512, 1024]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).", "I changed the state dict and deleted `model.encoder.embed_positions.weight` at 6pm EST.\r\n\r\nYour code works for me on master (93c5c9a5) \r\n\r\ncommand: `transformers-cli env`\r\n\r\n```\r\n- `transformers` version: 3.0.2\r\n- Platform: Darwin-19.5.0-x86_64-i386-64bit\r\n- Python version: 3.7.7\r\n- PyTorch version (GPU?): 1.5.1 (False)\r\n- Tensorflow version (GPU?): 2.2.0 (False)\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n```\r\n", "> `model.encoder.embed_positions.weight`\r\n\r\nI completely removed transformers and took a fresh git and I am still getting the same error. Is there some other information I could provide that will help resolve this issue for me?", "> I changed the state dict and deleted `model.encoder.embed_positions.weight` at 6pm EST.\r\n> \r\n> Your code works for me on master ([93c5c9a](https://github.com/huggingface/transformers/commit/93c5c9a528475db73c2b481131578b8dd903efba))\r\n> \r\n> command: `transformers-cli env`\r\n> \r\n> ```\r\n> - `transformers` version: 3.0.2\r\n> - Platform: Darwin-19.5.0-x86_64-i386-64bit\r\n> - Python version: 3.7.7\r\n> - PyTorch version (GPU?): 1.5.1 (False)\r\n> - Tensorflow version (GPU?): 2.2.0 (False)\r\n> - Using GPU in script?: <fill in>\r\n> - Using distributed or parallel set-up in script?: <fill in>\r\n> ```\r\n\r\nThe error occurs only for pegasus-arxiv. It works with warning for pegasus-pubmed and pegasus-large. I need help with pegasus-arxiv.\r\n" ]
1,597
1,597
1,597
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Linux-5.3.0-1034-azure-x86_64-with-debian-buster-sid - Python version: 3.7.7 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed - Using GPU in script?: Tried both - Using distributed or parallel set-up in script?: No ### Who can help @sshleifer ## Information Model I am using (Bert, XLNet ...): google/pegasus-arxiv The problem arises when using: the official example scripts. The task I am working on is: generating a summary using pegasus-arxiv. ## To reproduce Steps to reproduce the behavior: run the script below ```python mname = "google/pegasus-arxiv" model = PegasusForConditionalGeneration.from_pretrained(mname) ``` This throws the following error: File "abstractive_summarizer.py", line 21, in <module> model = PegasusForConditionalGeneration.from_pretrained(mname, force_download=True) File "/anaconda/envs/py37_default/lib/python3.7/site-packages/transformers/modeling_utils.py", line 894, in from_pretrained model.__class__.__name__, "\n\t".join(error_msgs) RuntimeError: Error(s) in loading state_dict for PegasusForConditionalGeneration: size mismatch for model.encoder.embed_positions.weight: copying a param with shape torch.Size([512, 1024]) from checkpoint, the shape in current model is torch.Size([1024, 1024]). size mismatch for model.decoder.embed_positions.weight: copying a param with shape torch.Size([512, 1024]) from checkpoint, the shape in current model is torch.Size([1024, 1024]). ## Expected behavior I tried running a sample in the console this morning and it worked fine; I was able to generate a summary using pegasus-arxiv. Once I transferred this to a Jupyter notebook for some experimentation, it re-downloaded pegasus-arxiv and has been giving this error ever since. (If you are not able to reproduce, please try with force_download=True.)
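For reference, a hedged re-check of the resolution described in the comments above (not from the original report): per the thread, checkpoints such as pegasus-large load with a missing-key warning after the S3 fix, while pegasus-arxiv still needed a further update.

```python
# Hedged verification sketch; expected to warn about missing positional
# embeddings rather than raise a size-mismatch RuntimeError.
from transformers import PegasusForConditionalGeneration

model = PegasusForConditionalGeneration.from_pretrained("google/pegasus-large")
print(model.config.max_position_embeddings)
```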
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6609/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6609/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6608
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6608/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6608/comments
https://api.github.com/repos/huggingface/transformers/issues/6608/events
https://github.com/huggingface/transformers/issues/6608
682,175,774
MDU6SXNzdWU2ODIxNzU3NzQ=
6,608
How to use Huggingface model for continuous values directly?
{ "login": "monk1337", "id": 17107749, "node_id": "MDQ6VXNlcjE3MTA3NzQ5", "avatar_url": "https://avatars.githubusercontent.com/u/17107749?v=4", "gravatar_id": "", "url": "https://api.github.com/users/monk1337", "html_url": "https://github.com/monk1337", "followers_url": "https://api.github.com/users/monk1337/followers", "following_url": "https://api.github.com/users/monk1337/following{/other_user}", "gists_url": "https://api.github.com/users/monk1337/gists{/gist_id}", "starred_url": "https://api.github.com/users/monk1337/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/monk1337/subscriptions", "organizations_url": "https://api.github.com/users/monk1337/orgs", "repos_url": "https://api.github.com/users/monk1337/repos", "events_url": "https://api.github.com/users/monk1337/events{/privacy}", "received_events_url": "https://api.github.com/users/monk1337/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "you could input the `input_embeds` tokens instead of the `input_ids` to the forward pass and adjust the `hidden_size` accordingly. Or you just tweak the model files yourself and you remove `nn.Embeddings` and replace it by a dense layer. Btw, normally you get much better answers for these kind of questions when you post it on https://discuss.huggingface.co/ . We try to move \"non-bug\" questions to this forum :-) ", "@patrickvonplaten Thank you for the reply, Sure from onwards, I'll post my doubts there.\r\nIf you could provide any quick template to start with continuous values, that’d be helpful.", "Hi, @patrickvonplaten I asked this question on discussion forum but didn't get any update yet. Can you provide any starter code/ Template where I can feed continuous values directly? \r\n\r\nhttps://discuss.huggingface.co/t/how-to-use-huggingface-model-for-continuous-values-directly/816", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,597
1,604
1,604
NONE
null
Hi, I have a dataset which contains continuous values of shape [batch_size, features]. The features look like this: `[0.49221584, -0.021571456, -0.0920076, -0.14408934, -0.62306774]`. I want to apply a transformer model on these values and pass the result to a final layer, something like this: `batch_data ==> Transformer ==> output_layer ==> classification`. Currently, I am using hand-coded multi-head attention and layer norm with a feed-forward network to pass these values to the transformer block. I have gone through the Hugging Face models, but all of them accept tokens and sequences. Is there any way (or hack) to use Hugging Face transformer models on continuous values directly?
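A minimal sketch of the `inputs_embeds` route suggested in the comments above; the 5-feature projection and the BERT backbone are illustrative assumptions, not a prescribed recipe:

```python
# Hedged sketch: project continuous features to hidden_size and bypass the
# token-embedding lookup via `inputs_embeds`. All dimensions are assumptions.
import torch
from torch import nn
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")
project = nn.Linear(5, model.config.hidden_size)  # 5 features -> 768

features = torch.randn(8, 16, 5)           # (batch_size, seq_len, n_features)
inputs_embeds = project(features)          # (batch_size, seq_len, hidden_size)
sequence_output = model(inputs_embeds=inputs_embeds)[0]

classifier = nn.Linear(model.config.hidden_size, 2)  # toy classification head
logits = classifier(sequence_output[:, 0])           # use the first position
```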
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6608/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6608/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6607
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6607/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6607/comments
https://api.github.com/repos/huggingface/transformers/issues/6607/events
https://github.com/huggingface/transformers/pull/6607
682,155,989
MDExOlB1bGxSZXF1ZXN0NDcwNDM1NzU4
6,607
[Longformer] try if multi gpu works
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,597
1,651
1,601
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6607/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6607/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6607", "html_url": "https://github.com/huggingface/transformers/pull/6607", "diff_url": "https://github.com/huggingface/transformers/pull/6607.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6607.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6606
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6606/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6606/comments
https://api.github.com/repos/huggingface/transformers/issues/6606/events
https://github.com/huggingface/transformers/pull/6606
682,146,244
MDExOlB1bGxSZXF1ZXN0NDcwNDI3NDIy
6,606
Regression test for pegasus bugfix
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6606?src=pr&el=h1) Report\n> Merging [#6606](https://codecov.io/gh/huggingface/transformers/pull/6606?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/18ca0e91402d17950b870d7c9f67ddb7fd573817?el=desc) will **decrease** coverage by `0.61%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6606/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6606?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6606 +/- ##\n==========================================\n- Coverage 79.89% 79.28% -0.62% \n==========================================\n Files 156 156 \n Lines 28213 28215 +2 \n==========================================\n- Hits 22542 22370 -172 \n- Misses 5671 5845 +174 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6606?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6606/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3BlZ2FzdXMucHk=) | `100.00% <100.00%> (+9.09%)` | :arrow_up: |\n| [src/transformers/modeling\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6606/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19wZWdhc3VzLnB5) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6606/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6606/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6606/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6606/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6606/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6606/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.83% <0.00%> (+6.20%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6606/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.22% <0.00%> (+47.80%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6606/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> (+73.82%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6606?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, 
`ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6606?src=pr&el=footer). Last update [18ca0e9...68ace1e](https://codecov.io/gh/huggingface/transformers/pull/6606?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Just to get some context, why can't we fix the tokenizer's default value?", "correct max_model_length differs between checkpoints\r\n", "But isn't each tokenizer instantiated with a checkpoint?\r\nI'd expect this to be automatically done by\r\n```\r\ntokenizer1 = AutoTokenizer.from_pretrained(\"checkpoint_with_one_max_len\")\r\ntokenizer2 = AutoTokenizer.from_pretrained(\"checkpoint_with_another_max_len\")\r\n```", "Yeah. The tokenizer defaults have always been correct.\r\n\r\nThe issue was that fixing the model defaults to match them created an inconsistency with the state_dict on s3.\r\n\r\nFor cnn...\r\n\r\n- Broken converter creates model, config with 512 positional embeddings. Tokenizer is good -- says max_model_length=1024.\r\n- User reports IndexError bug. #6599 \r\n- Sam adjusts model config to position_embeddings=1024. \r\n- This creates a new error: at __init__ the cnn model will allocate space for 1024 positional embeddings, but the state dict on s3 will only have 512. #6909\r\n- Sam deletes positional embeddings on S3. Code works but gives missing key warning.\r\n- Sam sends this PR to suppress warning, add regression test that checks that #6599 can't happen again.\r\n\r\n\r\n" ]
1,597
1,597
1,597
CONTRIBUTOR
null
The bug was that `tokenizer.model_max_length` (set through `tokenizer_config.json`) was sometimes 1024, but `max_position_embeddings` was only 512. This means the tokenizer could produce inputs of length 513, which would raise an IndexError when we tried to fetch the corresponding position embedding from the embedding table. Since the position embeddings are static for pegasus, the fix was on S3: set `max_position_embeddings` to the correct value, and remove the saved, incorrectly sized position embeddings from the state dict. The tradeoff is that users can now pass max_position_embeddings=HUGE to the model without error, then pass huge inputs and either OOM or get poor performance. But if you are modifying the config you presumably know what you are doing, so I'm OK with it. This PR: - suppresses the warning created by the S3 change - adds a regression test that `config.max_position_embeddings >= tokenizer.model_max_length`
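A hedged sketch of the invariant the new regression test enforces; the checkpoint is one example, and the actual test presumably iterates over the Pegasus checkpoints:

```python
# Hedged sketch of the invariant described above, not the test file itself.
from transformers import AutoConfig, AutoTokenizer

ckpt = "google/pegasus-cnn_dailymail"
config = AutoConfig.from_pretrained(ckpt)
tokenizer = AutoTokenizer.from_pretrained(ckpt)

# The tokenizer must never emit sequences the model cannot embed:
assert config.max_position_embeddings >= tokenizer.model_max_length
```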
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6606/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6606/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6606", "html_url": "https://github.com/huggingface/transformers/pull/6606", "diff_url": "https://github.com/huggingface/transformers/pull/6606.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6606.patch", "merged_at": 1597952083000 }
https://api.github.com/repos/huggingface/transformers/issues/6605
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6605/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6605/comments
https://api.github.com/repos/huggingface/transformers/issues/6605/events
https://github.com/huggingface/transformers/pull/6605
682,140,646
MDExOlB1bGxSZXF1ZXN0NDcwNDIyNzE3
6,605
Add tests to Trainer
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6605?src=pr&el=h1) Report\n> Merging [#6605](https://codecov.io/gh/huggingface/transformers/pull/6605?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fe0b85e77a6af041471657069bbb9c21a880cd5c?el=desc) will **increase** coverage by `0.08%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6605/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6605?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6605 +/- ##\n==========================================\n+ Coverage 80.21% 80.30% +0.08% \n==========================================\n Files 156 156 \n Lines 28178 28205 +27 \n==========================================\n+ Hits 22604 22650 +46 \n+ Misses 5574 5555 -19 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6605?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6605/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6605/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <ø> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6605/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <ø> (ø)` | |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6605/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `91.26% <ø> (+10.67%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6605/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.97% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6605/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `90.90% <100.00%> (ø)` | |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6605/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <100.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6605/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `83.50% <100.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6605/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.84% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6605/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `92.02% <100.00%> (+0.07%)` | :arrow_up: |\n| ... 
and [15 more](https://codecov.io/gh/huggingface/transformers/pull/6605/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6605?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6605?src=pr&el=footer). Last update [9a86321...3f89b2d](https://codecov.io/gh/huggingface/transformers/pull/6605?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Re. the eval loss, did you also run [test_trainer_distributed.py](https://github.com/huggingface/transformers/blob/master/tests/test_trainer_distributed.py) on a multi-gpu machine?", "No, I don't have a multi-GPU machine setup. It does not seem like this test uses the eval_loss anywhere, it only computes a metric.", "> No, I don't have a multi-GPU machine setup.\r\n\r\nYou can use the office machines! Not necessarily related to this PR but to keep in mind to run this test once in a while", "> Not necessarily related to this PR but to keep in mind to run this test once in a while\r\n\r\nIt was indeed broken not due to this PR, fixed. Know how to run it periodically now :-)" ]
1,597
1,597
1,597
COLLABORATOR
null
This PR moves the tests of the various `data_collator` in `test_data_collator.py` and adds tests of the Trainer on a simple regression problem. While testing, a few problems were uncovered: - The number of epochs is documented as a float but used as an int, fixed the documentation. - There was one more step done than specified by the argument `max_steps`. - The evaluation loss was wrong whenever the evaluation dataset length is not a round multiple of the batch size. Those three things are also fixed in the PR. With the regression infrastructure, we can add more tests (for custom data collator, optimizers, schedulers etc...) since each training is fast. Will do in follow-up PRs as this one was starting to be of a decent size already.
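A tiny, self-contained illustration (toy numbers, not the Trainer's internals) of the evaluation-loss bug fixed here: averaging per-batch mean losses overweights a smaller final batch.

```python
# Toy numbers: 18 examples evaluated with batch_size=8 -> batches of 8, 8, 2.
batch_losses = [1.0, 1.0, 4.0]  # mean loss reported by each eval batch
batch_sizes = [8, 8, 2]

naive = sum(batch_losses) / len(batch_losses)  # 2.0 -- overweights the last batch
correct = sum(l * n for l, n in zip(batch_losses, batch_sizes)) / sum(batch_sizes)
print(naive, correct)  # 2.0 vs ~1.33
```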
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6605/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6605/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6605", "html_url": "https://github.com/huggingface/transformers/pull/6605", "diff_url": "https://github.com/huggingface/transformers/pull/6605.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6605.patch", "merged_at": 1597936430000 }
https://api.github.com/repos/huggingface/transformers/issues/6604
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6604/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6604/comments
https://api.github.com/repos/huggingface/transformers/issues/6604/events
https://github.com/huggingface/transformers/pull/6604
682,136,500
MDExOlB1bGxSZXF1ZXN0NDcwNDE5MjM4
6,604
Fix confusing warnings during TF2 import from PyTorch
{ "login": "jcrocholl", "id": 118312, "node_id": "MDQ6VXNlcjExODMxMg==", "avatar_url": "https://avatars.githubusercontent.com/u/118312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jcrocholl", "html_url": "https://github.com/jcrocholl", "followers_url": "https://api.github.com/users/jcrocholl/followers", "following_url": "https://api.github.com/users/jcrocholl/following{/other_user}", "gists_url": "https://api.github.com/users/jcrocholl/gists{/gist_id}", "starred_url": "https://api.github.com/users/jcrocholl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jcrocholl/subscriptions", "organizations_url": "https://api.github.com/users/jcrocholl/orgs", "repos_url": "https://api.github.com/users/jcrocholl/repos", "events_url": "https://api.github.com/users/jcrocholl/events{/privacy}", "received_events_url": "https://api.github.com/users/jcrocholl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6604?src=pr&el=h1) Report\n> Merging [#6604](https://codecov.io/gh/huggingface/transformers/pull/6604?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/18ca0e91402d17950b870d7c9f67ddb7fd573817?el=desc) will **decrease** coverage by `1.15%`.\n> The diff coverage is `66.66%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6604/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6604?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6604 +/- ##\n==========================================\n- Coverage 79.89% 78.74% -1.16% \n==========================================\n Files 156 156 \n Lines 28213 28213 \n==========================================\n- Hits 22542 22216 -326 \n- Misses 5671 5997 +326 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6604?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6604/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.30% <66.66%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6604/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.84% <0.00%> (-64.10%)` | :arrow_down: |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6604/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `85.71% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6604/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `71.61% <0.00%> (-6.02%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6604/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.70% <0.00%> (-5.77%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6604/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.69% <0.00%> (-2.29%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6604/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6604/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.22% <0.00%> (+47.80%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6604?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6604?src=pr&el=footer). Last update [18ca0e9...6ebffb5](https://codecov.io/gh/huggingface/transformers/pull/6604?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This fixes #5588 ", "This pull request has been replaced with #6623 " ]
1,597
1,597
1,597
CONTRIBUTOR
null
1. Swapped `missing_keys` and `unexpected_keys`, which were reversed. 2. Fixed a copy-and-paste error that caused these warnings to say "from TF 2.0" when the weights are actually loaded "from PyTorch".
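For context, a hedged sketch of the load path whose log messages this PR corrects (the checkpoint is illustrative):

```python
# Loading PyTorch weights into a TF2 model is where the swapped
# missing/unexpected-key warnings appeared; `from_pt=True` triggers it.
from transformers import TFBertModel

tf_model = TFBertModel.from_pretrained("bert-base-uncased", from_pt=True)
# missing_keys: weights the TF model expects but the PyTorch file lacks.
# unexpected_keys: weights in the PyTorch file the TF model does not use.
```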
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6604/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6604/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6604", "html_url": "https://github.com/huggingface/transformers/pull/6604", "diff_url": "https://github.com/huggingface/transformers/pull/6604.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6604.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6603
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6603/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6603/comments
https://api.github.com/repos/huggingface/transformers/issues/6603/events
https://github.com/huggingface/transformers/pull/6603
682,090,091
MDExOlB1bGxSZXF1ZXN0NDcwMzc2NzM0
6,603
[cleanup] remove confusing newline
{ "login": "orena1", "id": 8983713, "node_id": "MDQ6VXNlcjg5ODM3MTM=", "avatar_url": "https://avatars.githubusercontent.com/u/8983713?v=4", "gravatar_id": "", "url": "https://api.github.com/users/orena1", "html_url": "https://github.com/orena1", "followers_url": "https://api.github.com/users/orena1/followers", "following_url": "https://api.github.com/users/orena1/following{/other_user}", "gists_url": "https://api.github.com/users/orena1/gists{/gist_id}", "starred_url": "https://api.github.com/users/orena1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/orena1/subscriptions", "organizations_url": "https://api.github.com/users/orena1/orgs", "repos_url": "https://api.github.com/users/orena1/repos", "events_url": "https://api.github.com/users/orena1/events{/privacy}", "received_events_url": "https://api.github.com/users/orena1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6603?src=pr&el=h1) Report\n> Merging [#6603](https://codecov.io/gh/huggingface/transformers/pull/6603?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/18ca0e91402d17950b870d7c9f67ddb7fd573817&el=desc) will **decrease** coverage by `1.12%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6603/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6603?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6603 +/- ##\n==========================================\n- Coverage 79.89% 78.77% -1.13% \n==========================================\n Files 156 156 \n Lines 28213 28213 \n==========================================\n- Hits 22542 22226 -316 \n- Misses 5671 5987 +316 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6603?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6603/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6603/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.84% <0.00%> (-64.10%)` | :arrow_down: |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6603/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `85.71% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6603/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `71.61% <0.00%> (-6.02%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6603/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.20% <0.00%> (-3.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6603/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.69% <0.00%> (-2.29%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6603/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6603/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.22% <0.00%> (+47.80%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6603?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6603?src=pr&el=footer). Last update [18ca0e9...1ac2967](https://codecov.io/gh/huggingface/transformers/pull/6603?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Next time, try writing a PR title that provides more context.\r\nLike \"remove confusing newline\".\r\nThen your current title could go in the description.\r\n\r\nAnyways, thanks for the contribution!", "I'll try to be more clear next time, thanks." ]
1,597
1,597
1,597
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6603/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6603/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6603", "html_url": "https://github.com/huggingface/transformers/pull/6603", "diff_url": "https://github.com/huggingface/transformers/pull/6603.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6603.patch", "merged_at": 1597898016000 }
https://api.github.com/repos/huggingface/transformers/issues/6602
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6602/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6602/comments
https://api.github.com/repos/huggingface/transformers/issues/6602/events
https://github.com/huggingface/transformers/pull/6602
682,030,011
MDExOlB1bGxSZXF1ZXN0NDcwMzIzNzM4
6,602
Create README.md
{ "login": "rohanrajpal", "id": 7023147, "node_id": "MDQ6VXNlcjcwMjMxNDc=", "avatar_url": "https://avatars.githubusercontent.com/u/7023147?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rohanrajpal", "html_url": "https://github.com/rohanrajpal", "followers_url": "https://api.github.com/users/rohanrajpal/followers", "following_url": "https://api.github.com/users/rohanrajpal/following{/other_user}", "gists_url": "https://api.github.com/users/rohanrajpal/gists{/gist_id}", "starred_url": "https://api.github.com/users/rohanrajpal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rohanrajpal/subscriptions", "organizations_url": "https://api.github.com/users/rohanrajpal/orgs", "repos_url": "https://api.github.com/users/rohanrajpal/repos", "events_url": "https://api.github.com/users/rohanrajpal/events{/privacy}", "received_events_url": "https://api.github.com/users/rohanrajpal/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,597
1,598
1,598
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6602/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6602/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6602", "html_url": "https://github.com/huggingface/transformers/pull/6602", "diff_url": "https://github.com/huggingface/transformers/pull/6602.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6602.patch", "merged_at": 1598997524000 }
https://api.github.com/repos/huggingface/transformers/issues/6601
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6601/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6601/comments
https://api.github.com/repos/huggingface/transformers/issues/6601/events
https://github.com/huggingface/transformers/pull/6601
682,014,083
MDExOlB1bGxSZXF1ZXN0NDcwMzEwOTE0
6,601
Fix GPT2DoubleHeadsModel to work with model.generate()
{ "login": "LSinev", "id": 12072891, "node_id": "MDQ6VXNlcjEyMDcyODkx", "avatar_url": "https://avatars.githubusercontent.com/u/12072891?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LSinev", "html_url": "https://github.com/LSinev", "followers_url": "https://api.github.com/users/LSinev/followers", "following_url": "https://api.github.com/users/LSinev/following{/other_user}", "gists_url": "https://api.github.com/users/LSinev/gists{/gist_id}", "starred_url": "https://api.github.com/users/LSinev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LSinev/subscriptions", "organizations_url": "https://api.github.com/users/LSinev/orgs", "repos_url": "https://api.github.com/users/LSinev/repos", "events_url": "https://api.github.com/users/LSinev/events{/privacy}", "received_events_url": "https://api.github.com/users/LSinev/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "I do not know what happened on your PR, any way you can restore or rebase/force-push for the diff to be readable?", "force-pushed", "@julien-c Rebased onto latest commit in master (https://github.com/huggingface/transformers/commit/f72fe1f31aca235c7f675680832cc364efe4088e)", "Don't think the change to `past[-1]` is applicable to all models. We should enforce the use of `return_dict=True` in TF as well to simplify this behaviour", "> Don't think the change to `past[-1]` is applicable to all models.\r\n\r\n@patrickvonplaten removed this commit.", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6601?src=pr&el=h1) Report\n> Merging [#6601](https://codecov.io/gh/huggingface/transformers/pull/6601?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/26d5475d4b6644528956df3020dbaa436b443706?el=desc) will **increase** coverage by `5.22%`.\n> The diff coverage is `80.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6601/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6601?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6601 +/- ##\n==========================================\n+ Coverage 75.91% 81.13% +5.22% \n==========================================\n Files 195 168 -27 \n Lines 39827 32288 -7539 \n==========================================\n- Hits 30233 26198 -4035 \n+ Misses 9594 6090 -3504 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6601?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.66% <80.00%> (-1.32%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-71.56%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/tokenization\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/6601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY2FtZW1iZXJ0LnB5) | `37.03% <0.00%> (-53.13%)` | :arrow_down: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `36.50% <0.00%> (-39.33%)` | :arrow_down: |\n| [src/transformers/tokenization\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/6601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnVubmVsLnB5) | `62.79% <0.00%> (-37.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.72%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `60.29% <0.00%> (-20.48%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.22%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| ... and [157 more](https://codecov.io/gh/huggingface/transformers/pull/6601/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6601?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6601?src=pr&el=footer). Last update [26d5475...8ffb4c0](https://codecov.io/gh/huggingface/transformers/pull/6601?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Hey @LSinev, \r\n\r\nI think the `token_type_ids` would have to be increased by 1 at each generation step. With the current generation implementing this logic is quite tedious - I think we should put this PR on hold until generation is properly refactored. Will try to get to this within the next 3,4 weeks. Hope that's fine!", "\r\n@LSinev I think this implementation is what I need! (haven't tried it)\r\nFor my purpose, I would have a `context` (`token_type_ids`=0), a `prompt token` (`token_type_ids`=1) followed by prediction (`token_type_ids`=1).\r\nWith this PR, seems like I can pass in `context` + `prompt token` with `token_type_ids` and the last `token_type_ids` will be reused in the generated, which is what I want.\r\n\r\n> I think the `token_type_ids` would have to be increased by 1 at each generation step\r\n\r\n @patrickvonplaten Do you mean `position_ids`?\r\n I just got an idea. What if we allow users to pass in a function for `prepare_inputs_for_generation`, so users can easily customize the behavior? 🤔 💥", "> like\r\n\r\nThat would work with `past`, but would break if you set `use_cache=False` because `token_type_ids` length's is not increased. I can see that this feature is required. However, I want to wait until the refactoring is done before adding more functionality to the general `generate()` method. \r\n\r\nAdding a function to `prepare_input_ids` is a bit tooo hacky for me as well (we would need to add a function arg to `generate()`)", "Was just curious what was the status of this PR? It seems all the checks have passed", "> > like\r\n> \r\n> That would work with `past`, but would break if you set `use_cache=False` because `token_type_ids` length's is not increased. I can see that this feature is required. However, I want to wait until the refactoring is done before adding more functionality to the general `generate()` method.\r\n> \r\n> Adding a function to `prepare_input_ids` is a bit tooo hacky for me as well (we would need to add a function arg to `generate()` which I don't really like\r\n\r\nSorry to only get back you guys now. So having refactored the `generate()` method, I think we can should add an `if token_type_ids in model_kwargs` here: https://github.com/huggingface/transformers/blob/70708cca1a2f3e01d9d72a3aaa7ab078dfef639e/src/transformers/generation_utils.py#L184 to deal with this case. 
The way it's implemented now does not really work as explained in the message above.", "Maybe I should add changes to `GPT2ForSequenceClassification` too? Or maybe `GPT2PreTrainedModel` should have widest `prepare_inputs_for_generation` for all inherited models?", "Tests added. Not a good showcase (because the model is not finetuned to use `token_type_ids` in the way like https://github.com/huggingface/transfer-learning-conv-ai), but at least it tests that `token_type_ids` are used (and expanded if `num_return_sequences` passed) and affect output. Not sure if slow tests are tested online at PR submission, though.", "Tests are great! It's fine if it's not a good showcase - as long as we make sure the functionality works and will work in the future this is great! \r\n\r\nGreat job @LSinev and thanks for bearing with me throughout this PR :-) " ]
1,597
1,619
1,605
CONTRIBUTOR
null
`token_type_ids` were not passed through `model_specific_kwargs`, nor trimmed properly when `use_cache`/`past` was in use (maybe this should be corrected through a bigger PR covering all models that support caching).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6601/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6601/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6601", "html_url": "https://github.com/huggingface/transformers/pull/6601", "diff_url": "https://github.com/huggingface/transformers/pull/6601.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6601.patch", "merged_at": 1605533745000 }
https://api.github.com/repos/huggingface/transformers/issues/6600
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6600/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6600/comments
https://api.github.com/repos/huggingface/transformers/issues/6600/events
https://github.com/huggingface/transformers/pull/6600
681,977,164
MDExOlB1bGxSZXF1ZXN0NDcwMjgwMjY4
6,600
[wip] Seq2SeqTrainer
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,597
1,598
1,598
MEMBER
null
cc @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6600/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6600/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6600", "html_url": "https://github.com/huggingface/transformers/pull/6600", "diff_url": "https://github.com/huggingface/transformers/pull/6600.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6600.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6599
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6599/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6599/comments
https://api.github.com/repos/huggingface/transformers/issues/6599/events
https://github.com/huggingface/transformers/issues/6599
681,945,105
MDU6SXNzdWU2ODE5NDUxMDU=
6,599
Pegasus: IndexError: index out of range in self
{ "login": "yxyzzz", "id": 5890954, "node_id": "MDQ6VXNlcjU4OTA5NTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5890954?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yxyzzz", "html_url": "https://github.com/yxyzzz", "followers_url": "https://api.github.com/users/yxyzzz/followers", "following_url": "https://api.github.com/users/yxyzzz/following{/other_user}", "gists_url": "https://api.github.com/users/yxyzzz/gists{/gist_id}", "starred_url": "https://api.github.com/users/yxyzzz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yxyzzz/subscriptions", "organizations_url": "https://api.github.com/users/yxyzzz/orgs", "repos_url": "https://api.github.com/users/yxyzzz/repos", "events_url": "https://api.github.com/users/yxyzzz/events{/privacy}", "received_events_url": "https://api.github.com/users/yxyzzz/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "Found bug, will be fix by EOD.", "This is a coincidence. I found the same issue with pegasus-arxiv and pegasus-pubmed today. Glad this will be fixed", "> \r\n> ## Information\r\n> Model I am using: google/pegasus-cnn_dailymail\r\n> \r\n> The problem arises when using:\r\n> \r\n> ```\r\n> import torch\r\n> from transformers import AutoModelWithLMHead, AutoTokenizer\r\n> \r\n> torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'\r\n> \r\n> model = AutoModelWithLMHead.from_pretrained(\"google/pegasus-cnn_dailymail\")\r\n> tokenizer = AutoTokenizer.from_pretrained(\"google/pegasus-cnn_dailymail\")\r\n> \r\n> src_text = [\r\n> '(CNN)James Best, best known for his portrayal of bumbling sheriff Rosco P. Coltrane on TV\\'s \"The Dukes of Hazzard,\" died Monday after a brief illness. He was 88. Best died in hospice in Hickory, North Carolina, of complications from pneumonia, said Steve Latshaw, a longtime friend and Hollywood colleague. Although he\\'d been a busy actor for decades in theater and in Hollywood, Best didn\\'t become famous until 1979, when \"The Dukes of Hazzard\\'s\" cornpone charms began beaming into millions of American homes almost every Friday night. For seven seasons, Best\\'s Rosco P. Coltrane chased the moonshine-running Duke boys back and forth across the back roads of fictitious Hazzard County, Georgia, although his \"hot pursuit\" usually ended with him crashing his patrol car. Although Rosco was slow-witted and corrupt, Best gave him a childlike enthusiasm that got laughs and made him endearing. His character became known for his distinctive \"kew-kew-kew\" chuckle and for goofy catchphrases such as \"cuff \\'em and stuff \\'em!\" upon making an arrest. Among the most popular shows on TV in the early \\'80s, \"The Dukes of Hazzard\" ran until 1985 and spawned TV movies, an animated series and video games. Several of Best\\'s \"Hazzard\" co-stars paid tribute to the late actor on social media. \"I laughed and learned more from Jimmie in one hour than from anyone else in a whole year,\" co-star John Schneider, who played Bo Duke, said on Twitter. \"Give Uncle Jesse my love when you see him dear friend.\" \"Jimmy Best was the most constantly creative person I have ever known,\" said Ben Jones, who played mechanic Cooter on the show, in a Facebook post. \"Every minute of his long life was spent acting, writing, producing, painting, teaching, fishing, or involved in another of his life\\'s many passions.\" Born Jewel Guy on July 26, 1926, in Powderly, Kentucky, Best was orphaned at 3 and adopted by Armen and Essa Best, who renamed him James and raised him in rural Indiana. Best served in the Army during World War II before launching his acting career. In the 1950s and 1960s, he accumulated scores of credits, playing a range of colorful supporting characters in such TV shows as \"The Twilight Zone,\" \"Bonanza,\" \"The Andy Griffith Show\" and \"Gunsmoke.\" He later appeared in a handful of Burt Reynolds\\' movies, including \"Hooper\" and \"The End.\" But Best will always be best known for his \"Hazzard\" role, which lives on in reruns. \"Jimmie was my teacher, mentor, close friend and collaborator for 26 years,\" Latshaw said. \"I directed two of his feature films, including the recent \\'Return of the Killer Shrews,\\' a sequel he co-wrote and was quite proud of as he had made the first one more than 50 years earlier.\" People we\\'ve lost in 2015 . 
CNN\\'s Stella Chan contributed to this story.'\r\n> ]\r\n> batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest').to(torch_device)\r\n> translated = model.generate(**batch)\r\n> tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)\r\n> ```\r\n> \r\nCurrently I am overcoming this by passing additional parameter max_length=512 in prepare_seq2seq_batch. \r\n\r\nThis is not what I want. I want to expand the length to atleast 2048 if possible. But I don't see a way to do it", "The bug should be fixed. Please try again.\r\n\r\nThe following max_lengths are recommended:\r\n```\r\nmax_input_length = {\r\n \"xsum\": 512,\r\n \"cnn_dailymail\": 1024,\r\n \"newsroom\": 512,\r\n \"wikihow\": 512,\r\n \"multi_news\": 1024,\r\n \"reddit_tifu\": 512,\r\n \"big_patent\": 1024,\r\n \"arxiv\": 1024,\r\n \"pubmed\": 1024,\r\n \"gigaword\": 128,\r\n \"aeslc\": 512,\r\n \"billsum\": 1024,\r\n \"large\": 1024,\r\n}\r\n```\r\n\r\n@suchig You can probably do longer with the current code by passing `max_position_embeddings=2048`, but no model was finetuned on sequences of that length so it might (a) OOM or (b) perform poorly.", "> The bug should be fixed. Please try again.\r\n> \r\n> The following max_lengths are recommended:\r\n> \r\n> ```\r\n> max_input_length = {\r\n> \"xsum\": 512,\r\n> \"cnn_dailymail\": 1024,\r\n> \"newsroom\": 512,\r\n> \"wikihow\": 512,\r\n> \"multi_news\": 1024,\r\n> \"reddit_tifu\": 512,\r\n> \"big_patent\": 1024,\r\n> \"arxiv\": 1024,\r\n> \"pubmed\": 1024,\r\n> \"gigaword\": 128,\r\n> \"aeslc\": 512,\r\n> \"billsum\": 1024,\r\n> \"large\": 1024,\r\n> }\r\n> ```\r\n> \r\n> @suchig You can probably do longer with the current code by passing `max_position_embeddings=2048`, but no model was finetuned on sequences of that length so it might (a) OOM or (b) perform poorly.\r\n\r\nAt this junction, I am not even able to go beyond PegasusForConditionalGeneration.from_pretrained(mname) statement. It was working fine earlier in the morning. It feels as though something has changed in the pretrained model checkpoint this afternoon.", "@suchig please stick to one issue. ", "Hi @sshleifer , I want to a train a model for 2048 input length. My idea is to use a distilled teacher model and change the source length to 2048. How can I change the `max_position_embeddings`? \r\n\r\nI'm using the following command to train the model on CNN-DM dataset.\r\n\r\n```python examples/seq2seq/distillation.py --teacher sshleifer/distill-pegasus-cnn-16-4 --data_dir ./data/cnn_dm/ --student_decoder_layers 3 --student_encoder_layers 12 --freeze_embeds --learning_rate=3e-4 --do_train --do_predict --val_check_interval 0.1 --n_val 1000 --eval_beams 2 --length_penalty=0.5 --val_metric rouge2 --max_source_length 2048 --max_target_length=142 --val_max_target_length=142 --test_max_target_length=142 --model_name_or_path IGNORED --alpha_hid=3. --train_batch_size=1 --eval_batch_size=1 --gradient_accumulation_steps=256 --sortish_sampler --num_train_epochs=6 --warmup_steps 500 --output_dir distilpegasus_xsum_12_3 --gpus 1 --tokenizer_name sshleifer/distill-pegasus-cnn-16-4 --num_workers=0 --adafactor```", "That command will be very slow, but you can achieve your goal by passing `max_position_embeddings=2048` [here](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/distillation.py#L41)" ]
1,597
1,603
1,597
NONE
null
@sshleifer ## Environment info - `transformers` version: 3.0.2 - Platform: macOS-10.14.6-x86_64-i386-64bit - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ## Information Model I am using: google/pegasus-cnn_dailymail The problem arises when using: ``` import torch from transformers import AutoModelWithLMHead, AutoTokenizer torch_device = 'cuda' if torch.cuda.is_available() else 'cpu' model = AutoModelWithLMHead.from_pretrained("google/pegasus-cnn_dailymail") tokenizer = AutoTokenizer.from_pretrained("google/pegasus-cnn_dailymail") src_text = [ '(CNN)James Best, best known for his portrayal of bumbling sheriff Rosco P. Coltrane on TV\'s "The Dukes of Hazzard," died Monday after a brief illness. He was 88. Best died in hospice in Hickory, North Carolina, of complications from pneumonia, said Steve Latshaw, a longtime friend and Hollywood colleague. Although he\'d been a busy actor for decades in theater and in Hollywood, Best didn\'t become famous until 1979, when "The Dukes of Hazzard\'s" cornpone charms began beaming into millions of American homes almost every Friday night. For seven seasons, Best\'s Rosco P. Coltrane chased the moonshine-running Duke boys back and forth across the back roads of fictitious Hazzard County, Georgia, although his "hot pursuit" usually ended with him crashing his patrol car. Although Rosco was slow-witted and corrupt, Best gave him a childlike enthusiasm that got laughs and made him endearing. His character became known for his distinctive "kew-kew-kew" chuckle and for goofy catchphrases such as "cuff \'em and stuff \'em!" upon making an arrest. Among the most popular shows on TV in the early \'80s, "The Dukes of Hazzard" ran until 1985 and spawned TV movies, an animated series and video games. Several of Best\'s "Hazzard" co-stars paid tribute to the late actor on social media. "I laughed and learned more from Jimmie in one hour than from anyone else in a whole year," co-star John Schneider, who played Bo Duke, said on Twitter. "Give Uncle Jesse my love when you see him dear friend." "Jimmy Best was the most constantly creative person I have ever known," said Ben Jones, who played mechanic Cooter on the show, in a Facebook post. "Every minute of his long life was spent acting, writing, producing, painting, teaching, fishing, or involved in another of his life\'s many passions." Born Jewel Guy on July 26, 1926, in Powderly, Kentucky, Best was orphaned at 3 and adopted by Armen and Essa Best, who renamed him James and raised him in rural Indiana. Best served in the Army during World War II before launching his acting career. In the 1950s and 1960s, he accumulated scores of credits, playing a range of colorful supporting characters in such TV shows as "The Twilight Zone," "Bonanza," "The Andy Griffith Show" and "Gunsmoke." He later appeared in a handful of Burt Reynolds\' movies, including "Hooper" and "The End." But Best will always be best known for his "Hazzard" role, which lives on in reruns. "Jimmie was my teacher, mentor, close friend and collaborator for 26 years," Latshaw said. "I directed two of his feature films, including the recent \'Return of the Killer Shrews,\' a sequel he co-wrote and was quite proud of as he had made the first one more than 50 years earlier." People we\'ve lost in 2015 . CNN\'s Stella Chan contributed to this story.' ] batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest').to(torch_device) translated = model.generate(**batch) tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) ``` The tasks I am working on is: * abstractive text summarization ## To reproduce 1. Run the script above. The traceback: ``` IndexError Traceback (most recent call last) <ipython-input-1-ac154ffeb574> in <module> 13 ] 14 batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest').to(torch_device) ---> 15 translated = model.generate(**batch) 16 tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) ~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs) 13 def decorate_context(*args, **kwargs): 14 with self: ---> 15 return func(*args, **kwargs) 16 return decorate_context 17 ~/projects/transformers/src/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, attention_mask, decoder_start_token_id, use_cache, **model_specific_kwargs) 394 encoder = self.get_encoder() 395 --> 396 encoder_outputs: tuple = encoder(input_ids, attention_mask=attention_mask) 397 398 # Expand input ids if num_beams > 1 or num_return_sequences > 1 ~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), ~/projects/transformers/src/transformers/modeling_bart.py in forward(self, input_ids, attention_mask, output_attentions, output_hidden_states, return_dict) 328 329 inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale --> 330 embed_pos = self.embed_positions(input_ids) 331 x = inputs_embeds + embed_pos 332 x = self.layernorm_embedding(x) ~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), ~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs) 13 def decorate_context(*args, **kwargs): 14 with self: ---> 15 return func(*args, **kwargs) 16 return decorate_context 17 ~/projects/transformers/src/transformers/modeling_bart.py in forward(self, input_ids, use_cache) 1337 # starts at 0, ends at 1-seq_len 1338 positions = torch.arange(seq_len, dtype=torch.long, device=self.weight.device) -> 1339 return super().forward(positions) ~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/nn/modules/sparse.py in forward(self, input) 122 123 def forward(self, input: Tensor) -> Tensor: --> 124 return F.embedding( 125 input, self.weight, self.padding_idx, self.max_norm, 126 self.norm_type, self.scale_grad_by_freq, self.sparse) ~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 1812 # remove once script supports set_grad_enabled 1813 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 1814 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 1815 1816 IndexError: index out of range in self ``` ## Expected behavior IndexError: index out of range in self
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6599/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6599/timeline
completed
null
null
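Editor's note: the practical takeaway from the thread above is to keep Pegasus inputs within the model's position-embedding range. The sketch below truncates to one of the recommended per-dataset lengths before calling `generate()`; the article string is a placeholder, and the length table is the subset quoted in the comments.

```python
# Illustrative sketch: truncate Pegasus inputs to the recommended per-dataset
# max length (quoted in the thread above) to avoid the position-embedding
# IndexError. Uses the v3-era prepare_seq2seq_batch API from the issue itself.
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

max_input_length = {"xsum": 512, "cnn_dailymail": 1024, "gigaword": 128}  # subset from the thread

name = "google/pegasus-cnn_dailymail"
tokenizer = PegasusTokenizer.from_pretrained(name)
model = PegasusForConditionalGeneration.from_pretrained(name)

src_text = ["(CNN) ... a long news article goes here ..."]  # placeholder input
batch = tokenizer.prepare_seq2seq_batch(
    src_text,
    truncation=True,
    padding="longest",
    max_length=max_input_length["cnn_dailymail"],
)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```

Per the maintainer's comments, longer inputs (e.g. 2048 tokens) would additionally require `max_position_embeddings=2048` at load time, with the caveats about memory and quality raised above.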
https://api.github.com/repos/huggingface/transformers/issues/6598
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6598/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6598/comments
https://api.github.com/repos/huggingface/transformers/issues/6598/events
https://github.com/huggingface/transformers/pull/6598
681,939,055
MDExOlB1bGxSZXF1ZXN0NDcwMjQ4MDkx
6,598
Create README.md
{ "login": "rohanrajpal", "id": 7023147, "node_id": "MDQ6VXNlcjcwMjMxNDc=", "avatar_url": "https://avatars.githubusercontent.com/u/7023147?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rohanrajpal", "html_url": "https://github.com/rohanrajpal", "followers_url": "https://api.github.com/users/rohanrajpal/followers", "following_url": "https://api.github.com/users/rohanrajpal/following{/other_user}", "gists_url": "https://api.github.com/users/rohanrajpal/gists{/gist_id}", "starred_url": "https://api.github.com/users/rohanrajpal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rohanrajpal/subscriptions", "organizations_url": "https://api.github.com/users/rohanrajpal/orgs", "repos_url": "https://api.github.com/users/rohanrajpal/repos", "events_url": "https://api.github.com/users/rohanrajpal/events{/privacy}", "received_events_url": "https://api.github.com/users/rohanrajpal/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,597
1,598
1,598
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6598/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6598/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6598", "html_url": "https://github.com/huggingface/transformers/pull/6598", "diff_url": "https://github.com/huggingface/transformers/pull/6598.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6598.patch", "merged_at": 1598997556000 }
https://api.github.com/repos/huggingface/transformers/issues/6597
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6597/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6597/comments
https://api.github.com/repos/huggingface/transformers/issues/6597/events
https://github.com/huggingface/transformers/issues/6597
681,889,401
MDU6SXNzdWU2ODE4ODk0MDE=
6,597
tf2 transformers cache dir
{ "login": "thevasudevgupta", "id": 53136577, "node_id": "MDQ6VXNlcjUzMTM2NTc3", "avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thevasudevgupta", "html_url": "https://github.com/thevasudevgupta", "followers_url": "https://api.github.com/users/thevasudevgupta/followers", "following_url": "https://api.github.com/users/thevasudevgupta/following{/other_user}", "gists_url": "https://api.github.com/users/thevasudevgupta/gists{/gist_id}", "starred_url": "https://api.github.com/users/thevasudevgupta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thevasudevgupta/subscriptions", "organizations_url": "https://api.github.com/users/thevasudevgupta/orgs", "repos_url": "https://api.github.com/users/thevasudevgupta/repos", "events_url": "https://api.github.com/users/thevasudevgupta/events{/privacy}", "received_events_url": "https://api.github.com/users/thevasudevgupta/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It should be `.cache/torch/transformers` I think (even though \"torch\" is in the name).", "Probably that's the case. Thanks." ]
1,597
1,598
1,598
CONTRIBUTOR
null
What is the cache directory for TF2 transformers weights? Thanks
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6597/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6597/timeline
completed
null
null
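Editor's note: a quick way to verify the answer above on any install is to print the cache constant directly, or to bypass the default with an explicit `cache_dir`. The path shown is the v3-era default and may differ by version.

```python
# Check where transformers caches downloaded weights (TF2 included), and
# optionally override it. The default shown matches the answer above.
from transformers import TFBertModel
from transformers.file_utils import TRANSFORMERS_CACHE

print(TRANSFORMERS_CACHE)  # typically ~/.cache/torch/transformers, "torch" in the name notwithstanding

# An explicit cache_dir sidesteps the default location entirely:
model = TFBertModel.from_pretrained("bert-base-uncased", cache_dir="./my_cache")
```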
https://api.github.com/repos/huggingface/transformers/issues/6596
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6596/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6596/comments
https://api.github.com/repos/huggingface/transformers/issues/6596/events
https://github.com/huggingface/transformers/pull/6596
681,839,322
MDExOlB1bGxSZXF1ZXN0NDcwMTYzODk0
6,596
Fix #6575
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,597
1,597
1,597
COLLABORATOR
null
Make docs clearer as mentioned in #6575
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6596/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6596/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6596", "html_url": "https://github.com/huggingface/transformers/pull/6596", "diff_url": "https://github.com/huggingface/transformers/pull/6596.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6596.patch", "merged_at": 1597856673000 }
https://api.github.com/repos/huggingface/transformers/issues/6595
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6595/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6595/comments
https://api.github.com/repos/huggingface/transformers/issues/6595/events
https://github.com/huggingface/transformers/pull/6595
681,809,190
MDExOlB1bGxSZXF1ZXN0NDcwMTM4ODk0
6,595
Typo fix in 04-onnx-export
{ "login": "SidJain1412", "id": 35868478, "node_id": "MDQ6VXNlcjM1ODY4NDc4", "avatar_url": "https://avatars.githubusercontent.com/u/35868478?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SidJain1412", "html_url": "https://github.com/SidJain1412", "followers_url": "https://api.github.com/users/SidJain1412/followers", "following_url": "https://api.github.com/users/SidJain1412/following{/other_user}", "gists_url": "https://api.github.com/users/SidJain1412/gists{/gist_id}", "starred_url": "https://api.github.com/users/SidJain1412/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SidJain1412/subscriptions", "organizations_url": "https://api.github.com/users/SidJain1412/orgs", "repos_url": "https://api.github.com/users/SidJain1412/repos", "events_url": "https://api.github.com/users/SidJain1412/events{/privacy}", "received_events_url": "https://api.github.com/users/SidJain1412/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6595?src=pr&el=h1) Report\n> Merging [#6595](https://codecov.io/gh/huggingface/transformers/pull/6595?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ab42d74850233cff9df87701d257d9b975435f66&el=desc) will **increase** coverage by `1.10%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6595/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6595?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6595 +/- ##\n==========================================\n+ Coverage 79.42% 80.52% +1.10% \n==========================================\n Files 156 156 \n Lines 28127 28127 \n==========================================\n+ Hits 22339 22650 +311 \n+ Misses 5788 5477 -311 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6595?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6595/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6595/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6595/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.69% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6595/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.97%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6595/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6595/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.91% <0.00%> (+72.35%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6595?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6595?src=pr&el=footer). Last update [ab42d74...a49a870](https://codecov.io/gh/huggingface/transformers/pull/6595?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,597
1,597
1,597
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6595/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6595/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6595", "html_url": "https://github.com/huggingface/transformers/pull/6595", "diff_url": "https://github.com/huggingface/transformers/pull/6595.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6595.patch", "merged_at": 1597911437000 }
https://api.github.com/repos/huggingface/transformers/issues/6594
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6594/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6594/comments
https://api.github.com/repos/huggingface/transformers/issues/6594/events
https://github.com/huggingface/transformers/pull/6594
681,805,797
MDExOlB1bGxSZXF1ZXN0NDcwMTM2MDc4
6,594
Add "Leveraging Pretrained Checkpoints for Generation" Seq2Seq models.
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "If I understand correctly, the BERT model used here is slightly different because:\r\n- It doesn't use token type IDs\r\n- It's tying its word embedding layer to its LM head\r\n- No pooling layer\r\n\r\nDoesn't that just mean we could use an additional architecture instead of an entire model class? Something like the following, in `modeling_bert.py`:\r\n\r\n```py\r\n@add_start_docstrings(\r\n \"\"\"Bert Model with a `language modeling` head on top that acts as a decoder in a seq2seq setting.\"\"\", BERT_START_DOCSTRING\r\n)\r\nclass CausalBertModel(BertPreTrainedModel):\r\n def __init__(self, config):\r\n super().__init__(config)\r\n\r\n if not config.is_decoder:\r\n logger.warning(\"If you want to use `BertLMHeadModel` as a standalone, add `is_decoder=True.`\")\r\n\r\n self.bert = BertModel(config)\r\n self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)\r\n\r\n self.init_weights()\r\n```\r\n\r\nAnd this module wouldn't accept token type IDs as an input.\r\n\r\nI don't know what to do regarding the tokenizer though. This ^ approach could probably leverage @julien-c's https://github.com/huggingface/transformers/pull/6995", "Naming ideas: `BertFor{Conditional}Generation`, `BertEncoder`, `BertDecoder`. \r\n\r\nReasoning:\r\n\r\n+ `CausalBert` doesn't make sense if the encoder wasn't trained with a causal mask. \r\n+ I think in class naming it's more important to give someone a sense of how to use something than how that thing was trained, but that's not an opinion I hold strongly.\r\n\r\nAnyways, your signatures look super clean, easy and consistent!\r\nExcited to try these out+happy to help check metrics.", "@LysandreJik, \r\n\r\nThere are a couple of problems with that: \r\n\r\n1) I also need a different `BertEmbeddings` or manually set `self.token_type_embeddings` to a zero matrix. Even if `token_type_ids` is set to `None` in Bert, the `self.token_type_embeddings` is always used. This model just does not have the embeddings (and should not have them IMO). I could set the `self.token_type_embedding` matrix just to 0, but then people using this class for training would not realize that a `self.token_type_embedding` matrix is trained which it shouldn't. So, here I think either way, I will need a separete `BertEmbeddings` class.\r\n\r\n2) A bigger problem is the config class. Because I need both the new `CausalBertForCausalLM` and `BertLMHeadModel` in the `AUTO_MODELS_FOR_CAUSAL_LM` class (to leverage both models with the `EncoderDecoder` framework), the two models have to have different config classes. I guess we could also create a separate config class and overwrite the inherited config class from `BertPretrainedModel`, but then IMO, it's cleaner to just create a new `PretrainedModelClass` and in this case we can directly create a completely new model class\r\n\r\nSo overall, it seems to me that a separate model class is the cleaner way to go - what do you think?\r\n\r\n@sshleifer - very much agree here! Think the naming should be different...`BertEncoder` is already taken though. 
I could go for `BertForGenerationEncoder` and `BertForGenerationDecoder` and `BertForGenerationConfig` - No need for `BertForConditionalGeneration` as the `EncoderDecoderModel will be used for this", "BertForGenerationEncoder and BertForGenerationDecoder and BertForGenerationConfig 👍 \r\n\r\nI do see lysandre's point though and would be fine with you setting `token_type` matrix to 0 if it's small (which I think it is).\r\n", "summarization models seem to function and are uploaded here: \r\n\r\nhttps://huggingface.co/models?search=google%2Froberta2roberta", "**UPDATE**: PR is ready for review @sshleifer @LysandreJik @sgugger . \r\n\r\nWould be awesome if you could take a look", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6594?src=pr&el=h1) Report\n> Merging [#6594](https://codecov.io/gh/huggingface/transformers/pull/6594?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/15478c1287a4e7b52c01730ffb0718243d153600?el=desc) will **increase** coverage by `2.11%`.\n> The diff coverage is `75.76%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6594/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6594?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6594 +/- ##\n==========================================\n+ Coverage 78.37% 80.49% +2.11% \n==========================================\n Files 164 167 +3 \n Lines 31026 31314 +288 \n==========================================\n+ Hits 24318 25207 +889 \n+ Misses 6708 6107 -601 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6594?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.23% <ø> (-0.05%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `91.52% <40.00%> (-4.78%)` | :arrow_down: |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `88.78% <50.00%> (-3.22%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert\\_generation.py](https://codecov.io/gh/huggingface/transformers/pull/6594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0X2dlbmVyYXRpb24ucHk=) | `69.19% <69.19%> (ø)` | |\n| [src/transformers/tokenization\\_bert\\_generation.py](https://codecov.io/gh/huggingface/transformers/pull/6594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9nZW5lcmF0aW9uLnB5) | `94.64% <94.64%> (ø)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.33% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.61% <100.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/configuration\\_bert\\_generation.py](https://codecov.io/gh/huggingface/transformers/pull/6594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JlcnRfZ2VuZXJhdGlvbi5weQ==) | `100.00% 
<100.00%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <100.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <100.00%> (-0.28%)` | :arrow_down: |\n| ... and [22 more](https://codecov.io/gh/huggingface/transformers/pull/6594/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6594?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6594?src=pr&el=footer). Last update [15478c1...aa953cb](https://codecov.io/gh/huggingface/transformers/pull/6594?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "> Looks great to me! I mostly have annoying nits about the docs, cause I'm an annoying person.\r\n\r\nHaha, no you are 100% right - sorry for being so sloppy with the docs! I should have learnt it by now ....", "@sshleifer @sgugger - thanks a lot for your suggestions. I went for the name `BertGenerationEncoder` and `BertGenerationDecoder` now. I think it's the best trade-off between short and concise name that is not confusing.", "Have the \"share\" models been implemented? In the paper, in many tasks they achieve the best results.", "Yes you can find then under google/roberta2roberta", "Thank you. How to tie weights in the code for training own model?", "`tie_encoder_decoder=True` -> The code in this model card should show you how to do it :-) https://huggingface.co/patrickvonplaten/roberta2roberta-share-cnn_dailymail-fp16" ]
1,597
1,600
1,599
MEMBER
null
This PR adds the models from the following paper: https://arxiv.org/pdf/1907.12461.pdf The paper does a great job at showing how pretrained BERT & RoBERTa models can be leveraged for Seq2Seq tasks and yields good results on many of them. It fits very well with the current implementation of the EncoderDecoder framework. This PR adds code to port all pretrained encoder-decoder models listed here: https://tfhub.dev/s?module-type=text-generation&subtype=module,placeholder; the ported checkpoints can be found here: https://huggingface.co/models?search=google%2Froberta and here: https://huggingface.co/models?search=google%2Fbert2 An example of how a model can be used is here: https://huggingface.co/google/roberta2roberta_L-24_bbc Big thanks to @shashiongithub for providing me with the tokenizer files and giving valuable insights on setting the correct generation parameters!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6594/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6594/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6594", "html_url": "https://github.com/huggingface/transformers/pull/6594", "diff_url": "https://github.com/huggingface/transformers/pull/6594.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6594.patch", "merged_at": 1599748851000 }
https://api.github.com/repos/huggingface/transformers/issues/6593
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6593/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6593/comments
https://api.github.com/repos/huggingface/transformers/issues/6593/events
https://github.com/huggingface/transformers/pull/6593
681,798,623
MDExOlB1bGxSZXF1ZXN0NDcwMTMwMDkz
6,593
[Tests common] Fix flaky test
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6593?src=pr&el=h1) Report\n> Merging [#6593](https://codecov.io/gh/huggingface/transformers/pull/6593?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ab42d74850233cff9df87701d257d9b975435f66&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6593/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6593?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6593 +/- ##\n==========================================\n+ Coverage 79.42% 79.43% +0.01% \n==========================================\n Files 156 156 \n Lines 28127 28127 \n==========================================\n+ Hits 22339 22343 +4 \n+ Misses 5788 5784 -4 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6593?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.58% <0.00%> (-7.19%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.07% <0.00%> (-0.45%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.00% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.97%)` | :arrow_up: |\n| ... 
and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6593/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6593?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6593?src=pr&el=footer). Last update [ab42d74...d243891](https://codecov.io/gh/huggingface/transformers/pull/6593?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Long term, I would rather fill nan with a sentinel value like -11.27 (my negative birthday, dont forget!) and check equality but I am also fine with the temp fix.", "Ok pinging @LysandreJik here that we should take a look next week. Will note it down as well. " ]
1,597
1,597
1,597
MEMBER
null
The test `test_model_outputs_equivalence` fails quite often at the moment because of a problem with `nan - nan`. This should solve the issue. Also added a more explicit error message for when the test fails. An example of the flaky failure: https://app.circleci.com/pipelines/github/huggingface/transformers/10798/workflows/44e689b2-f4b3-49be-88b3-a5b214eac6c5/jobs/75173
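A reviewer in this thread suggested filling NaNs with a sentinel value before comparing; a minimal sketch of that idea (the sentinel is taken from the comment, everything else is illustrative):

```python
# Sketch: nan-safe tensor equivalence check, since nan != nan makes a direct
# comparison flaky.
import torch

def assert_close_ignoring_nan(a: torch.Tensor, b: torch.Tensor, sentinel: float = -11.27) -> None:
    a = torch.where(torch.isnan(a), torch.full_like(a, sentinel), a)
    b = torch.where(torch.isnan(b), torch.full_like(b, sentinel), b)
    assert torch.allclose(a, b), "Tensors differ outside of NaN positions"
```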
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6593/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6593/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6593", "html_url": "https://github.com/huggingface/transformers/pull/6593", "diff_url": "https://github.com/huggingface/transformers/pull/6593.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6593.patch", "merged_at": 1597846732000 }
https://api.github.com/repos/huggingface/transformers/issues/6592
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6592/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6592/comments
https://api.github.com/repos/huggingface/transformers/issues/6592/events
https://github.com/huggingface/transformers/issues/6592
681,749,199
MDU6SXNzdWU2ODE3NDkxOTk=
6,592
Unable to save and load RoBERTa model using TensorFlow
{ "login": "kiruthiga-A", "id": 35336984, "node_id": "MDQ6VXNlcjM1MzM2OTg0", "avatar_url": "https://avatars.githubusercontent.com/u/35336984?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kiruthiga-A", "html_url": "https://github.com/kiruthiga-A", "followers_url": "https://api.github.com/users/kiruthiga-A/followers", "following_url": "https://api.github.com/users/kiruthiga-A/following{/other_user}", "gists_url": "https://api.github.com/users/kiruthiga-A/gists{/gist_id}", "starred_url": "https://api.github.com/users/kiruthiga-A/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kiruthiga-A/subscriptions", "organizations_url": "https://api.github.com/users/kiruthiga-A/orgs", "repos_url": "https://api.github.com/users/kiruthiga-A/repos", "events_url": "https://api.github.com/users/kiruthiga-A/events{/privacy}", "received_events_url": "https://api.github.com/users/kiruthiga-A/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @kiruthiga-A,\r\n\r\nCould you post a complete code example so that we can reproduce the error? If the complete code is too long, a google colab would be super useful.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I have this same exact issue importing a model from HuggingFace" ]
1,597
1,620
1,607
NONE
null
I have trained my model with Roberta-base and tested it; it works. But when I try to save the model I get the following error: /usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/save.py in list_dependencies(self, obj) 126 obj._object_identifier, # pylint: disable=protected-access 127 name, --> 128 extra_dependencies.keys())) 129 yield base.TrackableReference(name, extra_dependencies[name]) 130 else: ValueError: Error when exporting object <tensorflow.python.keras.layers.core.Activation object at 0x7fef7d287128> of with identifier=_tf_keras_layer. The object has an attribute named regularization_losses, which is reserved. List of all reserved attributes: dict_keys(['regularization_losses', 'variables', 'trainable_variables', 'keras_api']) Unable to solve it. Is it some obvious mistake I am doing? # Code: path = '/path/model_1' tf.keras.models.save_model(model_roberta_base, path, overwrite=True, include_optimizer=False, save_format='tf') new_model = tf.keras.models.load_model(path) result = predct_func(new_model, "Technology driven by Medical imaging and Artificial Intellignence") plot_result(result)
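A workaround often suggested for this class of error (an assumption here, not a confirmed fix for this exact report) is to serialize the transformer with its own `save_pretrained`/`from_pretrained` methods rather than `tf.keras.models.save_model`:

```python
# Sketch of the save_pretrained round-trip; paths and model name are illustrative.
from transformers import TFRobertaModel

model = TFRobertaModel.from_pretrained("roberta-base")
model.save_pretrained("/path/model_1")              # writes config.json + tf_model.h5
reloaded = TFRobertaModel.from_pretrained("/path/model_1")
```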
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6592/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6592/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6591
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6591/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6591/comments
https://api.github.com/repos/huggingface/transformers/issues/6591/events
https://github.com/huggingface/transformers/pull/6591
681,551,291
MDExOlB1bGxSZXF1ZXN0NDY5OTI0NjQ3
6,591
tf generation utils: remove unused kwargs
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 2139563322, "node_id": "MDU6TGFiZWwyMTM5NTYzMzIy", "url": "https://api.github.com/repos/huggingface/transformers/labels/cleanup", "name": "cleanup", "color": "e7fc49", "default": false, "description": "" } ]
closed
false
null
[]
[]
1,597
1,597
1,597
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6591/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6591/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6591", "html_url": "https://github.com/huggingface/transformers/pull/6591", "diff_url": "https://github.com/huggingface/transformers/pull/6591.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6591.patch", "merged_at": 1597844266000 }
https://api.github.com/repos/huggingface/transformers/issues/6590
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6590/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6590/comments
https://api.github.com/repos/huggingface/transformers/issues/6590/events
https://github.com/huggingface/transformers/issues/6590
681,549,238
MDU6SXNzdWU2ODE1NDkyMzg=
6,590
Delete Unused TFModelTesterMixin attributes
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1108649053, "node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted", "name": "Help wanted", "color": "008672", "default": false, "description": "Extra attention is needed, help appreciated" }, { "id": 2139563322, "node_id": "MDU6TGFiZWwyMTM5NTYzMzIy", "url": "https://api.github.com/repos/huggingface/transformers/labels/cleanup", "name": "cleanup", "color": "e7fc49", "default": false, "description": "" } ]
closed
false
null
[]
[]
1,597
1,599
1,599
CONTRIBUTOR
null
Afaict, neither `test_torchscript` nor `test_pruning` is used. https://github.com/huggingface/transformers/blob/master/tests/test_modeling_tf_common.py#L76
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6590/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6590/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6589
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6589/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6589/comments
https://api.github.com/repos/huggingface/transformers/issues/6589/events
https://github.com/huggingface/transformers/issues/6589
681,530,383
MDU6SXNzdWU2ODE1MzAzODM=
6,589
[seq2seq] finetune.sh OOMs in fp16 w torch 1.6 on colab
{ "login": "amanpreet692", "id": 42522643, "node_id": "MDQ6VXNlcjQyNTIyNjQz", "avatar_url": "https://avatars.githubusercontent.com/u/42522643?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amanpreet692", "html_url": "https://github.com/amanpreet692", "followers_url": "https://api.github.com/users/amanpreet692/followers", "following_url": "https://api.github.com/users/amanpreet692/following{/other_user}", "gists_url": "https://api.github.com/users/amanpreet692/gists{/gist_id}", "starred_url": "https://api.github.com/users/amanpreet692/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amanpreet692/subscriptions", "organizations_url": "https://api.github.com/users/amanpreet692/orgs", "repos_url": "https://api.github.com/users/amanpreet692/repos", "events_url": "https://api.github.com/users/amanpreet692/events{/privacy}", "received_events_url": "https://api.github.com/users/amanpreet692/received_events", "type": "User", "site_admin": false }
[ { "id": 1108649053, "node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted", "name": "Help wanted", "color": "008672", "default": false, "description": "Extra attention is needed, help appreciated" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "try passing `--fp16 --fp_16_opt_level=O1` \r\nthat is a relevant default that has changed. I have also experienced some torch 1.6 issues, so would love to know if that helps.\r\n\r\n\r\nSemi-relatedly, a good practice is to run \r\n```\r\n!pip freeze | grep transformers\r\n!pip freeze | grep torch\r\n```\r\nat the top of colab so that when you go back you can know what version you were on.", "Thanks for the quick reply! I tried this and it didn't work though :(\r\nRemoving the fp16 parameter for now and fine-tuning.\r\nWill keep the colab advice in mind :)", "+1 on this. Using `fp16` with level `O1` or `O2` both causes OOM even for batch size 1. Without `fp16` fine-tuning works.\r\n\r\nTorch 1.6.0, transformers 3.0.2, Linux, V100 GPU.", "This is a torch 1.6 issue.\r\nI haven't gotten anything working well with torch 1.6 + fp16.\r\ntorch 1.5.1 with apex installed works well for me.\r\n", "I tried running fp16 training with `amp_backend=apex` and `amp_backend=native` (passing them as additional args) and the latter does much better in terms power consumption, but memory consumption is same for both (wandb GPU graphs). However, both of them OOM during the validation step. May have something to do with beam search since my validation batch size is 1.\r\n\r\n<img width=\"1564\" alt=\"Screen Shot 2020-08-31 at 12 47 28\" src=\"https://user-images.githubusercontent.com/1833708/91760533-60c7ae00-eb88-11ea-888b-550e3bb28967.png\">\r\n", "Can you try torch 1.5.1 ?\r\n", "Succeeds with 1.5.1, and power and temperature are in-line with native.", "However, the process failing during generation for 1.6.0 suggests there's some optimization missing during the generation steps which causes OOM.", "Another thing to note which might be related: Validation (400 samples) takes 5x time for 1 epoch of training (2400 samples). Even if accounting for beam size (4x), it is much slower.", "Interesting! I would definitely be open to a PR here if you have a fix in mind!", "Thanks! I have a couple ideas and will try them out and create a PR if any of them works.", "I think the problem is that the `_generative_step` method calls `_step` in it, causing 2x forward steps within each validation step. Also, `model.generate` steps are inherently slower than an eval forward pass, even with `num_beams=1`, about 30-60x slower. But this is a different problem than the OOM issue on 1.6.0. Maybe should split this up into a different issue?", "The problem is with `model.generate` that causes OOM on PyTorch 1.6. I switched out to using a custom `validation_step` that only uses `_step` and does not make a call to `model.generate`; it succeeds and is fast. The drawback is that I cannot use beam search for the validation step and keep `do_predict` set to `False` to ensure the test step does not execute. All of which are acceptable limitations to me for faster val, val not running into OOM and being able to use native fp16 with PyTorch 1.6.0.\r\n\r\nI'm happy to create a PR for it if it makes sense to check it in.", "That PR would be interesting. More interesting would be figuring out why generate OOMs in these conditions.", "Definitely the question for why generate OOMs is interesting but one I haven't found an answer for yet. 
I suggested a workaround in #7004 using the fix I described earlier.", "OK, I'm gunna try to fix the underlying issue today/tomorrow and if I fail, we'll move to your PR.\r\nThanks!", "Does anyone have a snippet that replicates the OOM outside of colab?\r\nI have no trouble running `examples/seq2seq/test_bash_script.py` on self hosted hardware in torch 1.6.\r\n", "The issue wasn't on Colab but on AWS.", "What was your command/hardware?", "Command: `python script.py ...` with a bunch of args (I have a custom wrapper that inherits `SummarizationModule` for initialization and adds extra args. I did not modify train / eval / test in that so should be identical to running `python fine-tune.py` from `finetune.sh`).\r\nGPU: V100-SXM2-16GB.", "I can replicate on v100 with cuda 10.1, torch 1.6, python 3.7.\r\nThe problem is that during the first call to `generate` (during the validation sanity check) the untrained model generates `config.max_length` tokens, causing OOM.\r\n\r\nEasiest fix is adding `--num_sanity_val_steps=0` to your command. LMK if that works.\r\nThe linked PR above allows the user to limit how many tokens are generating during validation, which may be independently helpful.\r\n\r\n\r\n", "Hmm, that makes sense. I'll also say that in the screenshots I had attached earlier it occurred at the end of the first epoch during validation, so setting that new flag should help with that. It is a tricky choice between setting a `max_length` for generate steps that is different from the model's expected output. I do prefer using a forward pass' output (my PR #7004) as a substitute for the runtime output when it is with the correct `max_length` instead of a shorter output that fits within the memory at that time.\r\n\r\nHowever, this still does not explain the avg gen time being 30-60x time per batch (with equal batch sizes for training and validation).\r\n\r\nLastly, the same model does generate without producing an OOM at run-time on similar hardware with the model producing upto `max_length`, which continues to baffle me.", "I don't have any slow down after the validation sanity check in my replication. Maybe I haven't found your bug.", "I don't understand \r\n> Lastly, the same model does generate without producing an OOM at run-time on similar hardware with the model producing upto max_length, which continues to baffle me.\r\n\r\nDo you have a snippet that does not involve finetune.py (just calls `generate`) that OOMs/is way slower?", "> I don't have any slow down after the validation sanity check in my replication. Maybe I haven't found your bug.\r\n\r\nOr maybe it got resolved between when I tested it and this version. No worries.\r\n\r\n> Do you have a snippet that does not involve finetune.py (just calls generate) that OOMs/is way slower?\r\n\r\nThis might not matter anymore if the previous is fixed during training since this is specifically at runtime. Regardless, here's an example I have that is much slower.\r\n```python\r\n%%timeit\r\nwith torch.no_grad():\r\n generated_ids = model.generate(tokenized_input[\"input_ids\"].to(\"cuda\"), skip_special_tokens=True, clean_up_tokenization_spaces=False, \r\n num_beams=3, top_p=0.9, repetition_penalty=10, decoder_start_token_id=model.config.decoder_start_token_id, max_length=model.config.max_length)\r\n```\r\nwhich produces:\r\n`10.4 s ± 7.89 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)`", "The sequence produced is of length 311. 
While the output sequence length is long (max possible is 768), 10 seconds is still quite a lot.", "can you send a full working example that I can copy paste and try in different torch versions?", "Sure! I'm using a finetuned model and a custom dataset so changed the below to `bart-large` and removed the lines where a dataset is queried. Everything else is the same.\r\n\r\n```python\r\nfrom transformers import BartTokenizer, BartForConditionalGeneration, BartConfig\r\ntokenizer = BartTokenizer.from_pretrained(\"facebook/bart-large\")\r\nmodel = BartForConditionalGeneration.from_pretrained(\"facebook/bart-large\")\r\nmodel = model.to(\"cuda\")\r\nmodel = model.eval()\r\ntokenized_input = tokenizer(..., return_tensors=\"pt\", max_length=model.config.max_position_embeddings)\r\nwith torch.no_grad():\r\n generated_ids = model.generate(tokenized_input[\"input_ids\"].to(\"cuda\"), skip_special_tokens=True, clean_up_tokenization_spaces=False, \r\n num_beams=3, top_p=0.9, repetition_penalty=10, decoder_start_token_id=model.config.decoder_start_token_id, max_length=model.config.max_length)\r\n```\r\n\r\nI'm running this in a notebook so I can time profile the generate step.", "I've spent a quite some time today focused on trying various combinations of `generate`. The increased delay arises from a `num_beams` count that is large, which leads to the model producing longer outputs, thus compounding the generate time (`num_beams` * `max_length`).\r\n\r\nIn conclusion, it doesn't appear to be a bug but a property of generate being more memory intensive.", "@sshleifer , I have the same issue, and I am using the latest version that includes the PR you provided. I set eval_max_gen_length to 30 and still getting OOM during the sanity check. Do I also have to set num_sanity_val_steps=0 ?" ]
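To make the mitigations discussed in the thread concrete, here is a minimal sketch of capping generation during validation (the model name, lengths, and beam count are illustrative; the PR referenced above exposes the length cap as `--eval_max_gen_length`):

```python
# Sketch: bounding beam count and generated length so validation-time
# generate() does not OOM by producing config.max_length tokens from an
# untrained model.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large").eval()

batch = tokenizer(["a validation document ..."], return_tensors="pt", truncation=True)
with torch.no_grad():
    out = model.generate(batch["input_ids"], num_beams=1, max_length=60)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```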
1,597
1,600
1,599
CONTRIBUTOR
null
On trying to fine-tune either T5 or BART models for summarization, I was encountering OOM repeatedly in the latest code, whereas it used to work fine for me earlier, at least on Google Colab. On checking the startup scripts and latest commits, I saw that optimizations have been added for native PyTorch fp16 support recently. On removing the fp16 parameter from the script, it started working as expected. Please check whether this is a real issue or just a matter of a dangling parameter that needs to be removed. Thanks @sshleifer @patil-suraj
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6589/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6589/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6588
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6588/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6588/comments
https://api.github.com/repos/huggingface/transformers/issues/6588/events
https://github.com/huggingface/transformers/issues/6588
681,508,638
MDU6SXNzdWU2ODE1MDg2Mzg=
6,588
all_hidden_states indentation bug in modeling_bert.py
{ "login": "YuanEric88", "id": 32417149, "node_id": "MDQ6VXNlcjMyNDE3MTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/32417149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YuanEric88", "html_url": "https://github.com/YuanEric88", "followers_url": "https://api.github.com/users/YuanEric88/followers", "following_url": "https://api.github.com/users/YuanEric88/following{/other_user}", "gists_url": "https://api.github.com/users/YuanEric88/gists{/gist_id}", "starred_url": "https://api.github.com/users/YuanEric88/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YuanEric88/subscriptions", "organizations_url": "https://api.github.com/users/YuanEric88/orgs", "repos_url": "https://api.github.com/users/YuanEric88/repos", "events_url": "https://api.github.com/users/YuanEric88/events{/privacy}", "received_events_url": "https://api.github.com/users/YuanEric88/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "My bad, I ignore some code in the file. " ]
1,597
1,597
1,597
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> @lavanyashukla ## Information Model I am using (Bert, XLNet ...): Bert The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L490 ## Expected behavior https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L490 There is a small bug here; I suppose lines 490 and 491 should be inside the loop chunk. <!-- A clear and concise description of what you would expect to happen. -->
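As the author's follow-up comment in this record notes, this turned out not to be a bug. A runnable paraphrase of the encoder loop (an assumption of the structure, not a verbatim copy of modeling_bert.py) shows why the two lines sit where they do:

```python
# Sketch: hidden states are collected at the *top* of each loop iteration, and
# the final layer's output is appended once after the loop, so every state
# (embeddings + one per layer) ends up in the tuple exactly once.
import torch

layers = [torch.nn.Linear(4, 4) for _ in range(3)]  # stand-ins for BertLayer
hidden_states = torch.zeros(1, 4)
all_hidden_states = ()
for layer in layers:
    all_hidden_states = all_hidden_states + (hidden_states,)
    hidden_states = layer(hidden_states)
all_hidden_states = all_hidden_states + (hidden_states,)  # last layer's output
assert len(all_hidden_states) == len(layers) + 1
```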
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6588/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6588/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6587
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6587/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6587/comments
https://api.github.com/repos/huggingface/transformers/issues/6587/events
https://github.com/huggingface/transformers/pull/6587
681,466,462
MDExOlB1bGxSZXF1ZXN0NDY5ODU3MjI1
6,587
Fix bart base test
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "spurious" ]
1,597
1,597
1,597
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6587/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6587/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6587", "html_url": "https://github.com/huggingface/transformers/pull/6587", "diff_url": "https://github.com/huggingface/transformers/pull/6587.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6587.patch", "merged_at": 1597800490000 }
https://api.github.com/repos/huggingface/transformers/issues/6586
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6586/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6586/comments
https://api.github.com/repos/huggingface/transformers/issues/6586/events
https://github.com/huggingface/transformers/pull/6586
681,462,837
MDExOlB1bGxSZXF1ZXN0NDY5ODUzODUz
6,586
wip: Code to add lang tags to marian model cards
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Let me know if/when you need me to review this!", "Do you think it's a good investment of time to clean up automated marian model cards/conversion?\r\n\r\nFor first 1000 models from `OPUS-MT-Train` repo, the process was basically \r\ndownload and convert everything, if there is a '+' in the model name try to find a matching group name.\r\nAutomated model card is an f string + Jorg's README.\r\n\r\nFor the next 500 models, the process was:\r\ndecide whether we don't want the model (\"the blacklisting process\"):\r\n- if it's \"short pair\" (as in en-es/en-zh) identical to one we have and the score (which we get from https://github.com/Helsinki-NLP/Tatoeba-Challenge/blob/master/models/released-models.txt) is worse than the BLEU of the previous model (which we extract disgustingly from the README.md).\r\n- if either side is \"jpx\" (see slack: \"you can ignore jpx\"\r\n\r\nAssuming we DO want the model:\r\n- identical state dict conversion to the previous iteration\r\n- Automated model card = yaml front matter with language tags (incl. group expansion for multilingual, look at all the tags on this guy: https://huggingface.co/Helsinki-NLP/opus-mt-roa-en) + f string + Jorg's README.\r\n- also write `metadata.json` to S3 so that, in the future, blacklisting need not involved extracting floats from markdown. \r\n\r\n\r\n(The `api-inference` issue is a disk space one that @mfuntowicz hopes to resolve tomorrow, curl-requests come back 👍 )\r\n\r\nThis process works well but is obviously not as simple as it could/should be. The state dict automation that other projects have is simple and done, but the automated model carding needs and blacklisting needs to be at least partially rewritten.\r\n\r\nI also suspect that if we do this again in a third repo we will need to add more model card/blacklisting logic.\r\n\r\n### Proposed next steps\r\n- add metadata.json to every opus-mt model in the hub\r\n- strip the opus-mt model prefix from every model in the hub. What's the point of it?\r\n- some one off deletions of old capital letter group names as discussed in #fixme \r\n- get this code clean enough that it runs on future models for the current `Tatoeba-Challenge` repo.\r\n\r\nDid any of that make sense? 
What do you think?", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6586?src=pr&el=h1) Report\n> Merging [#6586](https://codecov.io/gh/huggingface/transformers/pull/6586?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/28cf873036d078b47fb9dd38ac3421a7c874da44?el=desc) will **increase** coverage by `1.02%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6586/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6586?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6586 +/- ##\n==========================================\n+ Coverage 76.58% 77.61% +1.02% \n==========================================\n Files 181 181 \n Lines 34828 34828 \n==========================================\n+ Hits 26674 27032 +358 \n+ Misses 8154 7796 -358 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6586?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6586/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.46% <0.00%> (-72.60%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6586/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6586/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6586/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `81.81% <0.00%> (-4.55%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6586/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.93% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6586/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.13% <0.00%> (-0.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6586/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.01% <0.00%> (+0.32%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6586/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.66% <0.00%> (+0.66%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6586/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+2.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6586/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.64% <0.00%> (+3.24%)` | :arrow_up: |\n| ... 
and [7 more](https://codecov.io/gh/huggingface/transformers/pull/6586/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6586?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6586?src=pr&el=footer). Last update [28cf873...ab93341](https://codecov.io/gh/huggingface/transformers/pull/6586?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
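A toy sketch of the front-matter step described in this thread (the tag expansion and field names are illustrative assumptions, not the actual conversion script):

```python
# Sketch: emit YAML front matter with language tags for an opus-mt model card.
def front_matter(src_langs, tgt_langs):
    tags = sorted(set(src_langs) | set(tgt_langs))
    lines = ["---", "language:"] + [f"- {t}" for t in tags]
    lines += ["tags:", "- translation", "---"]
    return "\n".join(lines)

print(front_matter(["es", "fr", "it"], ["en"]))
```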
1,597
1,600
1,600
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6586/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6586/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6586", "html_url": "https://github.com/huggingface/transformers/pull/6586", "diff_url": "https://github.com/huggingface/transformers/pull/6586.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6586.patch", "merged_at": 1600899066000 }
https://api.github.com/repos/huggingface/transformers/issues/6585
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6585/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6585/comments
https://api.github.com/repos/huggingface/transformers/issues/6585/events
https://github.com/huggingface/transformers/issues/6585
681,399,065
MDU6SXNzdWU2ODEzOTkwNjU=
6,585
Failing bart-base slow test
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[]
1,597
1,597
1,597
CONTRIBUTOR
null
``` =================================== FAILURES =================================== ____________ BartModelIntegrationTests.test_bart_base_mask_filling _____________ [gw0] linux -- Python 3.7.6 /home/hf/actions-runner_transformers/_work/transformers/transformers/.env/bin/python self = <tests.test_modeling_bart.BartModelIntegrationTests testMethod=test_bart_base_mask_filling> @slow def test_bart_base_mask_filling(self): pbase = pipeline(task="fill-mask", model="facebook/bart-base") src_text = [" I went to the <mask>."] results = [x["token_str"] for x in pbase(src_text)] expected_results = ["Ġbathroom", "Ġrestroom", "Ġhospital", "Ġkitchen", "Ġcar"] > self.assertListEqual(results, expected_results) E AssertionError: Lists differ: ['Ġlibrary', 'Ġhospital', 'Ġbathroom', 'Ġmovies', 'Ġpolice'] != ['Ġbathroom', 'Ġrestroom', 'Ġhospital', 'Ġkitchen', 'Ġcar'] E E First differing element 0: ``` https://github.com/huggingface/transformers/runs/996031882?check_suite_focus=true
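For quick local triage, the failing expectation can be reproduced outside the test suite with the same pipeline call (a sketch; the top-k token strings will track whatever the current checkpoint returns):

```python
# Sketch: re-run the mask-filling query from the failing test and inspect the
# top token strings directly.
from transformers import pipeline

pbase = pipeline(task="fill-mask", model="facebook/bart-base")
results = [x["token_str"] for x in pbase([" I went to the <mask>."])]
print(results)  # compare against the hard-coded expected list in the test
```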
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6585/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6585/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6584
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6584/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6584/comments
https://api.github.com/repos/huggingface/transformers/issues/6584/events
https://github.com/huggingface/transformers/issues/6584
681,398,847
MDU6SXNzdWU2ODEzOTg4NDc=
6,584
Failing ONNX Slow Test
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[ { "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false } ]
[ "@sshleifer Is it still a failing test? I can have a look", "Still failing! See https://github.com/huggingface/transformers/runs/1024137280?check_suite_focus=true", "On it 💪 ", "attempt at: #6716" ]
1,597
1,598
1,598
CONTRIBUTOR
null
``` ___________________ OnnxExportTestCase.test_quantize_pytorch ___________________ [gw0] linux -- Python 3.7.6 /home/hf/actions-runner_transformers/_work/transformers/transformers/.env/bin/python self = <tests.test_onnx.OnnxExportTestCase testMethod=test_quantize_pytorch> @require_torch @slow def test_quantize_pytorch(self): for model in OnnxExportTestCase.MODEL_TO_TEST: path = self._test_export(model, "pt", 12) quantized_path = quantize(path) # Ensure the actual quantized model is not bigger than the original one > if quantized_path.stat().st_size >= Path(path).stat().st_size: E AttributeError: 'NoneType' object has no attribute 'stat' tests/test_onnx.py:76: AttributeError =============================== warnings summary =============================== ``` https://github.com/huggingface/transformers/runs/996031882?check_suite_focus=true
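One possible hardening of the failing assertion (a sketch of one option, not the merged fix from the linked PR attempt): fail loudly when `quantize` returns `None` instead of dereferencing it:

```python
# Sketch: guard against quantize() returning None before comparing file sizes.
from pathlib import Path
from transformers.convert_graph_to_onnx import quantize

def check_quantized_size(path: Path) -> None:
    quantized_path = quantize(path)
    assert quantized_path is not None, f"quantize() returned None for {path}"
    assert quantized_path.stat().st_size < path.stat().st_size, "quantized model is not smaller"
```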
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6584/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6584/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6583
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6583/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6583/comments
https://api.github.com/repos/huggingface/transformers/issues/6583/events
https://github.com/huggingface/transformers/pull/6583
681,340,749
MDExOlB1bGxSZXF1ZXN0NDY5NzQ1NzUw
6,583
add intro to nlp lib & dataset links to custom datasets tutorial
{ "login": "joeddav", "id": 9353833, "node_id": "MDQ6VXNlcjkzNTM4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeddav", "html_url": "https://github.com/joeddav", "followers_url": "https://api.github.com/users/joeddav/followers", "following_url": "https://api.github.com/users/joeddav/following{/other_user}", "gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}", "starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joeddav/subscriptions", "organizations_url": "https://api.github.com/users/joeddav/orgs", "repos_url": "https://api.github.com/users/joeddav/repos", "events_url": "https://api.github.com/users/joeddav/events{/privacy}", "received_events_url": "https://api.github.com/users/joeddav/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,597
1,598
1,597
CONTRIBUTOR
null
Being a bit more explicit with the recommendation to use NLP where possible. Adds a link to the hub for each dataset used as well as a brief introduction to the NLP lib at the bottom.
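For context, the pattern the tutorial now points readers to is roughly the following (the dataset name is illustrative):

```python
# Sketch: loading a hub dataset with the nlp library instead of hand-rolled
# file parsing.
import nlp

train = nlp.load_dataset("imdb", split="train")
print(train[0]["text"][:80], train[0]["label"])
```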
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6583/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6583/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6583", "html_url": "https://github.com/huggingface/transformers/pull/6583", "diff_url": "https://github.com/huggingface/transformers/pull/6583.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6583.patch", "merged_at": 1597933972000 }
https://api.github.com/repos/huggingface/transformers/issues/6582
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6582/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6582/comments
https://api.github.com/repos/huggingface/transformers/issues/6582/events
https://github.com/huggingface/transformers/issues/6582
681,308,027
MDU6SXNzdWU2ODEzMDgwMjc=
6,582
BatchEncoding interacts poorly with apex.amp
{ "login": "Craigacp", "id": 729696, "node_id": "MDQ6VXNlcjcyOTY5Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/729696?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Craigacp", "html_url": "https://github.com/Craigacp", "followers_url": "https://api.github.com/users/Craigacp/followers", "following_url": "https://api.github.com/users/Craigacp/following{/other_user}", "gists_url": "https://api.github.com/users/Craigacp/gists{/gist_id}", "starred_url": "https://api.github.com/users/Craigacp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Craigacp/subscriptions", "organizations_url": "https://api.github.com/users/Craigacp/orgs", "repos_url": "https://api.github.com/users/Craigacp/repos", "events_url": "https://api.github.com/users/Craigacp/events{/privacy}", "received_events_url": "https://api.github.com/users/Craigacp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Any interest in fixing this?", "Hello! Indeed, there doesn't seem to be any type cast. Do you want to open a PR with a fix?", "Do you think the fix we have is appropriate? I'm worried it'll break something elsewhere which is expecting the to method to do casting.", "I think here we could do something better than what is currently implemented. We could do it like it's done in the PyTorch implementation, as it takes all `args` and `kwargs`. Something like this:\r\n\r\n```py\r\n def to(self, *args, **kwargs) -> \"BatchEncoding\":\r\n \"\"\"\r\n *Appropriate docstring here*\r\n\r\n Args:\r\n *Appropriate Args here*\r\n\r\n Returns:\r\n :class:`~transformers.BatchEncoding`:\r\n The same instance of :class:`~transformers.BatchEncoding` after modification.\r\n \"\"\"\r\n self.data = {k: v.to(*args, **kwargs) for k, v in self.data.items()}\r\n return self\r\n\r\n```\r\nWhat do you think?", "Won't that pass the casts down? It looks like it would directly pass everything through, but we wouldn't usually want to cast the `LongTensor` that lives in a `BatchEncoding`.", "Ah, then I misunderstood. Let me try to understand the issue better to see how to work it out: as Apex casts from `LongTensor` to `HalfTensor` when using `BatchEncoding` which raises this issue, doesn't the issue happen if the inputs are `torch.Tensor`s instead of a dict/`BatchEncoding`? These inputs would be converted to `HalfTensor`s as well, resulting in the error, wouldn't it?", "Apex.amp blindly calls a `to()` method on anything that gets passed into a module if it's a type it doesn't know about, passing it arguments that cast a tensor to `HalfTensor`. `BatchEncoding` exposes a `to()` method which is designed for moving things between devices, and not for casting, yet it passes the casts down to the tensors despite the docs saying it's only for moving between devices. We don't want to cast the tensors inside `BatchEncoding` to `HalfTensor` as that truncates and rounds the vocabulary to 65k and every 16th word in the high end, even if we cast the indices back to `LongTensor` at the point they are looked up in the embedding. This is a mostly silent failure as if you blindly insert the cast back in that the embedding error suggests you lose a lot of vocabulary items.\r\n\r\nThe workaround I posted makes `BatchEncoding` ignore any casts that are passed into it.\r\n\r\nHonestly the issue is that pytorch overloaded the `to` method with two completely different functions, casting and device transfer, and python doesn't have a type system that allows you to select an appropriate method overloading based on the inbound method arguments.", "Thanks for taking the time to write such a detailed explanation. I think your proposal makes complete sense. The way the `to` method is implemented in `BatchEncoding` is only intended for use with devices, not casting, so it won't break the current implementation.\r\n\r\nOn top of what you have proposed, I think logging a warning in the case that it's not a device would be appropriate (if it doesn't get caught in the `if` statement). We would want to warn users that their cast did not go through if they attempt doing something like this, rather than silently doing nothing.", "Sure. I can log a warning in there and make up a PR." ]
1,597
1,606
1,606
CONTRIBUTOR
null
## Environment info - `transformers` version: 3.0.0 - Platform: Linux - Python version: 3.6 - PyTorch version (GPU?): 1.6.0 GPU ### Who can help tokenizers: @mfuntowicz ## Information We are building a multi-task transformer on top of the transformers API, using Nvidia's Apex to get the best speed out of the GPUs. When using Apex with `O2` optimisations (which are necessary to get speedups on our model), it inserts casts around the inputs to `forward`, which in our case is a `BatchEncoding`. This cast blindly calls `BatchEncoding.to` with a `dtype` argument, which in turn is passed through to each value's `to` method because there is only a type hint `str` on the `BatchEncoding.to` input argument, not an `isinstance` check. This causes our tensor of BPE ids to be cast from a `LongTensor` into a `HalfTensor`, and then the embedding layer raises an error because it's not expecting floating point values (also, we rounded away most of our vocabulary terms). ## To reproduce Run amp.initialize with the `O2` opt level on a model whose forward method accepts BatchEncoding as the input. This [line](https://github.com/NVIDIA/apex/blob/4ef930c1c884fdca5f472ab2ce7cb9b505d26c1a/apex/amp/_initialize.py#L46) in apex calls the `to` method blindly on all the input types, which in this case calls `BatchEncoding.to`, which doesn't support the full scope of a PyTorch `to` because it's designed for moving things between devices and not for casting them. We're currently guarding it this way: ```python def patched_to(self, device: Union[str,torch.device]): """Send all values to device by calling v.to(device)""" if isinstance(device, str) or isinstance(device, torch.device): self.data = {k: v.to(device=device) for k, v in self.data.items()} return self BatchEncoding.to = patched_to ``` ## Expected behavior The `BatchEncoding.to` method should not pass the cast through (as the documentation indicates). Though it's not clear if this is because the `to` method is underspecified, or because Apex is being quite aggressive (and a little silly) in its blind casting of everything. Either way, if the `BatchEncoding.to` method isn't supposed to be able to cast things (and it doesn't look like it) then it probably shouldn't pass casts down to the tensors it contains.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6582/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6582/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6581
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6581/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6581/comments
https://api.github.com/repos/huggingface/transformers/issues/6581/events
https://github.com/huggingface/transformers/issues/6581
681,272,944
MDU6SXNzdWU2ODEyNzI5NDQ=
6,581
Error using pipeline with official docker image (transformers 3.0.2)
{ "login": "ZeJ0hn", "id": 38920448, "node_id": "MDQ6VXNlcjM4OTIwNDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/38920448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZeJ0hn", "html_url": "https://github.com/ZeJ0hn", "followers_url": "https://api.github.com/users/ZeJ0hn/followers", "following_url": "https://api.github.com/users/ZeJ0hn/following{/other_user}", "gists_url": "https://api.github.com/users/ZeJ0hn/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZeJ0hn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZeJ0hn/subscriptions", "organizations_url": "https://api.github.com/users/ZeJ0hn/orgs", "repos_url": "https://api.github.com/users/ZeJ0hn/repos", "events_url": "https://api.github.com/users/ZeJ0hn/events{/privacy}", "received_events_url": "https://api.github.com/users/ZeJ0hn/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,597
1,603
1,603
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Linux-4.19.76-linuxkit-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.1 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ## To reproduce Steps to reproduce the behavior: 1. Start docker image: `docker run -it huggingface/transformers-pytorch-cpu /bin/bash` 2. Start python 3: `python3` 3. Import transfomers and create the pipeline: ``` >>> from transformers import pipeline >>> nlp = pipeline('question-answering') ``` 4. Define question and context: ``` >>> question='What is the GOC at Zinia-1 ?' >>> context='s. However numerous types of data tend to prove presence of oil i.e. mud gas data show C3 and traces of C4, shows on cuttings and oil was seen by LFA during MDT pump out The middle unit U4 1844-1880.5 MD is oil bearing. No gas is seen from logs at the assumed GOC depth within this unit flat spot at 1849 MD or -1830 TVDSS The lower unit U2 1880.5-1909 mMD is also oil bearing. Pressure data indicate that the 3 units are on-trend although the GWD gas while drilling result suggest barriers between the U2, U4 and U5 in addition to the ultimate seal. The oil sampled in the UM4 has higher viscosity than the oil collected. s. However numerous types of data tend to prove presence of oil i.e. mud gas data show C3 and traces of C4, shows on cuttings and oil was seen by LFA during MDT pump out The middle unit U4 1844-1880.5 MD is oil bearing. No gas is seen from logs at the assumed GOC depth within this unit flat spot at 1849 MD or -1830 TVDSS The lower unit U2 1880.5-1909 mMD is also oil bearing. Pressure data indicate that the 3 units are on-trend although the GWD gas while drilling result suggest barriers between the U2, U4 and U5 in addition to the ultimate seal. The oil sampled in the UM4 has higher viscosity than the oil collected' ``` 5. Call the prediction for question answering: ``` >>> nlp(question=question, context=context, verbose=False) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py", line 1316, in __call__ for s, e, score in zip(starts, ends, scores) File "/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py", line 1316, in <listcomp> for s, e, score in zip(starts, ends, scores) KeyError: 0 ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Get the answer like this (this result comes from 2.11.0): `{'score': 0.00025710496321816774, 'start': 300, 'end': 322, 'answer': '1849 MD or -1830 TVDSS'}`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6581/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6581/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6580
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6580/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6580/comments
https://api.github.com/repos/huggingface/transformers/issues/6580/events
https://github.com/huggingface/transformers/pull/6580
681,183,066
MDExOlB1bGxSZXF1ZXN0NDY5NjE0NDIw
6,580
[examples/text-classification] update xnli-mt url
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6580?src=pr&el=h1) Report\n> Merging [#6580](https://codecov.io/gh/huggingface/transformers/pull/6580?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7516bcf27319a2aea9bbe927f8e4d8e501e23c99&el=desc) will **decrease** coverage by `0.57%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6580/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6580?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6580 +/- ##\n==========================================\n- Coverage 80.53% 79.95% -0.58% \n==========================================\n Files 156 156 \n Lines 28130 28130 \n==========================================\n- Hits 22654 22492 -162 \n- Misses 5476 5638 +162 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6580?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `61.95% <0.00%> (-34.79%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `66.66% <0.00%> (-32.50%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.31% <0.00%> (-0.98%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.91% <0.00%> (-0.69%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.94% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: |\n| ... 
and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6580/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6580?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6580?src=pr&el=footer). Last update [7516bcf...3ececb0](https://codecov.io/gh/huggingface/transformers/pull/6580?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,597
1,597
1,597
MEMBER
null
This PR updates the XNLI-MT 1.0 file URL; the zip at the old URL is corrupted. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6580/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6580/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6580", "html_url": "https://github.com/huggingface/transformers/pull/6580", "diff_url": "https://github.com/huggingface/transformers/pull/6580.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6580.patch", "merged_at": 1597770648000 }
https://api.github.com/repos/huggingface/transformers/issues/6579
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6579/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6579/comments
https://api.github.com/repos/huggingface/transformers/issues/6579/events
https://github.com/huggingface/transformers/pull/6579
681,169,081
MDExOlB1bGxSZXF1ZXN0NDY5NjAyNjg4
6,579
[Pegasus Doc] minor typo
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6579?src=pr&el=h1) Report\n> Merging [#6579](https://codecov.io/gh/huggingface/transformers/pull/6579?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7516bcf27319a2aea9bbe927f8e4d8e501e23c99&el=desc) will **decrease** coverage by `0.52%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6579/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6579?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6579 +/- ##\n==========================================\n- Coverage 80.53% 80.00% -0.53% \n==========================================\n Files 156 156 \n Lines 28130 28130 \n==========================================\n- Hits 22654 22506 -148 \n- Misses 5476 5624 +148 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6579?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6579/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6579/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.06% <0.00%> (-29.32%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6579/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6579/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.26% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6579/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.94% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6579/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+63.80%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6579?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6579?src=pr&el=footer). Last update [7516bcf...1e290b6](https://codecov.io/gh/huggingface/transformers/pull/6579?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,597
1,597
1,597
MEMBER
null
Minor typo correction @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6579/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6579/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6579", "html_url": "https://github.com/huggingface/transformers/pull/6579", "diff_url": "https://github.com/huggingface/transformers/pull/6579.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6579.patch", "merged_at": 1597769268000 }
https://api.github.com/repos/huggingface/transformers/issues/6578
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6578/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6578/comments
https://api.github.com/repos/huggingface/transformers/issues/6578/events
https://github.com/huggingface/transformers/issues/6578
681,162,922
MDU6SXNzdWU2ODExNjI5MjI=
6,578
Latest source has a bug in pipelines for short inputs (regardless of padding)
{ "login": "jusjosgra", "id": 2428055, "node_id": "MDQ6VXNlcjI0MjgwNTU=", "avatar_url": "https://avatars.githubusercontent.com/u/2428055?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jusjosgra", "html_url": "https://github.com/jusjosgra", "followers_url": "https://api.github.com/users/jusjosgra/followers", "following_url": "https://api.github.com/users/jusjosgra/following{/other_user}", "gists_url": "https://api.github.com/users/jusjosgra/gists{/gist_id}", "starred_url": "https://api.github.com/users/jusjosgra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jusjosgra/subscriptions", "organizations_url": "https://api.github.com/users/jusjosgra/orgs", "repos_url": "https://api.github.com/users/jusjosgra/repos", "events_url": "https://api.github.com/users/jusjosgra/events{/privacy}", "received_events_url": "https://api.github.com/users/jusjosgra/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @jusjosgra, \r\n\r\nCan you post a complete code snippet to produce the error. I'm using `master` and running this code does not give me any errors:\r\n\r\n```python\r\nfrom transformers import pipeline\r\nqa = pipeline(\"question-answering\", model=\"bert-large-uncased-whole-word-masking-finetuned-squad\", tokenizer=\"bert-large-uncased-whole-word-masking-finetuned-squad\")\r\noutput = qa(question=question, context=context, max_seq_len=512, max_answer_len=50)\r\n```\r\n\r\nDo you use batches as an input in your example? ", "Using @patrickvonplaten code I am experiencing the same problem here. Is truncation & padding of sqad examples not working correctly?\r\nIn my case, I'm using batches as an input, therefore I introduce a list of questions (the same repeated n times) and n contexts for the model to search the answer in. ", "@alexvaca0 are you running on a pypi version or on the current `master` branch?", "@LysandreJik on the current master branch", "Okay, pinging @mfuntowicz to see if he has any insights.", "I'm starting to suspect this has something to do with certain tokens... Look, this is one of the texts that the pipeline tokenizes to 485 instead of 512:\r\n\r\n**For a city containing multiple hospitals such as NYC, we defined a 102 proximity metric in this study as population normalized number of hospital beds within Prior to analysis of potential predictors, we considered multiple base regression models. 110 Given the significant spatial correlation in the present case data as evidenced by the 111 Moran Index, I (176) = 0.481, p < 0.0005 [24] , we explored potential regression models 112 both with and without spatial effects. We compared four base models (no predictors): 113 1) a Poisson model with random intercept; 2) a Poisson Besag-York-Mollié (BYM) by Eq 1:\r\nwhere υ i has an intrinsic conditional autoregressive (ICAR) structure [27] . We used the 119 reparameterization of the BYM model proposed by Riebler et al. [28] , known as the 120 BYM2 model and shown in Eq 2:\r\nwhere τ γ is the overall precision hyperparameter, ϕ ∈ [0, 1] is the mixing hyperparameter 122 representing the proportional division of variance between the spatial and nonspatial 123 effects, υ * is the spatial (ICAR) effect with a scaling factor such that Var (υ * ) ≈ 1, and 124 ν * is the nonspatial random-effect with ν * ∼ N (0, 1). Penalized complexity (PC) priors 125 are applied to hyperparameters τ γ and ϕ (compared to log-gamma priors in the random 126 intercept model) [29] . All four models used ZCTA population as the exposure and a 127 log-link function. We selected the model with the lowest Deviance Information Criterion 128 (DIC) [30] , representing the best trade-off between model fit and complexity. Characteristics for the four base models examined, including hyperparameters, are 130 shown in \r\ni , scaled nonspatial random-effect; υ * i , scaled spatial random-effect with intrinsic conditional autoregressive structure; τ ν , precision for nonspatial random effect, log-Gamma prior; τ γ , overall precision, penalized complexity (PC) prior; ϕ, mixing parameter, PC prior; n, overdispersion parameter, PC Gamma prior. Multiple regression models were built using a method adjusted from Nikolopoulos et 136 al.**\r\n\r\nI don't know if it's a coincidence that it has some mathematical signs that the tokenizer might not be processing correctly...", "Facing the same issue when dealing with multiple examples and padding strategy=\"longest\". 
I think it might have to do with the fact that the features are created per example and hence the padding is done per example. The padding code does not get to see all examples to find the longest example.\r\n\r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/pipelines.py#L1753-L1764\r\n\r\nSame issue with `squad.py`\r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/data/processors/squad.py#L354-L362", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,597
1,610
1,610
NONE
null
## Environment info - `transformers` version: 3.0.2 - Platform: Darwin-19.2.0-x86_64-i386-64bit - Python version: 3.6.8 - PyTorch version (GPU?): 1.4.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @mfuntowicz @LysandreJik ## Information Model I am using (Bert, XLNet ...): bert-large-uncased-whole-word-masking-finetuned-squad The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The task I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Run the latest source (the error does not occur with the pip release) Steps to reproduce the behavior: 1. Install master from source 2. Run the question-answering pipeline 3. Use a document shorter than max_model_length (with or without a padding_strategy) Traceback: ``` Traceback (most recent call last): .... output = pipeline(question=question, context=doc, max_seq_len=512, max_answer_len=50) File "/Users/justingrace/Google Drive/healx_root/code/transformers/src/transformers/pipelines.py", line 1673, in __call__ fw_args = {k: torch.tensor(v, device=self.device) for (k, v) in fw_args.items()} File "/Users/justingrace/Google Drive/healx_root/code/transformers/src/transformers/pipelines.py", line 1673, in <dictcomp> fw_args = {k: torch.tensor(v, device=self.device) for (k, v) in fw_args.items()} ValueError: expected sequence of length 512 at dim 1 (got 481) ``` ## Expected behavior The module should not raise an error for the non-padded tensors stored in the squad feature set. These are expected not to conform to the max_model_length and need to be handled differently -- for instance, I am not sure these objects are required at all, so they could be removed and masks used on the padded tensors.
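Until the per-feature padding is fixed, a hedged workaround sketch is to bypass the pipeline and pad explicitly so that `torch.tensor` sees rectangular input (the `question`/`doc` values are placeholders for the ones in the repro above; the flag names follow the 3.x tokenizer API):

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

name = "bert-large-uncased-whole-word-masking-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

question = "What is the answer?"  # placeholder
doc = "A context shorter than max_model_length."  # placeholder

# Pad every encoding to the same length and let attention_mask cover the padding.
inputs = tokenizer(question, doc, max_length=512, padding="max_length", truncation=True, return_tensors="pt")
with torch.no_grad():
    start_logits, end_logits = model(**inputs)[:2]
start, end = start_logits.argmax(-1).item(), end_logits.argmax(-1).item()
```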
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6578/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6578/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6577
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6577/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6577/comments
https://api.github.com/repos/huggingface/transformers/issues/6577/events
https://github.com/huggingface/transformers/pull/6577
681,146,601
MDExOlB1bGxSZXF1ZXN0NDY5NTg0MzY2
6,577
CamembertForCausalLM
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@patrickvonplaten didn't add any tests since it subclasses `RobertaForCausalLM` which is already tested.", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6577?src=pr&el=h1) Report\n> Merging [#6577](https://codecov.io/gh/huggingface/transformers/pull/6577?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/12d7624199e727f37bef7f53d527df7fabdb1fd6&el=desc) will **increase** coverage by `1.14%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6577/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6577?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6577 +/- ##\n==========================================\n+ Coverage 79.18% 80.33% +1.14% \n==========================================\n Files 156 156 \n Lines 28129 28132 +3 \n==========================================\n+ Hits 22275 22599 +324 \n+ Misses 5854 5533 -321 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6577?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6577/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.28% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6577/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.73% <ø> (ø)` | |\n| [src/transformers/modeling\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/6577/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6577/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6577/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.06% <0.00%> (-29.32%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6577/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6577/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.26% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6577/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6577/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+0.68%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6577/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.97%)` | :arrow_up: |\n| ... 
and [9 more](https://codecov.io/gh/huggingface/transformers/pull/6577/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6577?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6577?src=pr&el=footer). Last update [12d7624...1e68a94](https://codecov.io/gh/huggingface/transformers/pull/6577?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Hi @patrickvonplaten, could you take a look? Thanks!", "Yes, right. But this is intended for the `EncoderDecoder` model class, which can use pre-trained encoders for seq2seq tasks.\r\nSee PRs #6411 and #6538. Also, Patrick shared this awesome paper which explores this idea and has shown some good results on seq2seq tasks: https://arxiv.org/pdf/1907.12461.pdf", "Thanks for the explanation! Yes, it makes perfect sense to use these models for seq2seq, but I still think maybe we should add a note somewhere in the documentation to explain this?", "Definitely! Will update the docs to explicitly state that this is intended for seq2seq and might not perform well on just causal modelling. Will also link the paper in the `EncoderDecoder` doc page, so people will know what to choose.", "Btw, @patil-suraj did you start doing some Roberta2Roberta experiments? Wanted to start running some experiments next week - wanted to check if you already had some interesting results", "@patrickvonplaten Just started one experiment for qg and going to run cnn/dm after that, will let you know the results. \r\nSince Roberta has `bos`, what's the `decoder_start_token_id` for `Roberta2Roberta`: is it the `bos` or the `pad` token?", "In my experiments with bert2bert, I used the same token for encoder bos and decoder bos, but it's up to you! `bos` makes more sense to me in the case of Roberta2Roberta.", "Okay, thanks Patrick!" ]
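For reference, a minimal sketch of the Roberta2Roberta setup discussed above, assuming a source install where `RobertaForCausalLM` is available; the `bos` choice follows the comment and is a convention rather than a requirement:

```python
from transformers import EncoderDecoderModel, RobertaTokenizer

model = EncoderDecoderModel.from_encoder_decoder_pretrained("roberta-base", "roberta-base")
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

# Start decoding from RoBERTa's <s> (bos) token rather than <pad>.
model.config.decoder_start_token_id = tokenizer.bos_token_id
```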
1,597
1,598
1,598
MEMBER
null
This PR adds `CamembertForCausalLM` by subclassing `RobertaForCausalLM`, so that it can be used with the `EncoderDecoderModel`. @patrickvonplaten
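The shape of the change is roughly the sketch below (import paths follow the 3.x flat module layout; the docstrings and doc decorators of the real diff are omitted):

```python
from transformers.configuration_camembert import CamembertConfig
from transformers.modeling_roberta import RobertaForCausalLM


class CamembertForCausalLM(RobertaForCausalLM):
    """CamemBERT causal-LM head, reusing RoBERTa's implementation wholesale."""

    # Only the config class differs; it also routes pretrained-weight lookups
    # to the CamemBERT checkpoints.
    config_class = CamembertConfig
```

With that in place, something like `EncoderDecoderModel.from_encoder_decoder_pretrained("camembert-base", "camembert-base")` should be able to build the decoder side.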
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6577/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6577/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6577", "html_url": "https://github.com/huggingface/transformers/pull/6577", "diff_url": "https://github.com/huggingface/transformers/pull/6577.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6577.patch", "merged_at": 1598010774000 }
https://api.github.com/repos/huggingface/transformers/issues/6576
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6576/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6576/comments
https://api.github.com/repos/huggingface/transformers/issues/6576/events
https://github.com/huggingface/transformers/pull/6576
681,135,019
MDExOlB1bGxSZXF1ZXN0NDY5NTc0OTUx
6,576
Add hyperparameter search to Trainer
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6576?src=pr&el=h1) Report\n> Merging [#6576](https://codecov.io/gh/huggingface/transformers/pull/6576?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d9cbc0350deaa7e146a8c8dbb6ad4dc9bd6afc4f?el=desc) will **decrease** coverage by `0.92%`.\n> The diff coverage is `62.88%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6576/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6576?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6576 +/- ##\n==========================================\n- Coverage 80.37% 79.45% -0.93% \n==========================================\n Files 156 156 \n Lines 28058 28380 +322 \n==========================================\n- Hits 22552 22548 -4 \n- Misses 5506 5832 +326 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6576?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `48.91% <0.00%> (-0.18%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <ø> (ø)` | |\n| [src/transformers/hf\\_argparser.py](https://codecov.io/gh/huggingface/transformers/pull/6576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcmdwYXJzZXIucHk=) | `67.74% <0.00%> (-1.49%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.73% <ø> (ø)` | |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <ø> (+0.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYmFydC5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `95.55% <ø> (ø)` | |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <ø> (ø)` | |\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.25% <0.00%> (ø)` | |\n| ... and [43 more](https://codecov.io/gh/huggingface/transformers/pull/6576/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6576?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6576?src=pr&el=footer). Last update [d329c9b...44e9543](https://codecov.io/gh/huggingface/transformers/pull/6576?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Neat implem! I was initially wondering whether we should just have a `HyperTrainer` whose only job would be to spawn multiple instances of Trainers, given that the coupling between hp search and everything else in this class is rather small.\r\n\r\nThoughts? Others feel free to chime in ", "> I was initially wondering whether we should just have a `HyperTrainer` whose only job would be to spawn multiple instances of Trainers\r\n\r\nWe can also take that road; my reasoning was that since the `HyperTrainer` would need to take all the same arguments as `Trainer` and then some, it made sense to just add it to the class. I think it's easier for a user to just have one class as an access point." ]
1,597
1,598
1,598
COLLABORATOR
null
This PR introduces an access point to easily use optuna or Ray Tune hyperparameter search with `Trainer`. Here is an example of use on MRPC (requires nlp installed from source on branch cloudpickle): ``` from nlp import load_dataset, load_metric from transformers import AutoModelForSequenceClassification, AutoTokenizer, DataCollatorWithPadding, Trainer, TrainingArguments tokenizer = AutoTokenizer.from_pretrained('bert-base-cased') dataset = load_dataset('glue', 'mrpc') metric = load_metric('glue', 'mrpc') def encode(examples): outputs = tokenizer(examples['sentence1'], examples['sentence2'], truncation=True) return outputs encoded_dataset = dataset.map(encode, batched=True) # Won't be necessary when this PR is merged with master since the Trainer will do it automatically encoded_dataset.set_format(columns=['attention_mask', 'input_ids', 'token_type_ids', 'label']) def model_init(): return AutoModelForSequenceClassification.from_pretrained('bert-base-cased', return_dict=True) def compute_metrics(eval_pred): predictions, labels = eval_pred predictions = predictions.argmax(axis=-1) return metric.compute(predictions=predictions, references=labels) # Evaluate during training and a bit more often than the default to be able to prune bad trials early. # Disabling tqdm is a matter of preference. training_args = TrainingArguments("test", evaluate_during_training=True, eval_steps=500, disable_tqdm=True) trainer = Trainer( args=training_args, data_collator=DataCollatorWithPadding(tokenizer), train_dataset=encoded_dataset["train"], eval_dataset=encoded_dataset["validation"], model_init=model_init, compute_metrics=compute_metrics, ) # Default objective is the sum of all metrics when metrics are provided, so we have to maximize it. trainer.hyperparameter_search(direction="maximize") ``` This will use optuna or Ray Tune depending on which platform is installed (defaulting to optuna if both are there). You can force a backend by passing `backend="ray"` (or `backend="optuna"`). 
Here is another example on a simple regression problem (will convert to a test in a follow-up PR): ``` import numpy as np import torch from transformers import Trainer, TrainingArguments class RegressionDataset: def __init__(self, a=2, b=3, length=64, seed=42): np.random.seed(seed) self.length = length self.x = np.random.normal(size=(length,)).astype(np.float32) self.y = a * self.x + b + np.random.normal(scale=0.1, size=(length,)).astype(np.float32) def __len__(self): return self.length def __getitem__(self, i): return {'input_ids': self.x[i], 'label': self.y[i]} class RegressionModel(torch.nn.Module): def __init__(self, a=0, b=0): super().__init__() self.a = torch.nn.Parameter(torch.tensor(a).float()) self.b = torch.nn.Parameter(torch.tensor(b).float()) def forward(self, input_ids=None, labels=None): y = input_ids * self.a + self.b if labels is None: return (y,) loss = torch.nn.functional.mse_loss(y, labels) return (loss, y) train_set = RegressionDataset() eval_set = RegressionDataset() model = RegressionModel() training_args = TrainingArguments("test", evaluate_during_training=True, eval_steps=8, disable_tqdm=True) def compute_metrics(eval_pred): predictions, labels = eval_pred true = np.abs(predictions - labels) <= 0.25 return {'accuracy': true.astype(np.float32).mean().item()} trainer = Trainer(args=training_args, train_dataset=train_set, eval_dataset=eval_set, model_init=lambda: RegressionModel(), compute_metrics=compute_metrics) best_trial = trainer.hyperparameter_search(direction="maximize") ``` To customize the hyperparameters searched, pass along an `hp_space` function like this (for optuna): ``` def my_hp_space(trial): return { "learning_rate": trial.suggest_float("learning_rate", 1e-2, 1, log=True), "num_train_epochs": trial.suggest_int("num_train_epochs", 1, 5), "seed": trial.suggest_int("seed", 1, 40), "per_device_train_batch_size": trial.suggest_categorical("per_device_train_batch_size", [4, 8, 16, 32, 64]), } best_trial = trainer.hyperparameter_search(direction="maximize", hp_space=my_hp_space) ``` To customize the objective, pass along a `compute_objective` function.
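For instance, a minimal sketch (assuming the evaluation loop reports an `eval_loss` key, as `Trainer.evaluate` does by default):

```python
def my_objective(metrics):
    # Minimize evaluation loss instead of the default summed metrics.
    return metrics["eval_loss"]


best_trial = trainer.hyperparameter_search(direction="minimize", compute_objective=my_objective)
```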
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6576/reactions", "total_count": 23, "+1": 8, "-1": 0, "laugh": 0, "hooray": 8, "confused": 0, "heart": 0, "rocket": 7, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6576/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6576", "html_url": "https://github.com/huggingface/transformers/pull/6576", "diff_url": "https://github.com/huggingface/transformers/pull/6576.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6576.patch", "merged_at": 1598284126000 }
https://api.github.com/repos/huggingface/transformers/issues/6575
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6575/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6575/comments
https://api.github.com/repos/huggingface/transformers/issues/6575/events
https://github.com/huggingface/transformers/issues/6575
681,084,803
MDU6SXNzdWU2ODEwODQ4MDM=
6,575
Tokenizer further tokenizes pretokenized input
{ "login": "bogdankostic", "id": 48713846, "node_id": "MDQ6VXNlcjQ4NzEzODQ2", "avatar_url": "https://avatars.githubusercontent.com/u/48713846?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bogdankostic", "html_url": "https://github.com/bogdankostic", "followers_url": "https://api.github.com/users/bogdankostic/followers", "following_url": "https://api.github.com/users/bogdankostic/following{/other_user}", "gists_url": "https://api.github.com/users/bogdankostic/gists{/gist_id}", "starred_url": "https://api.github.com/users/bogdankostic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bogdankostic/subscriptions", "organizations_url": "https://api.github.com/users/bogdankostic/orgs", "repos_url": "https://api.github.com/users/bogdankostic/repos", "events_url": "https://api.github.com/users/bogdankostic/events{/privacy}", "received_events_url": "https://api.github.com/users/bogdankostic/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false } ]
[ "Hi,\r\n\r\n`is_pretokenized=True` actually means that you are providing a list of **words** as strings instead of a full sentence or paragraph not sub-words. The step which is skipped in this case is the *pre* tokenization step, not the tokenization step.\r\n\r\nThis is useful for NER or token classification for instance but I understand that the wording can be confusing, we will try to make it more clear in the docstring and the page of the doc ([here](https://huggingface.co/transformers/preprocessing.html#pre-tokenized-inputs)) cc @sgugger and @LysandreJik ", "Adding this to my TODO.", "Thanks for making this clear! :)" ]
1,597
1,598
1,597
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: current master - Platform: MacOS - Python version: 3.7 ### Who can help @mfuntowicz ## Information It seems that passing pretokenized input to the Tokenizer and setting `is_pretokenized=True` doesn't prevent the Tokenizer from further tokenizing the input. This issue already came up in #6046 and the reason for this seems to be #6573 . A workaround is to set `is_pretokenized=False`. What hasn't been reported yet is that this issue also arises with FastTokenizers where we see the same behavior. However, there is no workaround for FastTokenizers (or at least I haven't found one...). Setting `is_pretokenized=False` will raise a ValueError. ## To reproduce ```python from transformers.tokenization_auto import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased") fast_tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased", use_fast=True) text = "Schartau sagte dem Tagesspiegel, dass Fischer ein Idiot ist" pretokenized_text = ['Schar', '##tau', 'sagte', 'dem', 'Tages', '##spiegel', ',', 'dass', 'Fischer', 'ein', 'Id', '##iot', 'ist'] tokenized = tokenizer.encode(text) # returns list of len 15 -> 13 tokens + 2 special tokens pretokenized_tok = tokenizer.encode(pretokenized_text, is_pretokenized=True) # returns list of len 23 -> too large pretokenized_tok_2 = tokenizer.encode(pretokenized_text, is_pretokenized=False) # returns list of len 15 -> 13 tokens + 2 special tokens fast_tokenized = fast_tokenizer.encode(text) # returns list of len 15 -> 13 tokens + 2 special tokens fast_pretokenized_tok = fast_tokenizer.encode(pretokenized_text, is_pretokenized=True) # returns list of len 23 -> too large # fast_pretokenizer_tok2 = fast_tokenizer.encode(pretokenized_text, is_pretokenized=False) # would raise: 'ValueError: TextInputSequence must be str' tokenized_decoded = tokenizer.decode(tokenized) # returns '[CLS] Schartau sagte dem Tagesspiegel, dass Fischer ein Idiot ist [SEP]' pretokenized_tok_decoded = tokenizer.decode(pretokenized_tok) # returns '[CLS] Schar # # tau sagte dem Tages # # spiegel, dass Fischer ein Id # # iot ist [SEP]' pretokenized_tok_2_decoded = tokenizer.decode(pretokenized_tok_2) # returns '[CLS] Schartau sagte dem Tagesspiegel, dass Fischer ein Idiot ist [SEP]' fast_tokenized_decoded = fast_tokenizer.decode(fast_tokenized) # returns '[CLS] Schartau sagte dem Tagesspiegel, dass Fischer ein Idiot ist [SEP]' fast_pretokenized_tok_decoded = fast_tokenizer.decode(fast_pretokenized_tok) # returns '[CLS] Schar # # tau sagte dem Tages # # spiegel, dass Fischer ein Id # # iot ist [SEP]' ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6575/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6575/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6574
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6574/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6574/comments
https://api.github.com/repos/huggingface/transformers/issues/6574/events
https://github.com/huggingface/transformers/pull/6574
681,064,480
MDExOlB1bGxSZXF1ZXN0NDY5NTE2ODM4
6,574
[docs] Fix number of 'ug' occurrences in tokenizer_summary
{ "login": "romainr", "id": 17945, "node_id": "MDQ6VXNlcjE3OTQ1", "avatar_url": "https://avatars.githubusercontent.com/u/17945?v=4", "gravatar_id": "", "url": "https://api.github.com/users/romainr", "html_url": "https://github.com/romainr", "followers_url": "https://api.github.com/users/romainr/followers", "following_url": "https://api.github.com/users/romainr/following{/other_user}", "gists_url": "https://api.github.com/users/romainr/gists{/gist_id}", "starred_url": "https://api.github.com/users/romainr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/romainr/subscriptions", "organizations_url": "https://api.github.com/users/romainr/orgs", "repos_url": "https://api.github.com/users/romainr/repos", "events_url": "https://api.github.com/users/romainr/events{/privacy}", "received_events_url": "https://api.github.com/users/romainr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6574?src=pr&el=h1) Report\n> Merging [#6574](https://codecov.io/gh/huggingface/transformers/pull/6574?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cfa26d2b412ac3494eef06506004ca857c115ad9&el=desc) will **decrease** coverage by `1.09%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6574/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6574?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6574 +/- ##\n==========================================\n- Coverage 79.24% 78.15% -1.10% \n==========================================\n Files 156 156 \n Lines 28130 28130 \n==========================================\n- Hits 22292 21985 -307 \n- Misses 5838 6145 +307 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6574?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6574/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6574/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.26% <0.00%> (-53.69%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6574/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `66.00% <0.00%> (-32.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6574/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-29.33%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6574/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6574/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6574/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6574/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.36% <0.00%> (-14.37%)` | :arrow_down: |\n| [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6574/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `82.71% <0.00%> (-2.47%)` | :arrow_down: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6574/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `96.19% <0.00%> (-1.64%)` | :arrow_down: |\n| ... 
and [17 more](https://codecov.io/gh/huggingface/transformers/pull/6574/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6574?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6574?src=pr&el=footer). Last update [cfa26d2...5e35d96](https://codecov.io/gh/huggingface/transformers/pull/6574?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks!" ]
1,597
1,597
1,597
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6574/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6574/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6574", "html_url": "https://github.com/huggingface/transformers/pull/6574", "diff_url": "https://github.com/huggingface/transformers/pull/6574.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6574.patch", "merged_at": 1597760605000 }
https://api.github.com/repos/huggingface/transformers/issues/6573
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6573/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6573/comments
https://api.github.com/repos/huggingface/transformers/issues/6573/events
https://github.com/huggingface/transformers/pull/6573
681,062,594
MDExOlB1bGxSZXF1ZXN0NDY5NTE1MzQ0
6,573
[docs] Fix wrong newline in the middle of a paragraph
{ "login": "romainr", "id": 17945, "node_id": "MDQ6VXNlcjE3OTQ1", "avatar_url": "https://avatars.githubusercontent.com/u/17945?v=4", "gravatar_id": "", "url": "https://api.github.com/users/romainr", "html_url": "https://github.com/romainr", "followers_url": "https://api.github.com/users/romainr/followers", "following_url": "https://api.github.com/users/romainr/following{/other_user}", "gists_url": "https://api.github.com/users/romainr/gists{/gist_id}", "starred_url": "https://api.github.com/users/romainr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/romainr/subscriptions", "organizations_url": "https://api.github.com/users/romainr/orgs", "repos_url": "https://api.github.com/users/romainr/repos", "events_url": "https://api.github.com/users/romainr/events{/privacy}", "received_events_url": "https://api.github.com/users/romainr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6573?src=pr&el=h1) Report\n> Merging [#6573](https://codecov.io/gh/huggingface/transformers/pull/6573?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cfa26d2b412ac3494eef06506004ca857c115ad9&el=desc) will **increase** coverage by `0.14%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6573/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6573?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6573 +/- ##\n==========================================\n+ Coverage 79.24% 79.38% +0.14% \n==========================================\n Files 156 156 \n Lines 28130 28130 \n==========================================\n+ Hits 22292 22332 +40 \n+ Misses 5838 5798 -40 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6573?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6573/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `56.25% <0.00%> (-39.07%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6573/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (+3.52%)` | :arrow_up: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6573/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `99.16% <0.00%> (+32.50%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6573?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6573?src=pr&el=footer). Last update [cfa26d2...0a20942](https://codecov.io/gh/huggingface/transformers/pull/6573?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks!" ]
1,597
1,597
1,597
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6573/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6573/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6573", "html_url": "https://github.com/huggingface/transformers/pull/6573", "diff_url": "https://github.com/huggingface/transformers/pull/6573.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6573.patch", "merged_at": 1597760564000 }
https://api.github.com/repos/huggingface/transformers/issues/6572
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6572/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6572/comments
https://api.github.com/repos/huggingface/transformers/issues/6572/events
https://github.com/huggingface/transformers/pull/6572
681,046,488
MDExOlB1bGxSZXF1ZXN0NDY5NTAyNDgy
6,572
Dataset and DataCollator for BERT Next Sentence Prediction (NSP) task.
{ "login": "mojave-pku", "id": 26648528, "node_id": "MDQ6VXNlcjI2NjQ4NTI4", "avatar_url": "https://avatars.githubusercontent.com/u/26648528?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mojave-pku", "html_url": "https://github.com/mojave-pku", "followers_url": "https://api.github.com/users/mojave-pku/followers", "following_url": "https://api.github.com/users/mojave-pku/following{/other_user}", "gists_url": "https://api.github.com/users/mojave-pku/gists{/gist_id}", "starred_url": "https://api.github.com/users/mojave-pku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mojave-pku/subscriptions", "organizations_url": "https://api.github.com/users/mojave-pku/orgs", "repos_url": "https://api.github.com/users/mojave-pku/repos", "events_url": "https://api.github.com/users/mojave-pku/events{/privacy}", "received_events_url": "https://api.github.com/users/mojave-pku/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6572?src=pr&el=h1) Report\n> Merging [#6572](https://codecov.io/gh/huggingface/transformers/pull/6572?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cfa26d2b412ac3494eef06506004ca857c115ad9?el=desc) will **increase** coverage by `1.01%`.\n> The diff coverage is `13.15%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6572/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6572?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6572 +/- ##\n==========================================\n+ Coverage 79.24% 80.26% +1.01% \n==========================================\n Files 156 156 \n Lines 28130 28243 +113 \n==========================================\n+ Hits 22292 22668 +376 \n+ Misses 5838 5575 -263 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6572?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.28% <ø> (ø)` | |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/6572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `56.32% <10.52%> (-35.52%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `62.80% <13.33%> (-28.11%)` | :arrow_down: |\n| [src/transformers/data/datasets/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL19faW5pdF9fLnB5) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.69% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (ø)` | |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+0.68%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.97%)` | :arrow_up: |\n| ... 
and [7 more](https://codecov.io/gh/huggingface/transformers/pull/6572/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6572?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6572?src=pr&el=footer). Last update [cfa26d2...e924e84](https://codecov.io/gh/huggingface/transformers/pull/6572?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks for the reply and the careful review!\r\nI understand what you and @JetRunner mean now. \r\nThe purpose of submitting this script is to provide a tool that can test `DataCollator` and `TextDataset` (I don't know if I am doing this right). \r\nI agree with you and think it is not elegant to add a `run_bert_nsp.py` script alone. I will remove this script and support NSP together with the MLM objective later.", "Mmmm, your rebase makes the PR touch way too many files now and renders it quite unreadable. You should close and reopen it (once ready) so we can see the diff a little bit better.", "OK, I will reopen the PR. Thanks~", "I’m very sorry, I didn’t know that the PR should be closed before a rebase, which resulted in the PR touching way too many files.\r\nI didn't notice that you had already implemented `BertForPreTraining` in the modeling_bert file, so I modified `BertForNextSencencePrediction` to support both the MLM and NSP objectives. \r\nThese changes have now been reverted.", "@sgugger `DataCollatorForNextSencencePrediction` and `TextDatasetForNextSencencePrediction` now support MLM and NSP together.", "You need to open a new PR; this one will forever show the diff of all those files from the old commits you rebased, so we can't review it.", "Got it." ]
1,597
1,598
1,598
CONTRIBUTOR
null
I have been working on some pre-training tasks recently and found that transformers does not yet support the NSP task. So I wrote the code for a **Dataset and DataCollator for the BERT NSP task** and am now trying to contribute it. This is my first contribution to an open-source project. If there is something wrong, please point it out. Thank you.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6572/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6572/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6572", "html_url": "https://github.com/huggingface/transformers/pull/6572", "diff_url": "https://github.com/huggingface/transformers/pull/6572.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6572.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6571
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6571/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6571/comments
https://api.github.com/repos/huggingface/transformers/issues/6571/events
https://github.com/huggingface/transformers/pull/6571
681,029,373
MDExOlB1bGxSZXF1ZXN0NDY5NDg4MDk0
6,571
token-classification: update url of GermEval 2014 dataset
{ "login": "stefan-it", "id": 20651387, "node_id": "MDQ6VXNlcjIwNjUxMzg3", "avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stefan-it", "html_url": "https://github.com/stefan-it", "followers_url": "https://api.github.com/users/stefan-it/followers", "following_url": "https://api.github.com/users/stefan-it/following{/other_user}", "gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}", "starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions", "organizations_url": "https://api.github.com/users/stefan-it/orgs", "repos_url": "https://api.github.com/users/stefan-it/repos", "events_url": "https://api.github.com/users/stefan-it/events{/privacy}", "received_events_url": "https://api.github.com/users/stefan-it/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6571?src=pr&el=h1) Report\n> Merging [#6571](https://codecov.io/gh/huggingface/transformers/pull/6571?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7719ecd19f63876fdc2d31699977c7ced3643417?el=desc) will **decrease** coverage by `0.28%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6571/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6571?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6571 +/- ##\n==========================================\n- Coverage 82.26% 81.98% -0.29% \n==========================================\n Files 172 172 \n Lines 33077 33077 \n==========================================\n- Hits 27211 27118 -93 \n- Misses 5866 5959 +93 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6571?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6571/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.52% <0.00%> (-34.77%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6571/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6571/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `70.19% <0.00%> (-23.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6571/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6571/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `72.31% <0.00%> (-6.73%)` | :arrow_down: |\n| [src/transformers/configuration\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6571/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `91.89% <0.00%> (-5.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6571/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `88.37% <0.00%> (-4.87%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6571/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `79.19% <0.00%> (-3.76%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6571/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `84.17% <0.00%> (-2.70%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6571/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.83% <0.00%> (-0.72%)` | :arrow_down: |\n| ... 
and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6571/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6571?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6571?src=pr&el=footer). Last update [5c1d5ea...debfb81](https://codecov.io/gh/huggingface/transformers/pull/6571?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This one is obsolete, correct @stefan-it? Please confirm and close if so.", "Hi @vblagoje,\r\n\r\nThe PR would also update the readme example 😅\r\n\r\nI will resolve the merge conflicts now!", "Resolved 🤓" ]
1,597
1,600
1,600
COLLABORATOR
null
Hi, the URL of the GermEval 2014 dataset was recently changed, as the dataset is now stored on Google Drive. This PR updates the URL in the main documentation and in the example shell scripts.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6571/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6571/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6571", "html_url": "https://github.com/huggingface/transformers/pull/6571", "diff_url": "https://github.com/huggingface/transformers/pull/6571.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6571.patch", "merged_at": 1600424286000 }
https://api.github.com/repos/huggingface/transformers/issues/6570
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6570/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6570/comments
https://api.github.com/repos/huggingface/transformers/issues/6570/events
https://github.com/huggingface/transformers/issues/6570
681,014,287
MDU6SXNzdWU2ODEwMTQyODc=
6,570
Cannot use convert_graph_to_onnx.py to convert the PyTorch model to an ONNX model
{ "login": "KellyZhang2020", "id": 50606395, "node_id": "MDQ6VXNlcjUwNjA2Mzk1", "avatar_url": "https://avatars.githubusercontent.com/u/50606395?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KellyZhang2020", "html_url": "https://github.com/KellyZhang2020", "followers_url": "https://api.github.com/users/KellyZhang2020/followers", "following_url": "https://api.github.com/users/KellyZhang2020/following{/other_user}", "gists_url": "https://api.github.com/users/KellyZhang2020/gists{/gist_id}", "starred_url": "https://api.github.com/users/KellyZhang2020/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KellyZhang2020/subscriptions", "organizations_url": "https://api.github.com/users/KellyZhang2020/orgs", "repos_url": "https://api.github.com/users/KellyZhang2020/repos", "events_url": "https://api.github.com/users/KellyZhang2020/events{/privacy}", "received_events_url": "https://api.github.com/users/KellyZhang2020/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[ { "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false } ]
[ "same issues here. please help!", "same issue", "Same issue with Camembert torch model\r\n\r\npython transformers/convert_graph_to_onnx.py --model camembert-base --framework pt /tmp/camembert-base.onnx\r\n\r\n====== Converting model to ONNX ======\r\nONNX opset version set to: 11\r\nLoading pipeline (model: camembert-base, tokenizer: camembert-base)\r\nUsing framework PyTorch: 1.4.0\r\nFound input input_ids with shape: {0: 'batch', 1: 'sequence'}\r\nFound input attention_mask with shape: {0: 'batch', 1: 'sequence'}\r\nFound output output_0 with shape: {0: 'batch', 1: 'sequence'}\r\nFound output output_1 with shape: {0: 'batch'}\r\nEnsuring inputs are in correct order\r\ntoken_type_ids is not present in the generated input list.\r\nGenerated inputs order: ['input_ids', 'attention_mask']\r\nError while converting the model: export() got an unexpected keyword argument 'use_external_data_format'\r\n", "My issue was due to an older version of pytorch v1.4 where export function arguments are not the same as in version v1.6", "Any Solutions for this issue ?", "Hello @junkgear, we're doing a rework of the ONNX implementation, you can take a look at the proposal here: https://github.com/huggingface/transformers/pull/11786\r\n\r\nLet me know if that works for you!", "I dumped the vocab and config file in model folder and was able to convert\r\nGave model folder as input instead of bin file to convert_graph_to_onnx.py script\r\n\r\n```\r\nfrom transformers import (WEIGHTS_NAME, AdamW, get_linear_schedule_with_warmup,\r\n RobertaConfig, RobertaModel, RobertaTokenizer)\r\n\r\nMODEL_CLASSES = {'roberta': (RobertaConfig, RobertaModel, RobertaTokenizer)}\r\n\r\nconfig_class, model_class, tokenizer_class = MODEL_CLASSES['roberta']\r\n\r\nconfig = config_class.from_pretrained('microsoft/codebert-base')\r\n\r\ntokenizer = tokenizer_class.from_pretrained('microsoft/codebert-base')\r\n\r\n#convert config file to json and save in model folder\r\nconfig.to_json_file('model/config.json')\r\n\r\n#save the vocabulary\r\ntokenizer.save_vocabulary('model/')\r\n```", "> I dumped the vocab and config file in model folder and was able to convert\r\n> Gave model folder as input instead of bin file to convert_graph_to_onnx.py script\r\n> \r\n> ```\r\n> from transformers import (WEIGHTS_NAME, AdamW, get_linear_schedule_with_warmup,\r\n> RobertaConfig, RobertaModel, RobertaTokenizer)\r\n> \r\n> MODEL_CLASSES = {'roberta': (RobertaConfig, RobertaModel, RobertaTokenizer)}\r\n> \r\n> config_class, model_class, tokenizer_class = MODEL_CLASSES['roberta']\r\n> \r\n> config = config_class.from_pretrained('microsoft/codebert-base')\r\n> \r\n> tokenizer = tokenizer_class.from_pretrained('microsoft/codebert-base')\r\n> \r\n> #convert config file to json and save in model folder\r\n> config.to_json_file('model/config.json')\r\n> \r\n> #save the vocabulary\r\n> tokenizer.save_vocabulary('model/')\r\n> ```\r\nThis reply saved me an hour or two. If you look into convert_graph_to_onnx.py, --model parameter can only be HuggingFace's model id or a path. So anything like my_model.ckpt won't work. It should be a path containing vocab.txt, config.json, and pytorch_model.bin. Also the tokenizer should be either a tokenizer/model id or a folder containing the tokenizer files. Thanks a lot @junkgear ! " ]
1,597
1,626
1,606
NONE
null
When I convert the PyTorch model to ONNX as follows:

```python
import os
import torch
from pytorch_pretrained_bert import BertModel

model = BertModel.from_pretrained('bert-base-uncased')
base_path = os.path.dirname(__file__)
pt_path = os.path.join(base_path, 'bert.pt')
torch.save(model, pt_path)
```

and then use convert_graph_to_onnx.py to convert the saved PyTorch model to an ONNX model:

`python src/transformers/convert_graph_to_onnx.py --framework pt --model bert.pt bert.onnx`

I get the following error:

`Error while converting the model: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte`

I don't know what caused this.

- `transformers` version: 3.0.1
- Platform: PyCharm
- Python version: 3.7.2
- PyTorch version (GPU?): 1.6.0
- Tensorflow version (GPU?): 2.0.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No

Thank you for the help :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6570/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6570/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6569
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6569/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6569/comments
https://api.github.com/repos/huggingface/transformers/issues/6569/events
https://github.com/huggingface/transformers/pull/6569
680,986,578
MDExOlB1bGxSZXF1ZXN0NDY5NDUzMDg2
6,569
[Model card] Bert2GPT2 EncoderDecoder model
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,597
1,597
1,597
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6569/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6569/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6569", "html_url": "https://github.com/huggingface/transformers/pull/6569", "diff_url": "https://github.com/huggingface/transformers/pull/6569.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6569.patch", "merged_at": 1597771698000 }
https://api.github.com/repos/huggingface/transformers/issues/6568
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6568/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6568/comments
https://api.github.com/repos/huggingface/transformers/issues/6568/events
https://github.com/huggingface/transformers/issues/6568
680,965,035
MDU6SXNzdWU2ODA5NjUwMzU=
6,568
KeyError with DPR reader in Question Answering pipeline
{ "login": "antoniolanza1996", "id": 40452030, "node_id": "MDQ6VXNlcjQwNDUyMDMw", "avatar_url": "https://avatars.githubusercontent.com/u/40452030?v=4", "gravatar_id": "", "url": "https://api.github.com/users/antoniolanza1996", "html_url": "https://github.com/antoniolanza1996", "followers_url": "https://api.github.com/users/antoniolanza1996/followers", "following_url": "https://api.github.com/users/antoniolanza1996/following{/other_user}", "gists_url": "https://api.github.com/users/antoniolanza1996/gists{/gist_id}", "starred_url": "https://api.github.com/users/antoniolanza1996/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/antoniolanza1996/subscriptions", "organizations_url": "https://api.github.com/users/antoniolanza1996/orgs", "repos_url": "https://api.github.com/users/antoniolanza1996/repos", "events_url": "https://api.github.com/users/antoniolanza1996/events{/privacy}", "received_events_url": "https://api.github.com/users/antoniolanza1996/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi @LysandreJik and @lhoestq ,\r\nFYI, there is still this error on Transformers.\r\nHave you planned to support DPR into pipeline API?\r\n\r\nThanks", "Thanks for reporting !\r\nThe reader is currently not configured as a model for question answering indeed, but that's definitely something we'll change.\r\nWe'll also make it loadable as an an open-domain question answering Pipeline object.", "Hi @lhoestq ,\r\nI got it.\r\nIs there any approximate release date?\r\nI believe that open-domain QA Pipeline will be really useful, thank you. \r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,597
1,606
1,606
CONTRIBUTOR
null
## Environment info

- `transformers` version: 3.0.2 (I have also tried with current `master`)
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic (with Google Colab)
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No

### Who can help

@LysandreJik

## Information

Model I am using: DPR reader (i.e. https://huggingface.co/facebook/dpr-reader-single-nq-base) to perform a Question Answering task.

## To reproduce

Steps to reproduce the behavior:

```python
from transformers import pipeline

model_name = "facebook/dpr-reader-single-nq-base"
tokenizer_name = "facebook/dpr-reader-single-nq-base"
my_pipeline = pipeline("question-answering", model=model_name, tokenizer=tokenizer_name)
```

Error message:

```
KeyError                                  Traceback (most recent call last)
<ipython-input-2-b3a2784cffcf> in <module>()
      4 model_name="facebook/dpr-reader-single-nq-base"
      5 tokenizer_name="facebook/dpr-reader-single-nq-base"
----> 6 my_pipeline = pipeline("question-answering", model=model_name, tokenizer=tokenizer_name)

2 frames
/usr/local/lib/python3.6/dist-packages/transformers/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
    208
    209         if "model_type" in config_dict:
--> 210             config_class = CONFIG_MAPPING[config_dict["model_type"]]
    211             return config_class.from_dict(config_dict, **kwargs)
    212         else:

KeyError: 'dpr'
```
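A hedged workaround sketch for the report above: in versions where `AutoConfig` lacks a `dpr` entry, the dedicated DPR reader classes can be loaded directly instead of going through `pipeline`. The question/title/text strings are invented for illustration, and the output order (start logits, end logits, relevance logits) is assumed from the DPR reader interface:

```python
import torch
from transformers import DPRReader, DPRReaderTokenizer

tokenizer = DPRReaderTokenizer.from_pretrained("facebook/dpr-reader-single-nq-base")
model = DPRReader.from_pretrained("facebook/dpr-reader-single-nq-base")
model.eval()

# The reader scores answer spans inside retrieved passages.
encoded = tokenizer(
    questions="Who wrote Hamlet?",
    titles="Hamlet",
    texts="Hamlet is a tragedy written by William Shakespeare.",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**encoded)

# start/end logits score answer spans; relevance logits rank passages.
start_logits, end_logits, relevance_logits = outputs[:3]
print(start_logits.shape, relevance_logits)
```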
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6568/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6568/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6567
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6567/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6567/comments
https://api.github.com/repos/huggingface/transformers/issues/6567/events
https://github.com/huggingface/transformers/pull/6567
680,955,533
MDExOlB1bGxSZXF1ZXN0NDY5NDI3NTk1
6,567
XLNet Bug when training with apex 16-bit precision
{ "login": "johndolgov", "id": 22281936, "node_id": "MDQ6VXNlcjIyMjgxOTM2", "avatar_url": "https://avatars.githubusercontent.com/u/22281936?v=4", "gravatar_id": "", "url": "https://api.github.com/users/johndolgov", "html_url": "https://github.com/johndolgov", "followers_url": "https://api.github.com/users/johndolgov/followers", "following_url": "https://api.github.com/users/johndolgov/following{/other_user}", "gists_url": "https://api.github.com/users/johndolgov/gists{/gist_id}", "starred_url": "https://api.github.com/users/johndolgov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johndolgov/subscriptions", "organizations_url": "https://api.github.com/users/johndolgov/orgs", "repos_url": "https://api.github.com/users/johndolgov/repos", "events_url": "https://api.github.com/users/johndolgov/events{/privacy}", "received_events_url": "https://api.github.com/users/johndolgov/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6567?src=pr&el=h1) Report\n> Merging [#6567](https://codecov.io/gh/huggingface/transformers/pull/6567?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/12d7624199e727f37bef7f53d527df7fabdb1fd6?el=desc) will **decrease** coverage by `0.77%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6567/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6567?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6567 +/- ##\n==========================================\n- Coverage 79.18% 78.41% -0.78% \n==========================================\n Files 156 156 \n Lines 28129 28129 \n==========================================\n- Hits 22275 22056 -219 \n- Misses 5854 6073 +219 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6567?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6567/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `83.30% <100.00%> (ø)` | |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6567/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6567/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.63% <0.00%> (-54.32%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6567/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-29.33%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6567/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6567/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6567/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6567/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.36% <0.00%> (-14.37%)` | :arrow_down: |\n| [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6567/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `82.71% <0.00%> (-2.47%)` | :arrow_down: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6567/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `96.19% <0.00%> (-1.64%)` | :arrow_down: |\n| ... 
and [15 more](https://codecov.io/gh/huggingface/transformers/pull/6567/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6567?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6567?src=pr&el=footer). Last update [12d7624...1c18ddf](https://codecov.io/gh/huggingface/transformers/pull/6567?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Of course, @JetRunner, here it is \r\n![Screenshot from 2020-08-20 11-47-50](https://user-images.githubusercontent.com/22281936/90748799-516c7900-e2db-11ea-993f-7f38642b155f.png)\r\n", "@JetRunner, done", "Merging since the CI error looks unrelated." ]
1,597
1,597
1,597
CONTRIBUTOR
null
XLNet training fails when using 16-bit precision because of tensor creation with an explicit `dtype=torch.float` in the relative positional encodings function.
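A toy illustration of the dtype mismatch behind this fix (not the actual patch): a tensor created with a hard-coded `torch.float` stays fp32 even after apex casts the module to fp16, while deriving the dtype from the module's parameters keeps it consistent. The stand-in `Linear` module is an assumption for demonstration:

```python
import torch

d_model = 8
module = torch.nn.Linear(d_model, d_model).half()  # stand-in for an fp16-cast XLNet layer

# Hard-coded dtype (the bug pattern): always fp32, whatever precision the module uses.
freq_seq_bug = torch.arange(0.0, d_model, 2.0, dtype=torch.float)

# Fix pattern: derive the dtype from the module's parameters instead.
dtype = next(module.parameters()).dtype
freq_seq_fix = torch.arange(0.0, d_model, 2.0).to(dtype)

assert freq_seq_bug.dtype == torch.float32
assert freq_seq_fix.dtype == torch.float16  # now matches the fp16 module
```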
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6567/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6567/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6567", "html_url": "https://github.com/huggingface/transformers/pull/6567", "diff_url": "https://github.com/huggingface/transformers/pull/6567.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6567.patch", "merged_at": 1597944864000 }
https://api.github.com/repos/huggingface/transformers/issues/6566
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6566/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6566/comments
https://api.github.com/repos/huggingface/transformers/issues/6566/events
https://github.com/huggingface/transformers/issues/6566
680,882,835
MDU6SXNzdWU2ODA4ODI4MzU=
6,566
Cannot convert the PyTorch pretrained BERT model to an ONNX model
{ "login": "KellyZhang2020", "id": 50606395, "node_id": "MDQ6VXNlcjUwNjA2Mzk1", "avatar_url": "https://avatars.githubusercontent.com/u/50606395?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KellyZhang2020", "html_url": "https://github.com/KellyZhang2020", "followers_url": "https://api.github.com/users/KellyZhang2020/followers", "following_url": "https://api.github.com/users/KellyZhang2020/following{/other_user}", "gists_url": "https://api.github.com/users/KellyZhang2020/gists{/gist_id}", "starred_url": "https://api.github.com/users/KellyZhang2020/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KellyZhang2020/subscriptions", "organizations_url": "https://api.github.com/users/KellyZhang2020/orgs", "repos_url": "https://api.github.com/users/KellyZhang2020/repos", "events_url": "https://api.github.com/users/KellyZhang2020/events{/privacy}", "received_events_url": "https://api.github.com/users/KellyZhang2020/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,597
1,603
1,603
NONE
null
When I convert the PyTorch pretrained BERT model to an ONNX model as follows:

```python
import os
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

model = BertModel.from_pretrained('bert-base-uncased')
input_1 = torch.LongTensor(1, 14)
input_2 = torch.LongTensor(1, 14)
input_3 = torch.LongTensor(1, 14)
base_path = os.path.dirname(__file__)
onnx_path = os.path.join(base_path, 'bert.onnx')
torch.onnx.export(model, (input_1, input_2, input_3), onnx_path)
```

it raises the error:

`IndexError: index out of range in self`

- `transformers` version: 3.0.2
- Platform: PyCharm
- Python version: 3.8
- PyTorch version (GPU?): 1.6.0, no GPU
- onnx version: 1.7.0
- pytorch-pretrained-bert version: 0.6.2

Thank you for the help :)
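Worth noting for the snippet above: `torch.LongTensor(1, 14)` allocates uninitialized memory, so the dummy ids can exceed the vocabulary size and crash the embedding lookup with exactly this `IndexError`. A hedged sketch of an export with in-range dummy inputs follows; the sentence is made up, the positional argument order is assumed from the legacy `pytorch_pretrained_bert.BertModel` signature, and the export may still hit tracer limitations of that legacy package:

```python
import torch
from pytorch_pretrained_bert import BertModel, BertTokenizer

model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

# Build dummy ids that are guaranteed to be inside the embedding table.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
tokens = tokenizer.tokenize("a short dummy sentence for tracing")
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)], dtype=torch.long)

# Zeros/ones are always valid values for these two inputs.
token_type_ids = torch.zeros_like(input_ids)
attention_mask = torch.ones_like(input_ids)

# Legacy BertModel.forward takes (input_ids, token_type_ids, attention_mask).
torch.onnx.export(model, (input_ids, token_type_ids, attention_mask), 'bert.onnx')
```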
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6566/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6566/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6565
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6565/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6565/comments
https://api.github.com/repos/huggingface/transformers/issues/6565/events
https://github.com/huggingface/transformers/issues/6565
680,844,942
MDU6SXNzdWU2ODA4NDQ5NDI=
6,565
New Feature: Best-First Beam Search
{ "login": "JetRunner", "id": 22514219, "node_id": "MDQ6VXNlcjIyNTE0MjE5", "avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JetRunner", "html_url": "https://github.com/JetRunner", "followers_url": "https://api.github.com/users/JetRunner/followers", "following_url": "https://api.github.com/users/JetRunner/following{/other_user}", "gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}", "starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions", "organizations_url": "https://api.github.com/users/JetRunner/orgs", "repos_url": "https://api.github.com/users/JetRunner/repos", "events_url": "https://api.github.com/users/JetRunner/events{/privacy}", "received_events_url": "https://api.github.com/users/JetRunner/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "The first step would be refactoring our current implementation with the priority queue. I am on it.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,597
1,603
1,603
CONTRIBUTOR
null
# 🚀 Feature request

Best-First Beam Search is a faster beam search algorithm for decoding: https://arxiv.org/abs/2007.03909 (by Clara Meister, Tim Vieira, Ryan Cotterell).

## Motivation

This technique can be essential for production inference and is of great interest to our users. It would be a good idea to integrate Best-First Beam Search into Hugging Face transformers (for GPT, BART, T5, etc.).
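To make the proposal concrete, a self-contained toy sketch of the best-first idea from the paper: hypotheses are popped from a priority queue in global score order instead of being expanded level by level, so decoding can stop as soon as enough finished hypotheses are best. Everything below, including the `next_scores` stand-in for a model call, is illustrative and not the transformers API:

```python
import heapq

def best_first_beam_search(next_scores, bos, eos, beam_size, max_len):
    """next_scores(seq) -> list of (token, log_prob) pairs, best first."""
    heap = [(0.0, 0, [bos])]  # (negative cumulative log-prob, tiebreak, sequence)
    tiebreak, finished = 1, []
    while heap and len(finished) < beam_size:
        neg_score, _, seq = heapq.heappop(heap)  # globally best open hypothesis
        if seq[-1] == eos or len(seq) >= max_len:
            finished.append((-neg_score, seq))
            continue
        # Expand only the popped hypothesis and keep its top-k continuations.
        for token, logp in next_scores(seq)[:beam_size]:
            heapq.heappush(heap, (neg_score - logp, tiebreak, seq + [token]))
            tiebreak += 1
    return finished

def fake_next_scores(seq):  # stand-in "model": EOS is always most likely
    return [(2, -0.1), (1, -0.5)]

print(best_first_beam_search(fake_next_scores, bos=0, eos=2, beam_size=2, max_len=5))
```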
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6565/reactions", "total_count": 10, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 5, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6565/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6564
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6564/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6564/comments
https://api.github.com/repos/huggingface/transformers/issues/6564/events
https://github.com/huggingface/transformers/issues/6564
680,832,035
MDU6SXNzdWU2ODA4MzIwMzU=
6,564
T5 Gradient Checkpointing
{ "login": "agemagician", "id": 6087313, "node_id": "MDQ6VXNlcjYwODczMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/agemagician", "html_url": "https://github.com/agemagician", "followers_url": "https://api.github.com/users/agemagician/followers", "following_url": "https://api.github.com/users/agemagician/following{/other_user}", "gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}", "starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/agemagician/subscriptions", "organizations_url": "https://api.github.com/users/agemagician/orgs", "repos_url": "https://api.github.com/users/agemagician/repos", "events_url": "https://api.github.com/users/agemagician/events{/privacy}", "received_events_url": "https://api.github.com/users/agemagician/received_events", "type": "User", "site_admin": false }
[ { "id": 2392046359, "node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue", "name": "Good Second Issue", "color": "dd935a", "default": false, "description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!" } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Also pinging @LysandreJik for notification in case this is easy to implement", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Keep it a live :)", "That's an important feature indeed! Will try to tackle this with @LysandreJik @VictorSanh in the new year :-) ", "Hi, I'm not too familiar with T5 internals but I crudely tried modifying `modeling_t5.py` as OP suggested, but I ran into some issues with unsupported return values for `torch.utils.checkpoint.checkpoint`, so it seems like there might be something else other than that block that needs changing?\r\n```\r\n File \"/data/sauravkadavath/miniconda3/envs/transformers-4.0.0/lib/python3.7/site-packages/torch/utils/checkpoint.py\", line 163, in checkpoint\r\n return CheckpointFunction.apply(function, preserve, *args)\r\nTypeError: CheckpointFunctionBackward.forward: expected Tensor or tuple of Tensor (got NoneType) for return value 1\r\n```", "Hey @ssss1029, \r\n\r\nThanks for playing around with the feature! Would you mind using your code to open a PR? I'll help you get it merged. It is very well possible that we might have to change some more code in T5 to make it work. Ideally, I'd try to base the T5 gradient checkpointing's code as much as possible on how Bart does it.", "Lots of people have been asking for T5 checkpointing, so your PR would be a great contribution if you want to give it a try :-) ", "Hi Patrick, unfortunately, I'm pretty new to Huggingface internals and I won't have the bandwidth to implement this.", "@patrickvonplaten @ssss1029\r\nJust a straightforward workaround, but not for PR.\r\nI modify the torch.utils.checkpoint file to overcome its limitation. See the code below, all the modificaitons are with comments.\r\nTraining with t5-base, I obverse the loss is droping down as same as with gradient_checkpointing off and the memory usage drops down as well. But don't have time to do full verification now.\r\n\r\n### 1. 
checkpoint.CheckpointFunction\r\n```python\r\nclass CheckpointFunction(torch.autograd.Function):\r\n\r\n    @staticmethod\r\n    def forward(ctx, run_function, preserve_rng_state, *args):\r\n        check_backward_validity(args)\r\n        ctx.run_function = run_function\r\n        ctx.preserve_rng_state = preserve_rng_state\r\n        if preserve_rng_state:\r\n            ctx.fwd_cpu_state = torch.get_rng_state()\r\n            ctx.had_cuda_in_fwd = False\r\n            if torch.cuda._initialized:\r\n                ctx.had_cuda_in_fwd = True\r\n                ctx.fwd_gpu_devices, ctx.fwd_gpu_states = get_device_states(*args)\r\n        ctx.save_for_backward(*args)\r\n        with torch.no_grad():\r\n            outputs = run_function(*args)\r\n        # return outputs\r\n\r\n        #\r\n        # Lie to torch that we have no None items, to avoid the assert\r\n        #\r\n        result = []\r\n        for o in outputs:\r\n            if o is None:\r\n                o = torch.zeros(0).cuda()\r\n            result.append(o)\r\n\r\n        return tuple(result)\r\n\r\n    @staticmethod\r\n    def backward(ctx, *args):\r\n        if not torch.autograd._is_checkpoint_valid():\r\n            raise RuntimeError(\"Checkpointing is not compatible with .grad(), please use .backward() if possible\")\r\n        inputs = ctx.saved_tensors\r\n        rng_devices = []\r\n        if ctx.preserve_rng_state and ctx.had_cuda_in_fwd:\r\n            rng_devices = ctx.fwd_gpu_devices\r\n        with torch.random.fork_rng(devices=rng_devices, enabled=ctx.preserve_rng_state):\r\n            if ctx.preserve_rng_state:\r\n                torch.set_rng_state(ctx.fwd_cpu_state)\r\n                if ctx.had_cuda_in_fwd:\r\n                    set_device_states(ctx.fwd_gpu_devices, ctx.fwd_gpu_states)\r\n            detached_inputs = detach_variable(inputs)\r\n            with torch.enable_grad():\r\n                outputs = ctx.run_function(*detached_inputs)\r\n\r\n        if isinstance(outputs, torch.Tensor):\r\n            outputs = (outputs,)\r\n\r\n        #\r\n        # Skip None items and tensors whose requires_grad is False when doing backward\r\n        #\r\n        backward_outputs = []\r\n        backward_args = []\r\n        for o, a in zip(outputs, args):\r\n            if o is not None and o.requires_grad:\r\n                backward_outputs.append(o)\r\n                backward_args.append(a)\r\n        torch.autograd.backward(backward_outputs, backward_args)\r\n\r\n        # torch.autograd.backward(outputs, args)\r\n        grads = tuple(inp.grad if isinstance(inp, torch.Tensor) else inp\r\n                      for inp in detached_inputs)\r\n        return (None, None) + grads\r\n```\r\n\r\n### 2. checkpoint.checkpoint()\r\n```python\r\ndef checkpoint(function, *args, **kwargs):\r\n    preserve = kwargs.pop('preserve_rng_state', True)\r\n    if kwargs:\r\n        raise ValueError(\"Unexpected keyword arguments: \" + \",\".join(arg for arg in kwargs))\r\n\r\n    outputs = CheckpointFunction.apply(function, preserve, *args)\r\n\r\n    #\r\n    # Restore None items in the result\r\n    #\r\n    result = []\r\n    for o in outputs:\r\n        if len(o) == 0:\r\n            o = None\r\n        result.append(o)\r\n\r\n    return tuple(result)\r\n```\r\n\r\n### 3. 
modeling_t5.T5Stack.forward(), just the common way\r\n```python\r\n        if getattr(self.config, \"gradient_checkpointing\", False):\r\n\r\n            def create_custom_forward(module):\r\n                def custom_forward(*inputs):\r\n                    return tuple(module(*inputs, use_cache, output_attentions))\r\n\r\n                return custom_forward\r\n\r\n            layer_outputs = checkpoint(\r\n                create_custom_forward(layer_module),\r\n                hidden_states,\r\n                extended_attention_mask,\r\n                position_bias,\r\n                encoder_hidden_states,\r\n                encoder_extended_attention_mask,\r\n                encoder_decoder_position_bias,\r\n                head_mask[i],\r\n                past_key_value,\r\n            )\r\n\r\n        else:\r\n            layer_outputs = layer_module(\r\n                hidden_states,\r\n                attention_mask=extended_attention_mask,\r\n                position_bias=position_bias,\r\n                encoder_hidden_states=encoder_hidden_states,\r\n                encoder_attention_mask=encoder_extended_attention_mask,\r\n                encoder_decoder_position_bias=encoder_decoder_position_bias,\r\n                head_mask=head_mask[i],\r\n                past_key_value=past_key_value,\r\n                use_cache=use_cache,\r\n                output_attentions=output_attentions,\r\n            )\r\n```\r\n\r\n", "Hey @xFinal, \r\n\r\nYour 3rd approach is definitely the one we'd be super happy to integrate into Transformers. Thanks a million for your contribution already. If anyone in the community wants to give it a shot and add @xFinal's 3rd proposed solution to `modeling_t5.py`, that would be awesome :-)", "Hi @patrickvonplaten,\r\n\r\nGlad to hear it's helpful! But I have two worries about the integration:\r\n1. It's not tested with a full training run yet.\r\n2. The approach now is to modify the **torch.utils.checkpoint** file, which is a part of PyTorch, so it is maybe not suitable for integration. Maybe there is a more elegant way, like adjusting T5 itself?\r\n", "Hi @xFinal,\r\nI tried your solution and got the following error:\r\n\r\n`TypeError('CheckpointFunctionBackward.forward: expected Tensor or tuple of Tensor (got tuple) for return value 1') \r\n> /share/home/dwaydwaydway/t5/src/transformers/src/transformers/models/t5/modified_gradient_ckpt.py(124)checkpoint() \r\n 123 \r\n--> 124 outputs = CheckpointFunction.apply(function, preserve, *args) \r\n 125 `\r\n\r\nMay I ask which PyTorch version you used?", "@dwaydwaydway,\r\nThe version is 1.7.1.\r\nMake sure to return a tuple type in CheckpointFunction.forward().", "This issue has been stale for 1 month.", "Inspired by @xFinal's solution, I implemented [another workaround](https://github.com/veritable-tech/transformers/commit/cdbc6d52ff07fe41c9585fdae017279ea1a4cf6b) that doesn't require modifying the `Checkpoint` class (by returning a dummy Tensor instead of None in `T5Block.forward`).\r\n\r\nIt seems to work, but my tests might not be comprehensive enough.", "Hey @ceshine - do you mind opening a PR for it? :-)", "> Hey @ceshine - do you mind opening a PR for it? :-)\r\n\r\nNot at all. I'll open a PR after a bit more polishing.", "@ceshine that's great! :)" ]
1,597
1,619
1,619
CONTRIBUTOR
null
# 🚀 Feature request Currently, only BERT supports gradient checkpointing, which allows the model to be fine-tuned on GPUs with small memory. It would be great to make T5 support gradient checkpointing as well. Code: https://github.com/huggingface/transformers/blob/0735def8e1200ed45a2c33a075bc1595b12ef56a/src/transformers/modeling_bert.py#L461 ## Motivation T5 has very big models with 3B and 11B parameters, which makes it impossible to fine-tune them on most GPUs. Gradient checkpointing would allow these huge models to be fine-tuned on GPUs. This would lead to much better results on downstream tasks using in-house GPUs, without the need to fine-tune them on TPUs. ## Your contribution If I am not mistaken, all that needs to be changed is the following block: https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_t5.py#L752 ``` for i, (layer_module, past_key_value_state) in enumerate(zip(self.block, past_key_value_states)): if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) if getattr(self.config, "gradient_checkpointing", False): def create_custom_forward(module): def custom_forward(*inputs): return module(*inputs, output_attentions) return custom_forward layer_outputs = torch.utils.checkpoint.checkpoint( create_custom_forward(layer_module), hidden_states, extended_attention_mask, position_bias, encoder_hidden_states, encoder_extended_attention_mask, encoder_decoder_position_bias, head_mask[i], past_key_value_state, use_cache, output_attentions, ) else: layer_outputs = layer_module( hidden_states, attention_mask=extended_attention_mask, position_bias=position_bias, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_extended_attention_mask, encoder_decoder_position_bias=encoder_decoder_position_bias, head_mask=head_mask[i], past_key_value_state=past_key_value_state, use_cache=use_cache, output_attentions=output_attentions, ) # layer_outputs is a tuple with: # hidden-states, key-value-states, (self-attention weights), (self-attention position bias), (cross-attention weights), (cross-attention position bias) hidden_states, present_key_value_state = layer_outputs[:2] if i == 0: # We share the position biases between the layers - the first layer stores them # layer_outputs = hidden-states, key-value-states (self-attention weights), (self-attention position bias), (cross-attention weights), (cross-attention position bias) position_bias = layer_outputs[3 if output_attentions else 2] if self.is_decoder and encoder_hidden_states is not None: encoder_decoder_position_bias = layer_outputs[5 if output_attentions else 3] # append next layer key value states if use_cache: present_key_value_states = present_key_value_states + (present_key_value_state,) if output_attentions: all_attentions = all_attentions + (layer_outputs[2],) # We keep only self-attention weights for now ``` @patrickvonplaten thanks in advance for looking into it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6564/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6564/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6563
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6563/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6563/comments
https://api.github.com/repos/huggingface/transformers/issues/6563/events
https://github.com/huggingface/transformers/issues/6563
680,781,148
MDU6SXNzdWU2ODA3ODExNDg=
6,563
Pretokenized text handling mistake in tokenization_utils.py file
{ "login": "syuoni", "id": 21126786, "node_id": "MDQ6VXNlcjIxMTI2Nzg2", "avatar_url": "https://avatars.githubusercontent.com/u/21126786?v=4", "gravatar_id": "", "url": "https://api.github.com/users/syuoni", "html_url": "https://github.com/syuoni", "followers_url": "https://api.github.com/users/syuoni/followers", "following_url": "https://api.github.com/users/syuoni/following{/other_user}", "gists_url": "https://api.github.com/users/syuoni/gists{/gist_id}", "starred_url": "https://api.github.com/users/syuoni/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/syuoni/subscriptions", "organizations_url": "https://api.github.com/users/syuoni/orgs", "repos_url": "https://api.github.com/users/syuoni/repos", "events_url": "https://api.github.com/users/syuoni/events{/privacy}", "received_events_url": "https://api.github.com/users/syuoni/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,597
1,603
1,603
NONE
null
Hi, I'm wondering whether this is a mistake: in the `tokenization_utils.py` file, pretokenized text will be further tokenized, while raw text will not. The code is as follows: ``` if is_pretokenized: tokens = list(itertools.chain(*(self.tokenize(t, is_pretokenized=True, **kwargs) for t in text))) return self.convert_tokens_to_ids(tokens) else: return self.convert_tokens_to_ids(text) ```
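For illustration, a minimal sketch of the asymmetry described above, assuming a transformers 3.x tokenizer where the flag is still called `is_pretokenized` (it was later renamed); the outputs in the comments are what one would expect, not verified here:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
words = ["huggingface", "library"]

# is_pretokenized=True: each word is run through tokenize() again,
# so out-of-vocab words are split into subwords.
ids = tokenizer.encode(words, is_pretokenized=True, add_special_tokens=False)
print(tokenizer.convert_ids_to_tokens(ids))  # e.g. ['hugging', '##face', 'library']

# Without the flag, the same strings are treated as finished tokens and
# simply looked up, so anything not in the vocab maps to [UNK].
ids = tokenizer.convert_tokens_to_ids(words)
print(tokenizer.convert_ids_to_tokens(ids))  # e.g. ['[UNK]', 'library']
```

Read this way, the quoted branch looks intentional: the `is_pretokenized` path receives whole words, not finished tokens.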
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6563/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6563/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6562
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6562/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6562/comments
https://api.github.com/repos/huggingface/transformers/issues/6562/events
https://github.com/huggingface/transformers/issues/6562
680,761,419
MDU6SXNzdWU2ODA3NjE0MTk=
6,562
[Urgent] Word embedding initialization documentation and code might mismatch
{ "login": "guoxuxu", "id": 29363464, "node_id": "MDQ6VXNlcjI5MzYzNDY0", "avatar_url": "https://avatars.githubusercontent.com/u/29363464?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guoxuxu", "html_url": "https://github.com/guoxuxu", "followers_url": "https://api.github.com/users/guoxuxu/followers", "following_url": "https://api.github.com/users/guoxuxu/following{/other_user}", "gists_url": "https://api.github.com/users/guoxuxu/gists{/gist_id}", "starred_url": "https://api.github.com/users/guoxuxu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guoxuxu/subscriptions", "organizations_url": "https://api.github.com/users/guoxuxu/orgs", "repos_url": "https://api.github.com/users/guoxuxu/repos", "events_url": "https://api.github.com/users/guoxuxu/events{/privacy}", "received_events_url": "https://api.github.com/users/guoxuxu/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "the new embeddings are initialized with N(0, 0.02) by default\r\n\r\nin `_get_resized_embeddings`\r\nhttps://github.com/huggingface/transformers/blob/9c2b2db2cdf0af968aae58d6075b6654224fb760/src/transformers/modeling_utils.py#L650-L651\r\n\r\ncalling `_init_weights`\r\nhttps://github.com/huggingface/transformers/blob/9c2b2db2cdf0af968aae58d6075b6654224fb760/src/transformers/modeling_bert.py#L592-L597", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,597
1,608
1,608
NONE
null
## Environment info https://huggingface.co/transformers/main_classes/model.html ```resize_token_embeddings```: this documentation says it returns a ```torch.nn.Embedding```. The source code also uses ```nn.Embedding``` (https://huggingface.co/transformers/_modules/transformers/modeling_utils.html#PreTrainedModel.resize_token_embeddings). But I checked the resized ```embedding.weight```: the added embedding weights have a std() of about 0.01 ~ 0.02 and a mean around 0, while PyTorch's ```nn.Embedding``` is initialized from **N(0, 1)** (https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html). Is there a gap between the documentation and the implementation? May I know whether ```resize_token_embeddings``` initializes weights from ```uniform(-0.05, 0.05)``` or some other distribution that might not be N(0, 1)? The source code really does use ```nn.Embedding```... - `transformers` version: 2.5.1 - Platform: linux - Python version: 3.7.4 - PyTorch version (GPU?): 1.4.0 - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [x] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') tokenizer.add_tokens('<MEME>') bert_model = BertModel.from_pretrained("bert-base-uncased") bert_model.resize_token_embeddings(len(tokenizer)) print(bert_model.embeddings.word_embeddings.weight[-1].std()) ``` ## Expected behavior
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6562/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6562/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6561
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6561/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6561/comments
https://api.github.com/repos/huggingface/transformers/issues/6561/events
https://github.com/huggingface/transformers/issues/6561
680,761,019
MDU6SXNzdWU2ODA3NjEwMTk=
6,561
Create my own language model
{ "login": "dhimasyoga16", "id": 43445454, "node_id": "MDQ6VXNlcjQzNDQ1NDU0", "avatar_url": "https://avatars.githubusercontent.com/u/43445454?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhimasyoga16", "html_url": "https://github.com/dhimasyoga16", "followers_url": "https://api.github.com/users/dhimasyoga16/followers", "following_url": "https://api.github.com/users/dhimasyoga16/following{/other_user}", "gists_url": "https://api.github.com/users/dhimasyoga16/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhimasyoga16/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhimasyoga16/subscriptions", "organizations_url": "https://api.github.com/users/dhimasyoga16/orgs", "repos_url": "https://api.github.com/users/dhimasyoga16/repos", "events_url": "https://api.github.com/users/dhimasyoga16/events{/privacy}", "received_events_url": "https://api.github.com/users/dhimasyoga16/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "For open-ended questions like this you should try https://discuss.huggingface.co\r\n\r\nDid you read https://huggingface.co/blog/how-to-train?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,597
1,603
1,603
NONE
null
I'm new to NLP and I want to give BERT a try. I have a Wikipedia corpus (in Indonesian, of course, and in .txt format) and want to train it with bert multilingual cased. For further use, I expect that BERT can "adapt" well to Indonesian and handle a specific task, namely text similarity, or, if possible, do automated scoring based on the 2 texts given. Can I use [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) to fine-tune and create my own language model? If it's possible, then what are the exact steps to achieve this? Thank you in advance.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6561/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6561/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6560
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6560/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6560/comments
https://api.github.com/repos/huggingface/transformers/issues/6560/events
https://github.com/huggingface/transformers/issues/6560
680,756,460
MDU6SXNzdWU2ODA3NTY0NjA=
6,560
Huggingface create_optimizer method not working
{ "login": "brand17", "id": 36546021, "node_id": "MDQ6VXNlcjM2NTQ2MDIx", "avatar_url": "https://avatars.githubusercontent.com/u/36546021?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brand17", "html_url": "https://github.com/brand17", "followers_url": "https://api.github.com/users/brand17/followers", "following_url": "https://api.github.com/users/brand17/following{/other_user}", "gists_url": "https://api.github.com/users/brand17/gists{/gist_id}", "starred_url": "https://api.github.com/users/brand17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brand17/subscriptions", "organizations_url": "https://api.github.com/users/brand17/orgs", "repos_url": "https://api.github.com/users/brand17/repos", "events_url": "https://api.github.com/users/brand17/events{/privacy}", "received_events_url": "https://api.github.com/users/brand17/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It seems that keras is passing `experimental_aggregate_gradients` to `apply_gradients`, but the *transformers* TF2 optimizer does not have this argument (see https://github.com/huggingface/transformers/blob/master/src/transformers/optimization_tf.py#L224).\r\n\r\nOne workaround right now is to set `optimizer._HAS_AGGREGATE_GRAD = False`, which prevents keras from passing this argument.", "Thanks for the analysis @volker42maru. @jplu when you're back from vacation, we should fix this optimizer to accept this argument.", "Hello!\r\n\r\nI was aware of this, and this was on purpose to make the trainer compliant with all the TF 2.X versions. Now that the trainer is fixed to v 2.2 min, I will modify accordingly the method. Thanks @volker42maru to raise this and make me remember I had to update this.", "The PR #6717 should fix the problem." ]
1,597
1,598
1,598
NONE
null
## Environment info - `transformers` version: 3.0.2 - Platform: Windows-10-10.0.18362-SP0 - Python version: 3.6.6 - PyTorch version (GPU?): 1.5.0+cpu (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @sgugger @jplu ## Information Model I am using (Bert, XLNet ...): Roberta The problem arises when using: * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Run this code: ``` import tensorflow as tf from transformers import RobertaConfig, TFRobertaForMaskedLM, create_optimizer config = RobertaConfig() optimizer,lr = create_optimizer(1e-4,1000000,10000,0.1,1e-6,0.01) training_loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) model = TFRobertaForMaskedLM(config) model.compile(optimizer=optimizer, loss=training_loss) input = tf.random.uniform(shape=[1,25], maxval=100, dtype=tf.int32) hist = model.fit(input, input, epochs=1, steps_per_epoch=1,verbose=0) ``` 2. I am getting an error: > TypeError: apply_gradients() got an unexpected keyword argument 'experimental_aggregate_gradients' ## Expected behavior optimizer should be created
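A minimal sketch of the workaround mentioned in the comments above; note that `_HAS_AGGREGATE_GRAD` is an internal Keras optimizer attribute, so this is a stopgap rather than a supported API:

```python
import tensorflow as tf
from transformers import RobertaConfig, TFRobertaForMaskedLM, create_optimizer

config = RobertaConfig()
optimizer, lr = create_optimizer(1e-4, 1000000, 10000, 0.1, 1e-6, 0.01)

# Internal flag: tells Keras not to pass experimental_aggregate_gradients
# to apply_gradients(), which this optimizer did not accept at the time.
optimizer._HAS_AGGREGATE_GRAD = False

model = TFRobertaForMaskedLM(config)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=optimizer, loss=loss)
```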
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6560/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6560/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6559
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6559/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6559/comments
https://api.github.com/repos/huggingface/transformers/issues/6559/events
https://github.com/huggingface/transformers/issues/6559
680,737,640
MDU6SXNzdWU2ODA3Mzc2NDA=
6,559
Finetuning GPT2 produces IndexError: index out of range in self error
{ "login": "aclifton314", "id": 53267795, "node_id": "MDQ6VXNlcjUzMjY3Nzk1", "avatar_url": "https://avatars.githubusercontent.com/u/53267795?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aclifton314", "html_url": "https://github.com/aclifton314", "followers_url": "https://api.github.com/users/aclifton314/followers", "following_url": "https://api.github.com/users/aclifton314/following{/other_user}", "gists_url": "https://api.github.com/users/aclifton314/gists{/gist_id}", "starred_url": "https://api.github.com/users/aclifton314/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aclifton314/subscriptions", "organizations_url": "https://api.github.com/users/aclifton314/orgs", "repos_url": "https://api.github.com/users/aclifton314/repos", "events_url": "https://api.github.com/users/aclifton314/events{/privacy}", "received_events_url": "https://api.github.com/users/aclifton314/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "looks similar to #6192", "Hey @aclifton314, \r\n\r\nCould you post a complete code example here so that we can reproduce the error?", "@patil-suraj thanks for your reply. I checked the vocab and input embed size and they are the same:\r\n```python\r\nmodel.transformer.wte.weight.shape[0] = 50257\r\nlen(model.tokenizer) = 50257\r\n```\r\nThe OP in the issue you linked to figured out that, for him, the issue was coming from the cross entropy loss:\r\n\r\n> I figured it out, it was due to the change in ignore_index=-100 instead of -1 in the cross entropy loss which was causing the issue. I'll close this.\r\n\r\nFor me, I calculate a loss with respect to an n-grams model I made so I'm not entirely sure that is the source of the problem. What's more, doing a test on the training with a dataset that has 1024 examples completes, while using a dataset with 5426 examples produces the `IndexError: index out of range in self` error.", "@patrickvonplaten sure, let me see what I can put together.", "@patrickvonplaten I'm having problems coming up with some sample code that reproduces the problem exactly. Since I am generating text during the training process, my hunch is that I generate a sequence that is longer than what GPT2 allows. It looks like another user posted some code that might serve as a sample to reproduce the error here: https://github.com/huggingface/transformers/issues/6599", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,597
1,604
1,604
NONE
null
## System Info Pop!_OS 20.04 Pytorch: 1.5.1 Transformers: 3.0.2 Tokenizers: 0.8.1rc1 Python: 3.7.6 Pretrained Model: GPT2 Pretrained Tokenizer: GPT2 ## Question I am finetuning the pretrained GPT2 model on my dataset, using a custom defined loss function. I am getting this error below, but don't really have an idea as to what the issue might be. Here is the error: ```python Traceback (most recent call last): File "run_finetune_gpt2.py", line 143, in <module> main() File "run_finetune_gpt2.py", line 130, in main trainer.train() File "/usr/local/anaconda3/lib/python3.6/site-packages/transformers/trainer.py", line 499, in train tr_loss += self._training_step(model, inputs, optimizer) File "/usr/local/anaconda3/lib/python3.6/site-packages/transformers/trainer.py", line 622, in _training_step outputs = model(**inputs) File "/usr/local/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/data/ric-2020/textgen/finetune_gpt2.py", line 90, in forward top_p=0.95 File "/usr/local/anaconda3/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context return func(*args, **kwargs) File "/usr/local/anaconda3/lib/python3.6/site-packages/transformers/generation_utils.py", line 483, in generate model_specific_kwargs=model_specific_kwargs File "/usr/local/anaconda3/lib/python3.6/site-packages/transformers/generation_utils.py", line 524, in _generate_no_beam_search outputs = self.generate_text_while_finetuning(**model_inputs) File "/data/ric-2020/textgen/finetune_gpt2.py", line 44, in generate_text_while_finetuning output_hidden_states=output_hidden_states, File "/usr/local/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/anaconda3/lib/python3.6/site-packages/transformers/modeling_gpt2.py", line 470, in forward position_embeds = self.wpe(position_ids) File "/usr/local/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/anaconda3/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 114, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/usr/local/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 1724, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) IndexError: index out of range in self ``` I save many of the inputs that go into the training process: ```python TRAINING ARGS output_dir: models/textgen/out overwrite_output_dir: False do_train: True do_eval: False do_predict: False evaluate_during_training: False per_device_train_batch_size: 8 per_device_eval_batch_size: 8 per_gpu_train_batch_size: None per_gpu_eval_batch_size: None gradient_accumulation_steps: 1 learning_rate: 5e-05 weight_decay: 0.0 adam_epsilon: 1e-08 max_grad_norm: 1.0 num_train_epochs: 3.0 max_steps: -1 warmup_steps: 0 logging_dir: models/textgen/logs logging_first_step: False logging_steps: 500 save_steps: 500 save_total_limit: None no_cuda: False seed: 42 fp16: False fp16_opt_level: O1 local_rank: -1 tpu_num_cores: None tpu_metrics_debug: False debug: False dataloader_drop_last: False eval_steps: 1000 past_index: -1 --------------------------------------------- MODEL ARGS activation_function: gelu_new architectures: ['GPT2LMHeadModel'] attn_pdrop: 0.1 bos_token_id: 50256 embd_pdrop: 0.1 eos_token_id: 50256 initializer_range: 0.02 layer_norm_epsilon: 1e-05 
model_type: gpt2 n_ctx: 1024 n_embd: 768 n_head: 12 n_layer: 12 n_positions: 1024 resid_pdrop: 0.1 summary_activation: None summary_first_dropout: 0.1 summary_proj_to_labels: True summary_type: cls_index summary_use_proj: True task_specific_params: {'text-generation': {'do_sample': True, 'max_length': 50}} vocab_size: 50257 --------------------------------------------- TRAINER ARGS args: TrainingArguments( output_dir='models/textgen/out', overwrite_output_dir=False, do_train='True', do_eval=False, do_predict=False, evaluate_during_training=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, warmup_steps=0, logging_dir='models/textgen/logs', logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, past_index=-1) data_collator: <function sd_data_collator at 0x7ffaba8f8e18> train_dataset: <custom_dataset.SDAbstractsDataset object at 0x7ffa18c8c400> eval_dataset: None compute_metrics: None prediction_loss_only: False optimizers: None tb_writer: <torch.utils.tensorboard.writer.SummaryWriter object at 0x7ff9f79e45c0> ``` The interesting thing is that the dataset that generates this error consists of 5426 examples. However, if I use another dataset that has 1024 examples, the training completes. So I'm not really sure what's going on. I'm happy to provide any other information. Thanks in advance for the help!
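Given that the traceback ends in the position embedding lookup (`position_embeds = self.wpe(position_ids)`), and the reporter's own hunch in the comments is that generation exceeds what GPT-2 allows, one plausible guard is to cap the in-training `generate` call. A minimal sketch with hypothetical variable names, since the custom training script is not shown:

```python
# Clamp generation length so position ids never reach n_positions;
# indices >= 1024 are exactly what triggers the IndexError above.
max_positions = model.config.n_positions  # 1024 for GPT-2
prompt_length = input_ids.shape[-1]
generated = model.generate(
    input_ids,
    do_sample=True,
    top_p=0.95,
    max_length=min(prompt_length + 50, max_positions),
)
```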
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6559/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6559/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6558
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6558/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6558/comments
https://api.github.com/repos/huggingface/transformers/issues/6558/events
https://github.com/huggingface/transformers/issues/6558
680,734,883
MDU6SXNzdWU2ODA3MzQ4ODM=
6,558
Can we resize embedding with embedding weighted initialized differently??
{ "login": "guoxuxu", "id": 29363464, "node_id": "MDQ6VXNlcjI5MzYzNDY0", "avatar_url": "https://avatars.githubusercontent.com/u/29363464?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guoxuxu", "html_url": "https://github.com/guoxuxu", "followers_url": "https://api.github.com/users/guoxuxu/followers", "following_url": "https://api.github.com/users/guoxuxu/following{/other_user}", "gists_url": "https://api.github.com/users/guoxuxu/gists{/gist_id}", "starred_url": "https://api.github.com/users/guoxuxu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guoxuxu/subscriptions", "organizations_url": "https://api.github.com/users/guoxuxu/orgs", "repos_url": "https://api.github.com/users/guoxuxu/repos", "events_url": "https://api.github.com/users/guoxuxu/events{/privacy}", "received_events_url": "https://api.github.com/users/guoxuxu/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "One way is to use `config.initializer_range`\r\n\r\n#6562", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,597
1,608
1,608
NONE
null
# ❓ Questions & Help When we add new tokens, this method automatically adds embedding using torch nn.Embedding. https://huggingface.co/transformers/_modules/transformers/modeling_utils.html#PreTrainedModel.resize_token_embeddings The documentation says the resized embeddings are nn. Embedding, which said they by default initialize weights from N(0, 1) (https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html). But I have checked that the resized embedding weights are almost N(0, 0.01) or N(0, 0.02)? Can I check **the true distribution of resized embedding weights** ? ## If I want the embedding weights initialized differently, how can I achieve that efficiently? <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6558/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6558/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6557
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6557/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6557/comments
https://api.github.com/repos/huggingface/transformers/issues/6557/events
https://github.com/huggingface/transformers/pull/6557
680,728,175
MDExOlB1bGxSZXF1ZXN0NDY5MjMyODc2
6,557
Create README.md
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6557?src=pr&el=h1) Report\n> Merging [#6557](https://codecov.io/gh/huggingface/transformers/pull/6557?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/12d7624199e727f37bef7f53d527df7fabdb1fd6&el=desc) will **increase** coverage by `1.14%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6557/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6557?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6557 +/- ##\n==========================================\n+ Coverage 79.18% 80.33% +1.14% \n==========================================\n Files 156 156 \n Lines 28129 28129 \n==========================================\n+ Hits 22275 22597 +322 \n+ Misses 5854 5532 -322 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6557?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.06% <0.00%> (-29.32%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.26% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.50%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+0.68%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.97%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (+3.79%)` | :arrow_up: |\n| ... 
and [6 more](https://codecov.io/gh/huggingface/transformers/pull/6557/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6557?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6557?src=pr&el=footer). Last update [12d7624...ce57708](https://codecov.io/gh/huggingface/transformers/pull/6557?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Cool dataset from AllenAI" ]
1,597
1,597
1,597
CONTRIBUTOR
null
Good morning, @julien-c ! Have a nice day! =)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6557/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6557/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6557", "html_url": "https://github.com/huggingface/transformers/pull/6557", "diff_url": "https://github.com/huggingface/transformers/pull/6557.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6557.patch", "merged_at": 1597768934000 }
https://api.github.com/repos/huggingface/transformers/issues/6556
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6556/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6556/comments
https://api.github.com/repos/huggingface/transformers/issues/6556/events
https://github.com/huggingface/transformers/pull/6556
680,724,178
MDExOlB1bGxSZXF1ZXN0NDY5MjI5NjA1
6,556
Create README.md
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6556?src=pr&el=h1) Report\n> Merging [#6556](https://codecov.io/gh/huggingface/transformers/pull/6556?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/12d7624199e727f37bef7f53d527df7fabdb1fd6&el=desc) will **decrease** coverage by `0.77%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6556/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6556?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6556 +/- ##\n==========================================\n- Coverage 79.18% 78.41% -0.78% \n==========================================\n Files 156 156 \n Lines 28129 28129 \n==========================================\n- Hits 22275 22057 -218 \n- Misses 5854 6072 +218 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6556?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.63% <0.00%> (-54.32%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-29.33%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.36% <0.00%> (-14.37%)` | :arrow_down: |\n| [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `82.71% <0.00%> (-2.47%)` | :arrow_down: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `96.19% <0.00%> (-1.64%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.11% <0.00%> (-0.84%)` | :arrow_down: |\n| ... 
and [15 more](https://codecov.io/gh/huggingface/transformers/pull/6556/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6556?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6556?src=pr&el=footer). Last update [12d7624...6a0e0c9](https://codecov.io/gh/huggingface/transformers/pull/6556?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,597
1,597
1,597
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6556/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6556/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6556", "html_url": "https://github.com/huggingface/transformers/pull/6556", "diff_url": "https://github.com/huggingface/transformers/pull/6556.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6556.patch", "merged_at": 1597769000000 }
https://api.github.com/repos/huggingface/transformers/issues/6555
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6555/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6555/comments
https://api.github.com/repos/huggingface/transformers/issues/6555/events
https://github.com/huggingface/transformers/pull/6555
680,712,229
MDExOlB1bGxSZXF1ZXN0NDY5MjE5Njg5
6,555
Small typo fixes for model card: electra-base-german-uncased
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6555?src=pr&el=h1) Report\n> Merging [#6555](https://codecov.io/gh/huggingface/transformers/pull/6555?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/12d7624199e727f37bef7f53d527df7fabdb1fd6&el=desc) will **increase** coverage by `0.81%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6555/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6555?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6555 +/- ##\n==========================================\n+ Coverage 79.18% 79.99% +0.81% \n==========================================\n Files 156 156 \n Lines 28129 28129 \n==========================================\n+ Hits 22275 22503 +228 \n+ Misses 5854 5626 -228 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6555?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6555/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6555/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+0.68%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6555/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6555/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (+3.79%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6555/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+8.92%)` | :arrow_up: |\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6555/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `97.22% <0.00%> (+9.72%)` | :arrow_up: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6555/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `99.16% <0.00%> (+32.50%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6555/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <0.00%> (+66.66%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6555?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6555?src=pr&el=footer). Last update [12d7624...b2a7699](https://codecov.io/gh/huggingface/transformers/pull/6555?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,597
1,597
1,597
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6555/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6555/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6555", "html_url": "https://github.com/huggingface/transformers/pull/6555", "diff_url": "https://github.com/huggingface/transformers/pull/6555.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6555.patch", "merged_at": 1597753313000 }
https://api.github.com/repos/huggingface/transformers/issues/6554
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6554/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6554/comments
https://api.github.com/repos/huggingface/transformers/issues/6554/events
https://github.com/huggingface/transformers/issues/6554
680,676,311
MDU6SXNzdWU2ODA2NzYzMTE=
6,554
The squad processor's multi-threading crashes the script / causes large models to reload every call
{ "login": "Avlyssna", "id": 7981517, "node_id": "MDQ6VXNlcjc5ODE1MTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7981517?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Avlyssna", "html_url": "https://github.com/Avlyssna", "followers_url": "https://api.github.com/users/Avlyssna/followers", "following_url": "https://api.github.com/users/Avlyssna/following{/other_user}", "gists_url": "https://api.github.com/users/Avlyssna/gists{/gist_id}", "starred_url": "https://api.github.com/users/Avlyssna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Avlyssna/subscriptions", "organizations_url": "https://api.github.com/users/Avlyssna/orgs", "repos_url": "https://api.github.com/users/Avlyssna/repos", "events_url": "https://api.github.com/users/Avlyssna/events{/privacy}", "received_events_url": "https://api.github.com/users/Avlyssna/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,597
1,603
1,603
NONE
null
## Environment info - `transformers` version: 3.0.2 - Platform: Windows-10-10.0.17763-SP0 - Python version: 3.6.8 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: NA ### Who can help @mfuntowicz ## Information When using the `question-answering` pipeline, my script is run multiple times due to the worker pool within `squad_convert_examples_to_features`. In combination with other large models that take minutes to load, this causes them to reload every time the pipeline is called - or to error outright if the code is not encapsulated. I've temporarily patched the `transformers\data\processors\squad.py` file on my end, forcing it to run in the current thread rather than multi-threading: ```python # with Pool(threads, initializer=squad_convert_example_to_features_init, initargs=(tokenizer,)) as p: # annotate_ = partial( # squad_convert_example_to_features, # max_seq_length=max_seq_length, # doc_stride=doc_stride, # max_query_length=max_query_length, # is_training=is_training, # ) # features = list( # tqdm( # p.imap(annotate_, examples, chunksize=32), # total=len(examples), # desc="convert squad examples to features", # disable=not tqdm_enabled, # ) # ) squad_convert_example_to_features_init(tokenizer) for example in examples: features.append(squad_convert_example_to_features( example, max_seq_length=max_seq_length, doc_stride=doc_stride, max_query_length=max_query_length, is_training=is_training )) ``` I'm wondering if there's a better solution here, though. Is this a bug? Or maybe an option could be added in to allow single-threaded processing of the `examples`? I may be doing something wrong on my end too; not sure. The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. For a simplified example, run the following python script from terminal: ```python #!/usr/bin/env python3 from transformers import pipeline print('The script has been imported.') nlp = pipeline('question-answering', framework='pt') print(nlp(question='Who walked on the moon?', context='Niel Armstrong walked on the moon.')) ``` ``` (environment) C:\Users\Dennis\Desktop\New folder>python runner.py The script has been imported. The script has been imported. 
Parent process:

Traceback (most recent call last):
  File "runner.py", line 6, in <module>
    print(nlp(question='Who walked on the moon?', context='Niel Armstrong walked on the moon.'))
  File "C:\Users\Dennis\Desktop\New folder\environment\lib\site-packages\transformers\pipelines.py", line 1264, in __call__
    for example in examples
  File "C:\Users\Dennis\Desktop\New folder\environment\lib\site-packages\transformers\pipelines.py", line 1264, in <listcomp>
    for example in examples
  File "C:\Users\Dennis\Desktop\New folder\environment\lib\site-packages\transformers\data\processors\squad.py", line 325, in squad_convert_examples_to_features
    with Pool(threads, initializer=squad_convert_example_to_features_init, initargs=(tokenizer,)) as p:
  File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\context.py", line 119, in Pool
    context=self.get_context())
  File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\pool.py", line 174, in __init__
    self._repopulate_pool()
  File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\pool.py", line 239, in _repopulate_pool
    w.start()
  File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe

Spawned child process:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\spawn.py", line 114, in _main
    prepare(preparation_data)
  File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\spawn.py", line 225, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
    run_name="__mp_main__")
  File "c:\users\dennis\appdata\local\programs\python\python36\lib\runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "c:\users\dennis\appdata\local\programs\python\python36\lib\runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "c:\users\dennis\appdata\local\programs\python\python36\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\Dennis\Desktop\New folder\runner.py", line 6, in <module>
    print(nlp(question='Who walked on the moon?', context='Niel Armstrong walked on the moon.'))
  File "C:\Users\Dennis\Desktop\New folder\environment\lib\site-packages\transformers\pipelines.py", line 1264, in __call__
    for example in examples
  File "C:\Users\Dennis\Desktop\New folder\environment\lib\site-packages\transformers\pipelines.py", line 1264, in <listcomp>
    for example in examples
  File "C:\Users\Dennis\Desktop\New folder\environment\lib\site-packages\transformers\data\processors\squad.py", line 325, in squad_convert_examples_to_features
    with Pool(threads, initializer=squad_convert_example_to_features_init, initargs=(tokenizer,)) as p:
  File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\context.py", line 119, in Pool
    context=self.get_context())
  File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\pool.py", line 174, in __init__
    self._repopulate_pool()
  File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\pool.py", line 239, in _repopulate_pool
    w.start()
  File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
    _check_not_importing_main()
  File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
    is not going to be frozen to produce an executable.''')
RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
```
## Expected behavior The script doesn't crash / reload the script every pipeline call.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6554/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6554/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6553
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6553/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6553/comments
https://api.github.com/repos/huggingface/transformers/issues/6553/events
https://github.com/huggingface/transformers/pull/6553
680,658,292
MDExOlB1bGxSZXF1ZXN0NDY5MTc1NDUz
6,553
fix incorrect codecov reports
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Great, thanks for diving into it @stas00!", "Alas, it solved only part of the problem - the main issue is still there https://github.com/huggingface/transformers/pull/6553 :(" ]
1,597
1,598
1,597
CONTRIBUTOR
null
As discussed at https://github.com/huggingface/transformers/issues/6317, codecov currently sends an invalid report when it fails to find a code coverage report for the base it checks against. When this happens it goes looking for the nearest commit hash that has a report, which often leads to a report that is not representative of the true impact of the proposed PR. This PR fixes it by adding: `require_base: yes` # don't report if there is no base coverage report. It also adds: `require_head: yes` # don't report if there is no head coverage report (this supposedly is already the default, but let's add it for clarity), and `require_changes: true` # only comment if there was a change in coverage, since there is no point reporting on doc changes: they don't make any difference to the coverage, and the 0% change comment just generates noise. These options are documented here: https://docs.codecov.io/docs/codecovyml-reference#comment
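For reference, a sketch of how these settings sit together in `codecov.yml`, following the codecov reference linked above (the surrounding keys in the repo's actual file may differ):

```yaml
# Sketch of the comment settings this PR adds to codecov.yml.
comment:
  require_base: yes      # don't report if there is no base coverage report
  require_head: yes      # don't report if there is no head coverage report
  require_changes: true  # only comment if there was a change in coverage
```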
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6553/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6553/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6553", "html_url": "https://github.com/huggingface/transformers/pull/6553", "diff_url": "https://github.com/huggingface/transformers/pull/6553.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6553.patch", "merged_at": 1597760474000 }
https://api.github.com/repos/huggingface/transformers/issues/6552
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6552/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6552/comments
https://api.github.com/repos/huggingface/transformers/issues/6552/events
https://github.com/huggingface/transformers/issues/6552
680,623,420
MDU6SXNzdWU2ODA2MjM0MjA=
6,552
Add model card for facebook/mbart-large-en-ro
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "```\r\n---\r\nlanguage:\r\n- en\r\n- ro\r\n---\r\n```\r\n\r\nno?", "I didn't know you can have two! Will inference api still populate the text box with \"My name is wolfgang\"?", "Yes it will use the first one to populate the widget if there are several", "Done: new config/moon-landing bug\r\n\r\n![image](https://user-images.githubusercontent.com/6045025/90800838-2c7c0400-e2e3-11ea-8d4d-40e0aeac0850.png)\r\n" ]
1,597
1,597
1,597
CONTRIBUTOR
null
Suggested content: ___ tags: - translation language: en ___ link to fairseq readme link to docs.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6552/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6552/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6551
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6551/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6551/comments
https://api.github.com/repos/huggingface/transformers/issues/6551/events
https://github.com/huggingface/transformers/issues/6551
680,611,236
MDU6SXNzdWU2ODA2MTEyMzY=
6,551
TFTrainer Example
{ "login": "alexorona", "id": 11825654, "node_id": "MDQ6VXNlcjExODI1NjU0", "avatar_url": "https://avatars.githubusercontent.com/u/11825654?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexorona", "html_url": "https://github.com/alexorona", "followers_url": "https://api.github.com/users/alexorona/followers", "following_url": "https://api.github.com/users/alexorona/following{/other_user}", "gists_url": "https://api.github.com/users/alexorona/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexorona/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexorona/subscriptions", "organizations_url": "https://api.github.com/users/alexorona/orgs", "repos_url": "https://api.github.com/users/alexorona/repos", "events_url": "https://api.github.com/users/alexorona/events{/privacy}", "received_events_url": "https://api.github.com/users/alexorona/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I am facing issue with :\r\n/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py in __init__(self, model, args, train_dataset, eval_dataset, compute_metrics, prediction_loss_only, tb_writer, optimizers)\r\n 87 self.tb_writer = tb_writer\r\n 88 else:\r\n---> 89 self.tb_writer = tf.summary.create_file_writer(self.args.logging_dir)\r\n 90 \r\n 91 if is_wandb_available():\r\n\r\nAttributeError: 'dict' object has no attribute 'logging_dir'\r\n\r\nOne good working example of TFTrainer would be very helpful. Thanks", "We're working on the examples and there should be one for every task soon (in PyTorch and TensorFlow). For your specific problem, I think it's missing a dictionary. There is a brand new tutorial from @joeddav on how to fine-tune a model on your custom dataset that should be helpful to you [here](https://huggingface.co/transformers/master/custom_datasets.html). Click on the TensorFlow button on the code examples to switch the code from PyTorch to TensorFlow, or on the open in colab button at the top where you can select the TensorFlow notebook that goes with the tutorial.", "Yes, you want to pass a tuple to `from_tensor_slices` where the first element is a dict of `kwarg:input` and the second is the labels. So in your case:\r\n\r\n```\r\ntrain_dataset = tf.data.Dataset.from_tensor_slices((\r\n {\"input_ids\": input_ids, \"attention_mask\": attention_mask, \"token_type_ids\": token_type_ids},\r\n labels # just a tensor, or a dict if multiple outputs as with question answering\r\n))\r\n```\r\n\r\nThe minibatches in the format of the inputs dict will by passed as kwargs to the model at each train step. The [tutorial](https://huggingface.co/transformers/master/custom_datasets.html) @sgugger recommended has some more examples.", "Thanks Joe and Sylvain!\r\n- Transformers 3.0.2\r\n- Windows\r\n- Tensorflow 2.3\r\n- Model: TFGPT2LMHeadModel \r\n\r\n@joeddav Hmmm... there might be an issue with parsing inputs for `TFGPT2LMHeadModel` or their might be problems with `_training_steps` (I remember reading that it was being deprecated or rewritten somewhere). I tried implementing the solution you indicated above, an extrapolation from the example that Sylvain linked to, and other variations, all with the same effect `ValueError: too many values to unpack (expected 2)` which triggers on this line in TFTrainer `for step, training_loss in enumerate(self._training_steps(train_ds, optimizer))`.\r\n\r\nTried:\r\n- `input_ids`, `attention_mask`, `token_type_ids` each as a list of lists\r\n- `input_ids`, `attention_mask`, `token_type_ids` each as a tf tensor\r\n- `input_ids`, `attention_mask`, `token_type_ids` each as list of numpy arrays\r\n- `input_ids`, `attention_mask`, `token_type_ids` each as a single numpy array\r\n- Where `train_encodings` is a dictionary of the above input types (e.g. `train_encodings['input_ids'] = input_ids`, neither of these worked when passed to `TFTrainer`:\r\n-- `train_dataset = tf.data.Dataset.from_tensor_slices(train_encodings)`\r\n-- `train_dataset = tf.data.Dataset.from_tensor_slices((train_encodings))`\r\n\r\nSince labels is not a recognized argument for `TFGPT2LMHeadModel`, presumably labels would be be just another key in `train_encodings` (e.g. `train_encodings['labels'] = labels)`.\r\n\r\n@sgugger I encountered an encoding error when I was testing the inputs from IMDb reviews example. 
Here's a potential replacement that worked for me:\r\n```\r\ndef read_imdb_split(split_dir):\r\n split_dir = Path(split_dir)\r\n texts = []\r\n labels = []\r\n for label_dir in [\"pos\", \"neg\"]:\r\n for text_file in (split_dir/label_dir).iterdir():\r\n f = open(text_file, encoding = 'utf-8')\r\n texts.append(f.read())\r\n f.close()\r\n labels.append(0 if label_dir is \"neg\" else 1)\r\n\r\n return texts, labels\r\n```\r\n\r\n", "@alexorona ahh, I believe this is an issue with TensorFlow LM-head models that we recently resolved – previously these models didn't take `labels` and didn't calculate the loss, so they didn't work with Trainer. Try building transformers from source and see if you still have the issue. (You can install from source by cloning the repo or just doing `pip install --upgrade git+https://github.com/huggingface/transformers.git`). Then you'll want to prepare your dataset so that the `labels` are the encoded `input_ids`:\r\n```\r\ntf.data.Dataset.from_tensor_slices((dict(train_encodings), train_encodings.input_ids))\r\n```\r\nIf `train_encodings` are of type `BatchEncoding`, I believe you'll have to explicitly cast them as a dict as I do above.\r\n\r\nWe should document this in `TFTrainer`.", "@joeddav Thanks! So I kind of got this to work, but could use some clarification on your last comment. After building from source, this will run until eval if inputs are already tf tensors:\r\n\r\n```\r\ninput_ids = tf.convert_to_tensor(input_ids)\r\nattention_mask = tf.convert_to_tensor(attention_mask)\r\ntoken_type_ids = tf.convert_to_tensor(token_types)\r\nlabels = tf.convert_to_tensor(labels)\r\n\r\ntrain_encodings = {\"input_ids\": input_ids, \r\n \"attention_mask\": attention_mask,\r\n \"token_type_ids\": token_type_ids}\r\n\r\ntrain_dataset = tf.data.Dataset.from_tensor_slices((train_encodings, labels))\r\n```\r\nI'm getting a warning that says `Converting sparse IndexedSlices to a dense Tensor of unknown shape` and an error that it can't find _prediction_loop -- `'TFTrainer' object has no attribute '_prediction_loop'` -- the latter of which is probably just a result of the changes to TFTrainer.\r\n\r\nI'm not sure how to interpret `train_encodings.input_ids`. Are you saying that we should make `train_encodings `an object with the labels set to input_ids? Labels are usually in the range `[-100, 0, ..., config.vocab_size] `with `-100` indicating its not part of the target. There's a lot of situations and setups where you want a token in the `input_ids`, but you don't want to calculate loss on it (for example when distinguishing between the target input and the history).", "> an error that it can't find _prediction_loop -- 'TFTrainer' object has no attribute '_prediction_loop' -- the latter of which is probably just a result of the changes to TFTrainer.\r\n\r\nYep, that's just a bug. `TFTrainer._prediction_step` is deprecated and it looks like we missed a reference to it. I think line 415 of `trainer_tf.py` just needs to be changed to call `self.prediction_step`.\r\n\r\n> Are you saying that we should make train_encodings an object with the labels set to input_ids?\r\n\r\nNo, sorry. You're right there are lots of situations where you would need something more complex, I was just using that as the most basic example of passing in labels for LM training. 
You just want the labels to be of the same shape as `input_ids` with the range exactly as you described.\r\n\r\n> I'm getting a warning that says Converting sparse IndexedSlices to a dense Tensor of unknown shape\r\n\r\nWhat format are your `labels` in? I'm not sure why they'd be sparse. TFTrainer will calculate the loss by calling `model(batch_encodings, labels=batch_labels)` which returns the loss as the first element. Try just passing one instance to the model and see if you get any errors and check that the returned loss looks reasonable (i.e. not NaN or something). In your case, that'd look like,\r\n```python\r\noutputs = model({key: val[:1] for key, val in train_encodings.items()}, labels=labels[:1])\r\nloss = outputs[0]\r\n```", "I got this working, thanks for the help.\r\n```\r\ndef example_to_features(input_ids,attention_masks,token_type_ids,y):\r\n return {\"input_ids\": input_ids,\r\n \"attention_mask\": attention_masks,\r\n \"token_type_ids\": token_type_ids},y\r\ntrain_ds = tf.data.Dataset.from_tensor_slices((input_ids_train,attention_masks_train,token_ids_train,label_ids_train)).map(example_to_features)\r\nand use train_ds for training\r\ntrainer = TFTrainer(\r\n model=model, \r\n args=training_args, \r\n train_dataset=train_ds, \r\n eval_dataset=test_ds \r\n )\r\n```\r\nOne question, when I do trainer.train(), it's not displaying progress, but I see in logs it's training. Is there some verbose option I am missing?", "Yeah the TFTrainer is not using any progress bar. Will add them soonish (with an option to disable for people who prefer not to see them), like in the PyTorch Trainer.", "Thank you, Also if chose to train native Keras way:\r\n`optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)\r\nmodel.compile(optimizer=optimizer, loss=model.compute_loss) # can also use any keras loss fn\r\nmodel.fit(train_dataset.shuffle(1000).batch(16), epochs=3, batch_size=16)`\r\nwhy is model.train() missing? I thought without it it still be eval mode right?", "@astromad You can edit the `TFTrainer` file directly (or copy it from GitHub and add create your own variation, which is what I did). You can add a basic progress bar at about line 500:\r\n\r\n```\r\ntrain_iterator = tqdm(range(epochs_trained, int(epochs + 1)), desc=\"Epoch\")\r\n for epoch_iter in train_iterator:\r\n # Reset the past mems state at the beginning of each epoch if necessary.\r\n if self.args.past_index >= 0:\r\n self._past = None\r\n\r\n epoch_iterator = tqdm(train_ds, desc=\"Iteration\")\r\n for step, batch in enumerate(epoch_iterator):\r\n```\r\n\r\nAdditionally, there's a way to display training loss, but my progress is not that far. I built a custom variation of `Trainer` that does that, but haven't yet incorporated all the changes into `TFTrainer` because the structure is different. Here's my progress so far in introducing continuous display (note: it won't be accurate because there's a number I need to divide by):\r\n\r\n training_loss = self.train_loss.result() / ((step + 1) * self.total_train_batch_size)\r\n loss_update = 'Train Epoch ' + str(epochs_trained) + \" (loss \" + str(round(training_loss.numpy(), 3)) + \")\" \r\n epoch_iterator.set_description(loss_update)\r\n\r\n@joeddav Thanks again, Joe! Astromad's map function creates a `batch` inside of `TFTrainer` that is fed to `self.distributed_training_steps`. This is the same batch structure that results when you instead use `train_dataset = tf.data.Dataset.from_tensor_slices((train_encodings, labels))`, as outlined above. 
In both cases, what is fed to `self.distributed_training_steps` is a tuple containing: 1) a dictionary object with `input_ids`, `attention_mask` and `token_type_ids` as keys and tf tensors as values, and 2) tf tensor for `labels`. \r\n\r\nWhen testing model inputs outside of the context of `TFTrainer` like this:\r\n```\r\noutputs = model({key: val[:1] for key, val in train_encodings.items()}, labels=labels[:1])\r\nloss = outputs[0]\r\n```\r\nIt seems that the `labels` are not being registered correctly. Here are the outputs:\r\n- `output[0]` is `shape=(381,)` instead of `shape=(1,)`. `381` is the number of values in the `labels` tensor that do not equal `-100`, so it looks like this is a per token loss. \r\n- `output[1]` is `shape=(1, 511, 50257)` -- shouldn't this be `shape(1, 512, 50257)`?\r\n- `output[2]` is `shape=(2, 1, 16, 512, 64)`\r\n\r\nStrangely, inside of `TFTrainer` when I print out `training_loss = self.train_loss.result() / ((step + 1) * self.total_train_batch_size)`, it's correctly a `shape=(1,)` tensor.\r\n", "just wanna share if this is useful, to construct a prediction from arbitrary sentence this is what I am using:\r\n```\r\nsequence = \"my name is firstname lastname and my phone number is 111-222-3333\"\r\ntokens = tokenizer.tokenize(tokenizer.decode(tokenizer.encode(sequence)))\r\nprint('Tokens are',tokens)\r\ninputs = tokenizer.encode_plus(sequence, return_tensors=\"tf\")\r\nprint('Inputs are',inputs)\r\noutputs = model(inputs)\r\nprint('output shape',np.shape(outputs))\r\n**And the output**\r\nTokens are ['[CLS]', 'my', 'name', 'is', 'first', '##name', 'last', '##name', 'and', 'my', 'phone', 'number', 'is', '111', '##-', '##22', '##-', '##33', '##33', '[SEP]']\r\nInputs are {'input_ids': <tf.Tensor: shape=(1, 20), dtype=int32, numpy=\r\narray([[ 101, 2026, 2171, 2003, 2034, 18442, 2197, 18442, 1998,\r\n 2026, 3042, 2193, 2003, 11118, 29624, 19317, 29624, 22394,\r\n 22394, 102]], dtype=int32)>, 'token_type_ids': <tf.Tensor: shape=(1, 20), dtype=int32, numpy=\r\narray([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],\r\n dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(1, 20), dtype=int32, numpy=\r\narray([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]],\r\n dtype=int32)>}\r\noutput shape (1, 1, 20, 6)\r\n``` ", "@joeddav @astromad Very useful examples! It's training correctly using the methods outlined above. Just some kinks to work out. It also looks like the `model.generate` method does not currently support the use of `token_type_ids`. `temperature`, `top_k` and `top_p` do not seem to have any effect on outputs. The example provided in the documentation will not work. \r\n\r\nHere's an example of one that will work. It's a `gpt2-medium` model fine-tuned on Jane Austen's _Pride and Prejudice_:\r\n```\r\ntokenizer = tokenizer\r\ninput_sequence = \"Mr. Darcy was acting strangely, she thought to herself. What had happened?\"\r\ninputs = tokenizer.encode_plus(input_sequence , return_tensors=\"tf\")\r\noutputs = model.generate(inputs['input_ids'],\r\n attention_mask = inputs['attention_mask'],\r\n max_length = 200, \r\n repetition_penalty = 2.0, \r\n pad_token_id = pad_token_id,\r\n eos_token_id = eos_token_id)\r\n\r\nprint(tokenizer.decode(outputs.numpy().tolist()[0]))\r\n\r\n'Mr. Darcy was acting strangely, she thought to herself. What had happened? 
She looked at the clock \r\nand saw that it must be half-past ten; but as soon afterwards came a knock on her door from Miss \r\nLucas\\'s father, who said he would come in an hour or two with his daughter. \"Come here,\" cried \r\nElizabeth, when they entered into Mrs.—Lennox Hall ;—\"come now.\" Mr—Darcy followed them \r\nthrough their dining room towards another large hall where there were many other ladies of distinction \r\nseated round tables which formed one side by themselves before each table. The gentlemen sat down \r\nopposite him all together, while Jane stood nearby looking very pale for some reason unknown till then \r\nonly known among those present. When Lydia heard this news, however,,she immediately began to speak \r\naloud:\\n <|endoftext|>'\r\n```\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "\r\nI am currently attempting to train a TF model (using Keras) using a dataset pipeline constructed with tf.data where I use a tf.py_function that tokenizes batches of data _on the fly_.\r\n\r\n```python\r\ntrain_ds = tf.data.Dataset.from_tensor_slices((X_train, y_train)).batch(2)\r\ndef py_func(x):\r\n x = x.numpy() \r\n x = [i.decode(\"utf-8\") for i in x] \r\n d = tokenizer(x, truncation=True, padding=True)\r\n return list(d.values())\r\n\r\ndef ds_map_fn(x,y): \r\n flattened_output = tf.py_function(py_func, [x], [tf.int32, tf.int32])\r\n return {\"input_ids\": flattened_output[0], \"attention_mask\": flattened_output[1]},y\r\n\r\ntrain_ds = train_ds.map(ds_map_fn)\r\nfor x,y in train_ds.take(2):\r\n print(x)\r\n```\r\n\r\nWhen I sample from this pipeline, the result looks sensible \r\n\r\n```\r\n{'input_ids': <tf.Tensor: shape=(2, 20), dtype=int32, numpy=\r\narray([[ 101, 19443, 23698, 7710, 1195, 1274, 189, 1138, 1609,\r\n 184, 102, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0],\r\n [ 101, 5837, 8752, 4163, 4654, 26063, 5077, 1611, 11644,\r\n 22234, 1394, 2822, 5077, 22449, 1606, 1482, 1106, 4609,\r\n 4039, 102]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(2, 20), dtype=int32, numpy=\r\narray([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\r\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],\r\n dtype=int32)>}\r\n{'input_ids': <tf.Tensor: shape=(2, 43), dtype=int32, numpy=\r\narray([[ 101, 170, 2944, 2312, 7925, 16234, 11275, 1109, 7085,\r\n 13830, 142, 12323, 1658, 1144, 1508, 1164, 1479, 4252,\r\n 15304, 1116, 1107, 3315, 13826, 1120, 1103, 12548, 1104,\r\n 139, 23698, 7710, 6716, 18630, 189, 1884, 5539, 2240,\r\n 2924, 1775, 4426, 2064, 1559, 20923, 102],\r\n [ 101, 146, 182, 1136, 2157, 1128, 5380, 189, 6730,\r\n 190, 1197, 4438, 1106, 2992, 1252, 178, 1329, 1917,\r\n 178, 1138, 1106, 13803, 1128, 1335, 4847, 1358, 1110,\r\n 1103, 1211, 18630, 189, 1884, 124, 5822, 2036, 1186,\r\n 1658, 1545, 12649, 1403, 102, 0, 0]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(2, 43), dtype=int32, numpy=\r\narray([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\r\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],\r\n dtype=int32)>}\r\n```\r\n\r\nHowever, when I try to train a model using this dataset pipeline, I get an error. 
\r\n\r\nTraining code\r\n```python\r\nmodel = TFBertForSequenceClassification.from_pretrained(\"bert-base-cased\", num_labels=2 )\r\n\r\noptimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)\r\nmodel.compile(optimizer=optimizer, \r\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\r\n metrics=tf.metrics.SparseCategoricalAccuracy()\r\n ) # can also use any keras loss fn\r\nmodel.fit(train_ds, epochs=3 )\r\n\r\n```\r\n\r\nerror\r\n\r\n```\r\nOperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.\r\n```\r\n\r\nAny pointers as to what I am doing wrong would be appreciated. \r\n\r\nIn the mean time, I have used the workaround where I first tokenize my data before I create a tf.data pipeline. \r\n\r\n```python\r\ntrain_encodings = tokenizer(X_train, truncation=True, padding=True)\r\ntrain_ds = tf.data.Dataset.from_tensor_slices((\r\n dict(train_encodings),\r\n y_train\r\n))\r\n```" ]
1,597
1,651
1,604
CONTRIBUTOR
null
Is there an example that uses TFTrainer to fine-tune a model with more than one input type? I'm encountering some difficulty in figuring out how TFTrainer wants the tensorflow dataset structured. It doesn't seem to like one constructed from conventional numpy slices, e.g. `train_dataset = tf.data.Dataset.from_tensor_slices((input_ids, attention_mask, token_type_ids))`. I've dug through the documentation and two dozen notebooks and can't find an example of what an appropriate dataset input looks like.
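The resolution that emerges in the thread above: `TFTrainer` wants a `tf.data.Dataset` yielding `(features_dict, labels)` tuples rather than positional slices. A runnable sketch, with dummy encodings standing in for real tokenizer output (shapes and values are illustrative only):

```python
import tensorflow as tf

# Dummy pre-encoded batch; in practice these come from a tokenizer.
input_ids = tf.constant([[101, 7592, 102], [101, 2088, 102]])
attention_mask = tf.ones_like(input_ids)
token_type_ids = tf.zeros_like(input_ids)
labels = tf.constant([0, 1])

# First element: a dict whose keys match the model's keyword arguments.
# Second element: the labels tensor (or a dict for multi-output tasks).
train_dataset = tf.data.Dataset.from_tensor_slices((
    {"input_ids": input_ids,
     "attention_mask": attention_mask,
     "token_type_ids": token_type_ids},
    labels,
))

for features, y in train_dataset.take(1):
    print(features["input_ids"].shape, y)
```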
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6551/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6551/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6550
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6550/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6550/comments
https://api.github.com/repos/huggingface/transformers/issues/6550/events
https://github.com/huggingface/transformers/issues/6550
680,584,024
MDU6SXNzdWU2ODA1ODQwMjQ=
6,550
504 Gateway Time-out when trying to access Uploaded Model page
{ "login": "paulowoicho", "id": 28223751, "node_id": "MDQ6VXNlcjI4MjIzNzUx", "avatar_url": "https://avatars.githubusercontent.com/u/28223751?v=4", "gravatar_id": "", "url": "https://api.github.com/users/paulowoicho", "html_url": "https://github.com/paulowoicho", "followers_url": "https://api.github.com/users/paulowoicho/followers", "following_url": "https://api.github.com/users/paulowoicho/following{/other_user}", "gists_url": "https://api.github.com/users/paulowoicho/gists{/gist_id}", "starred_url": "https://api.github.com/users/paulowoicho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/paulowoicho/subscriptions", "organizations_url": "https://api.github.com/users/paulowoicho/orgs", "repos_url": "https://api.github.com/users/paulowoicho/repos", "events_url": "https://api.github.com/users/paulowoicho/events{/privacy}", "received_events_url": "https://api.github.com/users/paulowoicho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @julien-c I'm also experiencing this.", "Yes, the reason is that your model card's metadata is not valid YAML.\r\n\r\nI had fixed it on S3 but you overrode it with the previous version – this is a known limitation of our model card system right now and why:\r\n- we recommend submitting the model cards to this git repo for now\r\n- we are working on a way better system to be released in the coming weeks/months\r\n\r\n(cc @sgugger @JetRunner and others for context ^^)\r\n\r\nI've fixed our website to handle those model pages by displaying an error notice", "https://huggingface.co/paulowoicho/t5-podcast-summarisation" ]
1,597
1,598
1,598
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> I recently uploaded a model available at `https://huggingface.co/paulowoicho/t5-podcast-summarisation`. I made a few changes to the model card and reuploaded. Now, the model's page does not load at all. Instead, it shows a 504 Gateway Time-out error. ![image](https://user-images.githubusercontent.com/28223751/90789131-71616400-e2fe-11ea-928c-070da9ffb449.png) <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6550/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6550/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6549
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6549/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6549/comments
https://api.github.com/repos/huggingface/transformers/issues/6549/events
https://github.com/huggingface/transformers/pull/6549
680,576,421
MDExOlB1bGxSZXF1ZXN0NDY5MTEwMTg4
6,549
fixed fast tokenizer use in QA pipeline and added corresponding test
{ "login": "bdalal", "id": 3478378, "node_id": "MDQ6VXNlcjM0NzgzNzg=", "avatar_url": "https://avatars.githubusercontent.com/u/3478378?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bdalal", "html_url": "https://github.com/bdalal", "followers_url": "https://api.github.com/users/bdalal/followers", "following_url": "https://api.github.com/users/bdalal/following{/other_user}", "gists_url": "https://api.github.com/users/bdalal/gists{/gist_id}", "starred_url": "https://api.github.com/users/bdalal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bdalal/subscriptions", "organizations_url": "https://api.github.com/users/bdalal/orgs", "repos_url": "https://api.github.com/users/bdalal/repos", "events_url": "https://api.github.com/users/bdalal/events{/privacy}", "received_events_url": "https://api.github.com/users/bdalal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The failed test seems to be failing for merged PRs too. I'll fix up the formatting with black" ]
1,597
1,597
1,597
CONTRIBUTOR
null
This PR fixes https://github.com/huggingface/transformers/issues/6545. The problem is that the behavior of the Python tokenizer and the Rust-based fast tokenizer is very different. The Python tokenizer handles cases where the inputs are in different formats (str tokens paired with int tokens and vice versa), whereas the fast tokenizer is unable to do so. My modifications pre-tokenize the query alongside the context, but do not encode it as is done now. Both the query and the context are then encoded together by the tokenizer's `encode_plus` method with `is_pretokenized` set to True. Handling for some edge cases where the fast tokenizer fails because of its more limited functionality (compared to the Python tokenizer) has been added, and I've included comments in those places.
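A sketch of the encoding pattern the PR describes, with whitespace-split words standing in for the SQuAD-style pre-tokenization (model name and texts are illustrative; in transformers 3.x the flag is spelled `is_pretokenized`):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", use_fast=True)

# Pre-tokenize query and context into word lists, then encode them together
# as a pair so special tokens and token_type_ids stay consistent.
query_words = "Who founded HuggingFace ?".split()
context_words = "HuggingFace was founded in Paris .".split()

encoded = tokenizer.encode_plus(
    query_words,
    context_words,
    is_pretokenized=True,       # both inputs are lists of string words
    return_token_type_ids=True,
)
print(encoded["input_ids"])
```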
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6549/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6549/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6549", "html_url": "https://github.com/huggingface/transformers/pull/6549", "diff_url": "https://github.com/huggingface/transformers/pull/6549.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6549.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6548
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6548/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6548/comments
https://api.github.com/repos/huggingface/transformers/issues/6548/events
https://github.com/huggingface/transformers/issues/6548
680,541,542
MDU6SXNzdWU2ODA1NDE1NDI=
6,548
Model Clipping reduce the size of the model.
{ "login": "snaik2016", "id": 18183245, "node_id": "MDQ6VXNlcjE4MTgzMjQ1", "avatar_url": "https://avatars.githubusercontent.com/u/18183245?v=4", "gravatar_id": "", "url": "https://api.github.com/users/snaik2016", "html_url": "https://github.com/snaik2016", "followers_url": "https://api.github.com/users/snaik2016/followers", "following_url": "https://api.github.com/users/snaik2016/following{/other_user}", "gists_url": "https://api.github.com/users/snaik2016/gists{/gist_id}", "starred_url": "https://api.github.com/users/snaik2016/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/snaik2016/subscriptions", "organizations_url": "https://api.github.com/users/snaik2016/orgs", "repos_url": "https://api.github.com/users/snaik2016/repos", "events_url": "https://api.github.com/users/snaik2016/events{/privacy}", "received_events_url": "https://api.github.com/users/snaik2016/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,597
1,603
1,603
NONE
null
# ❓ Questions & Help Is it possible to just reduce the size of a large model to by removing weights and preserving others. For example we can apply this concept to different dimensions Depth: Layer dropping this is well known an have been shown to work with huggingface repo. Width: For scenarios where we need small size of the model, is it possible to just drop all the weights beyond all the seq length L. For GPT2 kind of model with only left to right flow it shouldn't affect the model perf. How to implement this without disturbing other weights? Thanks in advance for help! <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6548/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6548/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6547
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6547/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6547/comments
https://api.github.com/repos/huggingface/transformers/issues/6547/events
https://github.com/huggingface/transformers/pull/6547
680,536,184
MDExOlB1bGxSZXF1ZXN0NDY5MDc2MzQ5
6,547
There is an error in the run_tf_ner.py script with the get_labels fun…
{ "login": "RodSernaPerez", "id": 37450380, "node_id": "MDQ6VXNlcjM3NDUwMzgw", "avatar_url": "https://avatars.githubusercontent.com/u/37450380?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RodSernaPerez", "html_url": "https://github.com/RodSernaPerez", "followers_url": "https://api.github.com/users/RodSernaPerez/followers", "following_url": "https://api.github.com/users/RodSernaPerez/following{/other_user}", "gists_url": "https://api.github.com/users/RodSernaPerez/gists{/gist_id}", "starred_url": "https://api.github.com/users/RodSernaPerez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RodSernaPerez/subscriptions", "organizations_url": "https://api.github.com/users/RodSernaPerez/orgs", "repos_url": "https://api.github.com/users/RodSernaPerez/repos", "events_url": "https://api.github.com/users/RodSernaPerez/events{/privacy}", "received_events_url": "https://api.github.com/users/RodSernaPerez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for the PR.\r\n\r\nThe issue with the `get_labels` has been fixed in a previous PR. Can you rebase on master and remove the `get_labels` part? We will keep the TPU fix :)", "Closing as already implemented. Thanks a lot for your contribution @RodSernaPerez, looking forward to the next one!" ]
1,597
1,599
1,599
NONE
null
There is an error in the run_tf_ner.py script with the get_labels function (it does not exist), and the script does not work with TPU. Both problems are fixed.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6547/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6547/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6547", "html_url": "https://github.com/huggingface/transformers/pull/6547", "diff_url": "https://github.com/huggingface/transformers/pull/6547.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6547.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6546
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6546/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6546/comments
https://api.github.com/repos/huggingface/transformers/issues/6546/events
https://github.com/huggingface/transformers/pull/6546
680,513,181
MDExOlB1bGxSZXF1ZXN0NDY5MDU3NDU4
6,546
[model_cards] update jimregan/BERTreach card with #s of sentences/tokens
{ "login": "jimregan", "id": 227350, "node_id": "MDQ6VXNlcjIyNzM1MA==", "avatar_url": "https://avatars.githubusercontent.com/u/227350?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jimregan", "html_url": "https://github.com/jimregan", "followers_url": "https://api.github.com/users/jimregan/followers", "following_url": "https://api.github.com/users/jimregan/following{/other_user}", "gists_url": "https://api.github.com/users/jimregan/gists{/gist_id}", "starred_url": "https://api.github.com/users/jimregan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jimregan/subscriptions", "organizations_url": "https://api.github.com/users/jimregan/orgs", "repos_url": "https://api.github.com/users/jimregan/repos", "events_url": "https://api.github.com/users/jimregan/events{/privacy}", "received_events_url": "https://api.github.com/users/jimregan/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6546?src=pr&el=h1) Report\n> Merging [#6546](https://codecov.io/gh/huggingface/transformers/pull/6546?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/63144701ed46d6423b5968db40b6d4469d7d9b87&el=desc) will **increase** coverage by `0.93%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6546/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6546?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6546 +/- ##\n==========================================\n+ Coverage 78.43% 79.37% +0.93% \n==========================================\n Files 156 156 \n Lines 28129 28129 \n==========================================\n+ Hits 22062 22326 +264 \n+ Misses 6067 5803 -264 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6546?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `56.25% <0.00%> (-39.07%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.31% <0.00%> (-0.98%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.96% <0.00%> (-0.76%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.71% <0.00%> (+0.18%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.35% <0.00%> (+0.19%)` | :arrow_up: |\n| ... 
and [12 more](https://codecov.io/gh/huggingface/transformers/pull/6546/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6546?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6546?src=pr&el=footer). Last update [6314470...3ceca88](https://codecov.io/gh/huggingface/transformers/pull/6546?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,597
1,597
1,597
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6546/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6546/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6546", "html_url": "https://github.com/huggingface/transformers/pull/6546", "diff_url": "https://github.com/huggingface/transformers/pull/6546.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6546.patch", "merged_at": 1597697285000 }
https://api.github.com/repos/huggingface/transformers/issues/6545
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6545/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6545/comments
https://api.github.com/repos/huggingface/transformers/issues/6545/events
https://github.com/huggingface/transformers/issues/6545
680,498,895
MDU6SXNzdWU2ODA0OTg4OTU=
6,545
QA pipeline fails when using fast tokenizer
{ "login": "bdalal", "id": 3478378, "node_id": "MDQ6VXNlcjM0NzgzNzg=", "avatar_url": "https://avatars.githubusercontent.com/u/3478378?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bdalal", "html_url": "https://github.com/bdalal", "followers_url": "https://api.github.com/users/bdalal/followers", "following_url": "https://api.github.com/users/bdalal/following{/other_user}", "gists_url": "https://api.github.com/users/bdalal/gists{/gist_id}", "starred_url": "https://api.github.com/users/bdalal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bdalal/subscriptions", "organizations_url": "https://api.github.com/users/bdalal/orgs", "repos_url": "https://api.github.com/users/bdalal/repos", "events_url": "https://api.github.com/users/bdalal/events{/privacy}", "received_events_url": "https://api.github.com/users/bdalal/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I also have run into this issue. @bdalal did you find a fix? I see the above PR, but seems you closed it. ", "The fix is not very straightforward because it seems like both the tokenizers expose vastly different APIs and support very different functionalities. You can check this in `tokenization_utils_base.py` and `tokenization_utils_fast.py`.\r\nIn the context of the pipeline therefore, the tokenizer change is not transparent and requires a fair bit of work to integrate correctly.\r\nI'll probably have a better fix up sometime next month if I do end up including the fast tokenizer in my project.", "I've got a hacky working version here: https://github.com/bdalal/transformers\r\nAlso contains additional performance optimizations", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "@bdalal How does the link you provided help solving the issue?", "@MHDBST you can navigate to the [QA pipeline](https://github.com/bdalal/transformers/blob/36a19915ea4fc3dc337a310e4a1af43eb3c81c9a/src/transformers/pipelines.py#L1656) and look at the changes I made to get them working. This was a while ago, so I don't exactly recall what changes I made.\r\n\r\nThat being said, I believe my code is now redundant because HF has come a long way in integrating fast tokenizers and they're now the default I believe. So you'll be better off just playing around with their latest release." ]
1,597
1,621
1,604
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Linux-5.3.0-1032-gcp-x86_64-with-debian-buster-sid - Python version: 3.7.7 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu documentation: @sgugger --> @LysandreJik @mfuntowicz ## Information Model I am using (Bert, XLNet ...): "sshleifer/tiny-distilbert-base-cased-distilled-squad" The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Modify unit test https://github.com/huggingface/transformers/blob/master/tests/test_pipelines.py#L644-L647 to use fast tokenizer `nlp = pipeline(task="question-answering", model=model_name, tokenizer=(model_name, {"use_fast": True}))` 2. Run modified unit test <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> The test fails with error `ValueError: TextInputSequence must be str` Complete failure result: ``` Testing started at 16:06 ... Launching pytest with arguments test_pipelines.py::QAPipelineTests::test_torch_question_answering in /home/lttazz99/transformers_perf_tests/transformers/tests ============================= test session starts ============================== platform linux -- Python 3.7.7, pytest-6.0.1, py-1.9.0, pluggy-0.13.1 -- /home/lttazz99/miniconda3/envs/qna/bin/python cachedir: .pytest_cache rootdir: /home/lttazz99/transformers_perf_tests/transformers collected 1 item test_pipelines.py::QAPipelineTests::test_torch_question_answering FAILED [100%]HuggingFace None was None founded None in None Paris. 
None tests/test_pipelines.py:642 (QAPipelineTests.test_torch_question_answering) self = <tests.test_pipelines.QAPipelineTests testMethod=test_torch_question_answering> @require_torch def test_torch_question_answering(self): for model_name in QA_FINETUNED_MODELS: nlp = pipeline(task="question-answering", model=model_name, tokenizer=(model_name, {"use_fast": True})) > self._test_qa_pipeline(nlp) test_pipelines.py:647: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ test_pipelines.py:626: in _test_qa_pipeline mono_result = nlp(valid_inputs[0]) ../src/transformers/pipelines.py:1675: in __call__ tqdm_enabled=False, ../src/transformers/data/processors/squad.py:369: in squad_convert_examples_to_features is_training=is_training)) ../src/transformers/data/processors/squad.py:165: in squad_convert_example_to_features return_token_type_ids=True, ../src/transformers/tokenization_utils_base.py:2043: in encode_plus **kwargs, ../src/transformers/tokenization_utils_fast.py:458: in _encode_plus **kwargs, ../src/transformers/tokenization_utils_fast.py:369: in _batch_encode_plus is_pretokenized=is_pretokenized, _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Tokenizer(vocabulary_size=28996, model=BertWordPiece, unk_token=[UNK], sep_token=[SEP], cls_token=[CLS], pad_token=[PA...sk_token=[MASK], clean_text=True, handle_chinese_chars=True, strip_accents=None, lowercase=False, wordpieces_prefix=##) sequence = [2777, 1108, 20164, 10932, 2271, 7954, ...] pair = ['Hu', '##gging', '##F', '##ace', 'was', 'founded', ...] is_pretokenized = False, add_special_tokens = True def encode( self, sequence: InputSequence, pair: Optional[InputSequence] = None, is_pretokenized: bool = False, add_special_tokens: bool = True, ) -> Encoding: """ Encode the given sequence and pair. This method can process raw text sequences as well as already pre-tokenized sequences. Args: sequence: InputSequence: The sequence we want to encode. This sequence can be either raw text or pre-tokenized, according to the `is_pretokenized` argument: - If `is_pretokenized=False`: `InputSequence` is expected to be `str` - If `is_pretokenized=True`: `InputSequence` is expected to be `Union[List[str], Tuple[str]]` is_pretokenized: bool: Whether the input is already pre-tokenized. add_special_tokens: bool: Whether to add the special tokens while encoding. 
Returns: An Encoding """ if sequence is None: raise ValueError("encode: `sequence` can't be `None`") > return self._tokenizer.encode(sequence, pair, is_pretokenized, add_special_tokens) E ValueError: TextInputSequence must be str ../../../miniconda3/envs/qna/lib/python3.7/site-packages/tokenizers/implementations/base_tokenizer.py:212: ValueError Assertion failed Assertion failed =================================== FAILURES =================================== ________________ QAPipelineTests.test_torch_question_answering _________________ self = <tests.test_pipelines.QAPipelineTests testMethod=test_torch_question_answering> @require_torch def test_torch_question_answering(self): for model_name in QA_FINETUNED_MODELS: nlp = pipeline(task="question-answering", model=model_name, tokenizer=(model_name, {"use_fast": True})) > self._test_qa_pipeline(nlp) test_pipelines.py:647: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ test_pipelines.py:626: in _test_qa_pipeline mono_result = nlp(valid_inputs[0]) ../src/transformers/pipelines.py:1675: in __call__ tqdm_enabled=False, ../src/transformers/data/processors/squad.py:369: in squad_convert_examples_to_features is_training=is_training)) ../src/transformers/data/processors/squad.py:165: in squad_convert_example_to_features return_token_type_ids=True, ../src/transformers/tokenization_utils_base.py:2043: in encode_plus **kwargs, ../src/transformers/tokenization_utils_fast.py:458: in _encode_plus **kwargs, ../src/transformers/tokenization_utils_fast.py:369: in _batch_encode_plus is_pretokenized=is_pretokenized, _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Tokenizer(vocabulary_size=28996, model=BertWordPiece, unk_token=[UNK], sep_token=[SEP], cls_token=[CLS], pad_token=[PA...sk_token=[MASK], clean_text=True, handle_chinese_chars=True, strip_accents=None, lowercase=False, wordpieces_prefix=##) sequence = [2777, 1108, 20164, 10932, 2271, 7954, ...] pair = ['Hu', '##gging', '##F', '##ace', 'was', 'founded', ...] is_pretokenized = False, add_special_tokens = True def encode( self, sequence: InputSequence, pair: Optional[InputSequence] = None, is_pretokenized: bool = False, add_special_tokens: bool = True, ) -> Encoding: """ Encode the given sequence and pair. This method can process raw text sequences as well as already pre-tokenized sequences. Args: sequence: InputSequence: The sequence we want to encode. This sequence can be either raw text or pre-tokenized, according to the `is_pretokenized` argument: - If `is_pretokenized=False`: `InputSequence` is expected to be `str` - If `is_pretokenized=True`: `InputSequence` is expected to be `Union[List[str], Tuple[str]]` is_pretokenized: bool: Whether the input is already pre-tokenized. add_special_tokens: bool: Whether to add the special tokens while encoding. Returns: An Encoding """ if sequence is None: raise ValueError("encode: `sequence` can't be `None`") > return self._tokenizer.encode(sequence, pair, is_pretokenized, add_special_tokens) E ValueError: TextInputSequence must be str ../../../miniconda3/envs/qna/lib/python3.7/site-packages/tokenizers/implementations/base_tokenizer.py:212: ValueError ----------------------------- Captured stdout call ----------------------------- HuggingFace None was None founded None in None Paris. 
None =============================== warnings summary =============================== tests/test_pipelines.py::QAPipelineTests::test_torch_question_answering /home/lttazz99/transformers_perf_tests/transformers/src/transformers/tokenization_utils_base.py:1319: FutureWarning: The `max_len` attribute has been deprecated and will be removed in a future version, use `model_max_length` instead. FutureWarning, -- Docs: https://docs.pytest.org/en/stable/warnings.html =========================== short test summary info ============================ FAILED test_pipelines.py::QAPipelineTests::test_torch_question_answering - Va... ========================= 1 failed, 1 warning in 2.64s ========================= Process finished with exit code 1 ``` ## Expected behavior Test should pass. --- NOTES: This error appears because [here](https://github.com/huggingface/transformers/blob/98ee802023a4db76879c761e7ce3677eb4555871/src/transformers/tokenization_utils_fast.py#L366) the inputs are the query tokens' ids and the context token text. This in turn is passed to the [fast tokenizer](https://github.com/huggingface/tokenizers/blob/5d8728a26b654784572612ae119edc451205c2ff/bindings/python/py_src/tokenizers/implementations/base_tokenizer.py#L211), which expects `str` for both the sequence and the pair but instead gets a sequence of ints for the query and the text for the context. The pipeline fails when any Bert-like fast tokenizer is used, irrespective of the model. The unit test is the easiest way to reproduce this error.
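For reference, a minimal sketch of the failing call outside the test suite. The checkpoint name is only illustrative — per the notes above, any Bert-like model paired with a fast tokenizer triggers the same error:

```python
from transformers import pipeline

# Illustrative checkpoint; any BERT-like QA model reproduces the failure.
model_name = "bert-large-cased-whole-word-masking-finetuned-squad"
nlp = pipeline(
    task="question-answering",
    model=model_name,
    tokenizer=(model_name, {"use_fast": True}),
)

# squad_convert_example_to_features calls encode_plus with the query as
# token ids and the context as token strings, so the Rust tokenizer raises
# "ValueError: TextInputSequence must be str".
nlp({"question": "Where was HuggingFace founded?", "context": "HuggingFace was founded in Paris."})
```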
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6545/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6545/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6544
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6544/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6544/comments
https://api.github.com/repos/huggingface/transformers/issues/6544/events
https://github.com/huggingface/transformers/pull/6544
680,459,786
MDExOlB1bGxSZXF1ZXN0NDY5MDE0OTA5
6,544
[model_cards] Add a new model for Irish
{ "login": "jimregan", "id": 227350, "node_id": "MDQ6VXNlcjIyNzM1MA==", "avatar_url": "https://avatars.githubusercontent.com/u/227350?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jimregan", "html_url": "https://github.com/jimregan", "followers_url": "https://api.github.com/users/jimregan/followers", "following_url": "https://api.github.com/users/jimregan/following{/other_user}", "gists_url": "https://api.github.com/users/jimregan/gists{/gist_id}", "starred_url": "https://api.github.com/users/jimregan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jimregan/subscriptions", "organizations_url": "https://api.github.com/users/jimregan/orgs", "repos_url": "https://api.github.com/users/jimregan/repos", "events_url": "https://api.github.com/users/jimregan/events{/privacy}", "received_events_url": "https://api.github.com/users/jimregan/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6544?src=pr&el=h1) Report\n> Merging [#6544](https://codecov.io/gh/huggingface/transformers/pull/6544?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c9564f53433fb637cbd8eec0d902e80e30f91814&el=desc) will **decrease** coverage by `1.13%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6544/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6544?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6544 +/- ##\n==========================================\n- Coverage 80.52% 79.39% -1.14% \n==========================================\n Files 156 156 \n Lines 28108 28129 +21 \n==========================================\n- Hits 22633 22332 -301 \n- Misses 5475 5797 +322 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6544?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `96.73% <100.00%> (+0.96%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `56.25% <0.00%> (-39.07%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.31% <0.00%> (-0.98%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.91% <0.00%> (-0.69%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (ø)` | |\n| ... 
and [3 more](https://codecov.io/gh/huggingface/transformers/pull/6544/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6544?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6544?src=pr&el=footer). Last update [407da12...d9608f8](https://codecov.io/gh/huggingface/transformers/pull/6544?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Really cool! If you get a chance, do you think you could add sample inputs for Irish to https://github.com/huggingface/widgets-server/blob/master/DefaultWidget.ts?" ]
1,597
1,597
1,597
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6544/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6544/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6544", "html_url": "https://github.com/huggingface/transformers/pull/6544", "diff_url": "https://github.com/huggingface/transformers/pull/6544.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6544.patch", "merged_at": 1597694217000 }
https://api.github.com/repos/huggingface/transformers/issues/6543
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6543/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6543/comments
https://api.github.com/repos/huggingface/transformers/issues/6543/events
https://github.com/huggingface/transformers/pull/6543
680,375,467
MDExOlB1bGxSZXF1ZXN0NDY4OTQ2ODAy
6,543
[BartTokenizerFast] add prepare_seq2seq_batch
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6543?src=pr&el=h1) Report\n> Merging [#6543](https://codecov.io/gh/huggingface/transformers/pull/6543?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d0c2389f485a9defdc856871e8362add9b1377a3&el=desc) will **decrease** coverage by `2.36%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6543/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6543?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6543 +/- ##\n==========================================\n- Coverage 80.51% 78.14% -2.37% \n==========================================\n Files 156 156 \n Lines 28094 28120 +26 \n==========================================\n- Hits 22619 21975 -644 \n- Misses 5475 6145 +670 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6543?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6543/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6543/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6543/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.26% <0.00%> (-53.43%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6543/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `66.00% <0.00%> (-32.38%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6543/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6543/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6543/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6543/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.36% <0.00%> (-14.37%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6543/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-6.16%)` | :arrow_down: |\n| [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6543/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `82.71% <0.00%> (-2.47%)` | :arrow_down: |\n| ... 
and [10 more](https://codecov.io/gh/huggingface/transformers/pull/6543/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6543?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6543?src=pr&el=footer). Last update [7ca6ab6...fbfe89e](https://codecov.io/gh/huggingface/transformers/pull/6543?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,597
1,597
1,597
MEMBER
null
This PR adds the `prepare_seq2seq_batch` method for `BartTokenizerFast`, as per the proposal in #6080 @sshleifer
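A rough usage sketch (not from the PR itself — the example sentences are made up, and the exact target-side keys depend on the final form of the #6080 proposal):

```python
from transformers import BartTokenizerFast

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-large")
batch = tokenizer.prepare_seq2seq_batch(
    src_texts=["My friends are cool but they eat too many carbs."],
    tgt_texts=["My friends are cool."],
    return_tensors="pt",
)
# Source side: batch["input_ids"] and batch["attention_mask"];
# target side: the tokenized tgt_texts (labels / decoder_input_ids,
# depending on the version of the proposal that lands).
```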
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6543/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6543/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6543", "html_url": "https://github.com/huggingface/transformers/pull/6543", "diff_url": "https://github.com/huggingface/transformers/pull/6543.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6543.patch", "merged_at": 1597847869000 }
https://api.github.com/repos/huggingface/transformers/issues/6542
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6542/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6542/comments
https://api.github.com/repos/huggingface/transformers/issues/6542/events
https://github.com/huggingface/transformers/issues/6542
680,372,353
MDU6SXNzdWU2ODAzNzIzNTM=
6,542
WNUT17 TF example stuck at first epoch
{ "login": "Jordy-VL", "id": 16034009, "node_id": "MDQ6VXNlcjE2MDM0MDA5", "avatar_url": "https://avatars.githubusercontent.com/u/16034009?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jordy-VL", "html_url": "https://github.com/Jordy-VL", "followers_url": "https://api.github.com/users/Jordy-VL/followers", "following_url": "https://api.github.com/users/Jordy-VL/following{/other_user}", "gists_url": "https://api.github.com/users/Jordy-VL/gists{/gist_id}", "starred_url": "https://api.github.com/users/Jordy-VL/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jordy-VL/subscriptions", "organizations_url": "https://api.github.com/users/Jordy-VL/orgs", "repos_url": "https://api.github.com/users/Jordy-VL/repos", "events_url": "https://api.github.com/users/Jordy-VL/events{/privacy}", "received_events_url": "https://api.github.com/users/Jordy-VL/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834054694, "node_id": "MDU6TGFiZWwxODM0MDU0Njk0", "url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow", "name": "TensorFlow", "color": "FF6F00", "default": false, "description": "Anything TensorFlow" }, { "id": 1936351150, "node_id": "MDU6TGFiZWwxOTM2MzUxMTUw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Examples", "name": "Examples", "color": "d4c5f9", "default": false, "description": "Which is related to examples in general" } ]
closed
false
null
[]
[ "Hello!\r\n\r\nCan you try with the https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_tf_ner.py script example, with a usual CoNLL format for your dataset? Then let me know if you still get this issue. Thanks!!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,597
1,604
1,604
NONE
null
## Environment info - `transformers` version: 3.0.2 - Platform: Linux Ubuntu 16.04 - Python version: 3.6.9 - PyTorch version (GPU?): / - Tensorflow version (GPU?): 2.2.0 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: not that I know ### Who can help @jplu ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The task I am working on is: * [X, WNUT] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce ``` # Token Classification with W-NUT Emerging Entities """ Next we will look at token classification. Rather than classifying an entire sequence, this task classifies token by token. We'll demonstrate how to do this with [Named Entity Recognition](http://nlpprogress.com/english/named_entity_recognition.html), which involves identifying tokens which correspond to a predefined set of "entities". Specifically, we'll use the [W-NUT Emerging and Rare entities](http://noisy-text.github.io/2017/emerging-rare-entities.html) corpus. The data is given as a collection of pre-tokenized documents where each token is assigned a tag. Let's start by downloading the data. !wget http://noisy-text.github.io/2017/files/wnut17train.conll """ """In this case, we'll just download the train set, which is a single text file. Each line of the file contains either (1) a word and tag separated by a tab, or (2) a blank line indicating the end of a document. Let's write a function to read this in. We'll take in the file path and return `token_docs`, which is a list of lists of token strings, and `token_tags`, which is a list of lists of tag strings. 
""" from pathlib import Path import re import numpy as np from sklearn.model_selection import train_test_split import tensorflow as tf from transformers import BertTokenizer, TFBertModel, BertConfig from transformers import DistilBertTokenizerFast from transformers import TFDistilBertForTokenClassification from tensorflow.keras.layers import Input, Dense, Activation, Dropout, LSTM, GlobalMaxPool1D from tensorflow.keras.models import Model from tensorflow.keras.utils import to_categorical from tensorflow.keras.preprocessing.sequence import pad_sequences def read_wnut(file_path): file_path = Path(file_path) raw_text = file_path.read_text().strip() raw_docs = re.split(r'\n\t?\n', raw_text) token_docs = [] tag_docs = [] for doc in raw_docs: tokens = [] tags = [] for line in doc.split('\n'): token, tag = line.split('\t') tokens.append(token) tags.append(tag) token_docs.append(tokens) tag_docs.append(tags) return token_docs, tag_docs texts, tags = read_wnut('wnut17train.conll') """Just to see what this data looks like, let's take a look at a segment of the first document.""" print(texts[0][10:17], tags[0][10:17], sep='\n') """`location` is an entity type, `B-` indicates the beginning of an entity, and `I-` indicates consecutive positions of the same entity ("Empire State Building" is considered one entity). `O` indicates the token does not correspond to any entity. Now that we've read the data in, let's create a train/validation split: """ train_texts, val_texts, train_tags, val_tags = train_test_split(texts, tags, test_size=.2) """Next, let's create encodings for our tokens and tags. For the tags, we can start by just create a simple mapping which we'll use in a moment: """ unique_tags = set(tag for doc in tags for tag in doc) tag2id = {tag: id for id, tag in enumerate(unique_tags)} id2tag = {id: tag for tag, id in tag2id.items()} """To encode the tokens, we'll use a pre-trained DistilBert tokenizer. We can tell the tokenizer that we're dealing with ready-split tokens rather than full sentence strings by passing `is_pretokenized=True`. We'll also pass `padding=True` and `truncation=True` to pad the sequences to be the same length. Lastly, we can tell the model to return information about the tokens which are split by the wordpiece tokenization process, which we will need in a moment. """ tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-cased') train_encodings = tokenizer(train_texts, is_pretokenized=True, return_offsets_mapping=True, padding=True, truncation=True) val_encodings = tokenizer(val_texts, is_pretokenized=True, return_offsets_mapping=True, padding=True, truncation=True) """Great, so now our tokens are nicely encoded in the format that they need to be in to feed them into our DistilBert model below. Now we arrive at a common obstacle with using pre-trained models for token-level classification: many of the tokens in the W-NUT corpus are not in DistilBert's vocabulary. Bert and many models like it use a method called WordPiece Tokenization, meaning that single words are split into multiple tokens such that each token is likely to be in the vocabulary. For example, DistilBert's tokenizer would split the Twitter handle `@huggingface` into the tokens `['@', 'hugging', '##face']`. This is a problem for us because we have exactly one tag per token. If the tokenizer splits a token into multiple sub-tokens, then we will end up with a mismatch between our tokens and our labels. 
One way to handle this is to only train on the tag labels for the first subtoken of a split token. We can do this in 🤗 Transformers by setting the labels we wish to ignore to `-100`. In the example above, if the label for `@HuggingFace` is `3` (indexing `B-corporation`), we would set the labels of `['@', 'hugging', '##face']` to `[3, -100, -100]`. Let's write a function to do this. This is where we will use the `offset_mapping` from the tokenizer as mentioned above. For each sub-token returned by the tokenizer, the offset mapping gives us a tuple indicating the sub-token's start position and end position relative to the original token it was split from. That means that if the first position in the tuple is anything other than `0`, we will set its corresponding label to `-100`. While we're at it, we can also set labels to `-100` if the second position of the offset mapping is `0`, since this means it must be a special token like `[PAD]` or `[CLS]`. > **NOTE:** Due to a recently fixed bug, -1 must be used instead of -100 when using TensorFlow in 🤗 Transformers <= 3.0.2. """ def encode_tags(tags, encodings): labels = [[tag2id[tag] for tag in doc] for doc in tags] encoded_labels = [] for doc_labels, doc_offset in zip(labels, encodings.offset_mapping): # create an array filled with the ignore index (-1 here; see the NOTE above) doc_enc_labels = np.ones(len(doc_offset), dtype=int) * -1 arr_offset = np.array(doc_offset) # set labels whose first offset position is 0 and the second is not 0 doc_enc_labels[(arr_offset[:, 0] == 0) & (arr_offset[:, 1] != 0)] = doc_labels encoded_labels.append(doc_enc_labels.tolist()) return encoded_labels train_labels = encode_tags(train_tags, train_encodings) val_labels = encode_tags(val_tags, val_encodings) train_encodings.pop("offset_mapping") # we don't want to pass this to the model val_encodings.pop("offset_mapping") train_dataset = tf.data.Dataset.from_tensor_slices(( dict(train_encodings), train_labels )) val_dataset = tf.data.Dataset.from_tensor_slices(( dict(val_encodings), val_labels )) """Now load in a token classification model and specify the number of labels:""" model = TFDistilBertForTokenClassification.from_pretrained('distilbert-base-cased', num_labels=len(unique_tags)) optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5) model.compile(optimizer=optimizer, loss=model.compute_loss, metrics=["accuracy"]) # can also use any keras loss fn model.fit(train_dataset.shuffle(1000).batch(16), epochs=3, batch_size=16, verbose=3) ``` ## Expected behavior The example runs and does not get stuck at the first epoch :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6542/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6542/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6541
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6541/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6541/comments
https://api.github.com/repos/huggingface/transformers/issues/6541/events
https://github.com/huggingface/transformers/pull/6541
680,349,978
MDExOlB1bGxSZXF1ZXN0NDY4OTI1Njgw
6,541
replace _ with __ rst links
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Failure is not linked to the PR. Thanks for replacing these!" ]
1,597
1,597
1,597
CONTRIBUTOR
null
Applies the correction suggested in https://github.com/huggingface/transformers/pull/6509#issuecomment-674960166 - I fixed it for the other links too.
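For context on the rst syntax itself: a link written as `` `text <url>`_ `` registers a named hyperlink target, so reusing the same link text with a different URL triggers Sphinx "duplicate explicit target name" warnings, whereas `` `text <url>`__ `` creates an anonymous target that can be repeated safely — which is presumably why the inline external links were switched to the double-underscore form.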
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6541/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6541/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6541", "html_url": "https://github.com/huggingface/transformers/pull/6541", "diff_url": "https://github.com/huggingface/transformers/pull/6541.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6541.patch", "merged_at": 1597681623000 }
https://api.github.com/repos/huggingface/transformers/issues/6540
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6540/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6540/comments
https://api.github.com/repos/huggingface/transformers/issues/6540/events
https://github.com/huggingface/transformers/issues/6540
680,347,583
MDU6SXNzdWU2ODAzNDc1ODM=
6,540
bart-base config.*attention_heads (should be 12 was 16)
{ "login": "ibeltagy", "id": 2287797, "node_id": "MDQ6VXNlcjIyODc3OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/2287797?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ibeltagy", "html_url": "https://github.com/ibeltagy", "followers_url": "https://api.github.com/users/ibeltagy/followers", "following_url": "https://api.github.com/users/ibeltagy/following{/other_user}", "gists_url": "https://api.github.com/users/ibeltagy/gists{/gist_id}", "starred_url": "https://api.github.com/users/ibeltagy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ibeltagy/subscriptions", "organizations_url": "https://api.github.com/users/ibeltagy/orgs", "repos_url": "https://api.github.com/users/ibeltagy/repos", "events_url": "https://api.github.com/users/ibeltagy/events{/privacy}", "received_events_url": "https://api.github.com/users/ibeltagy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "thx!", "Should it be 12?", "yes", "Fixed, great catch, sorry for the annoyance.", "cc @VictorSanh if you are still using." ]
1,597
1,597
1,597
CONTRIBUTOR
null
[Bart-base configuration](https://s3.amazonaws.com/models.huggingface.co/bert/facebook/bart-base/config.json) has the wrong values for `decoder_attention_heads` and `encoder_attention_heads`. This messes up the self-attention computation and makes the model totally unusable. ### Who can help @sshleifer
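A quick sanity check once the config is corrected (a minimal sketch; `AutoConfig` just reads the hosted config file):

```python
from transformers import AutoConfig

# Per the issue title, both values should be 12 (they were 16).
config = AutoConfig.from_pretrained("facebook/bart-base")
assert config.encoder_attention_heads == 12
assert config.decoder_attention_heads == 12
```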
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6540/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6540/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6539
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6539/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6539/comments
https://api.github.com/repos/huggingface/transformers/issues/6539/events
https://github.com/huggingface/transformers/issues/6539
680,337,014
MDU6SXNzdWU2ODAzMzcwMTQ=
6,539
Widget can't load model
{ "login": "borisdayma", "id": 715491, "node_id": "MDQ6VXNlcjcxNTQ5MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/borisdayma", "html_url": "https://github.com/borisdayma", "followers_url": "https://api.github.com/users/borisdayma/followers", "following_url": "https://api.github.com/users/borisdayma/following{/other_user}", "gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}", "starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions", "organizations_url": "https://api.github.com/users/borisdayma/orgs", "repos_url": "https://api.github.com/users/borisdayma/repos", "events_url": "https://api.github.com/users/borisdayma/events{/privacy}", "received_events_url": "https://api.github.com/users/borisdayma/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "@julien-c @mfuntowicz Any idea of what is going wrong?\r\nI'm just wondering if it's anything from my side.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,597
1,603
1,603
CONTRIBUTOR
null
## Environment info - `transformers` version: 3.0.2 - Platform: Linux-5.4.0-42-generic-x86_64-with-glibc2.27 - Python version: 3.8.2 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @julien-c @mfuntowicz (not sure I tagged correctly, since this relates to the inference widget, which is not a listed category) ## Information Model I am using (Bert, XLNet ...): huggingtweets/borisdayma (GPT-2) The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The task I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Try to do inference from the [huggingtweets/borisdayma model page](https://huggingface.co/huggingtweets/borisdayma) Error displayed in interface: > ⚠️ Can't load weights for 'huggingtweets/borisdayma'. Make sure that: - 'huggingtweets/borisdayma' is a correct model identifier listed on 'https://huggingface.co/models' - or 'huggingtweets/borisdayma' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt. ![image](https://user-images.githubusercontent.com/715491/90415619-f9cdd380-e076-11ea-879b-7dde7e1601cf.png) ## Expected behavior This works locally: ```python generator = pipeline('text-generation', model='huggingtweets/borisdayma') generator("<|endoftext|>I really like") ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6539/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6539/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6538
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6538/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6538/comments
https://api.github.com/repos/huggingface/transformers/issues/6538/events
https://github.com/huggingface/transformers/pull/6538
680,248,516
MDExOlB1bGxSZXF1ZXN0NDY4ODQyNjcz
6,538
[EncoderDecoder] Add functionality to tie encoder decoder weights
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6538?src=pr&el=h1) Report\n> Merging [#6538](https://codecov.io/gh/huggingface/transformers/pull/6538?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/37709b59099bd984858ca1884c6c70403420347d&el=desc) will **increase** coverage by `0.81%`.\n> The diff coverage is `98.18%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6538/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6538?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6538 +/- ##\n==========================================\n+ Coverage 79.26% 80.07% +0.81% \n==========================================\n Files 156 156 \n Lines 28073 28124 +51 \n==========================================\n+ Hits 22252 22521 +269 \n+ Misses 5821 5603 -218 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6538?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <97.43%> (+0.69%)` | :arrow_up: |\n| [src/transformers/configuration\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.62% <100.00%> (+0.70%)` | :arrow_up: |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.39% <100.00%> (+0.72%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.83% <100.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (+0.97%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6538/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6538?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6538?src=pr&el=footer). Last update [37709b5...1c77a34](https://codecov.io/gh/huggingface/transformers/pull/6538?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Are the bart differences just layernorms?\r\n[`embed_positions`, 'embed_tokens`, `self_attn`] all the same afaict.", "@sshleifer - great review, thanks a lot!\r\nRegarding Bart, yeah the naming is different for some layers \"final_layer_norm\" for example and the objects are not of the same type, so I think the best idea is to add a `_tie_encoder_decoder_weights` function to `BartPretrainedModel` that overwrites the default function - what do you think? ", "great idea", "> Look good to me and a great addition!\r\n> \r\n> Should the `tie_encoder_to_decoder_recursively` method be also used in the general encoder-decoder models?\r\n\r\nYes, it's added for the encoder-decoder models as well :-) \r\n\r\nSee: \r\n```python\r\n# Encoder Decoder\r\nfrom transformers import EncoderDecoderModel\r\nshare_bert2bert = EncoderDecoderModel.from_pretrained_encoder_decoder(\"bert-base-cased\", \"bert-base-cased\", tie_encoder_decoder=True)\r\nbert2bert = EncoderDecoderModel.from_pretrained_encoder_decoder(\"bert-base-cased\", \"bert-base-cased\")\r\nassert sum(p.numel() for p in share_bert2bert.parameters()) < sum(p.numel() for p in bert2bert.parameters())\r\n```\r\n" ]
1,597
1,597
1,597
MEMBER
null
This PR adds the functionality to tie encoder/decoder weights for encoder-decoder models, by adding a new configuration parameter, `tie_encoder_decoder`, to the config. If this parameter is set to `True` and the model is an encoder-decoder model, then every encoder weight is tied to its respective decoder weight. This can save a lot of memory for big models, such as shared `t5-3b` models. Tests are added for `T5Model`, `T5ForConditionalGeneration`, and `EncoderDecoderModel`. Because BART uses different parameter names for the encoder and decoder, the generic weight-tying function defined in modeling_utils.py cannot be used, and a customized function should be implemented if needed in a further PR. The API looks as follows: ```python # T5 from transformers import T5ForConditionalGeneration, T5Config share_t5 = T5ForConditionalGeneration(T5Config(tie_encoder_decoder=True)) t5 = T5ForConditionalGeneration(T5Config()) assert sum(p.numel() for p in share_t5.parameters()) < sum(p.numel() for p in t5.parameters()) # Encoder Decoder from transformers import EncoderDecoderModel share_bert2bert = EncoderDecoderModel.from_pretrained_encoder_decoder("bert-base-cased", "bert-base-cased", tie_encoder_decoder=True) bert2bert = EncoderDecoderModel.from_pretrained_encoder_decoder("bert-base-cased", "bert-base-cased") assert sum(p.numel() for p in share_bert2bert.parameters()) < sum(p.numel() for p in bert2bert.parameters()) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6538/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6538/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6538", "html_url": "https://github.com/huggingface/transformers/pull/6538", "diff_url": "https://github.com/huggingface/transformers/pull/6538.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6538.patch", "merged_at": 1597839826000 }
https://api.github.com/repos/huggingface/transformers/issues/6537
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6537/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6537/comments
https://api.github.com/repos/huggingface/transformers/issues/6537/events
https://github.com/huggingface/transformers/issues/6537
680,195,574
MDU6SXNzdWU2ODAxOTU1NzQ=
6,537
tokenizers/tokenizers.cpython-36m-darwin.so, 2): Symbol not found: ____chkstk_darwin
{ "login": "orenpapers", "id": 28626773, "node_id": "MDQ6VXNlcjI4NjI2Nzcz", "avatar_url": "https://avatars.githubusercontent.com/u/28626773?v=4", "gravatar_id": "", "url": "https://api.github.com/users/orenpapers", "html_url": "https://github.com/orenpapers", "followers_url": "https://api.github.com/users/orenpapers/followers", "following_url": "https://api.github.com/users/orenpapers/following{/other_user}", "gists_url": "https://api.github.com/users/orenpapers/gists{/gist_id}", "starred_url": "https://api.github.com/users/orenpapers/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/orenpapers/subscriptions", "organizations_url": "https://api.github.com/users/orenpapers/orgs", "repos_url": "https://api.github.com/users/orenpapers/repos", "events_url": "https://api.github.com/users/orenpapers/events{/privacy}", "received_events_url": "https://api.github.com/users/orenpapers/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,597
1,603
1,603
NONE
null
Hello, I am using a Jupyter notebook and installed transformers with: `!{sys.executable} -m pip install transformers ` I have macOS 10.13. When trying to import transformers: `from transformers import BertTokenizer, BertModel ` I get the error: ``` ImportError: dlopen(/Users/user1/anaconda3/lib/python3.6/site-packages/tokenizers/tokenizers.cpython-36m-darwin.so, 2): Symbol not found: ____chkstk_darwin Referenced from: /Users/user1/anaconda3/lib/python3.6/site-packages/tokenizers/tokenizers.cpython-36m-darwin.so (which was built for Mac OS X 10.15) Expected in: /usr/lib/libSystem.B.dylib in /Users/user1/anaconda3/lib/python3.6/site-packages/tokenizers/tokenizers.cpython-36m-darwin.so ``` Any idea how to fix it?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6537/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6537/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6536
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6536/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6536/comments
https://api.github.com/repos/huggingface/transformers/issues/6536/events
https://github.com/huggingface/transformers/pull/6536
680,168,277
MDExOlB1bGxSZXF1ZXN0NDY4Nzc2MzQx
6,536
[model_cards] Add model cards for Urduhack model (roberta-urdu-small)
{ "login": "akkefa", "id": 7104938, "node_id": "MDQ6VXNlcjcxMDQ5Mzg=", "avatar_url": "https://avatars.githubusercontent.com/u/7104938?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akkefa", "html_url": "https://github.com/akkefa", "followers_url": "https://api.github.com/users/akkefa/followers", "following_url": "https://api.github.com/users/akkefa/following{/other_user}", "gists_url": "https://api.github.com/users/akkefa/gists{/gist_id}", "starred_url": "https://api.github.com/users/akkefa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akkefa/subscriptions", "organizations_url": "https://api.github.com/users/akkefa/orgs", "repos_url": "https://api.github.com/users/akkefa/repos", "events_url": "https://api.github.com/users/akkefa/events{/privacy}", "received_events_url": "https://api.github.com/users/akkefa/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6536?src=pr&el=h1) Report\n> Merging [#6536](https://codecov.io/gh/huggingface/transformers/pull/6536?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/37709b59099bd984858ca1884c6c70403420347d&el=desc) will **decrease** coverage by `1.10%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6536/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6536?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6536 +/- ##\n==========================================\n- Coverage 79.26% 78.16% -1.11% \n==========================================\n Files 156 156 \n Lines 28073 28073 \n==========================================\n- Hits 22252 21942 -310 \n- Misses 5821 6131 +310 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6536?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.26% <0.00%> (-53.69%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `66.00% <0.00%> (-32.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-29.33%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.36% <0.00%> (-14.37%)` | :arrow_down: |\n| [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `82.71% <0.00%> (-2.47%)` | :arrow_down: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `96.19% <0.00%> (-1.64%)` | :arrow_down: |\n| ... 
and [15 more](https://codecov.io/gh/huggingface/transformers/pull/6536/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6536?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6536?src=pr&el=footer). Last update [37709b5...4cba09e](https://codecov.io/gh/huggingface/transformers/pull/6536?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks for sharing! First model on https://huggingface.co/languages for Urdu :)\r\n\r\nIf you'd like, would you mind adding sample inputs for Urdu to https://github.com/huggingface/widgets-server/blob/master/DefaultWidget.ts ? (they don't have to be translations of the existing ones)" ]
1,597
1,597
1,597
CONTRIBUTOR
null
Model card added for roberta-urdu-small.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6536/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6536/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6536", "html_url": "https://github.com/huggingface/transformers/pull/6536", "diff_url": "https://github.com/huggingface/transformers/pull/6536.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6536.patch", "merged_at": 1597694670000 }
https://api.github.com/repos/huggingface/transformers/issues/6535
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6535/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6535/comments
https://api.github.com/repos/huggingface/transformers/issues/6535/events
https://github.com/huggingface/transformers/issues/6535
680,143,835
MDU6SXNzdWU2ODAxNDM4MzU=
6,535
Passing inputs_embeds into GenerationMixin.generate()
{ "login": "ymfa", "id": 6981180, "node_id": "MDQ6VXNlcjY5ODExODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6981180?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ymfa", "html_url": "https://github.com/ymfa", "followers_url": "https://api.github.com/users/ymfa/followers", "following_url": "https://api.github.com/users/ymfa/following{/other_user}", "gists_url": "https://api.github.com/users/ymfa/gists{/gist_id}", "starred_url": "https://api.github.com/users/ymfa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ymfa/subscriptions", "organizations_url": "https://api.github.com/users/ymfa/orgs", "repos_url": "https://api.github.com/users/ymfa/repos", "events_url": "https://api.github.com/users/ymfa/events{/privacy}", "received_events_url": "https://api.github.com/users/ymfa/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[ { "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }, { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }, { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "Hey @ymfa, \r\n\r\nthanks for the feature request :-) I'll put it on the To-Do list. Not sure how soon we will work on this though. If you have a good idea of how to design this new feature, feel free to open a PR :-) ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Hi @patrickvonplaten, I'm interested in this feature as well as I'm using GPT-2 with custom input embedding. Is there currently a way to pass the inputs_embeds to the generate function instead of input_ids?", "I've started working on a small PR that provides the flexibility of passing _encoder outputs_ into GenerationMixin.generate(). I chose `encoder_outputs` over `inputs_embeds` because they are more fundamental, thus the fix would be more generally useful. However, it might not satisfy @umbertopietroni's need as GPT-2 is not an encoder-decoder model.", "Is there any update on any kind of solution to it yet or any work around to pass encoder_outputs to generate ? ", "Any update on this issue?", "Any update on this issue?", "It is possible to run `inputs_embeds` for an encoder-decoder framework. See https://github.com/huggingface/transformers/pull/14443 . This does assume however that we know the word embedding matrix of the decoder.\r\n\r\nHowever for models like GPT2 this is not as straight-forward - see: https://github.com/huggingface/transformers/pull/14443#discussion_r753167493\r\n\r\nIn general, what is the exact use-case people are interested in here? ", "@patrickvonplaten for example in the recent NeurIPS paper [\"Multimodal Few-Shot Learning with Frozen Language Models\"](https://papers.nips.cc/paper/2021/file/01b7575c38dac42f3cfb7d500438b875-Paper.pdf), the output of a non-trained CNN is directly fed into a pre-trained and frozen language model. In this scenario, the CNN learns how to generate input embeddings such that the pre-trained language model can generate the right caption.", "I see - this makes sense! We should probably adapt the generate function then to allow this scenario. I'll put it on my TODO! ", "> \r\n\r\nI am trying to generate with a decoder-only model using inputs_embeds. Does anyone know useful resources on how to achieve this? \r\n", "This should already be possible - will try to put it in the big generate doc refactor that I'm working on at the moment - see https://github.com/huggingface/transformers/issues/15552", "Hi @patrickvonplaten, \r\nI am glad to hear there will be doc refactor for generation, thanks for working on this!\r\n\r\n> This should already be possible\r\n\r\nI am using version 4.16.2, and when I try to generate with DialoGPT (a decoder only model) as follows\r\n\r\n`outputs = model.generate(inputs_embeds=inputs_embeds)`\r\n\r\nI get the following error:\r\n`\r\nValueError: If inputs_embeds is passed as model-specific keyword input then model has to be an encoder-decoder and not a GPT2LMHeadModel.\r\n`\r\n \r\n ", "Hi @patrickvonplaten,\r\nI would like to know if there is any updates. I just really need the generate function with parameter `inputs_embeds` for GPT model\r\n\r\n> I see - this makes sense! We should probably adapt the generate function then to allow this scenario. I'll put it on my TODO!\r\n\r\nThank you", "@Tuan-Lee-23 - would you like to open a PR for this to give it a try? 
:-)\r\n\r\nThis would also help me understand the use case better", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi, I would also really love to see this. Just tried to generate from inputs_embeds on OPT and got the error message. Thanks! ", "@chaddech , could you explain your use-case in a bit more detail here?\r\n\r\n1) Why do you want to use word embeddings?\r\n2) Are you not using at all the word embeddings of OPT?\r\n3) Are your OPT model's input embeddings tied to the output embeddings? \r\n\r\nIn general I'm not completely against adding this feature, but only if the use case is solid since it requires lots of changes to `generate()`", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi @patrickvonplaten \r\n\r\nAn example of use case (for me) is an open-ended text generation after **soft-prompt tuning**.\r\n\r\nDuring the tuning, only the embeddings of n_tokens prompt is learnable. Other parameters are being frozen.\r\nSo the input of forward() function is the concatenated embeddings of n_tokens prompt and the embeddings of actual input (discrete tokens). Prompt is represented as dummy -- no actual discrete token (word) linked to it. \r\n\r\nSee [https://github.com/corolla-johnson/mkultra](https://github.com/corolla-johnson/mkultra ) or [https://github.com/kipgparker/soft-prompt-tuning](https://github.com/kipgparker/soft-prompt-tuning) for practicality.\r\n\r\n\r\nIt would be a lot easier if [generation_utils](https://github.com/huggingface/transformers/blob/c4d4e8bdbd25d9463d41de6398940329c89b7fb6/src/transformers/generation_utils.py#L101) allows for **input_embs**\r\n\r\nThank you.\r\n\r\n\r\n> @chaddech , could you explain your use-case in a bit more detail here?\r\n> \r\n> 1. Why do you want to use word embeddings?\r\n> 2. Are you not using at all the word embeddings of OPT?\r\n> 3. Are your OPT model's input embeddings tied to the output embeddings?\r\n> \r\n> In general I'm not completely against adding this feature, but only if the use case is solid since it requires lots of changes to `generate()`\r\n\r\n", "@ymfa could you maybe open a PR to show how a solution could look like (maybe just a super quick no dirty PR?)\r\n\r\nSorry I sadly won't have the time to dive into the other codebases or paper, but would be super happy to guide through a PR! \r\n\r\nAlso cc @patil-suraj @gante ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "I'm running into the same `ValueError` above when trying to replicate the paper \"[Locating and Editing Factual Associations in GPT](https://arxiv.org/abs/2202.05262)\". This technique relies on injecting noise into the word embeddings to corrupt them. 
Having this feature added would be very useful. Thanks!\r\n\r\n**Edit**: I found a workaround, allowing me to extract the next word from GPT2 given a custom embedding:\r\n```\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\nmodel = GPT2LMHeadModel.from_pretrained(\"gpt2\")\r\n\r\ncustom_embeds = torch.randn(1, 5, 768)\r\noutputs = model(inputs_embeds=custom_embeds)\r\nprobs = outputs.logits[:, -1, :].flatten()\r\nnext_token = probs.argmax()\r\ntokenizer.decode(next_token)\r\n```", "Hi @mkyl (and other participants in this thread) 👋 \r\n\r\nAs written above, passing `inputs_embeds` with decoder-only models is not possible at the moment. I see from the number of comments and likes above that this would be a somewhat appreciated functionality, so I want to help the folks here. \r\n\r\nHere's the issue -- `generate()` does a LOT of lifting to keep its interface simple. To enable calls with `inputs_embeds` we would need to greatly increase the complexity of an already complex piece of code, hurting everyone in the long run 🙅 Thankfully, there is an alternative: we can manually prepare a few inputs and call the generation methods directly, which support passing `inputs_embeds`. The catch is that a critical component of the models, `prepare_inputs_for_generation`, is not expecting `inputs_embeds`, so we will have to monkey patch it. But it works, as you can see in the example below 🙌 (I hope this example helps!)\r\n\r\nThe monkey patch is inconvenient, but I'm not entirely convinced that adding this feature is worth modifying tens of models. I make the following pact with y'all:\r\n⚠️ if this obscure comment in a closed issue reaches 10 reactions, I will implement the change to `prepare_inputs_for_generation` on all text-generation models. (Whoever does the 10th reaction, please tag me; cc @patrickvonplaten ) \r\n\r\n____________________________________________________\r\n(Note: prior to v4.26, you have to replace `past_key_values` by `past` in the code below)\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, MaxLengthCriteria, StoppingCriteriaList\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"gpt2\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"gpt2\")\r\n\r\ntext = \"Hello world\"\r\ninput_ids = tokenizer.encode(text, return_tensors=\"pt\")\r\n\r\n# Traditional way of generating text\r\noutputs = model.generate(input_ids)\r\nprint(\"\\ngenerate + input_ids:\", tokenizer.decode(outputs[0], skip_special_tokens=True))\r\n\r\n# Generating with decoder models from inputs_embeds\r\n# Step 1: monkey patch \"prepare_inputs_for_generation\" to pass inputs_embeds when they are available\r\ndef prepare_inputs_for_generation(input_ids, past_key_values=None, **kwargs):\r\n token_type_ids = kwargs.get(\"token_type_ids\", None)\r\n # only last token for inputs_ids if past_key_values is defined in kwargs\r\n if past_key_values:\r\n input_ids = input_ids[:, -1].unsqueeze(-1)\r\n if token_type_ids is not None:\r\n token_type_ids = token_type_ids[:, -1].unsqueeze(-1)\r\n\r\n attention_mask = kwargs.get(\"attention_mask\", None)\r\n position_ids = kwargs.get(\"position_ids\", None)\r\n\r\n if attention_mask is not None and position_ids is None:\r\n # create position_ids on the fly for batch generation\r\n position_ids = attention_mask.long().cumsum(-1) - 1\r\n position_ids.masked_fill_(attention_mask == 0, 1)\r\n if past_key_values:\r\n position_ids = position_ids[:, -1].unsqueeze(-1)\r\n else:\r\n position_ids = None\r\n\r\n # !!!!!!!!!!!!!!!!!!! 
start: modified vs original, to pass inputs_embeds when they are available\r\n if \"inputs_embeds\" in kwargs and past_key_values is None: # we only want to use them in the 1st generation step\r\n model_inputs = {\"inputs_embeds\": inputs_embeds}\r\n else:\r\n model_inputs = {\"input_ids\": input_ids}\r\n model_inputs.update({\r\n \"past_key_values\": past_key_values,\r\n \"use_cache\": kwargs.get(\"use_cache\"),\r\n \"position_ids\": position_ids,\r\n \"attention_mask\": attention_mask,\r\n \"token_type_ids\": token_type_ids,\r\n })\r\n return model_inputs\r\n # !!!!!!!!!!!!!!!!!!! end: modified vs original, to pass inputs_embeds when they are available\r\nmodel.prepare_inputs_for_generation = prepare_inputs_for_generation\r\n\r\n# Step 2: prepare the inputs for the generation method manually and call it\r\ninputs_embeds = model.transformer.wte(input_ids)\r\n# empty input ids -> the output will NOT include the input prompt, but will generate the same text (because of\r\n# inputs_embeds)\r\ninput_ids = torch.LongTensor([[model.config.bos_token_id]])\r\nstopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=20)])\r\noutputs = model.greedy_search(\r\n input_ids, inputs_embeds=inputs_embeds, stopping_criteria=stopping_criteria, pad_token_id=model.config.eos_token_id\r\n)\r\nprint(\"\\ngreedy + inputs_embeds:\", tokenizer.decode(outputs[0], skip_special_tokens=True))\r\n```", "@gante We hit ~10~ 11!", "@gante In this way, it seems that only the first token is based on inputs_embeds.", "Oh damn, this exceeded my expectations 🙈 Added to my todo list! Keep in mind that my queue is long at the moment, so this might take a few months.", "@BugApe in the example above, only the first forward pass will have `inputs_embeds` as input, but you can have more than one token there. If your target application requires manipulating `inputs_embeds` at each generation step, then you'd need to monkey-patch `prepare_inputs_for_generation` to embed the newly generated tokens and then manipulate it as you wish. That will not be included in the planned changes.\r\n\r\nHowever, in theory, I could make `generate` accept a dictionary of arbitrary functions to be applied to each input in `prepare_inputs_for_generation` (e.g. a function that embeds `input_ids` and then add some noise, to be applied at each step before the forward pass). \r\n\r\nI'll make the same pact as above: if this comment reaches 10 reactions, I will add the functionality to my todo list. (whoever does the 10th reaction, please tag me)", "Hi, how's the state of this feature? It would add a lot of flexibility to model construction", "@gante The monkey patch works nice for `model.greedy_search` and `model.contrastive_search`, but cannot work with `model.generate`, which has more utilities. Could you provide a monkey patch that works with `model.generate`? Many thanks! " ]
1,597
1,701
1,675
CONTRIBUTOR
null
# 🚀 Feature request Currently `GenerationMixin.generate()` only accepts `input_ids` but not `inputs_embeds`. Therefore this method is not usable when custom input embeddings are required. In contrast, many models do accept `inputs_embeds` as input. Additionally, for models that have both an encoder and a decoder, it is not possible to run `encoder.forward()` and `decoder.generate()` separately, because `generate()` does not accept `encoder_outputs` as input. ## Motivation Having the flexibility to input `inputs_embeds` or `encoder_outputs` is essential for many tasks. For example, the input can be the concatenation of a sequence of word embeddings and an image embedding or style embedding (of the same embedding size). I want to use `generate()` with a T5 model fine-tuned for such a task, where the input sequence contains both word and non-word embeddings.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6535/reactions", "total_count": 9, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 3, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6535/timeline
completed
null
null
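For readers landing on this closed feature request: a minimal sketch of the requested usage, assuming a recent transformers release (roughly v4.27 or later, after the `generate()` rework discussed in the thread above) in which decoder-only models accept `inputs_embeds` directly. The model choice, prompt, and tensor shapes are illustrative only, not part of the original issue.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("Hello world", return_tensors="pt").input_ids
# Build the prompt embeddings ourselves; any custom (batch, seq_len, hidden)
# tensor (e.g. soft prompts or CNN outputs, as discussed in the thread) works here.
inputs_embeds = model.get_input_embeddings()(input_ids)

# With inputs_embeds and no input_ids, the returned ids contain only the
# newly generated tokens (assumed behavior of recent versions).
outputs = model.generate(inputs_embeds=inputs_embeds, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```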
https://api.github.com/repos/huggingface/transformers/issues/6534
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6534/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6534/comments
https://api.github.com/repos/huggingface/transformers/issues/6534/events
https://github.com/huggingface/transformers/issues/6534
680,127,756
MDU6SXNzdWU2ODAxMjc3NTY=
6,534
How to fine-tune GPT2 on Arithmetic Problem
{ "login": "KYRIEZX", "id": 56826566, "node_id": "MDQ6VXNlcjU2ODI2NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/56826566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KYRIEZX", "html_url": "https://github.com/KYRIEZX", "followers_url": "https://api.github.com/users/KYRIEZX/followers", "following_url": "https://api.github.com/users/KYRIEZX/following{/other_user}", "gists_url": "https://api.github.com/users/KYRIEZX/gists{/gist_id}", "starred_url": "https://api.github.com/users/KYRIEZX/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KYRIEZX/subscriptions", "organizations_url": "https://api.github.com/users/KYRIEZX/orgs", "repos_url": "https://api.github.com/users/KYRIEZX/repos", "events_url": "https://api.github.com/users/KYRIEZX/events{/privacy}", "received_events_url": "https://api.github.com/users/KYRIEZX/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @KYRIEZX, \r\n\r\nwe are trying to move \"non-bug\" questions to our forum - would you mind posting the question there again: https://discuss.huggingface.co/ ? Thanks :-) " ]
1,597
1,597
1,597
NONE
null
I want to use GPT2 to solve simple arithmetic problems. How should I make the dataset?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6534/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6534/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6533
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6533/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6533/comments
https://api.github.com/repos/huggingface/transformers/issues/6533/events
https://github.com/huggingface/transformers/pull/6533
680,123,846
MDExOlB1bGxSZXF1ZXN0NDY4NzQwMzI5
6,533
[Docs Pegasus] Correct Pegasus Link in doc
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "thx!" ]
1,597
1,597
1,597
MEMBER
null
@sshleifer - The link to the Bart paper was accidentally used. Correcting it here.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6533/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6533/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6533", "html_url": "https://github.com/huggingface/transformers/pull/6533", "diff_url": "https://github.com/huggingface/transformers/pull/6533.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6533.patch", "merged_at": 1597659883000 }
https://api.github.com/repos/huggingface/transformers/issues/6532
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6532/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6532/comments
https://api.github.com/repos/huggingface/transformers/issues/6532/events
https://github.com/huggingface/transformers/pull/6532
680,056,069
MDExOlB1bGxSZXF1ZXN0NDY4NjgzODQy
6,532
Remove deprecated assertEquals
{ "login": "JetRunner", "id": 22514219, "node_id": "MDQ6VXNlcjIyNTE0MjE5", "avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JetRunner", "html_url": "https://github.com/JetRunner", "followers_url": "https://api.github.com/users/JetRunner/followers", "following_url": "https://api.github.com/users/JetRunner/following{/other_user}", "gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}", "starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions", "organizations_url": "https://api.github.com/users/JetRunner/orgs", "repos_url": "https://api.github.com/users/JetRunner/repos", "events_url": "https://api.github.com/users/JetRunner/events{/privacy}", "received_events_url": "https://api.github.com/users/JetRunner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,597
1,597
1,597
CONTRIBUTOR
null
`assertEquals` is deprecated: https://stackoverflow.com/questions/930995/assertequals-vs-assertequal-in-python/931011 This PR replaces these deprecated methods.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6532/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6532/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6532", "html_url": "https://github.com/huggingface/transformers/pull/6532", "diff_url": "https://github.com/huggingface/transformers/pull/6532.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6532.patch", "merged_at": 1597655639000 }
https://api.github.com/repos/huggingface/transformers/issues/6531
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6531/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6531/comments
https://api.github.com/repos/huggingface/transformers/issues/6531/events
https://github.com/huggingface/transformers/pull/6531
680,015,327
MDExOlB1bGxSZXF1ZXN0NDY4NjUwNDkw
6,531
Fix flaky ONNX tests
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[ { "id": 1834088753, "node_id": "MDU6TGFiZWwxODM0MDg4NzUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Tests", "name": "Tests", "color": "a6fcca", "default": false, "description": "Related to tests" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6531?src=pr&el=h1) Report\n> Merging [#6531](https://codecov.io/gh/huggingface/transformers/pull/6531?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/48c6c6139fbb2881ef16ac5d8afb6287467bf66e&el=desc) will **increase** coverage by `0.42%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6531/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6531?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6531 +/- ##\n==========================================\n+ Coverage 80.13% 80.55% +0.42% \n==========================================\n Files 156 156 \n Lines 28073 28073 \n==========================================\n+ Hits 22495 22615 +120 \n+ Misses 5578 5458 -120 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6531?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6531/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6531/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6531/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.50%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6531/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+0.68%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6531/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.97%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6531/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6531/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6531/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `95.31% <0.00%> (+39.06%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6531/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.83% <0.00%> (+72.25%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6531?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6531?src=pr&el=footer). 
Last update [48c6c61...bc493a2](https://codecov.io/gh/huggingface/transformers/pull/6531?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Cool, thanks for taking care of it!" ]
1,597
1,598
1,597
MEMBER
null
A bad indent was causing the `_test_export()` method to return `None` instead of a path. This PR uses `pathlib.Path` everywhere in `_test_export()`. Should fix #6336 and #6529
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6531/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6531/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6531", "html_url": "https://github.com/huggingface/transformers/pull/6531", "diff_url": "https://github.com/huggingface/transformers/pull/6531.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6531.patch", "merged_at": 1597669476000 }
https://api.github.com/repos/huggingface/transformers/issues/6530
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6530/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6530/comments
https://api.github.com/repos/huggingface/transformers/issues/6530/events
https://github.com/huggingface/transformers/pull/6530
680,008,589
MDExOlB1bGxSZXF1ZXN0NDY4NjQ1MDM4
6,530
Added first model card
{ "login": "onepointconsulting", "id": 35300398, "node_id": "MDQ6VXNlcjM1MzAwMzk4", "avatar_url": "https://avatars.githubusercontent.com/u/35300398?v=4", "gravatar_id": "", "url": "https://api.github.com/users/onepointconsulting", "html_url": "https://github.com/onepointconsulting", "followers_url": "https://api.github.com/users/onepointconsulting/followers", "following_url": "https://api.github.com/users/onepointconsulting/following{/other_user}", "gists_url": "https://api.github.com/users/onepointconsulting/gists{/gist_id}", "starred_url": "https://api.github.com/users/onepointconsulting/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/onepointconsulting/subscriptions", "organizations_url": "https://api.github.com/users/onepointconsulting/orgs", "repos_url": "https://api.github.com/users/onepointconsulting/repos", "events_url": "https://api.github.com/users/onepointconsulting/events{/privacy}", "received_events_url": "https://api.github.com/users/onepointconsulting/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6530?src=pr&el=h1) Report\n> Merging [#6530](https://codecov.io/gh/huggingface/transformers/pull/6530?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/48c6c6139fbb2881ef16ac5d8afb6287467bf66e&el=desc) will **decrease** coverage by `0.69%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6530/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6530?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6530 +/- ##\n==========================================\n- Coverage 80.13% 79.43% -0.70% \n==========================================\n Files 156 156 \n Lines 28073 28073 \n==========================================\n- Hits 22495 22301 -194 \n- Misses 5578 5772 +194 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6530?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.01%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+0.68%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `95.31% <0.00%> (+39.06%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.83% <0.00%> (+72.25%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6530?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6530?src=pr&el=footer). Last update [48c6c61...0820568](https://codecov.io/gh/huggingface/transformers/pull/6530?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "@onepointconsulting Thanks for sharing! You can control the default inputs to https://huggingface.co/gilf/french-camembert-postag-model with the `widget:` metadata attribute", "@julien-c Thank you for pointing out the `widget:` metadata attribute. I was not aware of that." ]
1,597
1,597
1,597
CONTRIBUTOR
null
Added a model card which lists the tags used in the model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6530/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6530/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6530", "html_url": "https://github.com/huggingface/transformers/pull/6530", "diff_url": "https://github.com/huggingface/transformers/pull/6530.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6530.patch", "merged_at": 1597695851000 }
https://api.github.com/repos/huggingface/transformers/issues/6529
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6529/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6529/comments
https://api.github.com/repos/huggingface/transformers/issues/6529/events
https://github.com/huggingface/transformers/pull/6529
679,864,187
MDExOlB1bGxSZXF1ZXN0NDY4NTI0NzM0
6,529
skip onnx test until morgan comes back
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6529?src=pr&el=h1) Report\n> Merging [#6529](https://codecov.io/gh/huggingface/transformers/pull/6529?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/72add6c98f2c0607f088fa0c78d40f11e2efa4c4&el=desc) will **decrease** coverage by `1.14%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6529/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6529?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6529 +/- ##\n==========================================\n- Coverage 80.38% 79.23% -1.15% \n==========================================\n Files 156 156 \n Lines 28058 28058 \n==========================================\n- Hits 22554 22233 -321 \n- Misses 5504 5825 +321 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6529?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `66.66% <0.00%> (-32.50%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `90.24% <0.00%> (-3.53%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.31% <0.00%> (-0.98%)` | :arrow_down: |\n| ... 
and [6 more](https://codecov.io/gh/huggingface/transformers/pull/6529/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6529?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6529?src=pr&el=footer). Last update [72add6c...885985a](https://codecov.io/gh/huggingface/transformers/pull/6529?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,597
1,597
1,597
CONTRIBUTOR
null
temporary solution for #6181
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6529/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6529/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6529", "html_url": "https://github.com/huggingface/transformers/pull/6529", "diff_url": "https://github.com/huggingface/transformers/pull/6529.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6529.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6528
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6528/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6528/comments
https://api.github.com/repos/huggingface/transformers/issues/6528/events
https://github.com/huggingface/transformers/pull/6528
679,862,307
MDExOlB1bGxSZXF1ZXN0NDY4NTIzMTg3
6,528
attempt to fix onnx test
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6528?src=pr&el=h1) Report\n> Merging [#6528](https://codecov.io/gh/huggingface/transformers/pull/6528?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/72add6c98f2c0607f088fa0c78d40f11e2efa4c4&el=desc) will **increase** coverage by `0.20%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6528/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6528?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6528 +/- ##\n==========================================\n+ Coverage 80.38% 80.59% +0.20% \n==========================================\n Files 156 156 \n Lines 28058 28058 \n==========================================\n+ Hits 22554 22612 +58 \n+ Misses 5504 5446 -58 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6528?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6528/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6528/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.69% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6528/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6528/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <0.00%> (+29.31%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6528?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6528?src=pr&el=footer). Last update [72add6c...821c990](https://codecov.io/gh/huggingface/transformers/pull/6528?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,597
1,597
1,597
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6528/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6528/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6528", "html_url": "https://github.com/huggingface/transformers/pull/6528", "diff_url": "https://github.com/huggingface/transformers/pull/6528.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6528.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6527
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6527/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6527/comments
https://api.github.com/repos/huggingface/transformers/issues/6527/events
https://github.com/huggingface/transformers/pull/6527
679,839,695
MDExOlB1bGxSZXF1ZXN0NDY4NTA1NjE2
6,527
Update bert-base-portuguese-cased and bert-large-portuguese-cased cards
{ "login": "fabiocapsouza", "id": 15973165, "node_id": "MDQ6VXNlcjE1OTczMTY1", "avatar_url": "https://avatars.githubusercontent.com/u/15973165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fabiocapsouza", "html_url": "https://github.com/fabiocapsouza", "followers_url": "https://api.github.com/users/fabiocapsouza/followers", "following_url": "https://api.github.com/users/fabiocapsouza/following{/other_user}", "gists_url": "https://api.github.com/users/fabiocapsouza/gists{/gist_id}", "starred_url": "https://api.github.com/users/fabiocapsouza/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fabiocapsouza/subscriptions", "organizations_url": "https://api.github.com/users/fabiocapsouza/orgs", "repos_url": "https://api.github.com/users/fabiocapsouza/repos", "events_url": "https://api.github.com/users/fabiocapsouza/events{/privacy}", "received_events_url": "https://api.github.com/users/fabiocapsouza/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6527?src=pr&el=h1) Report\n> Merging [#6527](https://codecov.io/gh/huggingface/transformers/pull/6527?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/00bb0b25ed66a4878f2e0ffdd1ca65b7684db57e&el=desc) will **decrease** coverage by `0.27%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6527/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6527?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6527 +/- ##\n==========================================\n- Coverage 80.24% 79.97% -0.28% \n==========================================\n Files 149 149 \n Lines 27680 27680 \n==========================================\n- Hits 22211 22136 -75 \n- Misses 5469 5544 +75 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6527?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6527/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6527/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6527/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6527/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <0.00%> (+17.47%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6527/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `87.73% <0.00%> (+63.19%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6527?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6527?src=pr&el=footer). Last update [00bb0b2...1d585e3](https://codecov.io/gh/huggingface/transformers/pull/6527?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,597
1,597
1,597
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6527/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6527/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6527", "html_url": "https://github.com/huggingface/transformers/pull/6527", "diff_url": "https://github.com/huggingface/transformers/pull/6527.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6527.patch", "merged_at": 1597632590000 }
https://api.github.com/repos/huggingface/transformers/issues/6526
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6526/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6526/comments
https://api.github.com/repos/huggingface/transformers/issues/6526/events
https://github.com/huggingface/transformers/pull/6526
679,820,006
MDExOlB1bGxSZXF1ZXN0NDY4NDkxNzEy
6,526
add BartConfig.force_bos_token_to_be_generated
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6526?src=pr&el=h1) Report\n> Merging [#6526](https://codecov.io/gh/huggingface/transformers/pull/6526?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fe61c05b85f98846779bb490a747875e7d54ec2a&el=desc) will **decrease** coverage by `2.39%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6526/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6526?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6526 +/- ##\n==========================================\n- Coverage 80.59% 78.19% -2.40% \n==========================================\n Files 156 156 \n Lines 28058 28055 -3 \n==========================================\n- Hits 22612 21939 -673 \n- Misses 5446 6116 +670 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6526?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `94.00% <100.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.56% <100.00%> (-0.20%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `90.00% <100.00%> (ø)` | |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.26% <0.00%> (-53.43%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `66.00% <0.00%> (-32.38%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.36% <0.00%> (-14.37%)` | :arrow_down: |\n| ... 
and [12 more](https://codecov.io/gh/huggingface/transformers/pull/6526/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6526?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6526?src=pr&el=footer). Last update [2060181...5e951e1](https://codecov.io/gh/huggingface/transformers/pull/6526?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,597
1,597
1,597
CONTRIBUTOR
null
This PR adds a config flag that makes a generation hack from bart's `adjust_logits_during_generation` optional. This change (setting the flag to False) improves metrics for bart-large-xsum and mbart-large-en-ro, but not bart-large-cnn (hence the need for the flag). I remember @patrickvonplaten asked me about this in February; I ran on CNN and then decided the hack needed to stay, but could have been more careful. cc @patil-suraj\n\n### Todo\n- [x] test xsum\n- [x] test cnn\n- [x] test en-ro\n- [x] update cnn config\n- [x] test pegasus\n- [x] update distilbart configs\n- [ ] should max_length be changed in xsum config?\n\n### TODO:\n- check pegasus\n- should max_length be 1 lower for configs where hack is removed? Yes.\n\n### Metrics (all on val.source vs. val.target)\nThe first number is runtime over the whole val set. "noforce" means this PR without any config change (so the BOS token is not forced to be generated).\n\n**bart-large-cnn**\n**master: 86:23 {"rouge1": 44.79, "rouge2": 21.64, "rougeL": 31.18}**\nnoforce: 87:40 {'rouge1': 44.26, 'rouge2': 21.22, 'rougeL': 30.72}\n\n**bart-large-xsum**\nmaster: 41:59, {'rouge1': 45.16, 'rouge2': 21.77, 'rougeL': 36.35}\n**noforce: 34:12, {'rouge1': 45.45, 'rouge2': 22.38, 'rougeL': 37.25}**\n\n**mbart-large-en-ro**\nmaster: 04:58 BLEU=27.83\n**noforce: 04:42, BLEU=28.15**\n\n**pegasus-xsum**\nmaster: 56:12 {'rouge1': 46.69, 'rouge2': 24.13, 'rougeL': 38.79}\n**noforce: 54:15 {'rouge1': 46.98, 'rouge2': 24.43, 'rougeL': 39.11}**\n\n#### Commands\n```bash\nexport DATA_DIR=wmt_en_ro\npython run_eval.py facebook/mbart-large-en-ro \\n    $DATA_DIR/val.source gens/mbart-enro-branch-gens.txt \\n    --reference_path $DATA_DIR/val.target \\n    --score_path gens/mbart-enro-master-bleu.json \\n    --task translation_en_to_ro \\n    --device cuda \\n    --fp16 \\n    --bs 32\n\nexport DATA_DIR=$CNN_DIR\npython run_eval.py facebook/bart-large-cnn \\n    $DATA_DIR/val.source gens/cnn_val_generations_no_force.txt \\n    --reference_path $DATA_DIR/val.target \\n    --score_path gens/cnn_gen_no_force_rouge.txt \\n    --device cuda \\n    --fp16 \\n    --bs 32\n\nexport DATA_DIR=$CNN_DIR\npython run_eval.py facebook/bart-large-cnn \\n    $DATA_DIR/val.source gens/cnn_val_generations_master.txt \\n    --reference_path $DATA_DIR/val.target \\n    --score_path gens/cnn_gen_master_rouge.txt \\n    --device cuda \\n    --fp16 \\n    --bs 32\n```
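The PR body above mentions bart's `adjust_logits_during_generation` hack without showing it. As a minimal sketch of what gating the hack behind a config flag can look like (the flag name `force_bos_token_to_be_generated` and the `force_token` helper are assumptions based on the description, not necessarily the code merged in #6526):

```python
import torch

def adjust_logits_during_generation(logits, cur_len, max_length, config):
    """Config-gated version of the BOS-forcing generation hack (sketch).

    `config.force_bos_token_to_be_generated` is an assumed flag name:
    when it is False, the first decoding step is left untouched, which
    is the "noforce" configuration benchmarked in the PR body above.
    """
    def force_token(scores, token_id):
        # everything except `token_id` gets -inf added, so only
        # `token_id` can be generated at this step
        mask = torch.full_like(scores, float("-inf"))
        mask[:, token_id] = 0.0
        return scores + mask

    if cur_len == 1 and config.force_bos_token_to_be_generated:
        logits = force_token(logits, config.bos_token_id)
    elif cur_len == max_length - 1 and config.eos_token_id is not None:
        # EOS is still forced at the final position, flag or not
        logits = force_token(logits, config.eos_token_id)
    return logits
```

With the flag set to False, the first decoding step samples freely; only the forced-BOS behavior changes, not the forced EOS at the end of generation.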
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6526/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6526/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6526", "html_url": "https://github.com/huggingface/transformers/pull/6526", "diff_url": "https://github.com/huggingface/transformers/pull/6526.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6526.patch", "merged_at": 1597792550000 }
https://api.github.com/repos/huggingface/transformers/issues/6525
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6525/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6525/comments
https://api.github.com/repos/huggingface/transformers/issues/6525/events
https://github.com/huggingface/transformers/pull/6525
679,819,977
MDExOlB1bGxSZXF1ZXN0NDY4NDkxNjkw
6,525
[s2s] docs, document desired filenames nicely
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6525?src=pr&el=h1) Report\n> Merging [#6525](https://codecov.io/gh/huggingface/transformers/pull/6525?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fe61c05b85f98846779bb490a747875e7d54ec2a&el=desc) will **decrease** coverage by `0.55%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6525/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6525?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6525 +/- ##\n==========================================\n- Coverage 80.59% 80.03% -0.56% \n==========================================\n Files 156 156 \n Lines 28058 28058 \n==========================================\n- Hits 22612 22457 -155 \n- Misses 5446 5601 +155 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6525?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6525/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6525/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `48.80% <0.00%> (-46.43%)` | :arrow_down: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6525/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `56.25% <0.00%> (-39.07%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6525/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6525/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6525/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.31% <0.00%> (-0.98%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6525/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.91% <0.00%> (-0.69%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6525/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6525/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.94% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6525/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: |\n| ... 
and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6525/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6525?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6525?src=pr&el=footer). Last update [2060181...f5a5373](https://codecov.io/gh/huggingface/transformers/pull/6525?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,597
1,597
1,597
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6525/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6525/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6525", "html_url": "https://github.com/huggingface/transformers/pull/6525", "diff_url": "https://github.com/huggingface/transformers/pull/6525.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6525.patch", "merged_at": 1597624283000 }
https://api.github.com/repos/huggingface/transformers/issues/6524
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6524/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6524/comments
https://api.github.com/repos/huggingface/transformers/issues/6524/events
https://github.com/huggingface/transformers/pull/6524
679,817,706
MDExOlB1bGxSZXF1ZXN0NDY4NDkwMDQ0
6,524
[wip] experiment with mbart special tokens
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,597
1,598
1,598
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6524/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6524/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6524", "html_url": "https://github.com/huggingface/transformers/pull/6524", "diff_url": "https://github.com/huggingface/transformers/pull/6524.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6524.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6523
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6523/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6523/comments
https://api.github.com/repos/huggingface/transformers/issues/6523/events
https://github.com/huggingface/transformers/pull/6523
679,800,676
MDExOlB1bGxSZXF1ZXN0NDY4NDc3NzY4
6,523
[testing] replace hardcoded paths to allow running tests from anywhere
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "CI failure is unrelated to this PR", "so dope!" ]
1,597
1,598
1,598
CONTRIBUTOR
null
Currently, some tests can be run only from the root of the project, since they hardcode relative paths. With this PR, it's now possible to run tests from the root of the project, from inside `tests`, or really anywhere.\n\nThe gist of the change:\n```\n- PATH_SAMPLE_TEXT = "./tests/fixtures/sample_text.txt"\n+ PATH_SAMPLE_TEXT = f"{get_tests_dir()}/fixtures/sample_text.txt"\n```\n\nNow one can do:\n```\npytest tests/test_trainer.py\n```\nand:\n```\ncd tests\npytest ./test_trainer.py\n```\nor even:\n```\ncd examples\npytest ../tests/test_trainer.py\n```\ni.e. you no longer need to go to the root of the project to run tests. (I'm trying to figure out the codecov issue when running multiple tests, so it's easier to run tests from within the `tests` dir.)\n\np.s. CI failures are unrelated.
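The diff above relies on a `get_tests_dir()` helper whose implementation is not shown in this PR body. One plausible sketch (an assumption, not necessarily the merged code) resolves the directory from the calling test file rather than the current working directory:

```python
import inspect
import os

def get_tests_dir():
    """Return the absolute directory of the test file that calls this.

    Deriving the path from the caller's file instead of os.getcwd() is
    what lets fixture paths resolve correctly whether pytest is run from
    the project root, from inside tests/, or from anywhere else.
    """
    # stack()[1] is the frame of this function's immediate caller
    caller_file = inspect.stack()[1].filename
    return os.path.abspath(os.path.dirname(caller_file))
```

A fixture path then becomes `f"{get_tests_dir()}/fixtures/sample_text.txt"`, as in the diff above, and no longer depends on where pytest was invoked.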
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6523/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6523/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6523", "html_url": "https://github.com/huggingface/transformers/pull/6523", "diff_url": "https://github.com/huggingface/transformers/pull/6523.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6523.patch", "merged_at": 1598545338000 }
https://api.github.com/repos/huggingface/transformers/issues/6522
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6522/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6522/comments
https://api.github.com/repos/huggingface/transformers/issues/6522/events
https://github.com/huggingface/transformers/pull/6522
679,796,803
MDExOlB1bGxSZXF1ZXN0NDY4NDc1MTA0
6,522
Create model cards for Indonesian models
{ "login": "cahya-wirawan", "id": 7669893, "node_id": "MDQ6VXNlcjc2Njk4OTM=", "avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cahya-wirawan", "html_url": "https://github.com/cahya-wirawan", "followers_url": "https://api.github.com/users/cahya-wirawan/followers", "following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}", "gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}", "starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions", "organizations_url": "https://api.github.com/users/cahya-wirawan/orgs", "repos_url": "https://api.github.com/users/cahya-wirawan/repos", "events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}", "received_events_url": "https://api.github.com/users/cahya-wirawan/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6522?src=pr&el=h1) Report\n> Merging [#6522](https://codecov.io/gh/huggingface/transformers/pull/6522?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fe61c05b85f98846779bb490a747875e7d54ec2a&el=desc) will **decrease** coverage by `1.35%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6522/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6522?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6522 +/- ##\n==========================================\n- Coverage 80.59% 79.23% -1.36% \n==========================================\n Files 156 156 \n Lines 28058 28058 \n==========================================\n- Hits 22612 22233 -379 \n- Misses 5446 5825 +379 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6522?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `66.66% <0.00%> (-32.50%)` | :arrow_down: |\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `87.50% <0.00%> (-9.73%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `89.97% <0.00%> (-3.80%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.31% <0.00%> (-0.98%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.91% <0.00%> (-0.69%)` | :arrow_down: |\n| ... 
and [4 more](https://codecov.io/gh/huggingface/transformers/pull/6522/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6522?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6522?src=pr&el=footer). Last update [fe61c05...c056197](https://codecov.io/gh/huggingface/transformers/pull/6522?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,597
1,597
1,597
CONTRIBUTOR
null
Added model cards for Indonesian gpt2-small, bert-base, and roberta-base models. Thanks.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6522/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6522/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6522", "html_url": "https://github.com/huggingface/transformers/pull/6522", "diff_url": "https://github.com/huggingface/transformers/pull/6522.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6522.patch", "merged_at": 1597650146000 }