url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/2312 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2312/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2312/comments | https://api.github.com/repos/huggingface/transformers/issues/2312/events | https://github.com/huggingface/transformers/pull/2312 | 542,364,419 | MDExOlB1bGxSZXF1ZXN0MzU2ODIyMDEw | 2,312 | Correct tokenization for special and added tokens | {
"login": "vitaliyradchenko",
"id": 13647822,
"node_id": "MDQ6VXNlcjEzNjQ3ODIy",
"avatar_url": "https://avatars.githubusercontent.com/u/13647822?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vitaliyradchenko",
"html_url": "https://github.com/vitaliyradchenko",
"followers_url": "https://api.github.com/users/vitaliyradchenko/followers",
"following_url": "https://api.github.com/users/vitaliyradchenko/following{/other_user}",
"gists_url": "https://api.github.com/users/vitaliyradchenko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vitaliyradchenko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vitaliyradchenko/subscriptions",
"organizations_url": "https://api.github.com/users/vitaliyradchenko/orgs",
"repos_url": "https://api.github.com/users/vitaliyradchenko/repos",
"events_url": "https://api.github.com/users/vitaliyradchenko/events{/privacy}",
"received_events_url": "https://api.github.com/users/vitaliyradchenko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2312?src=pr&el=h1) Report\n> Merging [#2312](https://codecov.io/gh/huggingface/transformers/pull/2312?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cea04a244351a7c5bce44e1cfc01abde0ceb60fd?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2312?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2312 +/- ##\n==========================================\n+ Coverage 73.54% 73.54% +<.01% \n==========================================\n Files 87 87 \n Lines 14789 14791 +2 \n==========================================\n+ Hits 10876 10878 +2 \n Misses 3913 3913\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2312?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2312/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `92.11% <100%> (+0.03%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2312/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `63.91% <0%> (ΓΈ)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2312?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2312?src=pr&el=footer). Last update [cea04a2...b262577](https://codecov.io/gh/huggingface/transformers/pull/2312?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This is awesome, thanks a lot @vitaliyradchenko "
] | 1,577 | 1,577 | 1,577 | CONTRIBUTOR | null | When a tokenizer is being loaded with `PreTrainedTokenizer._from_pretrained`, it should set `added_tokens` and `all_special_tokens` to `unique_added_tokens_encoder`.
If we don't do this, the tokenization is corrupted.
Example:
```
import transformers
tokenizer = transformers.BertTokenizer.from_pretrained("bert-base-uncased")
tokenizer.tokenize("[CLS] token should not be splitted.")
# correct output
# ['[CLS]', 'token', 'should', 'not', 'be', 'split', '##ted', '.']
# incorrect output
# ['[', '[UNK]', ']', 'token', 'should', 'not', 'be', 'split', '##ted', '.']
```
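For reference, a minimal sketch of what the fix amounts to (the attribute names follow the tokenizer API of this era; treat it as an illustration rather than the exact patch):
```
# Sketch: while loading in PreTrainedTokenizer._from_pretrained, register both
# the user-added tokens and the special tokens, so the pre-splitting step that
# protects them from the subword tokenizer sees all of them.
unique_added_tokens_encoder = set(tokenizer.added_tokens_encoder.keys()) | set(tokenizer.all_special_tokens)
```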
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2312/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2312/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2312",
"html_url": "https://github.com/huggingface/transformers/pull/2312",
"diff_url": "https://github.com/huggingface/transformers/pull/2312.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2312.patch",
"merged_at": 1577303551000
} |
https://api.github.com/repos/huggingface/transformers/issues/2311 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2311/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2311/comments | https://api.github.com/repos/huggingface/transformers/issues/2311/events | https://github.com/huggingface/transformers/issues/2311 | 542,342,605 | MDU6SXNzdWU1NDIzNDI2MDU= | 2,311 | Can I use BERT / gpt-2 for text generation | {
"login": "orenpapers",
"id": 28626773,
"node_id": "MDQ6VXNlcjI4NjI2Nzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/28626773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orenpapers",
"html_url": "https://github.com/orenpapers",
"followers_url": "https://api.github.com/users/orenpapers/followers",
"following_url": "https://api.github.com/users/orenpapers/following{/other_user}",
"gists_url": "https://api.github.com/users/orenpapers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orenpapers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orenpapers/subscriptions",
"organizations_url": "https://api.github.com/users/orenpapers/orgs",
"repos_url": "https://api.github.com/users/orenpapers/repos",
"events_url": "https://api.github.com/users/orenpapers/events{/privacy}",
"received_events_url": "https://api.github.com/users/orenpapers/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You could do something like this when using gpt2\r\n\r\n```\r\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer\r\nfrom torch.nn import functional as F\r\nimport torch\r\n\r\nmodel = GPT2LMHeadModel.from_pretrained('gpt2-medium')\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')\r\n\r\n# encode input context\r\ninput_ids = torch.tensor(tokenizer.encode('I put the glass of the')).unsqueeze(0)\r\n# get logits of last predicted token\r\nnext_word_logits = model(input_ids)[0][0, -1].detach()\r\nnext_word_probs = F.softmax(next_word_logits, dim=0)\r\n\r\nnext_words = ['desk', 'table', 'car', 'shirt']\r\nnext_words_probs = []\r\n# encode tokens for which prob is to be estimated\r\nnext_word_ids = [tokenizer.encode(next_word) for next_word in next_words]\r\n\r\nfor next_word_id in next_word_ids:\r\n next_word_input_ids = input_ids.clone()\r\n next_word_prob = next_word_probs[next_word_id[0]].item()\r\n # We need a while loop here because a single word can be composed of multiple tokens\r\n # 'desk' is encoded to 2 tokens so that we have to call the model another time\r\n while(len(next_word_id) > 1):\r\n next_word_input_ids = torch.cat((next_word_input_ids, torch.tensor([next_word_id[0]]).unsqueeze(0)), dim=1)\r\n # get logits of last predicted token\r\n next_word_logits = model(next_word_input_ids)[0][0, -1].detach()\r\n # multiply prob of next token to prob of previous tokens\r\n next_word_prob *= F.softmax(next_word_logits, dim=0)[next_word_id[1]].item()\r\n # remove first token since already used\r\n next_word_id = next_word_id[1:]\r\n next_words_probs.append(next_word_prob)\r\n\r\n# print result\r\nfor next_word, next_word_prob in zip(next_words, next_words_probs):\r\n print('{} = {}'.format(next_word, next_word_prob))\r\n```\r\n \r\n",
"Yes it is possible u need to take the topk of lm_logits (it will be output[0] in case of gpt)which essentially gives to 50257 probabilities (highest to lowest) which is the vocab size then you need to take top k which gives indices and values, values are nothing but urs scores(0.8, 0.1) and the indices which correspond to the 50257 vocabulary words which u can decode using tokenize decode.",
"@patrickvonplaten Amazing thanks!\r\nAnd if I want the rank of these words from all the word in the vocab?\r\ne.g. desk is the most probable word , table in #12 , etc. ?",
"Since GPT-2's output is based on byte-pair-encoding tokens and not on words you would have to define your own vocabulary. Having defined your vocabulary, I would simply calculate the probability for each word using the above procedure and then sort the tensor. \r\nTo better understand how byte-pair-encoding works [this](https://leimao.github.io/blog/Byte-Pair-Encoding/) might help. \r\nTo sort the tensor [this](https://stackoverflow.com/questions/56176439/pytorch-argsort-ordered-with-duplicate-elements-in-the-tensor) might help.",
"@patrickvonplaten Thanks, you think it will be possible to do it for all (or at least most) of the words in English in my personal MAC?",
"Yeah, I think that should definitely be feasible. \r\nMany words will consists of two tokens or less and will therefore need at most one additional forward pass (because the first forward pass is the same for all words and need to be calculated only once). \r\n\r\nSo if you have a vocabulary of say 300.000 words, I'd estimate that you would have to compute around 200.000 forward passes. You can calculate how much time a forward pass would take by averaging the computation time for 100 times calculating the probability for the word 'desk'. \r\n\r\nConcerning memory, there should not be a problem.",
"And the final vector giving the probabilities over your defined vocabulary should be normalized to make a prob distribution.",
"@patrickvonplaten You mean using softmax?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I was thinking to just normalize like this:\r\nhttps://stackoverflow.com/questions/26785354/normalizing-a-list-of-numbers-in-python\r\n\r\nbut you could also use softmax again - depends on what you want and what works better for you! ",
"@patrickvonplaten is it possible with BERT pre-trained model?\r\nThanks!",
"You might take a look at masked language modeling :-) https://huggingface.co/transformers/usage.html#masked-language-modeling",
"@patrickvonplaten Nice! Thanks for the pointer!\r\nAnd let's say I want to check a specific word in a masked location (What is the probability of the word \"`package` \" in the sequence \"`HuggingFace is creating a { } that the community uses to`\"? Is this possible?"
] | 1,577 | 1,589 | 1,583 | NONE | null | ## ❓ Questions & Help
I want to get a list of possible completions and their probabilities.
For example,
For the sentence "I put the glass of the _"
I want to get a vector of words and probabilities from a pre-trained model, such as:
desk = 0.1
table = 0.2
car = 0.05
shirt = 0.001
Is that possible? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2311/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2310 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2310/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2310/comments | https://api.github.com/repos/huggingface/transformers/issues/2310/events | https://github.com/huggingface/transformers/pull/2310 | 542,268,571 | MDExOlB1bGxSZXF1ZXN0MzU2NzQ1MTIz | 2,310 | revert erroneous fix #2276 | {
"login": "ShnitzelKiller",
"id": 6132502,
"node_id": "MDQ6VXNlcjYxMzI1MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6132502?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShnitzelKiller",
"html_url": "https://github.com/ShnitzelKiller",
"followers_url": "https://api.github.com/users/ShnitzelKiller/followers",
"following_url": "https://api.github.com/users/ShnitzelKiller/following{/other_user}",
"gists_url": "https://api.github.com/users/ShnitzelKiller/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShnitzelKiller/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShnitzelKiller/subscriptions",
"organizations_url": "https://api.github.com/users/ShnitzelKiller/orgs",
"repos_url": "https://api.github.com/users/ShnitzelKiller/repos",
"events_url": "https://api.github.com/users/ShnitzelKiller/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShnitzelKiller/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2310?src=pr&el=h1) Report\n> Merging [#2310](https://codecov.io/gh/huggingface/transformers/pull/2310?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/81db12c3ba0c2067f43c4a63edf5e45f54161042?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `0%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2310?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2310 +/- ##\n=======================================\n Coverage 73.54% 73.54% \n=======================================\n Files 87 87 \n Lines 14789 14789 \n=======================================\n Hits 10876 10876 \n Misses 3913 3913\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2310?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `63.91% <0%> (ΓΈ)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2310?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2310?src=pr&el=footer). Last update [81db12c...e1844d9](https://codecov.io/gh/huggingface/transformers/pull/2310?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi, probably best to just use positional arguments here (instead of keywords) then, don't you think?",
"Great, thanks!"
] | 1,577 | 1,577 | 1,577 | CONTRIBUTOR | null | I based #2276 on having an error pop up on an older pytorch version, and also on the erroneous (current!) documentation for torch.Tensor.scatter():
> `scatter(dim, index, source)` → Tensor
>
> Out-of-place version of torch.Tensor.scatter_()
>
> `scatter_(dim, index, src)` → Tensor
> ...
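For illustration, since the keyword name changed between releases, calling the method positionally sidesteps the mismatch entirely (a small sketch; the tensor names are placeholders):
```
import torch

logits = torch.zeros(2, 4)
index = torch.tensor([[0], [2]])
values = torch.ones(2, 1)

out = logits.scatter(1, index, values)        # positional: works whether the kwarg is `src` or `source`
# out = logits.scatter(1, index, src=values)  # breaks on versions where the kwarg was named `source`
```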
The argument was called `source`, inconsistently, in the version I was using, but somewhere along the way it went back to being `src` without the docs changing, which caused this confusion... | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2310/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2310",
"html_url": "https://github.com/huggingface/transformers/pull/2310",
"diff_url": "https://github.com/huggingface/transformers/pull/2310.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2310.patch",
"merged_at": 1577274203000
} |
https://api.github.com/repos/huggingface/transformers/issues/2309 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2309/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2309/comments | https://api.github.com/repos/huggingface/transformers/issues/2309/events | https://github.com/huggingface/transformers/issues/2309 | 542,253,499 | MDU6SXNzdWU1NDIyNTM0OTk= | 2,309 | Bug: Tokenization of Special Tokens | {
"login": "eladsegal",
"id": 13485709,
"node_id": "MDQ6VXNlcjEzNDg1NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eladsegal",
"html_url": "https://github.com/eladsegal",
"followers_url": "https://api.github.com/users/eladsegal/followers",
"following_url": "https://api.github.com/users/eladsegal/following{/other_user}",
"gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions",
"organizations_url": "https://api.github.com/users/eladsegal/orgs",
"repos_url": "https://api.github.com/users/eladsegal/repos",
"events_url": "https://api.github.com/users/eladsegal/events{/privacy}",
"received_events_url": "https://api.github.com/users/eladsegal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,577 | 1,577 | 1,577 | CONTRIBUTOR | null | ## 🐛 Bug
The commit https://github.com/huggingface/transformers/commit/deceb001616995199a6a5dca866ffec95c3ebe74 introduces a bug in the tokenization of special tokens when using `from_pretrained` to initialize the tokenizer.
```
from transformers import AutoTokenizer
bert_tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
bert_text = "[CLS] An example for [MASK] change. [SEP]"
bert_tokenizer.tokenize(bert_text)
# Before: ['[CLS]', 'an', 'example', 'for', '[MASK]', 'change', '.', '[SEP]']
# After: ['[', '[UNK]', ']', 'an', 'example', 'for', '[', '[UNK]', ']', 'change', '.', '[', '[UNK]', ']']
roberta_tokenizer = AutoTokenizer.from_pretrained('roberta-base')
roberta_text = "<s> An example for <mask> change. </s>"
roberta_tokenizer.tokenize(roberta_text)
# Before: ['<s>', 'An', 'Δ example', 'Δ for', '<mask>', 'change', '.', '</s>']
# After: ['<', 's', '>', 'Δ An', 'Δ example', 'Δ for', 'Δ <', 'mask', '>', 'Δ change', '.', 'Δ </', 's', '>']
```
Fixed by https://github.com/huggingface/transformers/pull/2312. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2309/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2309/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2308 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2308/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2308/comments | https://api.github.com/repos/huggingface/transformers/issues/2308/events | https://github.com/huggingface/transformers/issues/2308 | 542,242,412 | MDU6SXNzdWU1NDIyNDI0MTI= | 2,308 | pytorch_pretrained_bert giving different scores for BertForNextSentencePrediction | {
"login": "LiZhengArsenal",
"id": 58454577,
"node_id": "MDQ6VXNlcjU4NDU0NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/58454577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LiZhengArsenal",
"html_url": "https://github.com/LiZhengArsenal",
"followers_url": "https://api.github.com/users/LiZhengArsenal/followers",
"following_url": "https://api.github.com/users/LiZhengArsenal/following{/other_user}",
"gists_url": "https://api.github.com/users/LiZhengArsenal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LiZhengArsenal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LiZhengArsenal/subscriptions",
"organizations_url": "https://api.github.com/users/LiZhengArsenal/orgs",
"repos_url": "https://api.github.com/users/LiZhengArsenal/repos",
"events_url": "https://api.github.com/users/LiZhengArsenal/events{/privacy}",
"received_events_url": "https://api.github.com/users/LiZhengArsenal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,577 | 1,578 | 1,577 | NONE | null | ## ❓ Questions & Help
```
from pytorch_transformers.modeling_bert import BertForNextSentencePrediction
from pytorch_transformers import BertTokenizer, BertConfig
import torch

# Load pretrained model from local
config = BertConfig.from_json_file('resources/bert_config.json')
token = BertTokenizer('resources/vocab.txt')
model = BertForNextSentencePrediction.from_pretrained('resources/pytorch_model.bin', config=config)
model.eval()

textA_ids = token.tokenize("How old are you?")
textB_ids = token.tokenize("The Eiffel Tower is in Paris")
text_ids = token.convert_tokens_to_ids(["[CLS]"] + textA_ids + ["[SEP]"] + textB_ids + ["[SEP]"])
segments_ids = [0]*(len(textA_ids)+2) + [1]*(len(textB_ids)+1)
text_inputs = torch.tensor([text_ids])
segments_inputs = torch.tensor([segments_ids])

with torch.no_grad():
    outputs = model(text_inputs, token_type_ids=segments_inputs)
print(outputs)
```
The outputs changed every time I ran the code. I tried many approaches from other issues to solve this, but they didn't work. I've used this local pretrained model for many other tasks and this has never happened before, so I don't think the model itself causes the problem.
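One hedged way to narrow this down (an editor's illustration, not part of the original report): if some head weights are missing from the checkpoint, `from_pretrained` initializes them randomly on every run, which would change the scores. Fixing the seed before loading makes runs comparable and helps confirm or rule out that hypothesis:
```
import torch

torch.manual_seed(42)  # pin the RNG used for any randomly initialized weights
model = BertForNextSentencePrediction.from_pretrained('resources/pytorch_model.bin', config=config)
model.eval()
```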
Version Information:
torch 1.1.0
pytorch-transformers 1.2.0
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2308/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2308/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2307 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2307/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2307/comments | https://api.github.com/repos/huggingface/transformers/issues/2307/events | https://github.com/huggingface/transformers/issues/2307 | 542,228,368 | MDU6SXNzdWU1NDIyMjgzNjg= | 2,307 | What's the exact name of BERT large in results ( GermEval 2014)? | {
"login": "zhipengChen",
"id": 13817269,
"node_id": "MDQ6VXNlcjEzODE3MjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/13817269?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhipengChen",
"html_url": "https://github.com/zhipengChen",
"followers_url": "https://api.github.com/users/zhipengChen/followers",
"following_url": "https://api.github.com/users/zhipengChen/following{/other_user}",
"gists_url": "https://api.github.com/users/zhipengChen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhipengChen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhipengChen/subscriptions",
"organizations_url": "https://api.github.com/users/zhipengChen/orgs",
"repos_url": "https://api.github.com/users/zhipengChen/repos",
"events_url": "https://api.github.com/users/zhipengChen/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhipengChen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,583 | 1,583 | NONE | null | ## ❓ Questions & Help
I use the BERT large cased model downloaded by the run_ner.py script, but I can't get the results shown in the table below.
<!-- A clear and concise description of the question. -->

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2307/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2307/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2306 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2306/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2306/comments | https://api.github.com/repos/huggingface/transformers/issues/2306/events | https://github.com/huggingface/transformers/issues/2306 | 542,226,506 | MDU6SXNzdWU1NDIyMjY1MDY= | 2,306 | Non-Deterministic Behavior in BertTokenizer | {
"login": "4tywon",
"id": 23411400,
"node_id": "MDQ6VXNlcjIzNDExNDAw",
"avatar_url": "https://avatars.githubusercontent.com/u/23411400?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/4tywon",
"html_url": "https://github.com/4tywon",
"followers_url": "https://api.github.com/users/4tywon/followers",
"following_url": "https://api.github.com/users/4tywon/following{/other_user}",
"gists_url": "https://api.github.com/users/4tywon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/4tywon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/4tywon/subscriptions",
"organizations_url": "https://api.github.com/users/4tywon/orgs",
"repos_url": "https://api.github.com/users/4tywon/repos",
"events_url": "https://api.github.com/users/4tywon/events{/privacy}",
"received_events_url": "https://api.github.com/users/4tywon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"As an update I've been able to reproduce this problem using a python script",
"I cannot reproduce this on Windows, PT1.4, latest transformers master.",
"I think I was using an old version of transformers :( \r\n\r\nThis seems to have been fixed in v2.2.2 - After upgrading to latest I haven't observed this anymore so I'll close this issue.\r\n\r\nThanks!"
] | 1,577 | 1,584 | 1,584 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): BertTokenizer only
Language I am using the model on (English, Chinese....):
The problem arises when using:
* [ ] the official example scripts: (give details)
* [X] my own modified scripts: Jupyter Notebook
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: tokenize a string
## To Reproduce
Jupyter Notebook with one cell, with a cloned version of the transformers repo.
```
import sys
sys.path.insert(0, 'transformers')
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', sep_token='[SEP]', do_lower_case=True)
tokenizer.tokenize("[PAD] [SEP] [SEP] [PAD]")
```
with outputs (varying between kernel restarts and runs):
`['[PAD]', '[', 'sep', ']', '[', 'sep', ']', '[PAD]']`
`['[PAD]', '[SEP]', '[SEP]','[PAD]']`
`['[PAD]', '[SEP]', '[SEP]', '[', 'pad', ']']`
## Expected behavior
Expected the output to be
`['[PAD]', '[SEP]', '[SEP]','[PAD]']`
and have deterministic behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
Platform Linux-3.10.0-1062.el7.x86_64-x86_64-with-redhat-7.7-Verona
Python 3.7.4 (default, Aug 13 2019, 20:35:49)
[GCC 7.3.0]
PyTorch 1.3.1
Tensorflow 1.15.0
* PyTorch Transformers version (or branch): on master branch of repo
* Using GPU ? Yes
## Additional context
This happens sometimes when the notebook kernel is restarted and the cell is re-run. I haven't observed this happening when running a python script.
<!-- Add any other context about the problem here. -->
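One hedged sanity check (an editor's note, not from the report): since the behavior was later traced to an older installed version, printing the version inside the notebook rules out a stale kernel import:
```
import transformers
print(transformers.__version__)  # the nondeterminism was reportedly gone from v2.2.2 on
```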
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2306/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2306/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2305 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2305/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2305/comments | https://api.github.com/repos/huggingface/transformers/issues/2305/events | https://github.com/huggingface/transformers/issues/2305 | 542,225,339 | MDU6SXNzdWU1NDIyMjUzMzk= | 2,305 | [CLS] token / is used as the aggregate sequence representation for classification tasks | {
"login": "cherepanovic",
"id": 10064548,
"node_id": "MDQ6VXNlcjEwMDY0NTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/10064548?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cherepanovic",
"html_url": "https://github.com/cherepanovic",
"followers_url": "https://api.github.com/users/cherepanovic/followers",
"following_url": "https://api.github.com/users/cherepanovic/following{/other_user}",
"gists_url": "https://api.github.com/users/cherepanovic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cherepanovic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cherepanovic/subscriptions",
"organizations_url": "https://api.github.com/users/cherepanovic/orgs",
"repos_url": "https://api.github.com/users/cherepanovic/repos",
"events_url": "https://api.github.com/users/cherepanovic/events{/privacy}",
"received_events_url": "https://api.github.com/users/cherepanovic/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"[CLS], [SEP] are \"special token\" of bert. Its included in the vocabulary of size 30522. Its starts with like 101, 103 something.",
"of course these two tokens are special tokens. \r\n\r\n **is used as the aggregate sequence representation for classification tasks.**\r\n\r\nin which way does happen this aggregations?\r\n\r\n>>The first token of every sequence is always a special classification token ([CLS]). The final hidden state corresponding to this token is used as the aggregate sequence representation for classification tasks.\r\n\r\nfrom papert https://arxiv.org/pdf/1810.04805.pdf",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,583 | 1,583 | NONE | null | [CLS] is fed into an output layer for classification. How is this token built? Is there something special done for it during training? In which way does this aggregation of the sequence happen?
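For context, a minimal sketch of how the [CLS] hidden state is consumed downstream in BERT-style classification heads (`dense` and `classifier` here stand for the pooler and the task head; the aggregation itself is learned by self-attention during pre-training, where the next-sentence objective is computed from the [CLS] output):
```
import torch

hidden_size, num_labels, batch, seq_len = 768, 2, 4, 16
last_hidden_state = torch.randn(batch, seq_len, hidden_size)  # encoder output
dense = torch.nn.Linear(hidden_size, hidden_size)             # BertPooler's dense layer
classifier = torch.nn.Linear(hidden_size, num_labels)         # task head

cls_hidden = last_hidden_state[:, 0]     # final hidden state of the [CLS] position
pooled = torch.tanh(dense(cls_hidden))   # the "pooled output"
logits = classifier(pooled)              # fed to the classification loss
```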
Thanks for the response. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2305/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2305/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2304 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2304/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2304/comments | https://api.github.com/repos/huggingface/transformers/issues/2304/events | https://github.com/huggingface/transformers/issues/2304 | 542,216,248 | MDU6SXNzdWU1NDIyMTYyNDg= | 2,304 | Why are you getting just the last encoder states in the summarization code? | {
"login": "ohmeow",
"id": 14000,
"node_id": "MDQ6VXNlcjE0MDAw",
"avatar_url": "https://avatars.githubusercontent.com/u/14000?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ohmeow",
"html_url": "https://github.com/ohmeow",
"followers_url": "https://api.github.com/users/ohmeow/followers",
"following_url": "https://api.github.com/users/ohmeow/following{/other_user}",
"gists_url": "https://api.github.com/users/ohmeow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ohmeow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ohmeow/subscriptions",
"organizations_url": "https://api.github.com/users/ohmeow/orgs",
"repos_url": "https://api.github.com/users/ohmeow/repos",
"events_url": "https://api.github.com/users/ohmeow/events{/privacy}",
"received_events_url": "https://api.github.com/users/ohmeow/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,583 | 1,583 | CONTRIBUTOR | null | The line is here:
https://github.com/huggingface/transformers/blob/v2.3.0/examples/summarization/modeling_bertabs.py#L142
By changing the line to `encoder_hidden_states = encoder_output` I was able to fine-tune the model successfully, as well as run the inference code from the `run_summarization.py` script.
So just wondering why you're indexing into the encoder output rather than passing all of it along to the decoder? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2304/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2304/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2303 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2303/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2303/comments | https://api.github.com/repos/huggingface/transformers/issues/2303/events | https://github.com/huggingface/transformers/pull/2303 | 542,169,501 | MDExOlB1bGxSZXF1ZXN0MzU2NjY4ODUw | 2,303 | fix repetition penalty error in modeling_utils.py | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2303?src=pr&el=h1) Report\n> Merging [#2303](https://codecov.io/gh/huggingface/transformers/pull/2303?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/81db12c3ba0c2067f43c4a63edf5e45f54161042?src=pr&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `0%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2303?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2303 +/- ##\n==========================================\n- Coverage 73.54% 73.52% -0.02% \n==========================================\n Files 87 87 \n Lines 14789 14793 +4 \n==========================================\n Hits 10876 10876 \n- Misses 3913 3917 +4\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2303?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2303/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `63.45% <0%> (-0.46%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2303?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2303?src=pr&el=footer). Last update [81db12c...18e5bdb](https://codecov.io/gh/huggingface/transformers/pull/2303?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Good catch.\r\nBut this is actually the technique mentioned in http://arxiv.org/abs/1909.05858.\r\nSo to fix it we should check the code of Nitish (https://github.com/salesforce/ctrl) and apply the same behavior here.",
"I checked the code in https://github.com/salesforce/ctrl/blob/0f30306a8947ce0ede62e79c7e1f05a585cc56c9/generation.py#L217: \r\n`prompt_logits[_token][generated_token] /= penalty`\r\n\r\nSo in the original code division is always used no matter what sign the `prompt_logit` of the previously generated tokens. \r\n\r\nWhen going a bit deeper and looking at the actual values of the logit in \r\nhttps://github.com/huggingface/transformers/blob/81db12c3ba0c2067f43c4a63edf5e45f54161042/src/transformers/modeling_utils.py#L731\r\nfor different models the following can be observed:\r\n\r\nFor the models: **ctrl**, **xlm** the logit values tend to be positive, which explains why division by the `repetition penalty` is used. BUT, the values don't have to be positive, there were also very rare cases when using **ctrl** where the logit was actually negative in which case a division increases the probability of that word to be sampled. \r\n\r\nFor the models: **gpt2**, **openai-gpt**, **xlnet** the logit values tend to be negative, in which case dividing by a `repetition penalty` increases the probability of previously generated tokens to be sampled. \r\n\r\nIn the proposed PR, both cases would be correctly handled from a logical point of view. \r\nIf we want to stick to the original code on the other hand (only using division) we could add a warning that the `repetition penalty` should only be used in combination with **ctrl**.",
"Ok, I see, thanks for documenting this. Let's go for this solution for now.",
"Is this fix added to the pip package? So if we use pip install package this will be covered or not yet I have to install from source? ",
"Reading this after it was mentioned in the PPLM example PR.\r\nThe fix makes total sense, but I have a concern: the amount by which a negative number is diminished is greater than the amount a positive number is diminished.\r\nIf we have two values, say -2 and 2 this happens:\r\n```\r\nx = np.array([-2, 2])\r\nsx = np.exp(x)/sum(np.exp(x))\r\nprint(sx) # array([0.01798621, 0.98201379])\r\n```\r\nif we apply the same penalty to both, we would want the probabilities to stay the same, but this is what happens:\r\n```\r\np = [1/1.2, 1.2]\r\nspx = np.exp(x/p)/sum(np.exp(x/p))\r\nprint(spx) # array([0.01684577, 0.98315423])\r\n```\r\nOn the other hand, if we apply the penalty to the probabilities after the softmax (and we renormalize) this is what happens:\r\n```\r\np2 = [1.2, 1.2]\r\nsp2x = (sx/p2)/sum(sx/p2)\r\nprint(sp2x) # array([0.01798621, 0.98201379])\r\n```\r\nThe probabilities are intact, as we want, because we don't want to penalize negative values more than we penalize positive values.\r\nSo my proposal is to perform the penalty after the softmax, on probability values, always dividing, rather than on the logits.\r\nWhat do you think?\r\n\r\nEdit:\r\nIn math i propose to move from:\r\n\r\nto:\r\n\r\n",
"Sorry for the late response @w4nderlust ! \r\n\r\nI think you it makes a lot of sense what you are saying! \r\n\r\nTo implement your solution with minimal code change one could simply change Eq. (1): \r\n\r\n\r\n\r\nto the equivalent Eq. (2)\r\n\r\n\r\n\r\nOne question that remains is how the new repetition penalties  in Eq. (1) & (2) will have to differ from the old repetition penalties  in Eq. (3):\r\n\r\n\r\n\r\nto have a similar effect on the softmax. It is quite obvious that  reduces the prob of its token much more than \r\n\r\nFor the different LMHead models, I calculated  for different values of  . I simply generated randomly sampled sentences from the pretrained models and averaged the effect of the tokens for 5 runs with `max_length=100` so that the averaged is formed of ca.  tokens. \r\n\r\nThe following values show by how much  scales down the prob after the softmax which is equivalent of what  would have been set to:\r\n\r\n```\r\nGenerate repetition penalty comparison for ctrl\r\nPenalty factor: 1.1 - Without penalty / penalty ratio avg: 4e0\r\nPenalty factor: 1.2 - Without penalty / penalty ratio avg: 31e0\r\nPenalty factor: 1.3 - Without penalty / penalty ratio avg: 149e0\r\nPenalty factor: 1.4 - Without penalty / penalty ratio avg: 25e3\r\nPenalty factor: 1.5 - Without penalty / penalty ratio avg: 286e3\r\nGenerate repetition penalty comparison for distilgpt2\r\nPenalty factor: 1.1 - Without penalty / penalty ratio avg: 23e3\r\nPenalty factor: 1.2 - Without penalty / penalty ratio avg: 2e9\r\nPenalty factor: 1.3 - Without penalty / penalty ratio avg: 223e9\r\nPenalty factor: 1.4 - Without penalty / penalty ratio avg: 3e24\r\nGenerate repetition penalty comparison for gpt2\r\nPenalty factor: 1.1 - Without penalty / penalty ratio avg: 1e9\r\nPenalty factor: 1.2 - Without penalty / penalty ratio avg: 742e18\r\nGenerate repetition penalty comparison for xlm-clm-enfr-1024\r\nPenalty factor: 1.1 - Without penalty / penalty ratio avg: 2e0\r\nPenalty factor: 1.2 - Without penalty / penalty ratio avg: 3e0\r\nPenalty factor: 1.3 - Without penalty / penalty ratio avg: 5e0\r\nPenalty factor: 1.4 - Without penalty / penalty ratio avg: 9e0\r\nPenalty factor: 1.5 - Without penalty / penalty ratio avg: 13e0\r\nGenerate repetition penalty comparison for openai-gpt\r\nPenalty factor: 1.1 - Without penalty / penalty ratio avg: 1e0\r\nPenalty factor: 1.2 - Without penalty / penalty ratio avg: 2e0\r\nPenalty factor: 1.3 - Without penalty / penalty ratio avg: 4e0\r\nPenalty factor: 1.4 - Without penalty / penalty ratio avg: 15e0\r\nPenalty factor: 1.5 - Without penalty / penalty ratio avg: 19e0\r\nGenerate repetition penalty comparison for xlnet-base-cased\r\nPenalty factor: 1.1 - Without penalty / penalty ratio avg: 5e0\r\nPenalty factor: 1.2 - Without penalty / penalty ratio avg: 34e0\r\nPenalty factor: 1.3 - Without penalty / penalty ratio avg: 2e3\r\nPenalty factor: 1.4 - Without penalty / penalty ratio avg: 47e3\r\nPenalty factor: 1.5 - Without penalty / penalty ratio avg: 8e6\r\n```\r\n\r\nIt can be seen that `gpt2` for example produces much larger logit values which lead to much more drastic reductions in the prob after softmax. The repetition penalty was originally introduced for `ctrl` so it's probably best to look at its behaviour.\r\n\r\n\r\n\r\n",
"So I think there are three possibilities:\r\n\r\n1) Follow the proposed solution from @w4nderlust implementing Eq.(1). \r\nThis would mean though that the proposed repetition penalty of 1.3 in the ctrl paper would have to be changed to something around 150 which is quite a large value. \r\n\r\n2) Instead of using substracting by the log(rep_penalty) as in: \r\n, \r\none could only substract by the rep_penalty to give the equation: \r\n,\r\nThis way the values for  would equal  and thus be much smaller. The repetition penalty in `ctlr` would thus only have to be around 5 to equal the behavior of the old penalty of 1.3. One disadvantage would be that the neutral element in this case is 0 instead of 1 which might be a bit confusing. \r\n\r\n3) Just leave as it is now since from what I seen most logits almost always all either positive or either all negative, so that the current behavior is not very prone to lead to errors. \r\n\r\nI would tend to solution 2, giving a clear explanation of the variable in the argument section of the language generation function. \r\n\r\nWhat do you think @w4nderlust and @thomwolf ?\r\n\r\n\r\n\r\n",
"Thank you for the thorough analysis @patrickvonplaten ! I believe 2 would be fine. The nog just scales things differently, but there's no specific reason to have it, as it is a user tunable parameter anyway. The fact that the default would be 0 instead of one I think could be explained and one could point to this conversation in a comment to give the full picture. Although I understand this is not a huge issue (because of what you say in 3), I kinda believe 2 is better as the could potentially be in the future a different model that actually outputs both positive and negative logits and it that case this could make a substantial difference in the quality of the sampling. "
] | 1,577 | 1,698 | 1,577 | MEMBER | null | fix bug mentioned in #2302 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2303/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2303/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2303",
"html_url": "https://github.com/huggingface/transformers/pull/2303",
"diff_url": "https://github.com/huggingface/transformers/pull/2303.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2303.patch",
"merged_at": 1577309960000
} |
https://api.github.com/repos/huggingface/transformers/issues/2302 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2302/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2302/comments | https://api.github.com/repos/huggingface/transformers/issues/2302/events | https://github.com/huggingface/transformers/issues/2302 | 542,169,302 | MDU6SXNzdWU1NDIxNjkzMDI= | 2,302 | Repetition penalty works incorrectly when the logit of the token is negative | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Propsed fix in PR #2303 "
] | 1,577 | 1,577 | 1,577 | MEMBER | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (LMHeadModels; distilgpt2 in this example but holds true for all LMHeadModels):
Language I am using the model on: English
The problem arises when using:
* [ ] the official example scripts: (give details)
* [x] my own modified scripts: (give details)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: language generation
## To Reproduce
Run the following code:
```
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

input_sentence = 'The dog'
tokenizer = AutoTokenizer.from_pretrained('distilgpt2')
model = AutoModelWithLMHead.from_pretrained('distilgpt2')
input_ids = torch.tensor(tokenizer.encode(input_sentence)).unsqueeze(0)
outputs = model.generate(input_ids=input_ids, do_sample=True, bos_token_id=tokenizer.bos_token_id, eos_token_ids=tokenizer.eos_token_id, pad_token_id=tokenizer.pad_token_id, repetition_penalty=1.5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Expected behavior
Output:
`"The dog dog dog dog dog dog dog dog dog dog dog dog dog dog dog dog dog dog"`
In the output, the word dog is repeated multiple times. It can be noticed that the higher the `repetition_penalty`, the more likely already occurring words are to be repeated. Thus, the penalty achieves exactly the opposite of what it is supposed to do.
## Environment
* OS: Linux
* Python version: 3.6.8
* PyTorch version: 1.2.0
* PyTorch Transformers version (or branch): main branch v.2.3.0
* Using GPU ? No
* Distributed or parallel setup ? No
* Any other relevant information:
## Additional context
The reason for this behavior can be understood when looking at line https://github.com/huggingface/transformers/blob/81db12c3ba0c2067f43c4a63edf5e45f54161042/src/transformers/modeling_utils.py#L731 :
If the logit `next_token_logits[i, previous_tokens]` is < 0, then dividing by a number > 1 is actually going to increase the probability of sampling that token the next time instead of reducing it.
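For reference, a minimal sketch of the sign-aware handling the linked fix proposes (see PR #2303); `generated_token_ids` is a placeholder name for the tokens produced so far, and this illustrates the idea rather than the verbatim patch:
```
# Shrink the logit of every previously generated token toward -inf regardless
# of its sign: divide positive logits, multiply negative ones.
for previous_token in set(generated_token_ids):
    if next_token_logits[i, previous_token] < 0:
        next_token_logits[i, previous_token] *= repetition_penalty
    else:
        next_token_logits[i, previous_token] /= repetition_penalty
```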
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2302/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2302/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2301 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2301/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2301/comments | https://api.github.com/repos/huggingface/transformers/issues/2301/events | https://github.com/huggingface/transformers/issues/2301 | 542,129,271 | MDU6SXNzdWU1NDIxMjkyNzE= | 2,301 | Can I use run_lm_finetuning.py for training models in an uncovered language? | {
"login": "cppntn",
"id": 26765504,
"node_id": "MDQ6VXNlcjI2NzY1NTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26765504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cppntn",
"html_url": "https://github.com/cppntn",
"followers_url": "https://api.github.com/users/cppntn/followers",
"following_url": "https://api.github.com/users/cppntn/following{/other_user}",
"gists_url": "https://api.github.com/users/cppntn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cppntn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cppntn/subscriptions",
"organizations_url": "https://api.github.com/users/cppntn/orgs",
"repos_url": "https://api.github.com/users/cppntn/repos",
"events_url": "https://api.github.com/users/cppntn/events{/privacy}",
"received_events_url": "https://api.github.com/users/cppntn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can now leave `--model_name_or_path` to None in `run_language_modeling.py` to train a model from scratch.\r\n\r\nSee also https://huggingface.co/blog/how-to-train"
] | 1,577 | 1,581 | 1,581 | NONE | null | Is it possible to use the run_lm_finetuning.py script to train one of the models from scratch in a language not covered by the available pretrained models (like Spanish, Italian, or German)?
My idea is to replicate something like CamemBERT for a language other than French, given that I have the corpora needed for the training.
What are some suggestions that you could give me? What are the changes to make in the script in order to run it correctly for this purpose? How can I deal with a corpus of ~150GB?
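On the 150GB question, the usual trick is to stream the corpus lazily instead of loading it into memory; a hedged sketch (the path and block size are placeholders, and this is independent of whichever training script is used):
```
def iter_corpus(path, tokenizer, block_size=512):
    # Stream a huge corpus line by line, yielding fixed-size blocks of
    # token ids, so memory use stays flat regardless of corpus size.
    buffer = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            buffer.extend(tokenizer.encode(line))
            while len(buffer) >= block_size:
                yield buffer[:block_size]
                buffer = buffer[block_size:]
```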
Thanks for any help | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2301/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2300 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2300/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2300/comments | https://api.github.com/repos/huggingface/transformers/issues/2300/events | https://github.com/huggingface/transformers/issues/2300 | 542,114,005 | MDU6SXNzdWU1NDIxMTQwMDU= | 2,300 | run_ner.py RobertaForTokenClassification.from_pretrained "size mismatch for classifier.bias" | {
"login": "paulthemagno",
"id": 38130299,
"node_id": "MDQ6VXNlcjM4MTMwMjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/38130299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paulthemagno",
"html_url": "https://github.com/paulthemagno",
"followers_url": "https://api.github.com/users/paulthemagno/followers",
"following_url": "https://api.github.com/users/paulthemagno/following{/other_user}",
"gists_url": "https://api.github.com/users/paulthemagno/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paulthemagno/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paulthemagno/subscriptions",
"organizations_url": "https://api.github.com/users/paulthemagno/orgs",
"repos_url": "https://api.github.com/users/paulthemagno/repos",
"events_url": "https://api.github.com/users/paulthemagno/events{/privacy}",
"received_events_url": "https://api.github.com/users/paulthemagno/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,577 | 1,577 | 1,577 | NONE | null | ## ❓ Questions & Help
I have trouble with the run_ner.py ([https://github.com/huggingface/transformers/blob/master/examples/run_ner.py](https://github.com/huggingface/transformers/blob/master/examples/run_ner.py)) evaluation using **Roberta**. My error is in this snippet:
```python
# Load pretrained model and tokenizer
if args.local_rank not in [-1, 0]:
torch.distributed.barrier() # Make sure only the first process in distributed training will download model & vocab
args.model_type = args.model_type.lower()
config_class, model_class, tokenizer_class = MODEL_CLASSES[args.model_type]
config = RobertaConfig.from_pretrained(args.config_name if args.config_name else args.model_name_or_path,
num_labels=num_labels,
cache_dir=args.cache_dir if args.cache_dir else None)
tokenizer = RobertaTokenizer.from_pretrained(args.tokenizer_name if args.tokenizer_name else args.model_name_or_path,
do_lower_case=args.do_lower_case,
cache_dir=args.cache_dir if args.cache_dir else None)
model = RobertaForTokenClassification.from_pretrained(args.model_name_or_path,
from_tf=bool(".ckpt" in args.model_name_or_path),
config=config,
cache_dir=args.cache_dir if args.cache_dir else None)
```
If I run with both training and evaluation, it works fine.
If I want only to evaluate my model I get this error:
```
Traceback (most recent call last):
File "run_pos.py", line 560, in <module>
main()
File "run_pos.py", line 477, in main
cache_dir=args.cache_dir if args.cache_dir else None)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py", line 479, in from_pretrained
model.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for RobertaForTokenClassification:
size mismatch for classifier.weight: copying a param with shape torch.Size([19, 768]) from checkpoint, the shape in current model is torch.Size([18, 768]).
size mismatch for classifier.bias: copying a param with shape torch.Size([19]) from checkpoint, the shape in current model is torch.Size([18]).
```
I used a different set of labels (18 labels rather than CoNLL format) passing them through the flag `--labels /path/to/labels.txt`. As you can see, when it loads the model, it sees 19 labels and not the expected 18. I think the 19th is added during the training to tag the subwords. In particular it should be:
```python
# Use cross entropy ignore index as padding label id so that only real label ids contribute to the loss later
pad_token_label_id = CrossEntropyLoss().ignore_index
```
I don't know if I have to remove a label from its mapping (how to do it?) or if there are other solutions. I also don't know why this error doesn't occur if I train and evaluate sequentially in the same process.
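For reference, a hedged sketch of one possible workaround (assuming the training run saved its config next to the weights; `./output` is a hypothetical path): rebuild the config from the checkpoint itself instead of recomputing `num_labels`, so the classifier head shapes match:

```python
from transformers import RobertaConfig, RobertaForTokenClassification

checkpoint_dir = "./output"  # hypothetical: directory produced by the training run
config = RobertaConfig.from_pretrained(checkpoint_dir)   # keeps the trained label count
model = RobertaForTokenClassification.from_pretrained(checkpoint_dir, config=config)
```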
Thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2300/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2300/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2299 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2299/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2299/comments | https://api.github.com/repos/huggingface/transformers/issues/2299/events | https://github.com/huggingface/transformers/issues/2299 | 542,052,247 | MDU6SXNzdWU1NDIwNTIyNDc= | 2,299 | Model2Model inference | {
"login": "berzentine",
"id": 8656336,
"node_id": "MDQ6VXNlcjg2NTYzMzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8656336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/berzentine",
"html_url": "https://github.com/berzentine",
"followers_url": "https://api.github.com/users/berzentine/followers",
"following_url": "https://api.github.com/users/berzentine/following{/other_user}",
"gists_url": "https://api.github.com/users/berzentine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/berzentine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/berzentine/subscriptions",
"organizations_url": "https://api.github.com/users/berzentine/orgs",
"repos_url": "https://api.github.com/users/berzentine/repos",
"events_url": "https://api.github.com/users/berzentine/events{/privacy}",
"received_events_url": "https://api.github.com/users/berzentine/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,582 | 1,582 | NONE | null | ## ❓ Questions & Help
I am trying to implement a simple Model2Model question-answering task, where the input is a question and the answer needs to be generated. At inference time I feed in the "[CLS]" token to let it generate, but it only generates a single token instead of the entire sentence. The perplexity at inference time is ~5 on the validation set.
Is there a fundamental issue with my model?
Training time:
```python
outputs = model(input_ids,
                batch['speakableAnswer'],
                decoder_lm_labels=batch['speakableAnswer'])
# outputs.size() -> (batch, seq_len, vocab_size)
```
Test time:
```python
outputs = model(input_ids,
                batch['speakableAnswer'])
# outputs.size() -> (batch, 1, vocab_size)
```
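As a side note, a single forward pass only scores one step ahead, so generation needs an explicit loop. A rough greedy-decoding sketch (illustrative only: `cls_token_id`, `sep_token_id`, and `max_answer_len` are assumed to be defined, and the first output element is assumed to hold the decoder logits):

```python
import torch

decoder_ids = torch.tensor([[cls_token_id]])        # start decoding from "[CLS]"
for _ in range(max_answer_len):
    outputs = model(input_ids, decoder_ids)         # re-encode question, extend answer
    logits = outputs[0]                             # (batch, cur_len, vocab_size)
    next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
    decoder_ids = torch.cat([decoder_ids, next_token], dim=-1)
    if next_token.item() == sep_token_id:           # stop once "[SEP]" is produced
        break
```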
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2299/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2299/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2298 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2298/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2298/comments | https://api.github.com/repos/huggingface/transformers/issues/2298/events | https://github.com/huggingface/transformers/issues/2298 | 542,043,285 | MDU6SXNzdWU1NDIwNDMyODU= | 2,298 | Why cosine similarity of BERT, ALBERT, Robert is so big, almost near 1.0? | {
"login": "lowoodz",
"id": 3724423,
"node_id": "MDQ6VXNlcjM3MjQ0MjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3724423?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lowoodz",
"html_url": "https://github.com/lowoodz",
"followers_url": "https://api.github.com/users/lowoodz/followers",
"following_url": "https://api.github.com/users/lowoodz/following{/other_user}",
"gists_url": "https://api.github.com/users/lowoodz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lowoodz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lowoodz/subscriptions",
"organizations_url": "https://api.github.com/users/lowoodz/orgs",
"repos_url": "https://api.github.com/users/lowoodz/repos",
"events_url": "https://api.github.com/users/lowoodz/events{/privacy}",
"received_events_url": "https://api.github.com/users/lowoodz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"BERT was not designed to produce useful word / sentence embeddings that can be used with cosine similarities. Cosine-similarity treats all dimensions equally which puts high requirements for the created embeddings.\r\n\r\nBERT as not intended for this. See this post by Jacob Devlin:\r\nhttps://github.com/UKPLab/sentence-transformers/issues/80#issuecomment-565388257\r\n\r\nIf you want to use BERT with cosine similarities, you need to fine-tune it on suitable data. You can find data, code and examples in our repository:\r\nhttps://github.com/UKPLab/sentence-transformers",
"@nreimers I have read your paper, it's great and thanks for the answer!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,583 | 1,583 | NONE | null | ## ❓ Questions & Help
I tried to use BERT models to do similarity comparisons of words/sentences, but I found that the cosine similarities are all very high, even for words/sentences with very different meanings. Why?
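For concreteness, a minimal sketch that reproduces the observation (mean pooling over the last hidden layer of an un-fine-tuned model; the model name is just an example):

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

def embed(sentence):
    ids = torch.tensor([tokenizer.encode(sentence)])
    with torch.no_grad():
        return model(ids)[0].mean(dim=1).squeeze(0)  # mean-pooled last hidden state

sim = torch.cosine_similarity(embed("I love dogs"),
                              embed("Tax law is complicated"), dim=0)
print(sim)  # typically a high value even though the sentences are unrelated
```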
Are all the vectors located in a small portion of the vector space? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2298/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2298/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2297 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2297/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2297/comments | https://api.github.com/repos/huggingface/transformers/issues/2297/events | https://github.com/huggingface/transformers/issues/2297 | 542,033,367 | MDU6SXNzdWU1NDIwMzMzNjc= | 2,297 | RunTimeError in "run_summarization": expected device cuda:0 and dtype byte but got device cuda: 0 and dtype Bool | {
"login": "junxu-ai",
"id": 11970592,
"node_id": "MDQ6VXNlcjExOTcwNTky",
"avatar_url": "https://avatars.githubusercontent.com/u/11970592?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/junxu-ai",
"html_url": "https://github.com/junxu-ai",
"followers_url": "https://api.github.com/users/junxu-ai/followers",
"following_url": "https://api.github.com/users/junxu-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/junxu-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/junxu-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/junxu-ai/subscriptions",
"organizations_url": "https://api.github.com/users/junxu-ai/orgs",
"repos_url": "https://api.github.com/users/junxu-ai/repos",
"events_url": "https://api.github.com/users/junxu-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/junxu-ai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Someone suggested to use Pytorch v1.1.0 instead of 1.2.0. But not sure if it is ok. ",
"it's a version inconsistent issue. \r\nin ver 1.1.0, torch.gt outputs:\r\ntorch.gt(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))\r\ntensor([[ 0, 1],\r\n [ 0, 0]], dtype=torch.uint8)\r\nwhile in ver 1.2.0, it outputs:\r\n>>> torch.ge(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))\r\ntensor([[True, True], [False, True]])\r\n",
"Please see the pull request #2369"
] | 1,577 | 1,577 | 1,577 | NONE | null | ## 🐛 Bug
Model I am using (Bert, XLNet....):
bertabs-finetuned-cnndm-extractive-abstractive-summarization-pytorch_model
Language I am using the model on (English, Chinese....):
English
The problem arises when using:
* [ ] the official example scripts: (give details) run_summarization.py in examples
The task I am working on is:
* [ ] an official GLUE/SQUaD task: summarization
## To Reproduce
Steps to reproduce the behavior:
1. python run_summarization.py --documents_dir .\data --summaries_output_dir .\output
## Expected behavior
## Environment
* OS: Win10
* Python version: 3.6.1
* PyTorch version: 1.2.0
* PyTorch Transformers version (or branch): master as of 24 Dec
* Using GPU? Yes
* Distributed or parallel setup? No
* Any other relevant information:
## Additional context
The errors come from modeling_bertabs.py, line 328, in forward.
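A small standalone sketch of the underlying dtype change (illustrative; not the exact line from the script): comparison ops return `torch.uint8` on PyTorch <= 1.1 but `torch.bool` on 1.2, so combining a fresh comparison result with a stored byte mask can trigger exactly this kind of Byte-vs-Bool error; an explicit cast makes the dtypes agree:

```python
import torch

new_mask = torch.gt(torch.tensor([1, 2]), torch.tensor([2, 1]))  # bool on torch 1.2+
old_mask = torch.zeros(2, dtype=torch.uint8)                     # byte-style mask
combined = new_mask.to(torch.uint8) | old_mask                   # cast before combining
print(combined)
```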
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2297/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2297/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2296 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2296/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2296/comments | https://api.github.com/repos/huggingface/transformers/issues/2296/events | https://github.com/huggingface/transformers/issues/2296 | 541,974,120 | MDU6SXNzdWU1NDE5NzQxMjA= | 2,296 | A question about BERT position embedding. | {
"login": "DrDavidS",
"id": 20372610,
"node_id": "MDQ6VXNlcjIwMzcyNjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/20372610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DrDavidS",
"html_url": "https://github.com/DrDavidS",
"followers_url": "https://api.github.com/users/DrDavidS/followers",
"following_url": "https://api.github.com/users/DrDavidS/following{/other_user}",
"gists_url": "https://api.github.com/users/DrDavidS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DrDavidS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DrDavidS/subscriptions",
"organizations_url": "https://api.github.com/users/DrDavidS/orgs",
"repos_url": "https://api.github.com/users/DrDavidS/repos",
"events_url": "https://api.github.com/users/DrDavidS/events{/privacy}",
"received_events_url": "https://api.github.com/users/DrDavidS/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The sentence \"Today is a nice day\" will already be padded to 512 because of the tokenization process, so the input will remain 512 in the embedding layer",
"> The sentence \"Today is a nice day\" will already be padded to 512 because of the tokenization process, so the input will remain 512 in the embedding layer\r\n\r\nThanks a lot! "
] | 1,577 | 1,577 | 1,577 | NONE | null | ## ❓ Questions & Help
I noticed that in Transformers, position_ids are turned into position embeddings through `nn.Embedding(config.max_position_embeddings, config.hidden_size)`.
Here `config.max_position_embeddings` is 512, and `config.hidden_size` is 768.
So, when I input a sentence shorter than 512 tokens, such as "Today is a nice day", will this sentence's position embeddings still have length 512?
Or only as long as the position_ids (here, 7 tokens including [CLS] and [SEP])?
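A quick sketch to check this empirically (assuming bert-base-uncased): if you do not pad, the position ids, and hence the sequence dimension, follow the actual input length:

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

ids = tokenizer.encode("Today is a nice day")   # 7 ids, including [CLS] and [SEP]
with torch.no_grad():
    hidden = model(torch.tensor([ids]))[0]
print(hidden.shape)  # torch.Size([1, 7, 768]) -> only 7 position embeddings are looked up
```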
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2296/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2296/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2295 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2295/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2295/comments | https://api.github.com/repos/huggingface/transformers/issues/2295/events | https://github.com/huggingface/transformers/issues/2295 | 541,958,983 | MDU6SXNzdWU1NDE5NTg5ODM= | 2,295 | How do you handle large documents? | {
"login": "zbloss",
"id": 7165947,
"node_id": "MDQ6VXNlcjcxNjU5NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7165947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zbloss",
"html_url": "https://github.com/zbloss",
"followers_url": "https://api.github.com/users/zbloss/followers",
"following_url": "https://api.github.com/users/zbloss/following{/other_user}",
"gists_url": "https://api.github.com/users/zbloss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zbloss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zbloss/subscriptions",
"organizations_url": "https://api.github.com/users/zbloss/orgs",
"repos_url": "https://api.github.com/users/zbloss/repos",
"events_url": "https://api.github.com/users/zbloss/events{/privacy}",
"received_events_url": "https://api.github.com/users/zbloss/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"apparently the answer may be to feed smaller sequences of tokens and use the past input keyword itn pytorch models or hidden states in tensorflow. models both this past input and the stateful nature of models aren't documented. it would be interesting to have methods to manage big inputs",
"Recent models like Transformers-XL and XLNet already support longer sequences. Although, the available pretrained models are imho only using 512 tokens. \r\n\r\nSome additional pointers:\r\n- Long-form document classification with BERT. [Blogpost](https://andriymulyar.com/blog/bert-document-classification), [Code](https://github.com/AndriyMulyar/bert_document_classification)\r\n- See ICLR 2020 reviews: \r\n - [BERT-AL: BERT for Arbitrarily Long Document Understanding](https://openreview.net/forum?id=SklnVAEFDB)\r\n - [Blockwise Self-Attention for Long Document Understanding](https://openreview.net/forum?id=H1gpET4YDB)\r\n- [Easy-to-use interface to fine-tuned BERT models for computing semantic similarity](https://github.com/AndriyMulyar/semantic-text-similarity)\r\n- Ye, Z. et al. 2019. BP-Transformer: Modelling Long-Range Context via Binary Partitioning. (2019). [Paper](https://arxiv.org/pdf/1911.04070.pdf) [Code](https://github.com/yzh119/BPT)\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"There are two main methods:\r\n- Concatenating 'short' BERT altogether (which consists of 512 tokens max)\r\n- Constructing a real long BERT (CogLTX, Blockwise BERT, Longformer, Big Bird)\r\n\r\nI resumed some typical papers of BERT for long text in this post : [Paper Dissected and Recap #4 : which BERT for long text ?](https://lethienhoablog.wordpress.com/2020/11/19/paper-dissected-and-recap-4-which-bert-for-long-text/)\r\nYou can have an overview of all methods there.",
"@lethienhoa does `all long BERT` is capable of any length text?"
] | 1,577 | 1,645 | 1,584 | NONE | null | ## ❓ Questions & Help
I have been a huge fan of this library for a while now. I've used it to accomplish things like sentence classification, a chatbot, and even stock market price prediction; this is truly a fantastic library. But I have not yet learned how to tackle large documents (e.g. documents 10x the size of the model's max length).
An example: a task I would love to accomplish is document abstraction; however, the documents I am dealing with are upwards of 3,000+ words long, and I'm afraid that taking the first 512 or 768 tokens will not yield a quality summary.
One idea that I have been kicking around, but have not yet put code to, involves taking a window of 512 tokens to produce a model output, then repeating this process while shifting the 512-token window until I have covered my entire corpus, and finally repeating the whole procedure until the result fits into my model.
There must be a better way. I have heard of developers using these NLP models to summarize large legal documents and legislation, which can be hundreds of pages, let alone thousands of words. Am I missing something, am I overthinking this problem?
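For what it's worth, a bare-bones sketch of the sliding-window idea described above (names and stride are made up for illustration):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def sliding_windows(text, window=512, stride=256):
    """Yield overlapping token-id windows that cover the whole document."""
    ids = tokenizer.encode(text)
    start = 0
    while True:
        yield ids[start:start + window]
        if start + window >= len(ids):
            break
        start += stride
```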
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2295/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2295/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2294 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2294/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2294/comments | https://api.github.com/repos/huggingface/transformers/issues/2294/events | https://github.com/huggingface/transformers/issues/2294 | 541,956,928 | MDU6SXNzdWU1NDE5NTY5Mjg= | 2,294 | Is there any efficient way to convert BERT outputs to fit token-level tasks? | {
"login": "jianliu-ml",
"id": 57948262,
"node_id": "MDQ6VXNlcjU3OTQ4MjYy",
"avatar_url": "https://avatars.githubusercontent.com/u/57948262?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jianliu-ml",
"html_url": "https://github.com/jianliu-ml",
"followers_url": "https://api.github.com/users/jianliu-ml/followers",
"following_url": "https://api.github.com/users/jianliu-ml/following{/other_user}",
"gists_url": "https://api.github.com/users/jianliu-ml/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jianliu-ml/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jianliu-ml/subscriptions",
"organizations_url": "https://api.github.com/users/jianliu-ml/orgs",
"repos_url": "https://api.github.com/users/jianliu-ml/repos",
"events_url": "https://api.github.com/users/jianliu-ml/events{/privacy}",
"received_events_url": "https://api.github.com/users/jianliu-ml/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"i am also curious about it. ;;\r\nhttps://github.com/dsindex/etagger/blob/master/feed.py\r\nthis is not efficient way. just pooling.",
"> i am also curious about it. ;;\r\n> https://github.com/dsindex/etagger/blob/master/feed.py\r\n> this is not efficient way. just pooling.\r\n\r\nHi, I think I get the solution. The problem can be solved via simple matrix computation.\r\nFor example, let the BERT representation of the above example be B, which has a size of (5, 100). \r\nWe should first construct a matrix according to the sub-words:\r\nm = \r\n[ 1, 1, 1, 1, 0\r\n 0, 0, 0, 0, 1] \r\nThen we can simply compute m.dot(B), which is exactly the result.\r\n\r\n",
"I have solved this issue."
] | 1,577 | 1,577 | 1,577 | NONE | null | Say I have a sentence consisting of two words: S = ["Definitely", "not"], and what I want is to transform S into an embedding matrix T with a size of (2, 100), where each row represents a word.
I want to adopt BERT embeddings. But in BERT, each word is represented as sub-word units. This means that S will be represented as ["Def", "##in", "##ite", "##ly", "not"] ("Definitely" is tokenized as "Def", "##in", "##ite", "##ly"). BERT will output an embedding matrix H with a size of (5, 100) :(.
My goal is to merge some rows of H according to the sub-word units.
For example, for "Definitely", I should merge the embeddings of ["Def", "##in", "##ite", "##ly"] to get its representation.
In my current method, I use a head mask vector h = [1, 0, 0, 0, 1] to record the "head" of each word, where 1 indicates the head position:
h = [
1, -> "Def"
0, -> "##in"
0, -> "##ite"
0, -> "##ly"
1 -> "not"
]
So I should merge the rows that have a head mask of 0 into the preceding row with a head mask of 1. Currently I use a `for` loop to enumerate each element in h, which is slow and cannot be batched.
Is there any efficient method to do the above computation?
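Following up on the matrix idea from the comments above, a small PyTorch sketch (shapes are illustrative): build the merge matrix from the head mask once, then reduce the whole merge to a single matmul, which also extends to batches via `torch.bmm`:

```python
import torch

H = torch.randn(5, 100)               # sub-word embeddings from BERT
h = torch.tensor([1, 0, 0, 0, 1])     # head mask: 1 marks the start of a word

m = torch.zeros(int(h.sum().item()), h.numel())
row = -1
for i, is_head in enumerate(h.tolist()):  # loop runs over the short mask, not over H
    if is_head:
        row += 1
    m[row, i] = 1.0

T = m @ H                                # (2, 100): sum of sub-word rows per word
T_mean = T / m.sum(dim=1, keepdim=True)  # optional: mean pooling instead of sum
```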
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2294/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2294/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2293 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2293/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2293/comments | https://api.github.com/repos/huggingface/transformers/issues/2293/events | https://github.com/huggingface/transformers/issues/2293 | 541,935,686 | MDU6SXNzdWU1NDE5MzU2ODY= | 2,293 | Train custom NER model with new Pipeline | {
"login": "kormilitzin",
"id": 20815939,
"node_id": "MDQ6VXNlcjIwODE1OTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/20815939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kormilitzin",
"html_url": "https://github.com/kormilitzin",
"followers_url": "https://api.github.com/users/kormilitzin/followers",
"following_url": "https://api.github.com/users/kormilitzin/following{/other_user}",
"gists_url": "https://api.github.com/users/kormilitzin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kormilitzin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kormilitzin/subscriptions",
"organizations_url": "https://api.github.com/users/kormilitzin/orgs",
"repos_url": "https://api.github.com/users/kormilitzin/repos",
"events_url": "https://api.github.com/users/kormilitzin/events{/privacy}",
"received_events_url": "https://api.github.com/users/kormilitzin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,584 | 1,584 | NONE | null | ## 🚀 Feature
The new `Pipelines` feature is great! I am wondering whether it will be possible to implement pre-training on domain-specific data (similar to the ULMFiT approach, unsupervised encoder-decoder) and then train a custom NER model with annotated data (similar to spaCy)? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2293/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2293/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2292 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2292/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2292/comments | https://api.github.com/repos/huggingface/transformers/issues/2292/events | https://github.com/huggingface/transformers/pull/2292 | 541,918,709 | MDExOlB1bGxSZXF1ZXN0MzU2NDY0NTc4 | 2,292 | Add cached past for language generation | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"That a great idea, @patrickvonplaten \r\n\r\nI'll let you finish this PR when you have time and ping me for review or questions.",
"True, I will implement this tomorrow!",
"tested for transfo_xl, gpt2, openai-gpt and xlnet in combination with PR #2289 ",
"This looks great!\r\nTo pass the code quality test, you can use `make style`.\r\nPlease read this section of the (new) CONTRIBUTING guidelines: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2292?src=pr&el=h1) Report\n> Merging [#2292](https://codecov.io/gh/huggingface/transformers/pull/2292?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/aeef4823ab6099249679756182700e6800024c36?src=pr&el=desc) will **decrease** coverage by `0.17%`.\n> The diff coverage is `8.51%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2292?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2292 +/- ##\n==========================================\n- Coverage 73.49% 73.32% -0.18% \n==========================================\n Files 87 87 \n Lines 14793 14833 +40 \n==========================================\n+ Hits 10872 10876 +4 \n- Misses 3921 3957 +36\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2292?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2292/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `72.9% <0%> (-0.37%)` | :arrow_down: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2292/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `83.17% <16.66%> (-1.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2292/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `94.22% <16.66%> (-2.13%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/2292/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `74.68% <20%> (-0.59%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2292/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `61.44% <3.84%> (-2.02%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2292?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2292?src=pr&el=footer). Last update [aeef482...fc84bd5](https://codecov.io/gh/huggingface/transformers/pull/2292?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Ok good for now, merging"
] | 1,577 | 1,577 | 1,577 | MEMBER | null | Add a `past` input for GPT-2 and CTRL for faster decoding during language generation.
1. add a `prepare_inputs_for_generation` fn for gpt2 and ctrl
2. add a private `_do_output_past` fn to the PreTrainedModel class to check whether the model outputs past key-value states
- the fn only covers gpt2 and ctrl for the moment and still needs to handle 'xlnet' and 'transfo_xl' via `mem_len`
- might be better to move `_do_output_past` to each individual LMHeadModel
3. rename `pasts` to `past`
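For context, a hedged usage sketch of what the cached `past` enables (assuming a prompt `input_ids` and a model whose first two outputs are `(logits, past)`): only the newest token has to be fed at each step:

```python
import torch

generated = input_ids                      # (batch, seq_len) prompt
past = None
for _ in range(20):
    outputs = model(input_ids, past=past)  # past caches the key/value states
    logits, past = outputs[0], outputs[1]
    next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
    generated = torch.cat([generated, next_token], dim=-1)
    input_ids = next_token                 # feed only the new token from now on
```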
can also add dummy tests for language generation | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2292/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2292/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2292",
"html_url": "https://github.com/huggingface/transformers/pull/2292",
"diff_url": "https://github.com/huggingface/transformers/pull/2292.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2292.patch",
"merged_at": 1577439207000
} |
https://api.github.com/repos/huggingface/transformers/issues/2291 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2291/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2291/comments | https://api.github.com/repos/huggingface/transformers/issues/2291/events | https://github.com/huggingface/transformers/pull/2291 | 541,912,412 | MDExOlB1bGxSZXF1ZXN0MzU2NDU5NDgz | 2,291 | Fix F841 flake8 warning | {
"login": "aaugustin",
"id": 788910,
"node_id": "MDQ6VXNlcjc4ODkxMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/788910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaugustin",
"html_url": "https://github.com/aaugustin",
"followers_url": "https://api.github.com/users/aaugustin/followers",
"following_url": "https://api.github.com/users/aaugustin/following{/other_user}",
"gists_url": "https://api.github.com/users/aaugustin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaugustin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaugustin/subscriptions",
"organizations_url": "https://api.github.com/users/aaugustin/orgs",
"repos_url": "https://api.github.com/users/aaugustin/repos",
"events_url": "https://api.github.com/users/aaugustin/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaugustin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2291?src=pr&el=h1) Report\n> Merging [#2291](https://codecov.io/gh/huggingface/transformers/pull/2291?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/072750f4dc4f586cb53f0face4b4a448bb0cdcac?src=pr&el=desc) will **decrease** coverage by `1.18%`.\n> The diff coverage is `50%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2291?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2291 +/- ##\n==========================================\n- Coverage 74.45% 73.26% -1.19% \n==========================================\n Files 85 85 \n Lines 14608 14603 -5 \n==========================================\n- Hits 10876 10699 -177 \n- Misses 3732 3904 +172\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2291?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/2291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `86.13% <ΓΈ> (+0.84%)` | :arrow_up: |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/2291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `78.86% <ΓΈ> (-0.18%)` | :arrow_down: |\n| [src/transformers/data/metrics/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/2291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL21ldHJpY3MvX19pbml0X18ucHk=) | `27.9% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `81.05% <100%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `96.54% <100%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.12% <66.66%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `54.1% <0%> (-10.15%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2291/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `71.19% <0%> (-2.32%)` | :arrow_down: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/2291/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2291?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2291?src=pr&el=footer). Last update [072750f...3e0cf49](https://codecov.io/gh/huggingface/transformers/pull/2291?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,577 | 1,577 | 1,577 | CONTRIBUTOR | null | This PR completes the "fix all flake8 warnings" effort of the last few days.
There's a lot of judgment in the fixes here: when the result of an expression is assigned to a variable that isn't used:
- if the expression has no side effect, then it can safely be removed
- if the expression has side effects, then it must be kept and only the assignment to a variable must be removed
- or it may be a coding / refactoring mistake that results in a badly named variable
I'm not sure I made the right call in all cases, so I would appreciate a review.
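A tiny illustration of the three cases (helper and variable names are made up):

```python
def compute_stats(data):          # made-up helper for illustration
    return sum(data) / len(data)

data = [1, 2, 3]

# Case 1: no side effects -> remove the assignment and the call entirely
unused = compute_stats(data)

# Case 2: side effects -> keep the expression, drop only the assignment
handle = open("/tmp/example.txt", "w")
line = handle.write("hello\n")    # fix: call handle.write(...) without "line ="
handle.close()

# Case 3: refactoring mistake -> the value was meant to be used, under the right name
stats = compute_stats(data)
print(stats)
```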
E203, E501, W503 are still ignored because they're debatable, black disagrees with flake8, and black wins (by not being configurable). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2291/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2291/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2291",
"html_url": "https://github.com/huggingface/transformers/pull/2291",
"diff_url": "https://github.com/huggingface/transformers/pull/2291.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2291.patch",
"merged_at": 1577309863000
} |
https://api.github.com/repos/huggingface/transformers/issues/2290 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2290/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2290/comments | https://api.github.com/repos/huggingface/transformers/issues/2290/events | https://github.com/huggingface/transformers/pull/2290 | 541,899,530 | MDExOlB1bGxSZXF1ZXN0MzU2NDQ4ODg3 | 2,290 | duplicated line for repeating_words_penalty_for_language_generation | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2290?src=pr&el=h1) Report\n> Merging [#2290](https://codecov.io/gh/huggingface/transformers/pull/2290?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/aeef4823ab6099249679756182700e6800024c36?src=pr&el=desc) will **decrease** coverage by `<.01%`.\n> The diff coverage is `0%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2290?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2290 +/- ##\n==========================================\n- Coverage 73.49% 73.48% -0.01% \n==========================================\n Files 87 87 \n Lines 14793 14794 +1 \n==========================================\n Hits 10872 10872 \n- Misses 3921 3922 +1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2290?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2290/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `63.34% <0%> (-0.12%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2290?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2290?src=pr&el=footer). Last update [aeef482...0f6017b](https://codecov.io/gh/huggingface/transformers/pull/2290?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Yes, actually the doc is still not complete for this new feature.\r\n\r\nWe should add some examples and double-check all. Feel free to clean this up if you feel like it.",
"clean up documentation, add examples for documentation and rename some variables",
"checked example generation for openai-gpt, gpt2, xlnet and xlm in combination with #2289.\r\n",
"Also checked for ctrl",
"also checked for transfo-xl",
"Awesome, merging!"
] | 1,577 | 1,577 | 1,577 | MEMBER | null | `length_penalty` has duplicated, incorrect documentation for language generation -> delete the two lines | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2290/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2290/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2290",
"html_url": "https://github.com/huggingface/transformers/pull/2290",
"diff_url": "https://github.com/huggingface/transformers/pull/2290.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2290.patch",
"merged_at": 1577438947000
} |
https://api.github.com/repos/huggingface/transformers/issues/2289 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2289/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2289/comments | https://api.github.com/repos/huggingface/transformers/issues/2289/events | https://github.com/huggingface/transformers/pull/2289 | 541,898,397 | MDExOlB1bGxSZXF1ZXN0MzU2NDQ4MDAy | 2,289 | fix bug in prepare inputs for language generation for xlm for effective batch_size > 1 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2289?src=pr&el=h1) Report\n> Merging [#2289](https://codecov.io/gh/huggingface/transformers/pull/2289?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/81db12c3ba0c2067f43c4a63edf5e45f54161042?src=pr&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `0%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2289?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2289 +/- ##\n==========================================\n- Coverage 73.54% 73.52% -0.02% \n==========================================\n Files 87 87 \n Lines 14789 14792 +3 \n==========================================\n Hits 10876 10876 \n- Misses 3913 3916 +3\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2289?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2289/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `86.23% <0%> (-0.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2289/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.26% <0%> (-0.25%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2289?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2289?src=pr&el=footer). Last update [81db12c...f18ac4c](https://codecov.io/gh/huggingface/transformers/pull/2289?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Indeed, thanks @patrickvonplaten "
] | 1,577 | 1,577 | 1,577 | MEMBER | null | If multiple sentences are to be generated, the masked tokens to be appended have to match the effective batch size. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2289/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2289/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2289",
"html_url": "https://github.com/huggingface/transformers/pull/2289",
"diff_url": "https://github.com/huggingface/transformers/pull/2289.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2289.patch",
"merged_at": 1577309447000
} |
https://api.github.com/repos/huggingface/transformers/issues/2288 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2288/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2288/comments | https://api.github.com/repos/huggingface/transformers/issues/2288/events | https://github.com/huggingface/transformers/pull/2288 | 541,894,003 | MDExOlB1bGxSZXF1ZXN0MzU2NDQ0Mjg0 | 2,288 | Improve handling of optional imports | {
"login": "aaugustin",
"id": 788910,
"node_id": "MDQ6VXNlcjc4ODkxMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/788910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaugustin",
"html_url": "https://github.com/aaugustin",
"followers_url": "https://api.github.com/users/aaugustin/followers",
"following_url": "https://api.github.com/users/aaugustin/following{/other_user}",
"gists_url": "https://api.github.com/users/aaugustin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaugustin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaugustin/subscriptions",
"organizations_url": "https://api.github.com/users/aaugustin/orgs",
"repos_url": "https://api.github.com/users/aaugustin/repos",
"events_url": "https://api.github.com/users/aaugustin/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaugustin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2288?src=pr&el=h1) Report\n> Merging [#2288](https://codecov.io/gh/huggingface/transformers/pull/2288?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/23dad8447c8db53682abc3c53d1b90f85d222e4b?src=pr&el=desc) will **increase** coverage by `0.2%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2288?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2288 +/- ##\n=========================================\n+ Coverage 74.27% 74.47% +0.2% \n=========================================\n Files 85 85 \n Lines 14610 14608 -2 \n=========================================\n+ Hits 10851 10879 +28 \n+ Misses 3759 3729 -30\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2288?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2288/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `93.93% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2288/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `63.91% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/commands/train.py](https://codecov.io/gh/huggingface/transformers/pull/2288/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy90cmFpbi5weQ==) | `0% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/2288/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `89.1% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2288/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `68.31% <ΓΈ> (+2.32%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/2288/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `29.37% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/2288/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `19.6% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2288/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `83.26% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/data/metrics/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/2288/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL21ldHJpY3MvX19pbml0X18ucHk=) | `27.9% <ΓΈ> (-3.21%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2288/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `89.9% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/2288/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2288?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2288?src=pr&el=footer). Last update [23dad84...4621ad6](https://codecov.io/gh/huggingface/transformers/pull/2288?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,577 | 1,577 | 1,577 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2288/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2288/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2288",
"html_url": "https://github.com/huggingface/transformers/pull/2288",
"diff_url": "https://github.com/huggingface/transformers/pull/2288.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2288.patch",
"merged_at": 1577136527000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2287 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2287/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2287/comments | https://api.github.com/repos/huggingface/transformers/issues/2287/events | https://github.com/huggingface/transformers/issues/2287 | 541,863,648 | MDU6SXNzdWU1NDE4NjM2NDg= | 2,287 | Do Hugging Face GPT-2 Transformer Models Automatically Does the Absolute Position Embedding for Users? | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, as you can see from the source code [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_gpt2.py#L414-L417), when no position ids are passed, they are created as absolute position embeddings.\r\n\r\nYou could have trained a model with a GPT-2 architecture that was using an other type of position embeddings, in which case passing your specific embeddings would be necessary. I'm sure several other use-cases would make sure of specific position embeddings.",
"Ooohh, ok,\r\n\r\nso to clarify, absolute position embedding _**is automatically done**_ by the ```model( )``` statement, but if we want to use our custom position embedding (i.e. other than the absolute position embedding), we can use the ```position_ids``` option inside the ```model( )``` statement......is what I said above correct?\r\n\r\nThank you,",
"Yes, that is correct!",
"Thank you :) !"
] | 1,577 | 1,578 | 1,578 | NONE | null | Hello,
According to Hugging Face ```GPT2DoubleHeadsModel``` documentation (https://huggingface.co/transformers/model_doc/gpt2.html#gpt2doubleheadsmodel)
```
"Indices of input sequence tokens in the vocabulary.
GPT-2 is a model with absolute position embeddings"
```
So does this mean that, when we implement any Hugging Face GPT-2 models (```GPT2DoubleHeadsModel```, ```GPT2LMHeadModel```, etc.) via the ```model( )``` statement, the 'absolute position embedding' is _automatically_ done for the user, so that the user actually does not need to specify anything in the ```model( )``` statement to ensure the absolute position embedding?
If the answer is 'yes', then why do we have an option of specifying ```position_ids``` in the ```model( )``` statement?
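For concreteness, here is a minimal sketch (an illustration added for clarity, assuming the standard `transformers` GPT-2 API; it is not code from the original question) showing both the default behaviour and an explicit `position_ids` argument:
```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
input_ids = torch.tensor([tokenizer.encode("Hello world")])

# Default: absolute position ids 0..seq_len-1 are created internally.
outputs_default = model(input_ids)

# Explicit: pass your own position ids (here identical to the default).
position_ids = torch.arange(input_ids.size(1)).unsqueeze(0)
outputs_custom = model(input_ids, position_ids=position_ids)
```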
Thank you,
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2287/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2287/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2286 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2286/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2286/comments | https://api.github.com/repos/huggingface/transformers/issues/2286/events | https://github.com/huggingface/transformers/pull/2286 | 541,837,667 | MDExOlB1bGxSZXF1ZXN0MzU2Mzk3Nzg2 | 2,286 | Typo in tokenization_utils.py | {
"login": "adelevie",
"id": 86790,
"node_id": "MDQ6VXNlcjg2Nzkw",
"avatar_url": "https://avatars.githubusercontent.com/u/86790?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adelevie",
"html_url": "https://github.com/adelevie",
"followers_url": "https://api.github.com/users/adelevie/followers",
"following_url": "https://api.github.com/users/adelevie/following{/other_user}",
"gists_url": "https://api.github.com/users/adelevie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adelevie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adelevie/subscriptions",
"organizations_url": "https://api.github.com/users/adelevie/orgs",
"repos_url": "https://api.github.com/users/adelevie/repos",
"events_url": "https://api.github.com/users/adelevie/events{/privacy}",
"received_events_url": "https://api.github.com/users/adelevie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2286?src=pr&el=h1) Report\n> Merging [#2286](https://codecov.io/gh/huggingface/transformers/pull/2286?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/23dad8447c8db53682abc3c53d1b90f85d222e4b?src=pr&el=desc) will **increase** coverage by `0.18%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2286?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2286 +/- ##\n==========================================\n+ Coverage 74.27% 74.45% +0.18% \n==========================================\n Files 85 85 \n Lines 14610 14610 \n==========================================\n+ Hits 10851 10878 +27 \n+ Misses 3759 3732 -27\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2286?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2286/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `92.08% <ΓΈ> (+0.77%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2286/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.12% <0%> (+1.58%)` | :arrow_up: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2286/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `68.31% <0%> (+2.32%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2286/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `32.14% <0%> (+7.14%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2286?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2286?src=pr&el=footer). Last update [23dad84...7cef764](https://codecov.io/gh/huggingface/transformers/pull/2286?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,577 | 1,577 | 1,577 | CONTRIBUTOR | null | avoir -> avoid | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2286/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2286",
"html_url": "https://github.com/huggingface/transformers/pull/2286",
"diff_url": "https://github.com/huggingface/transformers/pull/2286.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2286.patch",
"merged_at": 1577440247000
} |
https://api.github.com/repos/huggingface/transformers/issues/2285 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2285/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2285/comments | https://api.github.com/repos/huggingface/transformers/issues/2285/events | https://github.com/huggingface/transformers/issues/2285 | 541,794,129 | MDU6SXNzdWU1NDE3OTQxMjk= | 2,285 | BertTokenizer custom UNK unexpected behavior | {
"login": "idocal",
"id": 7427177,
"node_id": "MDQ6VXNlcjc0MjcxNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7427177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/idocal",
"html_url": "https://github.com/idocal",
"followers_url": "https://api.github.com/users/idocal/followers",
"following_url": "https://api.github.com/users/idocal/following{/other_user}",
"gists_url": "https://api.github.com/users/idocal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/idocal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/idocal/subscriptions",
"organizations_url": "https://api.github.com/users/idocal/orgs",
"repos_url": "https://api.github.com/users/idocal/repos",
"events_url": "https://api.github.com/users/idocal/events{/privacy}",
"received_events_url": "https://api.github.com/users/idocal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I actually have the same problem with GPT-2 tokenizer. Is this the expected behavior?"
] | 1,577 | 1,596 | 1,582 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): Bert
Language I am using the model on (English, Chinese....): English
The problem arises when using:
* [ ] the official example scripts: (give details)
* [X] my own modified scripts: (give details)
Importing transformers to my own project but using BertTokenizer and BertModel with pretrained weights, using 'bert-base-multilingual-cased' for both tokenizer and model.
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details)
Fine tuning BERT for English NER on a custom news dataset.
## To Reproduce
Steps to reproduce the behavior:
1. Initialize a tokenizer with custom UNK
2. Try to convert the custom UNK to ID
3. Receive None as the converted ID
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
```
>>>tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased', do_lower_case=False, pad_token="<pad>", unk_token="<unk>")
>>> tokenizer.tokenize("<unk>")
['<unk>']
>>> tokenizer.convert_tokens_to_ids(["<unk>"])
[None]
```
## Expected behavior
My custom UNK should have an ID. (instead, I get None)
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS: Mac OSX
* Python version: 3.7.3
* PyTorch version: 1.1.0.post2
* PyTorch Transformers version (or branch): 2.1.1
* Using GPU ? No
* Distributed or parallel setup? No
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2285/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2285/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2284 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2284/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2284/comments | https://api.github.com/repos/huggingface/transformers/issues/2284/events | https://github.com/huggingface/transformers/issues/2284 | 541,785,206 | MDU6SXNzdWU1NDE3ODUyMDY= | 2,284 | [ALBERT]: Albert base model itself consuming 32 GB GPU memory.. | {
"login": "jonanem",
"id": 14140685,
"node_id": "MDQ6VXNlcjE0MTQwNjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/14140685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonanem",
"html_url": "https://github.com/jonanem",
"followers_url": "https://api.github.com/users/jonanem/followers",
"following_url": "https://api.github.com/users/jonanem/following{/other_user}",
"gists_url": "https://api.github.com/users/jonanem/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonanem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonanem/subscriptions",
"organizations_url": "https://api.github.com/users/jonanem/orgs",
"repos_url": "https://api.github.com/users/jonanem/repos",
"events_url": "https://api.github.com/users/jonanem/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonanem/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"i have similar situation.\r\n\r\nhttps://github.com/dsindex/iclassifier#emb_classalbert\r\n\r\nin the paper( https://arxiv.org/pdf/1909.11942.pdf ), ALBERT xlarge has just 60M parameters which is much less than BERT large(334M)'s. \r\nbut, we are unable to load albert-xlarge-v2 on 32G GPU memory.\r\n(no problem on bert-large-uncased, bert-large-cased)",
"A similar situation happened to me too.\r\nWhile fine-tuning Albert base on SQuAD 2.0, I had to lower the train batch size to manage to fit the model on 2x NVIDIA 1080 Ti, for a total of about 19 GB used.\r\nI find it quite interesting and weird as the same time, as I managed to fine-tune BERT base on the same dataset and the same GPUs using less memory...",
"Same for the pytorch version of ALBERT, where my 8/11GB GPU could run BERT_base and RoBERTa.",
"Interesting, I started to hesitate on using this ALBERT implementation but hope it will be fixed soon.",
"Indeed, I can reproduce for the TensorFlow version. I'm looking into it, thanks for raising this issue.",
"@jonanem, if you do this at the beginning of your script, does it change the amount of memory used?\r\n\r\n```py\r\ngpus = tf.config.experimental.list_physical_devices('GPU')\r\ntf.config.experimental.set_virtual_device_configuration(\r\n gpus[0],\r\n [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)]\r\n)\r\n```\r\n\r\nThis should keep the amount of memory allocated to the model to 1024MB, with possibility to grow if need be. Initializing the model after this only uses 1.3GB of VRAM on my side. Can you reproduce?\r\n\r\nSee this for more information: [limiting gpu memory growth](https://www.tensorflow.org/guide/gpu#limiting_gpu_memory_growth)",
"@LysandreJik I just did some investigation and I found a similar problem with the Pytorch implementation.\r\nModel: ALBERT base v2, fine tuning on SQuAD v2 task\r\n\r\nI used the official code from Google Tensorflow repository and I managed to fine tune it on a single GTX 1080 Ti, with batch size 16 and memory consumption of about 10 GB.\r\nThen, I used transformers Pytorch implementation and did the same task on 4x V100 on AWS, with total batch size 48 and memory consumption of 52 GB (about 13 GB per GPU).\r\n\r\nNow, putting it in perspective, I guess the memory consumption of the Pytorch implementation is 10/15 GB above what I was expecting. Is this normal?\r\nIn particular, where in the code is there the Embedding Factorization technique proposed in the official paper?",
"Hi @matteodelv, I ran a fine-tuning task on ALBERT (base-v2) with the parameters you mentioned: batch size of 16. I end up with a VRAM usage of 11.4GB, which is slightly more than the official Google Tensorflow implementation you mention. The usage is lower than when using BERT, which has a total usage of 14GB.\r\n\r\nHowever, when loading the model on its own without any other tensors, taking into account the pytorch memory overhead, it only takes about 66MB of VRAM.\r\n\r\nConcerning your second question, here is the definition of the Embedding Factorization technique proposed in the official paper: `[...] The first one is a factorized embedding parameterization. By decomposing\r\nthe large vocabulary embedding matrix into two small matrices, we separate the size of the hidden\r\nlayers from the size of vocabulary embedding.`\r\n\r\nIn this PyTorch implementation, there are indeed two smaller matrices so that the two sizes may be separate. The first embedding layer is visible in [the `AlbertEmbeddings` class](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_albert.py#L172), and is of size `(vocab_size, embedding_size)`, whereas the second layer is visible in [the `AlbertTransformer` class](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_albert.py#L317), with size `(embedding_size, hidden_size)`.",
"Thanks for your comment @LysandreJik... I haven't looked in the `AlbertTransformer` class for the embedding factorization.\r\n\r\nHowever, regarding the VRAM consumption, I'm still a bit confused about it.\r\nI don't get why the same model with a batch size 16 consumes about 10/11 GB on a single GPU while the same training, on 4 GPUs (total batch size 48, so it's 12 per GPUs) requires more memory.\r\n\r\nCould you please check this? May it be related to Pytorch's `DataParallel`?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Did @matteodelv @LysandreJik find any issue or solution for this? The memory consumption given the parameter is insane",
"Unfortunately not. I had to tune hyperparameters or use other hardware with more memory. But I was using an older version... I haven't checked if the situation has changed since then.",
"Hey,\r\nI tried running on GTX 1080 (10GB) bert-base-uncased with **sucess** on IMDB dataset with a batch-size equal to 16 and sequence length equal to 128.\r\nRunning albert-base-v2 with the same sequence length and same batch size is giving me Out-of-memory issues.\r\n\r\nI am using pytorch, so I guess I have the same problem as you guys here.",
"Same issue. ALBERT raises OOM requiring 32G. ",
"ALBERT repeats the same parameters for each layer but increases each layer size, so even though it have fewer parameters than BERT, the memory needs are greater due to the much larger activations in each layer.",
"> ALBERT repeats the same parameters for each layer but increases each layer size, so even though it have fewer parameters than BERT, the memory needs are greater due to the much larger activations in each layer.\r\n\r\nThat is true, still there is need for more computation, but BERT can fit into 16G memory. I had my albert reimplemented differently and I could fit its weights on a 24G gpu.",
"> ALBERT repeats the same parameters for each layer but increases each layer size, so even though it have fewer parameters than BERT, the memory needs are greater due to the much larger activations in each layer.\r\n\r\nThanks for this explanation, which saves my life."
] | 1,577 | 1,683 | 1,584 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using: TFALBERT
Language I am using the model on (English, Chinese....): English
The problem arises when using:
* [ ] the official example scripts:
`from transformers import TFAlbertForSequenceClassification`
`model = TFAlbertForSequenceClassification.from_pretrained('albert-base-v2')`
After this, almost 32 GB of GPU memory is consumed. The base v2 model is only roughly 50 MB on disk, yet it occupies 32 GB on the GPU.

## Environment
* OS: Linux
* Python version: 3.7
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2284/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2284/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2283 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2283/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2283/comments | https://api.github.com/repos/huggingface/transformers/issues/2283/events | https://github.com/huggingface/transformers/issues/2283 | 541,775,253 | MDU6SXNzdWU1NDE3NzUyNTM= | 2,283 | Loading sciBERT failed | {
"login": "broken-dream",
"id": 34561534,
"node_id": "MDQ6VXNlcjM0NTYxNTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/34561534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/broken-dream",
"html_url": "https://github.com/broken-dream",
"followers_url": "https://api.github.com/users/broken-dream/followers",
"following_url": "https://api.github.com/users/broken-dream/following{/other_user}",
"gists_url": "https://api.github.com/users/broken-dream/gists{/gist_id}",
"starred_url": "https://api.github.com/users/broken-dream/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/broken-dream/subscriptions",
"organizations_url": "https://api.github.com/users/broken-dream/orgs",
"repos_url": "https://api.github.com/users/broken-dream/repos",
"events_url": "https://api.github.com/users/broken-dream/events{/privacy}",
"received_events_url": "https://api.github.com/users/broken-dream/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Try to use the following commands:\r\n\r\n```bash\r\n$ wget \"https://s3-us-west-2.amazonaws.com/ai2-s2-research/scibert/huggingface_pytorch/scibert_scivocab_uncased.tar\"\r\n$ tar -xf scibert_scivocab_uncased.tar\r\n\r\nThe sciBERT model is now extracted and located under: `./scibert_scivocab_uncased`.\r\n\r\nTo load it:\r\n\r\n```python\r\nfrom transformers import BertModel\r\n\r\nmodel = BertModel.from_pretrained(\"./scibert_scivocab_uncased\")\r\nmodel.eval()\r\n```\r\n\r\nThis should work π€",
"> Try to use the following commands:\r\n> \r\n> ```shell\r\n> $ wget \"https://s3-us-west-2.amazonaws.com/ai2-s2-research/scibert/huggingface_pytorch/scibert_scivocab_uncased.tar\"\r\n> $ tar -xf scibert_scivocab_uncased.tar\r\n> \r\n> The sciBERT model is now extracted and located under: `./scibert_scivocab_uncased`.\r\n> \r\n> To load it:\r\n> \r\n> ```python\r\n> from transformers import BertModel\r\n> \r\n> model = BertModel.from_pretrained(\"./scibert_scivocab_uncased\")\r\n> model.eval()\r\n> ```\r\n> \r\n> This should work π€\r\n\r\nIt works! Thank you very much!"
] | 1,577 | 1,577 | 1,577 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I am trying to compare the effect of different pre-trained models on RE; the code to load BERT is:
`self.bert = BertModel.from_pretrained(pretrain_path)`
When the "pretrain_path" is "pretrain/bert-base-uncased" , everything is fine, but after i changed it to "pretrain/scibert-uncased", i got error:
`-OSError: Model name 'pretrain/scibert-uncased' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). We assumed 'pretrain/scibert-uncased/config.json' was a path or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url.`
The sciBERT model is a PyTorch model and the two directories have the same structure.
It seems that if the model name is not in the name list, it won't work.
Thank you very much! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2283/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2283/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2282 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2282/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2282/comments | https://api.github.com/repos/huggingface/transformers/issues/2282/events | https://github.com/huggingface/transformers/issues/2282 | 541,747,239 | MDU6SXNzdWU1NDE3NDcyMzk= | 2,282 | Maybe some parameters are error in document for distributed training ? | {
"login": "ljch2018",
"id": 22562546,
"node_id": "MDQ6VXNlcjIyNTYyNTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/22562546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ljch2018",
"html_url": "https://github.com/ljch2018",
"followers_url": "https://api.github.com/users/ljch2018/followers",
"following_url": "https://api.github.com/users/ljch2018/following{/other_user}",
"gists_url": "https://api.github.com/users/ljch2018/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ljch2018/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ljch2018/subscriptions",
"organizations_url": "https://api.github.com/users/ljch2018/orgs",
"repos_url": "https://api.github.com/users/ljch2018/repos",
"events_url": "https://api.github.com/users/ljch2018/events{/privacy}",
"received_events_url": "https://api.github.com/users/ljch2018/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,577 | 1,577 | 1,577 | NONE | null | Based on [Distributed training document](https://huggingface.co/transformers/examples.html#id1) , one can use `bert-base-cased` model to fine-tune MR model and reaches very high score.
> Here is an example using distributed training on 8 V100 GPUs and Bert Whole Word Masking uncased model to reach a F1 > 93 on SQuAD1.0:
```
python -m torch.distributed.launch --nproc_per_node=8 run_squad.py \
--model_type bert \
--model_name_or_path bert-base-cased \
--do_train \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v1.1.json \
--predict_file $SQUAD_DIR/dev-v1.1.json \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ../models/wwm_uncased_finetuned_squad/ \
--per_gpu_train_batch_size 24 \
--gradient_accumulation_steps 12
```
```
f1 = 93.15
exact_match = 86.91
```
**But based on the [google bert repo](https://github.com/google-research/bert#squad-11), the `bert-base-cased` model performance is:**
```
{"f1": 88.41249612335034, "exact_match": 81.2488174077578}
```
Maybe the right pretrained model is `bert-large-uncased` ?
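For reference (my addition, not part of the original issue): since the docs' prose describes the Whole Word Masking uncased model, the command presumably intended a flag like the one below. Only the `--model_name_or_path` value differs from the command quoted above; the model name is an assumption based on that prose and is not verified here.
```
python -m torch.distributed.launch --nproc_per_node=8 run_squad.py \
 --model_type bert \
 --model_name_or_path bert-large-uncased-whole-word-masking \
 --do_train \
 --do_eval \
 --do_lower_case \
 --train_file $SQUAD_DIR/train-v1.1.json \
 --predict_file $SQUAD_DIR/dev-v1.1.json \
 --learning_rate 3e-5 \
 --num_train_epochs 2 \
 --max_seq_length 384 \
 --doc_stride 128 \
 --output_dir ../models/wwm_uncased_finetuned_squad/ \
 --per_gpu_train_batch_size 24 \
 --gradient_accumulation_steps 12
```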
Thanks~ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2282/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2282/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2281 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2281/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2281/comments | https://api.github.com/repos/huggingface/transformers/issues/2281/events | https://github.com/huggingface/transformers/pull/2281 | 541,738,119 | MDExOlB1bGxSZXF1ZXN0MzU2MzE0NzIw | 2,281 | Add Dutch pre-trained BERT model | {
"login": "wietsedv",
"id": 13139101,
"node_id": "MDQ6VXNlcjEzMTM5MTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/13139101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wietsedv",
"html_url": "https://github.com/wietsedv",
"followers_url": "https://api.github.com/users/wietsedv/followers",
"following_url": "https://api.github.com/users/wietsedv/following{/other_user}",
"gists_url": "https://api.github.com/users/wietsedv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wietsedv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wietsedv/subscriptions",
"organizations_url": "https://api.github.com/users/wietsedv/orgs",
"repos_url": "https://api.github.com/users/wietsedv/repos",
"events_url": "https://api.github.com/users/wietsedv/events{/privacy}",
"received_events_url": "https://api.github.com/users/wietsedv/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2281?src=pr&el=h1) Report\n> Merging [#2281](https://codecov.io/gh/huggingface/transformers/pull/2281?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ba2378ced560c12f8ee97ca7998fd28b93fcfb47?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2281?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2281 +/- ##\n=======================================\n Coverage 74.45% 74.45% \n=======================================\n Files 85 85 \n Lines 14610 14610 \n=======================================\n Hits 10878 10878 \n Misses 3732 3732\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2281?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2281/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `96.34% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/configuration\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2281/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JlcnQucHk=) | `100% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2281/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.26% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2281/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `87.7% <ΓΈ> (ΓΈ)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2281?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2281?src=pr&el=footer). Last update [ba2378c...5eb71e6](https://codecov.io/gh/huggingface/transformers/pull/2281?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi @wietsedv! Did you try loading your tokenizer/model directly, cf. https://huggingface.co/wietsedv/bert-base-dutch-cased\r\n\r\ni.e. It should work out-of-the-box using:\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(\"wietsedv/bert-base-dutch-cased\")\r\n\r\nmodel = AutoModel.from_pretrained(\"wietsedv/bert-base-dutch-cased\")\r\ntf_model = TFAutoModel.from_pretrained(\"wietsedv/bert-base-dutch-cased\")\r\n```\r\n\r\nLet us know if it's not the case (we can still merge this PR to have a nice shortcut inside the code but feature-wise it should be equivalent)",
"Hi! Thanks for your response. I did notice that I can use that snippet and I can confirm it works. I am however not sure whether cased tokenization works correctly that way. Correct me if I am wrong, but it seems that Transformers always lowercases unless that is explicitly disabled. I did disable it in this PR, which makes it work correctly out of the box. But I think lowercasing is enabled by default if people use the snippet above.\r\n\r\nPlease correct my if I am wrong.",
"We'll check, thanks for the report.",
"Just a note: I absolutely love seeing a Dutch version of BERT but this isn't the only BERT model out there. As you mention in your paper, there's also [BERT-NL](http://textdata.nl/). You seem to claim that it performs a lot worse than your version and that it is even outperformed by multilingual BERT. At first glance I don't see any written-down experiments confirming that claim - a comparison between BERTje, BERT-NL, and multilingual BERT on down-stream tasks would've been much more informative. (BERT-NL will be presented at the largest computational linguistics conference on Dutch (Computational Linguistics in the Netherlands; CLIN) at the end of the month, so presumably it does carry some weight.)\r\n\r\nAll this to say: why does your version deserve to be \"the\" `bert-base-dutch-cased` model if there is an alternative? Don't get me wrong, I really value your research, but a fair and full comparison is missing.",
"It is correct that there is another Dutch model and we do full fine-tuning results of this model. These numbers were included in an earlier draft of our paper, but we removed it since they did not add any value. For instance for named entity recognition (conll2002), multilingual BERT achieves about 80% accuracy, our BERT model about 88% and their BERT model just 41%.\r\n\r\nMore detailed comparison with their model would therefore not add any value since the scores are too low. The authors have not made any claims about model performance yet, so it would have been unfair to be too negative about their model before they have even released any paper.\r\n\r\nSomeone at Leiden has confirmed that their experiments also showed that they were outperformed by multilingual BERT. Therefore I think that our BERT model is the only _effective_ Dutch BERT model.\r\n\r\nPS: The entry barrier for CLIN is not extremely high. The reason we are not presenting at CLIN is that we missed the deadline. ",
"I think it is in fact very useful information to see such huge difference. This way readers are not confronted with the same question that I posed before: there are two Dutch BERT models - but which one should I use/which one is better? You now clarify it to me, for which I am grateful, but a broader audience won't know. I think the added value is great. However, I do agree that is hard and perhaps unfair to compare to a model that hasn't been published/discussed yet. (Then again, their model is available online so they should be open to criticism.)\r\n\r\nThe CLIN acceptance rate is indeed high, and the follow-up CLIN Journal is also not bad. Still, what I was aiming for is peer review. If I were to review the BERT-NL (assuming they will submit after conference and assuming I was a reviewer this year), then I would also mention your model and ask for a comparison. To be honest, I put more faith in a peer reviewed journal/conference than arXiv papers.\r\n\r\nI really don't want to come off as arrogant and I very much value your work, but I am trying to approach this from a person who is just getting started with this kind of stuff and doesn't follow the trends or what is going on in the field. They might find this model easily available in the Transformers hub, but then they might read in a journal (possibly) about BERT-NL - which then is apparently different from the version in Transformers. On top of that, neither paper (presumably) refers or compares to the other! Those people _must be confused_ by that.\r\n\r\nThe above wall of text just to say that I have no problem with your model being \"the\" Dutch BERT model because it seems to clearly be the best one, but that I would very much like to see reference/comparison to the other model in your paper so that it is clear to the community what is going on with these two models. I hope that the authors of BERT-NL do the same. Do you have any plans to submit a paper somewhere?",
"Thanks for your feedback and clear explanation. We do indeed intend to submit a long paper somewhere. The short paper on arxiv is mainly intended for reference and to demonstrate that the model is effective with some benchmarks. Further evaluation would be included in a longer paper.",
"The authors of BERT-NL have reported results which can be compared to Bertje, see https://twitter.com/suzan/status/1200361620398125056\r\nAlso see the results in this thread https://twitter.com/danieldekok/status/1213378688563253249 and https://twitter.com/danieldekok/status/1213741132863156224\r\nIn both cases, Bertje does outperform BERT-NL. On the other hand, the results about Bertje vs multilingual BERT are different, so this needs to be investigated further.",
"I think @BramVanroy raises some good points about naming convention.\r\n\r\nIn my opinion the organization name or author name should come after the \"bert-base-<language>\" skeleton.\r\n\r\nSo it seems that BERTje is current SOTA now. But: on the next conference maybe another BERT model for Dutch is better... I'm not a fan of the \"First come, first served\" principle here π
\r\n\r\n/cc @thomwolf , @julien-c ",
"> On the other hand, the results about Bertje vs multilingual BERT are different, so this needs to be investigated further.\r\n\r\nAgreed. Tweeting results of doing \"tests\" is one thing, but actual thorough investigating and reporting is something else. It has happened to all of us that you quickly wanted to check something and only later realized that you made a silly mistake. (My last one was forgetting the `-` in my learning rate and not noticing it, oh boy what a day.) As I said before I would really like to a see a thorough, reproducible comparison of BERTje, BERT-NL, and multilingual BERT, and I believe that that should be the basis of any new model. Many new models sprout from the community grounds - and that's great! - but without having at least _some_ reference and comparison, it is guess-work trying to figure out which one is best or which one you should use.\r\n\r\n> I think @BramVanroy raises some good points about naming convention.\r\n> \r\n> In my opinion the organization name or author name should come after the \"bert-base-\" skeleton.\r\n> \r\n> So it seems that BERTje is current SOTA now. But: on the next conference maybe another BERT model for Dutch is better... I'm not a fan of the \"First come, first served\" principle here π
\r\n> \r\n> /cc @thomwolf , @julien-c\r\n\r\nPerhaps it's better to just make the model available through the user and that's all? In this case, only make it available through `wietsedv/bert-base-dutch-cased` and not `bert-base-dutch-cased`? That being said, where do you draw the line of course. Hypothetical question: why does _a Google_ get the rights to make its weights available without a `google/` prefix, acting as a \"standard\"? I don't know how to answer that question, so ultimately it's up to the HuggingFace people.\r\n\r\nI'm also not sure how diverging models would then work. If for instance you bring out a German BERT-derivative that has a slightly different architecture, or e.g. a different tokenizer, how would that then get integrated in Transformers? (For example, IIRC BERTje uses SOP instead of NSP, so that may lead to more structural changing in the available heads than just different weights.)\r\n",
"I inherently completely agree with your points. I think the people at Huggingface are trying to figure out how to do this, but they have not been really consistent. Initially, the \"official\" models within Transformers were only original models (Google/Facebook) and it is unlikely that there would be competetition for better models with exactly the same architecture in English. But for other monolingual models this is different.\r\n\r\nI prefer a curated list with pre-trained general models that has more structure than the long community models list. But existing shortcuts should be renamed to be consistent. German for instance has a regular named german and there is one with the `dbmdz` infix. And Finnish has for some reason the `v1` suffix?\r\n\r\nMy preference would be to always use `institution/` or `institution-` prefixes in the curated list. In Transformers 2.x, the current shortcuts could be kept for backward compatibility but a structured format should be used in the documentation. I think this may prevent many frustrations if even more non-english models are trained and even more people are wanting to use and trust (!) these models.",
"200% agree with that. That would be the fairest and probably clearest way of doing this. Curating the list might not be easy, though, unless it is curated by the community (like a wiki)? Perhaps requiring a description, website, paper, any other meta information might help to distinguish models as well, giving the authors a chance to explain, e.g., which data their model was trained on, which hyperparameters were used, and how their model differs from others.\r\n\r\nI really like http://nlpprogress.com/ which \"tracks\" the SOTA across different NLP tasks. It is an open source list and anyone can contribute through github. Some kind of lists like this might be useful, but instead discussing the models. ",
"You all raise excellent questions, many (most?) of which we donβt have a definitive answer to right now π€\r\n\r\nSome kind of structured evaluation results (declarative or automated) could be a part of the solution. In addition to nlpprogress, sotabench/paperswithcode is also a good source of inspiration.\r\n\r\n",
"On the previous point of being able to load the tokenizer's (remote) config correctly:\r\n- I've added a `tokenizer_config.json` to your user namespace on S3: https://s3.amazonaws.com/models.huggingface.co/bert/wietsedv/bert-base-dutch-cased/tokenizer_config.json\r\n- We're fixing the support for those remote tokenizer configs in https://github.com/huggingface/transformers/pull/2535 (you'll see that the unit test uses your model). Feedback welcome.",
"Merging this as we haven't seen other \"better\" BERT models for Dutch (coincidentally, [`RobBERT`](https://people.cs.kuleuven.be/~pieter.delobelle/robbert/) from @iPieter looks like a great RoBERTa-like model)\r\n\r\nPlease see [this discussion on model descriptions/README.md](https://github.com/huggingface/transformers/issues/2520#issuecomment-579009439). If you can upload a README.md with eval results/training methods, that'd be awesome.\r\n\r\nThanks!"
] | 1,577 | 1,580 | 1,580 | CONTRIBUTOR | null | We trained a Dutch cased BERT model at the University of Groningen.
Details are on [Github](https://github.com/wietsedv/bertje/) and [Arxiv](https://arxiv.org/abs/1912.09582). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2281/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2281/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2281",
"html_url": "https://github.com/huggingface/transformers/pull/2281",
"diff_url": "https://github.com/huggingface/transformers/pull/2281.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2281.patch",
"merged_at": 1580176835000
} |
https://api.github.com/repos/huggingface/transformers/issues/2280 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2280/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2280/comments | https://api.github.com/repos/huggingface/transformers/issues/2280/events | https://github.com/huggingface/transformers/issues/2280 | 541,699,278 | MDU6SXNzdWU1NDE2OTkyNzg= | 2,280 | Does anyone have a solution for this | {
"login": "Aditi-Bhole",
"id": 59166830,
"node_id": "MDQ6VXNlcjU5MTY2ODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/59166830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aditi-Bhole",
"html_url": "https://github.com/Aditi-Bhole",
"followers_url": "https://api.github.com/users/Aditi-Bhole/followers",
"following_url": "https://api.github.com/users/Aditi-Bhole/following{/other_user}",
"gists_url": "https://api.github.com/users/Aditi-Bhole/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aditi-Bhole/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aditi-Bhole/subscriptions",
"organizations_url": "https://api.github.com/users/Aditi-Bhole/orgs",
"repos_url": "https://api.github.com/users/Aditi-Bhole/repos",
"events_url": "https://api.github.com/users/Aditi-Bhole/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aditi-Bhole/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,577 | 1,577 | 1,577 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2280/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2280/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2279 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2279/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2279/comments | https://api.github.com/repos/huggingface/transformers/issues/2279/events | https://github.com/huggingface/transformers/issues/2279 | 541,692,724 | MDU6SXNzdWU1NDE2OTI3MjQ= | 2,279 | Help with finetune BERT pretraining | {
"login": "yes1234man",
"id": 59166627,
"node_id": "MDQ6VXNlcjU5MTY2NjI3",
"avatar_url": "https://avatars.githubusercontent.com/u/59166627?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yes1234man",
"html_url": "https://github.com/yes1234man",
"followers_url": "https://api.github.com/users/yes1234man/followers",
"following_url": "https://api.github.com/users/yes1234man/following{/other_user}",
"gists_url": "https://api.github.com/users/yes1234man/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yes1234man/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yes1234man/subscriptions",
"organizations_url": "https://api.github.com/users/yes1234man/orgs",
"repos_url": "https://api.github.com/users/yes1234man/repos",
"events_url": "https://api.github.com/users/yes1234man/events{/privacy}",
"received_events_url": "https://api.github.com/users/yes1234man/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I suggest you follow the [run_lm_finetuning.py](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py) script. Instead of downloading a pretrained model, simply start with a fresh one. Here's an example:\r\n\r\n```\r\nfrom transformers import BertModel, BertConfig, BertTokenizer\r\nmodel = BertModel(BertConfig())\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-cased')\r\n```\r\n\r\nI personally don't have a reason to write my own tokenizer, but if you do feel free to do that as well. All you need to do is generate a vocab.txt file from your corpus.",
"You can now leave `--model_name_or_path` to None in `run_language_modeling.py` to train a model from scratch.\r\n\r\nSee also https://huggingface.co/blog/how-to-train"
] | 1,577 | 1,581 | 1,581 | NONE | null | Hi
could you please assist me with how I can pretrain the BERT model, so not fine-tuning a pretrained model as with SNLI/MNLI, but training with the pretraining objective itself.
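A minimal sketch of what this could look like (my own illustration based on the suggestion in the comments above; it assumes the transformers 2.x API, where the masked-LM loss is passed via `masked_lm_labels`):
```python
import torch
from transformers import BertConfig, BertForMaskedLM, BertTokenizer

config = BertConfig()               # fresh, randomly initialized weights
model = BertForMaskedLM(config)     # masked-LM pretraining head on top
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

input_ids = torch.tensor([tokenizer.encode("The cat sat on the [MASK].")])
labels = input_ids.clone()          # toy labels; real pretraining masks ~15% of tokens
loss = model(input_ids, masked_lm_labels=labels)[0]
loss.backward()                     # one pretraining step (optimizer omitted)
```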
thanks a lot
and Merry Christmas and happy new year in advance to the team | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2279/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2278 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2278/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2278/comments | https://api.github.com/repos/huggingface/transformers/issues/2278/events | https://github.com/huggingface/transformers/issues/2278 | 541,658,379 | MDU6SXNzdWU1NDE2NTgzNzk= | 2,278 | where is the script of a second step of knowledge distillation on SQuAD 1.0? | {
"login": "c0derm4n",
"id": 18226382,
"node_id": "MDQ6VXNlcjE4MjI2Mzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/18226382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/c0derm4n",
"html_url": "https://github.com/c0derm4n",
"followers_url": "https://api.github.com/users/c0derm4n/followers",
"following_url": "https://api.github.com/users/c0derm4n/following{/other_user}",
"gists_url": "https://api.github.com/users/c0derm4n/gists{/gist_id}",
"starred_url": "https://api.github.com/users/c0derm4n/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/c0derm4n/subscriptions",
"organizations_url": "https://api.github.com/users/c0derm4n/orgs",
"repos_url": "https://api.github.com/users/c0derm4n/repos",
"events_url": "https://api.github.com/users/c0derm4n/events{/privacy}",
"received_events_url": "https://api.github.com/users/c0derm4n/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Check here: https://github.com/huggingface/transformers/blob/master/examples/distillation/run_squad_w_distillation.py"
] | 1,577 | 1,588 | 1,582 | NONE | null | ## β Questions & Help
<!-- A clear and concise description of the question. -->
In Distil part, there is a paragraph description which is "distilbert-base-uncased-distilled-squad: A finetuned version of distilbert-base-uncased finetuned using (a second step of) knwoledge distillation on SQuAD 1.0. This model reaches a F1 score of 86.9 on the dev set (for comparison, Bert bert-base-uncased version reaches a 88.5 F1 score)."
so where is the script of "a second step of knwoledge distillation on SQuAD 1.0" mentioned above?
Thanks a lot, it will be very helpful to me!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2278/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2278/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2277 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2277/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2277/comments | https://api.github.com/repos/huggingface/transformers/issues/2277/events | https://github.com/huggingface/transformers/issues/2277 | 541,642,599 | MDU6SXNzdWU1NDE2NDI1OTk= | 2,277 | Does the calling order need to be changed? | {
"login": "bcmi220",
"id": 39052744,
"node_id": "MDQ6VXNlcjM5MDUyNzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/39052744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bcmi220",
"html_url": "https://github.com/bcmi220",
"followers_url": "https://api.github.com/users/bcmi220/followers",
"following_url": "https://api.github.com/users/bcmi220/following{/other_user}",
"gists_url": "https://api.github.com/users/bcmi220/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bcmi220/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bcmi220/subscriptions",
"organizations_url": "https://api.github.com/users/bcmi220/orgs",
"repos_url": "https://api.github.com/users/bcmi220/repos",
"events_url": "https://api.github.com/users/bcmi220/events{/privacy}",
"received_events_url": "https://api.github.com/users/bcmi220/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,582 | 1,582 | NONE | null | ## ❓ Questions & Help
PyTorch: 1.3.0
torch/optim/lr_scheduler.py:100: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
"https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2277/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2277/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2276 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2276/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2276/comments | https://api.github.com/repos/huggingface/transformers/issues/2276/events | https://github.com/huggingface/transformers/pull/2276 | 541,615,777 | MDExOlB1bGxSZXF1ZXN0MzU2MjEyMTgz | 2,276 | fix error due to wrong argument name to Tensor.scatter() | {
"login": "ShnitzelKiller",
"id": 6132502,
"node_id": "MDQ6VXNlcjYxMzI1MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6132502?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShnitzelKiller",
"html_url": "https://github.com/ShnitzelKiller",
"followers_url": "https://api.github.com/users/ShnitzelKiller/followers",
"following_url": "https://api.github.com/users/ShnitzelKiller/following{/other_user}",
"gists_url": "https://api.github.com/users/ShnitzelKiller/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShnitzelKiller/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShnitzelKiller/subscriptions",
"organizations_url": "https://api.github.com/users/ShnitzelKiller/orgs",
"repos_url": "https://api.github.com/users/ShnitzelKiller/repos",
"events_url": "https://api.github.com/users/ShnitzelKiller/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShnitzelKiller/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2276?src=pr&el=h1) Report\n> Merging [#2276](https://codecov.io/gh/huggingface/transformers/pull/2276?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ce50305e5b8c8748b81b0c8f5539a337b6a995b9?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `0%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2276?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2276 +/- ##\n=======================================\n Coverage 74.45% 74.45% \n=======================================\n Files 85 85 \n Lines 14610 14610 \n=======================================\n Hits 10878 10878 \n Misses 3732 3732\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2276?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2276/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `63.91% <0%> (ΓΈ)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2276?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2276?src=pr&el=footer). Last update [ce50305...398bb03](https://codecov.io/gh/huggingface/transformers/pull/2276?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks a lot for catching that @ShnitzelKiller!",
"I just realized that despite the official (current) documentation saying that the argument name is \"source\", after upgrading my pytorch version, this code is what throws an error saying the argument name is \"src\"! I should probably notify you to revert this pull request then."
] | 1,577 | 1,577 | 1,577 | CONTRIBUTOR | null | The named argument is called "source", not "src", in the out-of-place version for some reason, despite it being called "src" in the in-place version of the same PyTorch function. This causes an error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2276/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2276/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2276",
"html_url": "https://github.com/huggingface/transformers/pull/2276",
"diff_url": "https://github.com/huggingface/transformers/pull/2276.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2276.patch",
"merged_at": 1577099988000
} |
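A hedged illustration of the keyword mismatch this PR is about: some PyTorch releases named the out-of-place `Tensor.scatter` value argument `source` while the in-place `scatter_` used `src`, and later releases unified on `src`. Passing the value tensor positionally sidesteps the keyword entirely; the shapes below are only illustrative.
```python
import torch

base = torch.zeros(2, 4)
index = torch.tensor([[0, 1], [2, 3]])
values = torch.ones(2, 2)

# Positional form works regardless of whether the keyword is "src" or "source".
out = base.scatter(1, index, values)   # out-of-place
base.scatter_(1, index, values)        # in-place variant
print(out)
```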
https://api.github.com/repos/huggingface/transformers/issues/2275 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2275/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2275/comments | https://api.github.com/repos/huggingface/transformers/issues/2275/events | https://github.com/huggingface/transformers/issues/2275 | 541,581,647 | MDU6SXNzdWU1NDE1ODE2NDc= | 2,275 | Gpt2/xl Broken on "Write With Transformer" site | {
"login": "crackerjam",
"id": 7827071,
"node_id": "MDQ6VXNlcjc4MjcwNzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7827071?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/crackerjam",
"html_url": "https://github.com/crackerjam",
"followers_url": "https://api.github.com/users/crackerjam/followers",
"following_url": "https://api.github.com/users/crackerjam/following{/other_user}",
"gists_url": "https://api.github.com/users/crackerjam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/crackerjam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/crackerjam/subscriptions",
"organizations_url": "https://api.github.com/users/crackerjam/orgs",
"repos_url": "https://api.github.com/users/crackerjam/repos",
"events_url": "https://api.github.com/users/crackerjam/events{/privacy}",
"received_events_url": "https://api.github.com/users/crackerjam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Same problem. And it seems not fixed yet.",
"It should be fixed now. Thank you for raising this issue."
] | 1,577 | 1,579 | 1,579 | NONE | null | If you navigate to the [GPT-2 section of the Write With Transformer site](https://transformer.huggingface.co/doc/gpt2-large), select gpt2/xl, and try to generate text, the process will not generate anything. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2275/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2275/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2274 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2274/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2274/comments | https://api.github.com/repos/huggingface/transformers/issues/2274/events | https://github.com/huggingface/transformers/issues/2274 | 541,549,362 | MDU6SXNzdWU1NDE1NDkzNjI= | 2,274 | AttributeError: 'GPT2LMHeadModel' object has no attribute 'generate' | {
"login": "jsh9",
"id": 25124332,
"node_id": "MDQ6VXNlcjI1MTI0MzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/25124332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jsh9",
"html_url": "https://github.com/jsh9",
"followers_url": "https://api.github.com/users/jsh9/followers",
"following_url": "https://api.github.com/users/jsh9/following{/other_user}",
"gists_url": "https://api.github.com/users/jsh9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jsh9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jsh9/subscriptions",
"organizations_url": "https://api.github.com/users/jsh9/orgs",
"repos_url": "https://api.github.com/users/jsh9/repos",
"events_url": "https://api.github.com/users/jsh9/events{/privacy}",
"received_events_url": "https://api.github.com/users/jsh9/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Same problem",
"I have found the reason.\r\n\r\nSo it turns out that the `generate()` method of the `PreTrainedModel` class is newly added, even newer than the latest release (2.3.0). Quite understandable since this library is iterating very fast.\r\n\r\nSo to make `run_generation.py` work, you can install this library like this:\r\n\r\n- Clone the repo to your computer\r\n- cd into the repo\r\n- Run `pip install -e .` (don't forget the dot)\r\n- Re-run `run_generation.py`\r\n\r\nI'll leave this ticket open until the `generate()` method is incorporated into the latest release.",
"@jsh9's solution worked for me!\r\n\r\nAlso, if you want to avoid doing the manual steps, you can just `pip install` directly from the `master` branch by running:\r\n\r\n```bash\r\npip install git+https://github.com/huggingface/transformers.git@master#egg=transformers\r\n```\r\n",
"i was getting the same error then i used repository before 7days which is working fine for me \r\n`!wget https://github.com/huggingface/transformers/archive/f09d9996413f2b265f1c672d7a4b438e4c5099c4.zip`\r\n\r\nthen unzip with\r\n\r\n`!unzip file_name.zip`\r\n\r\nthere is some bugs in recent update, hope they fix it soon",
"@Weenkus's way worked for me. In `requirements.txt` you can use;\r\n```\r\n-e git+https://github.com/huggingface/transformers.git@master#egg=transformers\r\n```\r\n\r\n(all on one line)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,584 | 1,584 | NONE | null | ## 🐛 Bug
<!-- Important information -->
The example script `run_generation.py` is broken with the error message `AttributeError: 'GPT2LMHeadModel' object has no attribute 'generate'`
## To Reproduce
Steps to reproduce the behavior:
1. In a terminal, cd to `transformers/examples` and then `python run_generation.py --model_type=gpt2 --model_name_or_path=gpt2`
2. After the model binary is downloaded to cache, enter anything when prompted "`Model prompt >>>`"
3. And then you will see the error:
```
Traceback (most recent call last):
File "run_generation.py", line 236, in <module>
main()
File "run_generation.py", line 216, in main
output_sequences = model.generate(
File "C:\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 585, in __getattr__
type(self).__name__, name))
AttributeError: 'GPT2LMHeadModel' object has no attribute 'generate'
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS: Windows 10
* Python version: 3.7.3
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): 2.2.2
* Using GPU ? N/A
* Distributed of parallel setup ? N/A
* Any other relevant information:
I'm running the latest version of `run_generation.py`. Here is the permanent link: https://github.com/huggingface/transformers/blob/ce50305e5b8c8748b81b0c8f5539a337b6a995b9/examples/run_generation.py
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2274/reactions",
"total_count": 11,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2274/timeline | completed | null | null |
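A small hedged sanity check for the bug above: confirm that the installed `transformers` actually ships `PreTrainedModel.generate` before running the example script. At the time of the issue, only a source install from `master` did.
```python
import transformers
from transformers import GPT2LMHeadModel

print(transformers.__version__)  # releases up to 2.3.0 predate generate()

model = GPT2LMHeadModel.from_pretrained("gpt2")
print(hasattr(model, "generate"))  # False on 2.3.0 and earlier, True on a master install
```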
https://api.github.com/repos/huggingface/transformers/issues/2273 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2273/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2273/comments | https://api.github.com/repos/huggingface/transformers/issues/2273/events | https://github.com/huggingface/transformers/issues/2273 | 541,514,550 | MDU6SXNzdWU1NDE1MTQ1NTA= | 2,273 | adding special tokens after truncating in run_lm_finetuning.py | {
"login": "lucy3",
"id": 14174175,
"node_id": "MDQ6VXNlcjE0MTc0MTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/14174175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucy3",
"html_url": "https://github.com/lucy3",
"followers_url": "https://api.github.com/users/lucy3/followers",
"following_url": "https://api.github.com/users/lucy3/following{/other_user}",
"gists_url": "https://api.github.com/users/lucy3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucy3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucy3/subscriptions",
"organizations_url": "https://api.github.com/users/lucy3/orgs",
"repos_url": "https://api.github.com/users/lucy3/repos",
"events_url": "https://api.github.com/users/lucy3/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucy3/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,583 | 1,583 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): Bert
Language I am using the model on (English, Chinese....): English
The problem arises when using:
* [x] the official example scripts: (give details)
In `run_lm_finetuning.py`, we have the following
```
for i in range(0, len(tokenized_text) - block_size + 1, block_size):  # Truncate in blocks of block_size
    self.examples.append(tokenizer.build_inputs_with_special_tokens(tokenized_text[i : i + block_size]))
```
If we add special tokens after truncating to `block_size`, the example is no longer of length `block_size`, but longer (a sketch of one possible fix follows this record).
In the help information for `block_size` as an input argument, it says "Optional input sequence length after tokenization. The training dataset will be truncated in block of this size for training. Default to the model max input length for single sentence inputs (**take into account special tokens**)." This may be confusing, because the `block_size` written as the default input is 512, but if you use BERT-base as the model you're pretraining from, the `block_size` input in that function is actually 510.
The [original BERT code](https://github.com/google-research/bert/blob/master/extract_features.py) makes sure all examples are `block_size` after adding special tokens:
```
if tokens_b:
    # Modifies `tokens_a` and `tokens_b` in place so that the total
    # length is less than the specified length.
    # Account for [CLS], [SEP], [SEP] with "- 3"
    _truncate_seq_pair(tokens_a, tokens_b, seq_length - 3)
else:
    # Account for [CLS] and [SEP] with "- 2"
    if len(tokens_a) > seq_length - 2:
        tokens_a = tokens_a[0:(seq_length - 2)]
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2273/timeline | completed | null | null |
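A minimal sketch of the fix the issue suggests: reserve room for the special tokens before chunking, so every example has exactly the intended length after `build_inputs_with_special_tokens`. Counting the specials via an empty encode keeps the sketch version-agnostic; the tokenizer name and file path are illustrative.
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
block_size = 512

# 2 for single-sequence BERT inputs: [CLS] and [SEP]
num_special = len(tokenizer.build_inputs_with_special_tokens([]))
block_size -= num_special  # 512 -> 510 for BERT

text = open("train.txt").read()  # illustrative corpus path
tokenized_text = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(text))

examples = []
for i in range(0, len(tokenized_text) - block_size + 1, block_size):
    examples.append(tokenizer.build_inputs_with_special_tokens(tokenized_text[i : i + block_size]))

assert all(len(ex) == block_size + num_special for ex in examples)
```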
https://api.github.com/repos/huggingface/transformers/issues/2272 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2272/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2272/comments | https://api.github.com/repos/huggingface/transformers/issues/2272/events | https://github.com/huggingface/transformers/issues/2272 | 541,513,624 | MDU6SXNzdWU1NDE1MTM2MjQ= | 2,272 | Run_tf_ner.py error on TPU | {
"login": "dlauc",
"id": 3930351,
"node_id": "MDQ6VXNlcjM5MzAzNTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3930351?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dlauc",
"html_url": "https://github.com/dlauc",
"followers_url": "https://api.github.com/users/dlauc/followers",
"following_url": "https://api.github.com/users/dlauc/following{/other_user}",
"gists_url": "https://api.github.com/users/dlauc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dlauc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dlauc/subscriptions",
"organizations_url": "https://api.github.com/users/dlauc/orgs",
"repos_url": "https://api.github.com/users/dlauc/repos",
"events_url": "https://api.github.com/users/dlauc/events{/privacy}",
"received_events_url": "https://api.github.com/users/dlauc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I have the same issue on Colab TPU with tf-nightly 2.2.0. @dlauc did you solve the problem?",
"Hi @BOUALILILila, I've switched to the TF/Keras - it works well with TPU-s",
"Hi @dlauc. Could you elaborate how you fixed this? I am having the same problem. "
] | 1,577 | 1,584 | 1,582 | NONE | null | ## 🐛 Bug
run_tf_ner.py does not work with TPU. The first error is:
`File system scheme '[local]' not implemented `
When the script is changed and the .tfrecord file is moved to a gs:// address (and the hardcoded "/tmp/mylogs" is replaced with a gs:// dir), there is an error with the optimizer:
`AttributeError: 'device_map' not accessible within a TPU context.`
## To Reproduce
Steps to reproduce the behaviour:
python run_tf_ner.py.1 --tpu grpc://10.240.1.2:8470 --data_dir gs://nomentech/datadir --labels ./datasets/labels.txt --output_dir gs://nomentech/model1 --max_seq_length 40 --model_type bert --model_name_or_path bert-base-multilingual-cased --do_train --do_eval --cache_dir gs://nomentech/cachedir --num_train_epochs 5 --per_device_train_batch_size 96
## Environment
* OS: Ubuntu 18
* Python version: 3.7
* Tensorflow version: 2.1.0-dev20191222 (tf-nightly)
* PyTorch Transformers version (or branch): 2.3.0
* Distributed of parallel setup ? TPU
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2272/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2272/timeline | completed | null | null |
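A hedged sketch of the first workaround described in the record above: TPU workers cannot read `local` file-system paths, so cached records (and logs) must live on GCS. The paths and bucket name are illustrative.
```python
import tensorflow as tf

local_path = "/tmp/cached_train.tfrecord"
gcs_path = "gs://my-bucket/datadir/cached_train.tfrecord"

# tf.io.gfile understands gs:// URLs, unlike plain open().
tf.io.gfile.copy(local_path, gcs_path, overwrite=True)

dataset = tf.data.TFRecordDataset(gcs_path)  # readable from TPU workers
```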
https://api.github.com/repos/huggingface/transformers/issues/2271 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2271/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2271/comments | https://api.github.com/repos/huggingface/transformers/issues/2271/events | https://github.com/huggingface/transformers/pull/2271 | 541,493,262 | MDExOlB1bGxSZXF1ZXN0MzU2MTE1MTQ2 | 2,271 | Improve setup and requirements | {
"login": "aaugustin",
"id": 788910,
"node_id": "MDQ6VXNlcjc4ODkxMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/788910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaugustin",
"html_url": "https://github.com/aaugustin",
"followers_url": "https://api.github.com/users/aaugustin/followers",
"following_url": "https://api.github.com/users/aaugustin/following{/other_user}",
"gists_url": "https://api.github.com/users/aaugustin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaugustin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaugustin/subscriptions",
"organizations_url": "https://api.github.com/users/aaugustin/orgs",
"repos_url": "https://api.github.com/users/aaugustin/repos",
"events_url": "https://api.github.com/users/aaugustin/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaugustin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2271?src=pr&el=h1) Report\n> Merging [#2271](https://codecov.io/gh/huggingface/transformers/pull/2271?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/23dad8447c8db53682abc3c53d1b90f85d222e4b?src=pr&el=desc) will **decrease** coverage by `0.58%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2271?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2271 +/- ##\n==========================================\n- Coverage 74.27% 73.68% -0.59% \n==========================================\n Files 85 87 +2 \n Lines 14610 14791 +181 \n==========================================\n+ Hits 10851 10899 +48 \n- Misses 3759 3892 +133\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2271?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/commands/user.py](https://codecov.io/gh/huggingface/transformers/pull/2271/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy91c2VyLnB5) | `0% <0%> (ΓΈ)` | |\n| [src/transformers/commands/serving.py](https://codecov.io/gh/huggingface/transformers/pull/2271/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9zZXJ2aW5nLnB5) | `0% <0%> (ΓΈ)` | |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2271/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `92.27% <0%> (+0.96%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2271/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.12% <0%> (+1.58%)` | :arrow_up: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2271/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `68.31% <0%> (+2.32%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/2271/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `35.97% <0%> (+6.6%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2271/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `32.14% <0%> (+7.14%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2271?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2271?src=pr&el=footer). Last update [23dad84...10724a8](https://codecov.io/gh/huggingface/transformers/pull/2271?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,577 | 1,577 | 1,577 | CONTRIBUTOR | null | - Clean up several requirements files generated with pip freeze, with no clear update process
- Rely on extras_require for managing optional requirements (an illustrative sketch follows this record)
- Update contribution instructions | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2271/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2271/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2271",
"html_url": "https://github.com/huggingface/transformers/pull/2271",
"diff_url": "https://github.com/huggingface/transformers/pull/2271.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2271.patch",
"merged_at": 1577182880000
} |
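An illustrative sketch of the `extras_require` pattern this PR adopts; the package name and extra groups here are hypothetical, not the project's actual ones.
```python
from setuptools import setup

setup(
    name="example-package",
    install_requires=["numpy"],          # always installed
    extras_require={                     # opt-in dependency groups
        "torch": ["torch"],
        "tf": ["tensorflow"],
        "dev": ["pytest", "black", "isort"],
    },
)
```
Users then select a group with, e.g., `pip install "example-package[dev]"`.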
https://api.github.com/repos/huggingface/transformers/issues/2270 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2270/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2270/comments | https://api.github.com/repos/huggingface/transformers/issues/2270/events | https://github.com/huggingface/transformers/pull/2270 | 541,479,400 | MDExOlB1bGxSZXF1ZXN0MzU2MTA2MTk1 | 2,270 | Remove support for Python 2 | {
"login": "aaugustin",
"id": 788910,
"node_id": "MDQ6VXNlcjc4ODkxMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/788910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaugustin",
"html_url": "https://github.com/aaugustin",
"followers_url": "https://api.github.com/users/aaugustin/followers",
"following_url": "https://api.github.com/users/aaugustin/following{/other_user}",
"gists_url": "https://api.github.com/users/aaugustin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaugustin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaugustin/subscriptions",
"organizations_url": "https://api.github.com/users/aaugustin/orgs",
"repos_url": "https://api.github.com/users/aaugustin/repos",
"events_url": "https://api.github.com/users/aaugustin/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaugustin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2270?src=pr&el=h1) Report\n> Merging [#2270](https://codecov.io/gh/huggingface/transformers/pull/2270?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b6ea0f43aeb7ff1dcb03658e38bacae1130abd91?src=pr&el=desc) will **increase** coverage by `1.2%`.\n> The diff coverage is `86.66%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2270?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2270 +/- ##\n=========================================\n+ Coverage 73.25% 74.45% +1.2% \n=========================================\n Files 85 85 \n Lines 14779 14610 -169 \n=========================================\n+ Hits 10826 10878 +52 \n+ Misses 3953 3732 -221\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2270?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2270/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/configuration\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2270/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JvYmVydGEucHk=) | `100% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/tokenization\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2270/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZGlzdGlsYmVydC5weQ==) | `100% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2270/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `32.14% <ΓΈ> (-0.41%)` | :arrow_down: |\n| [src/transformers/configuration\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2270/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2270/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.12% <ΓΈ> (-0.04%)` | :arrow_down: |\n| [src/transformers/configuration\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2270/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2N0cmwucHk=) | `97.05% <ΓΈ> (-0.09%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2270/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `100% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2270/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <ΓΈ> (-0.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2270/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `96.54% <ΓΈ> (-0.01%)` | :arrow_down: |\n| ... and [66 more](https://codecov.io/gh/huggingface/transformers/pull/2270/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2270?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2270?src=pr&el=footer). Last update [b6ea0f4...1a948d7](https://codecov.io/gh/huggingface/transformers/pull/2270?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,577 | 1,577 | 1,577 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2270/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2270/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2270",
"html_url": "https://github.com/huggingface/transformers/pull/2270",
"diff_url": "https://github.com/huggingface/transformers/pull/2270.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2270.patch",
"merged_at": 1577052278000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2269 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2269/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2269/comments | https://api.github.com/repos/huggingface/transformers/issues/2269/events | https://github.com/huggingface/transformers/issues/2269 | 541,450,275 | MDU6SXNzdWU1NDE0NTAyNzU= | 2,269 | Bad F1 Score for run_squad.py on SQuAD2.0 | {
"login": "WenTingTseng",
"id": 32416416,
"node_id": "MDQ6VXNlcjMyNDE2NDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/32416416?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WenTingTseng",
"html_url": "https://github.com/WenTingTseng",
"followers_url": "https://api.github.com/users/WenTingTseng/followers",
"following_url": "https://api.github.com/users/WenTingTseng/following{/other_user}",
"gists_url": "https://api.github.com/users/WenTingTseng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WenTingTseng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WenTingTseng/subscriptions",
"organizations_url": "https://api.github.com/users/WenTingTseng/orgs",
"repos_url": "https://api.github.com/users/WenTingTseng/repos",
"events_url": "https://api.github.com/users/WenTingTseng/events{/privacy}",
"received_events_url": "https://api.github.com/users/WenTingTseng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,577 | 1,577 | 1,577 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
When I run the run_squad.py for SQuAD2.0 like this
python3 run_squad.py \
--model_type bert \
--model_name_or_path bert-base-cased \
--do_train \
--do_eval \
--train_file /share/nas165/Wendy/transformers/examples/tests_samples/SQUAD/train-v2.0.json \
--predict_file /share/nas165/Wendy/transformers/examples/tests_samples/SQUAD/dev-v2.0.json \
--per_gpu_train_batch_size 4 \
--learning_rate 4e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /share/nas165/Wendy/transformers/examples/SQuAD2.0_debug_bert/
--version_2_with_negative=True \
--null_score_diff_threshold=-1.967471694946289
It runs very fast and the F1 score is only 7.9%. What is wrong with it? (See the note after this record.)
The log message looks like this:
<img width="960" alt="screenshot" src="https://user-images.githubusercontent.com/32416416/71322341-939daa80-2501-11ea-9313-e179d1760b99.PNG">
Thanks a lot for your help. By the way, I cloned it today, so it is the new version.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2269/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2269/timeline | completed | null | null |
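A hedged note on the command quoted in the record above: the `--output_dir` line has no trailing backslash, so the shell ends the command there and `--version_2_with_negative` never reaches `run_squad.py`; evaluating a SQuAD 2.0 dev set without that flag would plausibly explain an F1 this low. Also, `--version_2_with_negative` is a store-true flag, so it takes no `=True`. A sketch with every line continued (paths shortened for illustration):
```
python3 run_squad.py \
  --model_type bert \
  --model_name_or_path bert-base-cased \
  --do_train \
  --do_eval \
  --version_2_with_negative \
  --train_file train-v2.0.json \
  --predict_file dev-v2.0.json \
  --per_gpu_train_batch_size 4 \
  --learning_rate 4e-5 \
  --num_train_epochs 2.0 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir ./squad2_bert_output
```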
https://api.github.com/repos/huggingface/transformers/issues/2268 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2268/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2268/comments | https://api.github.com/repos/huggingface/transformers/issues/2268/events | https://github.com/huggingface/transformers/pull/2268 | 541,447,952 | MDExOlB1bGxSZXF1ZXN0MzU2MDg0MTA3 | 2,268 | Improve repository structure | {
"login": "aaugustin",
"id": 788910,
"node_id": "MDQ6VXNlcjc4ODkxMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/788910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaugustin",
"html_url": "https://github.com/aaugustin",
"followers_url": "https://api.github.com/users/aaugustin/followers",
"following_url": "https://api.github.com/users/aaugustin/following{/other_user}",
"gists_url": "https://api.github.com/users/aaugustin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaugustin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaugustin/subscriptions",
"organizations_url": "https://api.github.com/users/aaugustin/orgs",
"repos_url": "https://api.github.com/users/aaugustin/repos",
"events_url": "https://api.github.com/users/aaugustin/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaugustin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Last test run failed only because of a flaky test β this is #2240."
] | 1,577 | 1,577 | 1,577 | CONTRIBUTOR | null | This PR builds on top of #2255 (which should be merged first).
Since it changes the location of the source code, once it's merged, contributors must update their local development environment with:
$ pip uninstall transformers
$ pip install -e .
I'll clarify this when I update the contributor documentation (later).
I checked that:
- `python setup.py sdist` packages the right files (only from `src`)
- I didn't lose any tests: the baseline for `run_tests_py3_torch_and_tf` is `691 passed, 68 skipped, 50 warnings`, see [here](https://app.circleci.com/jobs/github/huggingface/transformers/10684)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2268/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2268/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2268",
"html_url": "https://github.com/huggingface/transformers/pull/2268",
"diff_url": "https://github.com/huggingface/transformers/pull/2268.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2268.patch",
"merged_at": 1577029314000
} |
https://api.github.com/repos/huggingface/transformers/issues/2267 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2267/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2267/comments | https://api.github.com/repos/huggingface/transformers/issues/2267/events | https://github.com/huggingface/transformers/issues/2267 | 541,433,305 | MDU6SXNzdWU1NDE0MzMzMDU= | 2,267 | Does Pre-Trained Weights Work Internally in pytorch? | {
"login": "shashankMadan-designEsthetics",
"id": 45225143,
"node_id": "MDQ6VXNlcjQ1MjI1MTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/45225143?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shashankMadan-designEsthetics",
"html_url": "https://github.com/shashankMadan-designEsthetics",
"followers_url": "https://api.github.com/users/shashankMadan-designEsthetics/followers",
"following_url": "https://api.github.com/users/shashankMadan-designEsthetics/following{/other_user}",
"gists_url": "https://api.github.com/users/shashankMadan-designEsthetics/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shashankMadan-designEsthetics/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shashankMadan-designEsthetics/subscriptions",
"organizations_url": "https://api.github.com/users/shashankMadan-designEsthetics/orgs",
"repos_url": "https://api.github.com/users/shashankMadan-designEsthetics/repos",
"events_url": "https://api.github.com/users/shashankMadan-designEsthetics/events{/privacy}",
"received_events_url": "https://api.github.com/users/shashankMadan-designEsthetics/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"`BertPredictionHeadTransform` is never used in itself AFAIK, and is only part of the full scale models (e.g. `BertModel`). As such, the weights for the prediction head are loaded when you run BertModel.from_pretrained(), since the `BertPredictionHeadTransform` is only a module in the whole model.",
"> `BertPredictionHeadTransform` is never used in itself AFAIK, and is only part of the full scale models (e.g. `BertModel`). As such, the weights for the prediction head are loaded when you run BertModel.from_pretrained(), since the `BertPredictionHeadTransform` is only a module in the whole model.\r\n\r\nThanks, Bram. As you said `BertModel` does take `BertPreTrainedModel` in `super`. I did notice it, But It's just that my mind doesn't get around how/where and when those weights are getting used exactly.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,582 | 1,582 | NONE | null | I am using BERT's pretrained model via from_pretrained, and in its fine-tuning code we can save the new model weights and other hyperparameters with save_pretrained.
My doubt is that in the modeling_bert code there is no explicit code that takes the pre-trained weights into account before training; it generally just takes attention matrices and puts them through a feed-forward network in the class `BertPredictionHeadTransform`:
```
class BertPredictionHeadTransform(nn.Module):
    def __init__(self, config):
        super(BertPredictionHeadTransform, self).__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        if isinstance(config.hidden_act, str) or (sys.version_info[0] == 2 and isinstance(config.hidden_act, unicode)):
            self.transform_act_fn = ACT2FN[config.hidden_act]
        else:
            self.transform_act_fn = config.hidden_act
        self.LayerNorm = BertLayerNorm(config.hidden_size, eps=config.layer_norm_eps)

    def forward(self, hidden_states):
        hidden_states = self.dense(hidden_states)
        hidden_states = self.transform_act_fn(hidden_states)
        hidden_states = self.LayerNorm(hidden_states)
        print('BertPredictionHeadTransform', hidden_states.shape)
        return hidden_states
```
And here I do not see any kind of "inheritance" of the pre-trained weights...
So is it internally handled by PyTorch, or am I missing something in the code itself? (A minimal sketch follows this record.)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2267/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2267/timeline | completed | null | null |
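A minimal sketch (not the library's internals verbatim) answering the question in the record above: `from_pretrained` loads a flat `state_dict` whose keys are dotted module paths, and PyTorch's `load_state_dict` routes each tensor into the matching submodule by name, so `BertPredictionHeadTransform` needs no explicit weight-loading code of its own.
```python
import torch
from transformers import BertForMaskedLM

model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Keys such as "cls.predictions.transform.dense.weight" address the
# BertPredictionHeadTransform instance nested inside the full model;
# the pretrained tensors are copied into it purely by key name.
for name, tensor in model.state_dict().items():
    if "predictions.transform" in name:
        print(name, tuple(tensor.shape))
```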
https://api.github.com/repos/huggingface/transformers/issues/2266 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2266/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2266/comments | https://api.github.com/repos/huggingface/transformers/issues/2266/events | https://github.com/huggingface/transformers/issues/2266 | 541,433,070 | MDU6SXNzdWU1NDE0MzMwNzA= | 2,266 | Imports likely broken in examples | {
"login": "aaugustin",
"id": 788910,
"node_id": "MDQ6VXNlcjc4ODkxMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/788910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaugustin",
"html_url": "https://github.com/aaugustin",
"followers_url": "https://api.github.com/users/aaugustin/followers",
"following_url": "https://api.github.com/users/aaugustin/following{/other_user}",
"gists_url": "https://api.github.com/users/aaugustin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaugustin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaugustin/subscriptions",
"organizations_url": "https://api.github.com/users/aaugustin/orgs",
"repos_url": "https://api.github.com/users/aaugustin/repos",
"events_url": "https://api.github.com/users/aaugustin/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaugustin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@aaugustin it seems like that i met the same problem when i use convert_bertabs_original_pytorch_checkpoint.py ,have you ever fixed it or find any way to make it work.\r\nAppriciate it if you can tell me!",
"I didn't attempt to fix this issue. I merely noticed it while I was working on the overall quality of the `transformers` code base.\r\n\r\nI suspect these modules used to exist in `transformers` and were removed in a refactoring, but I don't know for sure.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,577 | 1,584 | 1,584 | CONTRIBUTOR | null | ## 🐛 Bug
While cleaning up imports with isort, I classified them all. I failed to identify the following four imports:
1. model_bertabs
2. utils_squad
3. utils_squad_evaluate
4. models.model_builder
These modules aren't available on PyPI or in the transformers code repository.
I think they will result in ImportError (I didn't check).
I suspect they used to be in transformers, but they were renamed or removed.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2266/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2266/timeline | completed | null | null |
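A hedged way to verify the "(I didn't check)" part of the record above: attempt each import and report the failure.
```python
import importlib

for name in ["model_bertabs", "utils_squad", "utils_squad_evaluate", "models.model_builder"]:
    try:
        importlib.import_module(name)
        print(name, "imports fine")
    except ImportError as exc:
        print(name, "->", exc)
```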
https://api.github.com/repos/huggingface/transformers/issues/2265 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2265/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2265/comments | https://api.github.com/repos/huggingface/transformers/issues/2265/events | https://github.com/huggingface/transformers/issues/2265 | 541,427,548 | MDU6SXNzdWU1NDE0Mjc1NDg= | 2,265 | Only the Bert model is currently supported | {
"login": "donttal",
"id": 30567352,
"node_id": "MDQ6VXNlcjMwNTY3MzUy",
"avatar_url": "https://avatars.githubusercontent.com/u/30567352?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/donttal",
"html_url": "https://github.com/donttal",
"followers_url": "https://api.github.com/users/donttal/followers",
"following_url": "https://api.github.com/users/donttal/following{/other_user}",
"gists_url": "https://api.github.com/users/donttal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/donttal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donttal/subscriptions",
"organizations_url": "https://api.github.com/users/donttal/orgs",
"repos_url": "https://api.github.com/users/donttal/repos",
"events_url": "https://api.github.com/users/donttal/events{/privacy}",
"received_events_url": "https://api.github.com/users/donttal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You need to give more information. What are you trying to do, what is the code that you use for it, what does not work the way you intended to?",
"> You need to give more information. What are you trying to do, what is the code that you use for it, what does not work the way you intended to?\r\n\r\nI am learning how to use this repo.\r\nI use the example of the official website model2model, the link is as follows\r\nhttps://huggingface.co/transformers/quickstart.html\r\nAnd I use google colab to install the transformer and copy and run the official website code",
"I mean, which error are you getting or what is not working as expected? ",
"> I mean, which error are you getting or what is not working as expected?\r\n\r\nthis is my issue topic\r\nOnly the Bert model is currently supported\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-18-58aac9aa944f> in <module>()\r\n 9 \r\n 10 # Load pre-trained model (weights)\r\n---> 11 model = Model2Model.from_pretrained('fine-tuned-weights')\r\n 12 model.eval()\r\n 13 \r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/modeling_encoder_decoder.py in from_pretrained(cls, pretrained_model_name_or_path, *args, **kwargs)\r\n 281 or \"distilbert\" in pretrained_model_name_or_path\r\n 282 ):\r\n--> 283 raise ValueError(\"Only the Bert model is currently supported.\")\r\n 284 \r\n 285 model = super(Model2Model, cls).from_pretrained(\r\n\r\nValueError: Only the Bert model is currently supported.\r\n```",
"That wasn't clear. In the future, please post the trace so it is clear what your error is, like so:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"C:/Users/bramv/.PyCharm2019.2/config/scratches/scratch_16.py\", line 62, in <module>\r\n model = Model2Model.from_pretrained('fine-tuned-weights')\r\n File \"C:\\Users\\bramv\\.virtualenvs\\semeval-task7-Z5pypsxD\\lib\\site-packages\\transformers\\modeling_encoder_decoder.py\", line 315, in from_pretrained\r\n raise ValueError(\"Only the Bert model is currently supported.\")\r\nValueError: Only the Bert model is currently supported.\r\n```\r\n\r\nThis is not a bug, then of course. In the example, where \"fine-tuned-weights\" is used, you can load your own fine-tuned model. So if you tuned a model and saved it as \"checkpoint.pth\" you can use that.",
"> This is not a bug, then of course. In the example, where \"fine-tuned-weights\" is used, you can load your own fine-tuned model. So if you tuned a model and saved it as \"checkpoint.pth\" you can use that.\r\n\r\nthanks",
"Please close this question."
] | 1,577 | 1,578 | 1,578 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using: Model2Model and BertTokenizer
Language I am using the model on: English
The problem arises when using:
* the official example scripts: (give details)
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
```
# Let's re-use the previous question
question = "Who was Jim Henson?"
encoded_question = tokenizer.encode(question)
question_tensor = torch.tensor([encoded_question])
# This time we try to generate the answer, so we start with an empty sequence
answer = "[CLS]"
encoded_answer = tokenizer.encode(answer, add_special_tokens=False)
answer_tensor = torch.tensor([encoded_answer])
# Load pre-trained model (weights)
model = Model2Model.from_pretrained('fine-tuned-weights')
model.eval()
# If you have a GPU, put everything on cuda
question_tensor = question_tensor.to('cuda')
answer_tensor = answer_tensor.to('cuda')
model.to('cuda')
# Predict all tokens
with torch.no_grad():
    outputs = model(question_tensor, answer_tensor)
    predictions = outputs[0]
# confirm we were able to predict 'jim'
predicted_index = torch.argmax(predictions[0, -1]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
```
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS: Colab
* Python version:
* PyTorch version:
* PyTorch Transformers version (or branch):
* Using GPU ? Yes
* Distributed of parallel setup ? No
* Any other relevant information:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2265/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2265/timeline | completed | null | null |
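A hedged sketch of what the traceback in this record means: at the time, `Model2Model.from_pretrained` rejected any identifier that did not look like a BERT checkpoint before attempting to load anything, so the fix is a name or path that passes that check. The paths below are illustrative.
```python
from transformers import Model2Model

# Accepted: the identifier contains "bert".
model = Model2Model.from_pretrained("bert-base-uncased")

# A fine-tuned checkpoint directory works too, provided its path also
# passes the same substring check, e.g.:
# model = Model2Model.from_pretrained("./my-bert-fine-tuned")
```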
https://api.github.com/repos/huggingface/transformers/issues/2264 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2264/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2264/comments | https://api.github.com/repos/huggingface/transformers/issues/2264/events | https://github.com/huggingface/transformers/pull/2264 | 541,417,303 | MDExOlB1bGxSZXF1ZXN0MzU2MDYyMjQ1 | 2,264 | Fix doc link in README | {
"login": "upura",
"id": 31459778,
"node_id": "MDQ6VXNlcjMxNDU5Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/31459778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/upura",
"html_url": "https://github.com/upura",
"followers_url": "https://api.github.com/users/upura/followers",
"following_url": "https://api.github.com/users/upura/following{/other_user}",
"gists_url": "https://api.github.com/users/upura/gists{/gist_id}",
"starred_url": "https://api.github.com/users/upura/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/upura/subscriptions",
"organizations_url": "https://api.github.com/users/upura/orgs",
"repos_url": "https://api.github.com/users/upura/repos",
"events_url": "https://api.github.com/users/upura/events{/privacy}",
"received_events_url": "https://api.github.com/users/upura/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2264?src=pr&el=h1) Report\n> Merging [#2264](https://codecov.io/gh/huggingface/transformers/pull/2264?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/645713e2cb8307e41febb2b7c9f6036f6645efce?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2264?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2264 +/- ##\n=======================================\n Coverage 78.35% 78.35% \n=======================================\n Files 133 133 \n Lines 19878 19878 \n=======================================\n Hits 15576 15576 \n Misses 4302 4302\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2264?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2264?src=pr&el=footer). Last update [645713e...9d00f78](https://codecov.io/gh/huggingface/transformers/pull/2264?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks @upura!"
] | 1,576 | 1,577 | 1,577 | CONTRIBUTOR | null | close https://github.com/huggingface/transformers/issues/2252
- [x] Update `.circleci/deploy.sh`
- [x] Update `deploy_multi_version_doc.sh`
Set commit hash before "Release: v2.3.0".
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2264/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2264/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2264",
"html_url": "https://github.com/huggingface/transformers/pull/2264",
"diff_url": "https://github.com/huggingface/transformers/pull/2264.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2264.patch",
"merged_at": 1577100661000
} |
https://api.github.com/repos/huggingface/transformers/issues/2263 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2263/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2263/comments | https://api.github.com/repos/huggingface/transformers/issues/2263/events | https://github.com/huggingface/transformers/issues/2263 | 541,416,531 | MDU6SXNzdWU1NDE0MTY1MzE= | 2,263 | BertModel sometimes produces the same output during evaluation | {
"login": "xuesong0309",
"id": 37296256,
"node_id": "MDQ6VXNlcjM3Mjk2MjU2",
"avatar_url": "https://avatars.githubusercontent.com/u/37296256?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xuesong0309",
"html_url": "https://github.com/xuesong0309",
"followers_url": "https://api.github.com/users/xuesong0309/followers",
"following_url": "https://api.github.com/users/xuesong0309/following{/other_user}",
"gists_url": "https://api.github.com/users/xuesong0309/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xuesong0309/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xuesong0309/subscriptions",
"organizations_url": "https://api.github.com/users/xuesong0309/orgs",
"repos_url": "https://api.github.com/users/xuesong0309/repos",
"events_url": "https://api.github.com/users/xuesong0309/events{/privacy}",
"received_events_url": "https://api.github.com/users/xuesong0309/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Did you set a fixed seed? If you want deterministic results, you should set a fixed seed. ",
"> Did you set a fixed seed? If you want deterministic results, you should set a fixed seed.\r\n\r\nNo, I didn't. My problem seems that the model finetuned is totally wrong sometimes. Is it a problem related to random seed?",
"What do you mean by \"bad\"? Post some code to better help you.",
"> What do you mean by \"bad\"? Post some code to better help you.\r\n\r\nFor example, the normal performance is 140.1, but the bad performance is 1.0. As I mentioned before, this situation happened sometimes and I found the word embeddings produced by BertModel were all the same when the performance was bad.\r\n\r\nThe text encoding part of my code is as follows:\r\n \r\n\r\n",
"I think you are accessing the CLS token embeddings and It is constant as the model you are using is trained on MLM objective. ",
"> I think you are accessing the CLS token embeddings and It is constant as the model you are using is trained on MLM objective.\r\n\r\nIf I change the out to self.bert(text)[0][:,1,:], the output is the same as self.bert(text)[0][:,0,:]. It seems I get the same output no matter the input that I put in.",
"I saved the checkpoint when this situation happened. I multiplied all the parameter values of bert by 10 and found the outputs were different, while I divided all parameter values by 10 and found the outputs were almost same. So I think the reason is that the parameter values are too small.",
"@xuesong0309 https://github.com/huggingface/transformers/issues/1465οΌCould you look at this problem?",
"> @xuesong0309 https://github.com/huggingface/transformers/issues/1465οΌCould you look at this problem?\r\n\r\nDid you always get same output? I suggest outputing the parameter values of your model as I mentioned above.",
"> > @xuesong0309 [https://github.com/huggingface/transformers/issues/1465οΌCould](https://github.com/huggingface/transformers/issues/1465%EF%BC%8CCould) you look at this problem?\r\n> \r\n> Did you always get same output? I suggest outputing the parameter values of your model as I mentioned above.\r\n\r\nIt may not be the problem you mentioned, because the model is normal for multi-class classification, and this happens only for multi-label classification.",
"> @xuesong0309 https://github.com/huggingface/transformers/issues/1465οΌCould you look at this problem?\r\n> \r\n> Did you always get same output? I suggest outputing the parameter values of your model as I mentioned above.\r\n> \r\n> It may not be the problem you mentioned, because the model is normal for multi-class classification, and this happens only for multi-label classification.\r\n\r\nYou could open a new issue to describe your problem in detail.",
"Have you solved this issue? I have facing the same issue. Output same results during evaluation.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,576 | 1,585 | 1,585 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I fine-tune BertModel as a part of my model to produce word embeddings. I found that sometimes the performance was very bad; then I ran the code again without any change and the performance was normal. It is very strange. I checked my code to try to find the bug and found that the word embeddings produced by BertModel were all the same. Then I followed the code of BertModel and found that the BertEncoder, which consists of 12 BertLayers, would gradually make the outputs become similar. I have no idea about this situation. (A reproducibility sketch follows this record.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2263/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2263/timeline | completed | null | null |
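A minimal sketch of the reproducibility checklist raised in the comments above: fix the seeds and switch the model to eval mode so dropout cannot randomize the embeddings between runs. The token ids below are illustrative.
```python
import random

import numpy as np
import torch
from transformers import BertModel

seed = 42
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)

model = BertModel.from_pretrained("bert-base-uncased")
model.eval()  # disables dropout, so repeated forward passes match

input_ids = torch.tensor([[101, 7592, 102]])  # [CLS] hello [SEP], illustrative
with torch.no_grad():
    hidden = model(input_ids)[0]
print(hidden.shape)
```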
https://api.github.com/repos/huggingface/transformers/issues/2262 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2262/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2262/comments | https://api.github.com/repos/huggingface/transformers/issues/2262/events | https://github.com/huggingface/transformers/issues/2262 | 541,399,468 | MDU6SXNzdWU1NDEzOTk0Njg= | 2,262 | How to do_predict on run_glue? | {
"login": "azamatolegen",
"id": 57138593,
"node_id": "MDQ6VXNlcjU3MTM4NTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/57138593?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/azamatolegen",
"html_url": "https://github.com/azamatolegen",
"followers_url": "https://api.github.com/users/azamatolegen/followers",
"following_url": "https://api.github.com/users/azamatolegen/following{/other_user}",
"gists_url": "https://api.github.com/users/azamatolegen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/azamatolegen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/azamatolegen/subscriptions",
"organizations_url": "https://api.github.com/users/azamatolegen/orgs",
"repos_url": "https://api.github.com/users/azamatolegen/repos",
"events_url": "https://api.github.com/users/azamatolegen/events{/privacy}",
"received_events_url": "https://api.github.com/users/azamatolegen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"#2198 Same here. A predict script can be really helpful to researchers.",
"same here",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,576 | 1,587 | 1,587 | NONE | null | ## β Questions & Help
I have fine-tuned BERT for a sequence classification task by running the run_glue script, so I now have a trained and evaluated model. My question is: how do I make predictions with it on new instances (a test set)? A minimal prediction sketch follows this record. Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2262/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/2262/timeline | completed | null | null |
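A minimal prediction sketch for the question above. It assumes the fine-tuned model was saved by run_glue.py to `output_dir` and that the task is single-sentence classification; the directory path and the example sentence are placeholders.

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

output_dir = './output'  # placeholder: the --output_dir passed to run_glue.py
tokenizer = BertTokenizer.from_pretrained(output_dir)
model = BertForSequenceClassification.from_pretrained(output_dir)
model.eval()

def predict(text):
    # Tokenize one example; padding is unnecessary for a single sequence.
    inputs = tokenizer.encode_plus(text, add_special_tokens=True, return_tensors='pt')
    with torch.no_grad():
        logits = model(inputs['input_ids'], token_type_ids=inputs.get('token_type_ids'))[0]
    return logits.argmax(dim=-1).item()  # index of the predicted label

print(predict("This movie was surprisingly good."))
```

For a whole test file, applying predict() line by line is the simplest route; the alternative is to duplicate the script's evaluation path with the labels dropped.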
https://api.github.com/repos/huggingface/transformers/issues/2261 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2261/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2261/comments | https://api.github.com/repos/huggingface/transformers/issues/2261/events | https://github.com/huggingface/transformers/issues/2261 | 541,391,565 | MDU6SXNzdWU1NDEzOTE1NjU= | 2,261 | AlbertTokenizer behavior doesn't match docs | {
"login": "jswift24",
"id": 1891204,
"node_id": "MDQ6VXNlcjE4OTEyMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1891204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jswift24",
"html_url": "https://github.com/jswift24",
"followers_url": "https://api.github.com/users/jswift24/followers",
"following_url": "https://api.github.com/users/jswift24/following{/other_user}",
"gists_url": "https://api.github.com/users/jswift24/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jswift24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jswift24/subscriptions",
"organizations_url": "https://api.github.com/users/jswift24/orgs",
"repos_url": "https://api.github.com/users/jswift24/repos",
"events_url": "https://api.github.com/users/jswift24/events{/privacy}",
"received_events_url": "https://api.github.com/users/jswift24/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The encoder itself would automatically add the [CLS] and [SEP] tokens so if you've done that during preprocessing, you would need to change the `add_special_tokens` parameter to `False`. So your code should probably be like this:\r\n\r\n```\r\ninput_ids = tokenizer.encode(input_text, add_special_tokens=False)\r\n```",
"Thanks, but that does not fix the error. The problem is there is no [102] token in the list. Maybe because we're using AlbertTokenizer?\r\n\r\n```\r\ninput_ids = tokenizer.encode(input_text, add_special_tokens=False)\r\nprint(input_ids)\r\n[2, 72, 23, 2170, 27674, 60, 3, 2170, 27674, 23, 21, 2210, 10956, 3]\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,576 | 1,583 | 1,583 | NONE | null | ## π Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): AlbertForQuestionAnswering
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [x] the official example scripts: (give details)
* [ ] my own modified scripts: (give details)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
1. Run the code sample from the docs: https://huggingface.co/transformers/v2.2.0/model_doc/albert.html#albertforquestionanswering
```
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertForQuestionAnswering.from_pretrained('albert-base-v2')
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
input_text = "[CLS] " + question + " [SEP] " + text + " [SEP]"
input_ids = tokenizer.encode(input_text)
token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))]
start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))
all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
print(' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]))
# a nice puppet
```
2. Getting the following error:
---------------------------------------------------------------------------
```
ValueError Traceback (most recent call last)
<ipython-input-16-2185be87fe39> in <module>
5 input_text = "[CLS] " + question + " [SEP] " + text + " [SEP]"
6 input_ids = tokenizer.encode(input_text)
----> 7 token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))]
8 start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))
9 all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
<ipython-input-16-2185be87fe39> in <listcomp>(.0)
5 input_text = "[CLS] " + question + " [SEP] " + text + " [SEP]"
6 input_ids = tokenizer.encode(input_text)
----> 7 token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))]
8 start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))
9 all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
ValueError: 102 is not in list
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
Code says expected output is "a nice puppet"
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS: Kaggle kernel, no GPU
* Python version: 3.6.6
* PyTorch version: 1.3.0
* PyTorch Transformers version (or branch): 2.3.0
* Using GPU ? No
* Distributed of parallel setup ? No
* Any other relevant information:
Some debugging info:
```
print(input_ids)
[2, 2, 72, 23, 2170, 27674, 60, 3, 2170, 27674, 23, 21, 2210, 10956, 3, 3]
tokenizer.decode(input_ids)
'[CLS][CLS] who was jim henson?[SEP] jim henson was a nice puppet[SEP][SEP]'
```
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2261/timeline | completed | null | null |
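As the replies above indicate, the failing lookup comes from hard-coding BERT's [SEP] id (102), while the debug output shows ALBERT's [SEP] id is 3. Below is a sketch of the docs example without hard-coded ids; the question/text pair is taken from the issue, and note that the un-fine-tuned albert-base-v2 checkpoint is not expected to return a meaningful answer span.

```python
import torch
from transformers import AlbertForQuestionAnswering, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertForQuestionAnswering.from_pretrained('albert-base-v2')

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"

# Let the tokenizer insert [CLS]/[SEP] and build token_type_ids itself,
# instead of pasting "[CLS]"/"[SEP]" strings and searching for id 102.
inputs = tokenizer.encode_plus(question, text, add_special_tokens=True, return_tensors='pt')
start_scores, end_scores = model(inputs['input_ids'], token_type_ids=inputs['token_type_ids'])

all_tokens = tokenizer.convert_ids_to_tokens(inputs['input_ids'][0].tolist())
print(' '.join(all_tokens[torch.argmax(start_scores): torch.argmax(end_scores) + 1]))

# If the separator index is ever needed explicitly, prefer the tokenizer's
# own attribute over a literal id: tokenizer.sep_token_id (3 for ALBERT).
print(tokenizer.sep_token_id)
```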
https://api.github.com/repos/huggingface/transformers/issues/2260 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2260/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2260/comments | https://api.github.com/repos/huggingface/transformers/issues/2260/events | https://github.com/huggingface/transformers/pull/2260 | 541,372,520 | MDExOlB1bGxSZXF1ZXN0MzU2MDMwODE3 | 2,260 | Fixing incorrect link in model docstring | {
"login": "Rexhaif",
"id": 5154447,
"node_id": "MDQ6VXNlcjUxNTQ0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5154447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rexhaif",
"html_url": "https://github.com/Rexhaif",
"followers_url": "https://api.github.com/users/Rexhaif/followers",
"following_url": "https://api.github.com/users/Rexhaif/following{/other_user}",
"gists_url": "https://api.github.com/users/Rexhaif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rexhaif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rexhaif/subscriptions",
"organizations_url": "https://api.github.com/users/Rexhaif/orgs",
"repos_url": "https://api.github.com/users/Rexhaif/repos",
"events_url": "https://api.github.com/users/Rexhaif/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rexhaif/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2260?src=pr&el=h1) Report\n> Merging [#2260](https://codecov.io/gh/huggingface/transformers/pull/2260?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/645713e2cb8307e41febb2b7c9f6036f6645efce?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2260?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2260 +/- ##\n=======================================\n Coverage 78.35% 78.35% \n=======================================\n Files 133 133 \n Lines 19878 19878 \n=======================================\n Hits 15576 15576 \n Misses 4302 4302\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2260?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [transformers/modeling\\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/2260/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX21tYnQucHk=) | `18.25% <ΓΈ> (ΓΈ)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2260?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2260?src=pr&el=footer). Last update [645713e...b668a74](https://codecov.io/gh/huggingface/transformers/pull/2260?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks"
] | 1,576 | 1,579 | 1,579 | CONTRIBUTOR | null | The docstring contains a link to the Salesforce/CTRL repo, while the model itself is Facebookresearch/mmbt. It may be a wrong copy/paste. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2260/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2260",
"html_url": "https://github.com/huggingface/transformers/pull/2260",
"diff_url": "https://github.com/huggingface/transformers/pull/2260.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2260.patch",
"merged_at": 1579130739000
} |
https://api.github.com/repos/huggingface/transformers/issues/2259 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2259/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2259/comments | https://api.github.com/repos/huggingface/transformers/issues/2259/events | https://github.com/huggingface/transformers/issues/2259 | 541,367,085 | MDU6SXNzdWU1NDEzNjcwODU= | 2,259 | problem in the doc, in the "Quick Start" GPT2 example | {
"login": "thomasboris",
"id": 59124362,
"node_id": "MDQ6VXNlcjU5MTI0MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/59124362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasboris",
"html_url": "https://github.com/thomasboris",
"followers_url": "https://api.github.com/users/thomasboris/followers",
"following_url": "https://api.github.com/users/thomasboris/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasboris/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasboris/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasboris/subscriptions",
"organizations_url": "https://api.github.com/users/thomasboris/orgs",
"repos_url": "https://api.github.com/users/thomasboris/repos",
"events_url": "https://api.github.com/users/thomasboris/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasboris/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,576 | 1,582 | 1,582 | NONE | null | I am going through the GPT2 example in the docs. Is there a mistake in the "Using the past" code? The main loop to generate text is:
```python
for i in range(100):
print(i)
output, past = model(context, past=past)
token = torch.argmax(output[0, :])
generated += [token.tolist()]
context = token.unsqueeze(0)
```
At the first iteration the output tensor has shape [1, 3, 50257], while at all following iterations it has shape [1, 50257]. Should the code be (see also the branch-free sketch after this record):
```python
for i in range(100):
print(i)
output, past = model(context, past=past)
if i==0:
token = torch.argmax(output[0,-1,:])
else:
token = torch.argmax(output[0, :])
generated += [token.tolist()]
context = token.unsqueeze(0)
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2259/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2259/timeline | completed | null | null |
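The if/else fix above works; the branch can also be avoided by always reading the logits at the last position with an ellipsis index, which covers both the [1, seq, 50257] output of the first step and the [1, 50257] output of later steps. A sketch (the gpt2 checkpoint and the prompt are placeholders):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()

context = torch.tensor([tokenizer.encode("The quick brown fox")])
past = None
generated = []
with torch.no_grad():
    for _ in range(20):
        output, past = model(context, past=past)
        # output[..., -1, :] selects the last position's logits regardless
        # of whether the output still has a leading batch dimension.
        token = torch.argmax(output[..., -1, :])
        generated.append(token.item())
        context = token.unsqueeze(0)

print(tokenizer.decode(generated))
```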
https://api.github.com/repos/huggingface/transformers/issues/2258 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2258/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2258/comments | https://api.github.com/repos/huggingface/transformers/issues/2258/events | https://github.com/huggingface/transformers/issues/2258 | 541,361,605 | MDU6SXNzdWU1NDEzNjE2MDU= | 2,258 | run_ner.py load checkpoint issue | {
"login": "giuliorav",
"id": 33007031,
"node_id": "MDQ6VXNlcjMzMDA3MDMx",
"avatar_url": "https://avatars.githubusercontent.com/u/33007031?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/giuliorav",
"html_url": "https://github.com/giuliorav",
"followers_url": "https://api.github.com/users/giuliorav/followers",
"following_url": "https://api.github.com/users/giuliorav/following{/other_user}",
"gists_url": "https://api.github.com/users/giuliorav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/giuliorav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/giuliorav/subscriptions",
"organizations_url": "https://api.github.com/users/giuliorav/orgs",
"repos_url": "https://api.github.com/users/giuliorav/repos",
"events_url": "https://api.github.com/users/giuliorav/events{/privacy}",
"received_events_url": "https://api.github.com/users/giuliorav/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Can confirm this issue -> a temporary workaround would be to change the line to:\r\n\r\n```python\r\nif os.path.exists(args.model_name_or_path) and \"checkpoint\" in args.model_name_or_path:\r\n```",
"See also related (recent) fix on master: https://github.com/huggingface/transformers/commit/4d36472b96d144887cbe95b083f0d2091fd5ff03",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,576 | 1,587 | 1,587 | NONE | null | Hi,
I just launched the **transformers/examples/run_ner.py** script with my custom model:
`python3 transformers/examples/run_ner.py --data_dir $INPUT_DATA_DIR \
--tokenizer_name $TOKENIZER_FILE_PATH --output_dir $OUTPUT_DIR --model_type camembert --labels $LABELS_DIR --model_name_or_path $BERT_MODEL --max_seq_length $MAX_LENGTH --num_train_epochs $NUM_EPOCHS --gradient_accumulation_steps $ACCUMULATION_STEPS --per_gpu_train_batch_size $BATCH_SIZE --save_steps $SAVE_STEPS --do_lower_case --do_train --do_eval --do_predict`
Once the training data has been loaded, an error appears:
`Traceback (most recent call last):
File "transformers/examples/run_ner.py", line 567, in <module>
main()
File "transformers/examples/run_ner.py", line 496, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer, labels, pad_token_label_id)
File "transformers/examples/run_ner.py", line 132, in train
global_step = int(args.model_name_or_path.split('-')[-1].split('/')[0])
ValueError: invalid literal for int() with base 10: 'pytorch_dump_folder'`
When I launched the same script a few hours ago the error did not appear; is it related to the latest updates (#2134)? A sketch of the guard suggested in the comments follows this record.
Thanks in advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2258/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2258/timeline | completed | null | null |
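A sketch of the guard from the first comment, as it might replace the failing line inside train() in run_ner.py (args is the script's parsed argument namespace; the exact placement is an assumption based on the traceback above):

```python
import os

# Recover the optimizer step only when resuming from a real checkpoint
# directory such as .../checkpoint-500; a plain model directory like
# 'pytorch_dump_folder' has no step suffix and should start from 0.
if os.path.exists(args.model_name_or_path) and "checkpoint" in args.model_name_or_path:
    global_step = int(args.model_name_or_path.split("-")[-1].split("/")[0])
else:
    global_step = 0
```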
https://api.github.com/repos/huggingface/transformers/issues/2257 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2257/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2257/comments | https://api.github.com/repos/huggingface/transformers/issues/2257/events | https://github.com/huggingface/transformers/issues/2257 | 541,360,019 | MDU6SXNzdWU1NDEzNjAwMTk= | 2,257 | HuggingFace transformers documentation webpage is blank? | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Having the same issue.",
"May be related to this:\r\nhttps://twitter.com/Thom_Wolf/status/1208365367493636096\r\n",
"Are you still having an issue or is it fixed (if it is, please close the issue)?",
"Hello,\r\nI am still having the same issue.\r\n",
"Ok, we are in the process of fixing it. Thanks for the report",
"It works now, thank you for the help"
] | 1,576 | 1,577 | 1,577 | NONE | null | Hello,
Is HuggingFace updating their transformers documentation site (https://huggingface.co/transformers/)?
I looked there to get some information about the HuggingFace GPT-2, but for some reason all the contents of the website are gone.
Thank you,
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2257/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2257/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2256 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2256/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2256/comments | https://api.github.com/repos/huggingface/transformers/issues/2256/events | https://github.com/huggingface/transformers/issues/2256 | 541,358,601 | MDU6SXNzdWU1NDEzNTg2MDE= | 2,256 | Untrainable dense layer in TFBert. "WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model/bert/pooler/dense/kernel:0', 'tf_bert_model/bert/pooler/dense/bias:0'] when minimizing the loss." | {
"login": "Santosh-Gupta",
"id": 5524261,
"node_id": "MDQ6VXNlcjU1MjQyNjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5524261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Santosh-Gupta",
"html_url": "https://github.com/Santosh-Gupta",
"followers_url": "https://api.github.com/users/Santosh-Gupta/followers",
"following_url": "https://api.github.com/users/Santosh-Gupta/following{/other_user}",
"gists_url": "https://api.github.com/users/Santosh-Gupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Santosh-Gupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Santosh-Gupta/subscriptions",
"organizations_url": "https://api.github.com/users/Santosh-Gupta/orgs",
"repos_url": "https://api.github.com/users/Santosh-Gupta/repos",
"events_url": "https://api.github.com/users/Santosh-Gupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/Santosh-Gupta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"After looking at \r\n\r\nhttps://github.com/huggingface/transformers/issues/1727\r\n\r\nI figured out we're getting the warning because we're not using the pooler, therefore it gets no updates. ",
"@Santosh-Gupta can you please update your colab notebook with correct version. i am unbale to get this resolved",
"Hi, I have exactly the same issue with TFXLMRobertaForSequenceClassification. How did you solve the issue ?",
"> Hi, I have exactly the same issue with TFXLMRobertaForSequenceClassification. How did you solve the issue ?\r\n\r\nFrom what understand, this is not a bug. You won't get gradients calculated for variables (kernel and bias) of the layer tf_bert_model/bert/pooler/dense if you don't use it. As such, if you indeed don't use the pooler, you can simply ignore this warning. ",
"> @Santosh-Gupta can you please update your colab notebook with correct version. i am unbale to get this resolved\r\n\r\nIt's not a bug, the forward pass doesn't go through the pooler, so the backwards pass doesn't go through it either. ",
"I'm having the same issue, although I am using the pooling layer. \r\nmy model is like this one \r\n```\r\nclass TFAlbertForNaturalQuestionAnswering(TFAlbertPreTrainedModel):\r\n def __init__(self, config, *inputs, **kwargs):\r\n super().__init__(config, *inputs, **kwargs)\r\n \r\n self.albert = TFAlbertMainLayer(config)\r\n\r\n self.initializer = get_initializer(config.initializer_range)\r\n self.start = tf.keras.layers.Dense(1,\r\n kernel_initializer=self.initializer, name='start')\r\n self.end = tf.keras.layers.Dense(1,\r\n kernel_initializer=self.initializer, name='end')\r\n self.long_outputs = tf.keras.layers.Dense(1, kernel_initializer=self.initializer,\r\n name='long')\r\n\r\n self.answerable = tf.keras.layers.Dense(1, kernel_initializer=self.initializer,\r\n name='answerable', activation = \"sigmoid\")\r\n def call(self, inputs, **kwargs):\r\n outputs = self.albert(inputs, **kwargs)\r\n sequence_output = outputs[0]\r\n \r\n # tf.print(outputs[0].shape) (batch, len->0, hidden) 1->0\r\n # tf.print(outputs[1].shape) (batch, hidden_size)\r\n\r\n start_logits = tf.squeeze(self.start(sequence_output), -1)\r\n end_logits = tf.squeeze(self.end(sequence_output), -1)\r\n long_logits = tf.squeeze(self.long_outputs(sequence_output), -1)\r\n\r\n answerable = tf.squeeze(self.answerable(outputs[1]), -1)\r\n```",
"Hey ! i'm getting the same issue with attention model and embedding layer, the weights of both layers are not updating .\r\n\r\n```\r\n\r\nembd=Embedding(input_dim=len(vocab),output_dim=100,name=\"embd\")\r\nlstm1=Bidirectional(LSTM(units=100,return_sequences=True,name=\"lstm1\"),name=\"bd1\")\r\nlstm2=Bidirectional(LSTM(units=100,return_sequences=True,name=\"lstm2\"),name=\"bd2\")\r\nattention_layer=Attention_Model(21,200)\r\ndense1=Dense(units=80,name=\"dense1\",kernel_regularizer=\"l2\")\r\ndropout1=Dropout(0.5)\r\nact1=Activation('sigmoid')\r\n\r\ndense2=Dense(units=50,name=\"dense2\",kernel_regularizer=\"l2\")\r\ndropout2=Dropout(0.4)\r\nact2=Activation('sigmoid')\r\n\r\ndense3=Dense(units=30,name=\"dense3\",kernel_regularizer=\"l2\")\r\ndropout3=Dropout(0.3)\r\nact3=Activation('sigmoid')\r\n\r\ndense4=Dense(units=len(classes),name=\"dense4\")\r\ndropout4=Dropout(0.2)\r\noutput=Activation('softmax')\r\n\r\n```\r\nForward Pass : \r\n\r\n```\r\ndef forward_pass(X):\r\n t=embd(X)\r\n \r\n t=lstm1(t)\r\n \r\n t=lstm2(t)\r\n \r\n\r\n \r\n t=attention_layer(t)\r\n \r\n \r\n t=dense1(t)\r\n t=dropout1(t)\r\n t=act1(t)\r\n\r\n t=dense2(t)\r\n t=dropout2(t)\r\n t=act2(t)\r\n\r\n t=dense3(t)\r\n t=dropout3(t)\r\n t=act3(t)\r\n \r\n t=dense4(t)\r\n t=dropout4(t)\r\n t=output(t)\r\n\r\n return t\r\n\r\n\r\n```\r\n\r\nAttention Model : \r\n\r\n```\r\n\r\nclass Attention_Model():\r\n def __init__(self,seq_length,units):\r\n self.seq_length=seq_length\r\n self.units=units\r\n self.lstm=LSTM(units=units,return_sequences=True,return_state=True)\r\n \r\n\r\n def get_lstm_s(self,seq_no):\r\n input_lstm=tf.expand_dims(tf.reduce_sum(self.X*(self.alphas[:,:,seq_no:seq_no+1]),axis=1),axis=1)\r\n a,b,c=self.lstm(input_lstm)\r\n self.output[:,seq_no,:]=a[:,0,:]\r\n\r\n return b\r\n\r\n def __call__(self,X):\r\n self.X=X\r\n self.output=np.zeros(shape=(self.X.shape[0],self.seq_length,self.units))\r\n self.dense=Dense(units=self.seq_length)\r\n self.softmax=Softmax(axis=1)\r\n \r\n\r\n for i in range(self.seq_length+1):\r\n if i==0 :\r\n s=np.zeros(shape=(self.X.shape[0],self.units))\r\n else :\r\n s=self.get_lstm_s(i-1)\r\n if(i==self.seq_length):\r\n break \r\n \r\n s=RepeatVector(self.X.shape[1])(s)\r\n concate_X=np.concatenate([self.X,s],axis=-1)\r\n \r\n self.alphas=self.softmax(self.dense(concate_X))\r\n\r\n return self.output\r\n \r\n```\r\n\r\n\r\nis anything wrong with implementation or something else ?",
"@MarioBonse , your forward pass isn't going through the pooling layer\r\n\r\n'''sequence_output = outputs[0]'''\r\n\r\n@gajeshladhar \r\n\r\nIs that code from the hf library? Where are the classes defined? \r\n",
"pooler_output of transformers TFRobertaModel have tf_roberta_model/roberta/pooler/dense/kernel:0. if you have not use pooler_output,tf_roberta_model/roberta/pooler/dense/kernel:0 do not update Gradients"
] | 1,576 | 1,634 | 1,576 | CONTRIBUTOR | null | ## π Bug
<!-- Important information -->
I am getting
> WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model/bert/pooler/dense/kernel:0', 'tf_bert_model/bert/pooler/dense/bias:0'] when minimizing the loss.
> WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model/bert/pooler/dense/kernel:0', 'tf_bert_model/bert/pooler/dense/bias:0'] when minimizing the loss.
> WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model/bert/pooler/dense/kernel:0', 'tf_bert_model/bert/pooler/dense/bias:0'] when minimizing the loss.
> WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model/bert/pooler/dense/kernel:0', 'tf_bert_model/bert/pooler/dense/bias:0'] when minimizing the loss.
These warnings appear when using the TensorFlow BERT model; a sketch that routes the loss through the pooler follows this record.
For convenience, here is a Colab notebook that reproduces the error:
https://colab.research.google.com/drive/1fh8l43Mm7g4-yjlun7nTilmuP0SOMOx-
It looks like there's an operation in the TF BERT model that does not allow gradients to flow, judging from googling this issue:
https://github.com/tensorflow/probability/issues/467
https://github.com/tensorflow/tensorflow/issues/27949
https://stackoverflow.com/questions/55434653/batch-normalization-doesnt-have-gradient-in-tensorflow-2-0
https://stackoverflow.com/questions/57144586/tensorflow-gradienttape-gradients-does-not-exist-for-variables-intermittently
Model I am using (Bert, XLNet....):
TFBertModel
Language I am using the model on (English, Chinese....):
English
The problem arise when using:
* [ ] the official example scripts: (give details)
* [X ] my own modified scripts: (give details)
```
!pip install transformers --quiet
%tensorflow_version 2.x
try:
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing import sequence
import os
from tensorflow import keras
from tensorflow.keras.layers import Lambda
from tensorflow.keras import backend as K
from keras.preprocessing.sequence import pad_sequences
from transformers import TFBertModel, BertTokenizer, BertConfig
import numpy as np
from glob import glob
from tqdm import tqdm_notebook
print('TensorFlow:', tf.__version__)
posCites = tf.random.uniform(shape=(3, 4, 768), minval=-1, maxval=1, dtype=tf.dtypes.float32)
negCites = tf.random.uniform(shape=(3, 16, 768), minval=-1, maxval=1, dtype=tf.dtypes.float32)
textInputsIds = tf.random.uniform(shape=(3, 8), minval=0, maxval=200, dtype=tf.dtypes.int32)
dataset = (textInputsIds, posCites, negCites)
batch_size = 3
post_size = 4
neg_size = 16
posLabels = keras.backend.ones(batch_size*post_size)
negLabels = keras.backend.zeros(batch_size*neg_size)
totalLabels = keras.backend.concatenate((posLabels, negLabels), axis=-1)
totalLabels = tf.convert_to_tensor([[totalLabels] * 3])
totalLabels = tf.squeeze(totalLabels)
model = TFBertModel.from_pretrained('bert-base-uncased')
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
loss='binary_crossentropy',
metrics=['acc'])
labels = tf.constant(np.array([1,0,1]))
model(textInputsIds)[0]
def create_model():
textInputs = tf.keras.Input(shape=(8,), dtype=tf.int32)
bert_model = TFBertModel.from_pretrained('bert-base-uncased')
textOut = bert_model(textInputs)
textOutMean = tf.reduce_mean(textOut[0], axis=1)
logits = tf.reduce_sum(textOutMean, axis=-1)
return tf.keras.Model(inputs=[textInputs], outputs=[logits])
model = create_model()
# model = TFBertModel.from_pretrained('bert-base-uncased')
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
loss='binary_crossentropy',
metrics=['acc'])
labels = tf.constant(np.array([1,0,1]))
model.fit(textInputsIds, labels, epochs=100)
```
WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model_1/bert/pooler/dense/kernel:0', 'tf_bert_model_1/bert/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model_1/bert/pooler/dense/kernel:0', 'tf_bert_model_1/bert/pooler/dense/bias:0'] when minimizing the loss.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ X] my own task or dataset: (give details)
Using Bert as a text encoder, and matching output embeddings with a target.
## To Reproduce
Steps to reproduce the behavior:
Here is a Colab notebook which contains the code posted above, to recreate the error:
https://colab.research.google.com/drive/1fh8l43Mm7g4-yjlun7nTilmuP0SOMOx-
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
Gradients should be calculated for all variables in TFBert that are used in the forward pass.
## Environment
* OS: Linux
* Python version: 3+
* PyTorch version: 1.2+
* PyTorch Transformers version (or branch): same as Pip install
* Using GPU ? colab gpu
* Distributed of parallel setup ? no
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
Possibly related issues
https://github.com/huggingface/transformers/issues/1727
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2256/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2256/timeline | completed | null | null |
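Consistent with the resolution in the comments above, the warning disappears once the pooled output actually feeds the loss. A minimal sketch (the classification head is a placeholder, not part of the issue's model):

```python
import tensorflow as tf
from transformers import TFBertModel

bert = TFBertModel.from_pretrained('bert-base-uncased')

text_inputs = tf.keras.Input(shape=(8,), dtype=tf.int32)
sequence_output, pooled_output = bert(text_inputs)

# Routing the loss through pooled_output gives the pooler's kernel/bias
# gradients; using only sequence_output leaves them untrained, which is
# exactly what the warning reports, and it is harmless in that case.
logits = tf.keras.layers.Dense(1, activation='sigmoid')(pooled_output)
model = tf.keras.Model(inputs=text_inputs, outputs=logits)
model.compile(optimizer='adam', loss='binary_crossentropy')
```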
https://api.github.com/repos/huggingface/transformers/issues/2255 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2255/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2255/comments | https://api.github.com/repos/huggingface/transformers/issues/2255/events | https://github.com/huggingface/transformers/pull/2255 | 541,334,908 | MDExOlB1bGxSZXF1ZXN0MzU2MDA2NDU1 | 2,255 | Implement some Python best practices | {
"login": "aaugustin",
"id": 788910,
"node_id": "MDQ6VXNlcjc4ODkxMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/788910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaugustin",
"html_url": "https://github.com/aaugustin",
"followers_url": "https://api.github.com/users/aaugustin/followers",
"following_url": "https://api.github.com/users/aaugustin/following{/other_user}",
"gists_url": "https://api.github.com/users/aaugustin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaugustin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaugustin/subscriptions",
"organizations_url": "https://api.github.com/users/aaugustin/orgs",
"repos_url": "https://api.github.com/users/aaugustin/repos",
"events_url": "https://api.github.com/users/aaugustin/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaugustin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is in reasonably good shape. There are two tasks left:\r\n\r\n1. Figure out why isort doesn't behave the same locally and on Circle CI. Most likely this has to do with how it classifies first-party / third-party / unknown libraries. Then enable it on Circle CI. **EDIT - fixed** - this was a matter of installing optional dependencies, not listed in setup.py, on Circle CI so that isort can classify them correctly.\r\n2. Fix flake8 F841 warnings and stop ignoring them.\r\n\r\nAssuming tests pass, I think it would be best to merge this PR and deal with these two items later. I'd like to do the repository structure changes first, so we're done with the large changes.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2255?src=pr&el=h1) Report\n> Merging [#2255](https://codecov.io/gh/huggingface/transformers/pull/2255?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/645713e2cb8307e41febb2b7c9f6036f6645efce?src=pr&el=desc) will **decrease** coverage by `0.24%`.\n> The diff coverage is `44.23%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2255?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2255 +/- ##\n==========================================\n- Coverage 78.35% 78.11% -0.25% \n==========================================\n Files 133 133 \n Lines 19878 19655 -223 \n==========================================\n- Hits 15576 15354 -222 \n+ Misses 4302 4301 -1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2255?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2255/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3hsbS5weQ==) | `90.44% <ΓΈ> (-0.04%)` | :arrow_down: |\n| [transformers/configuration\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2255/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25feGxuZXQucHk=) | `93.47% <ΓΈ> (-0.4%)` | :arrow_down: |\n| [transformers/modeling\\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/2255/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX21tYnQucHk=) | `18.25% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/2255/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsY2FyZC5weQ==) | `87.8% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [transformers/tests/modeling\\_bert\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2255/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2JlcnRfdGVzdC5weQ==) | `98% <ΓΈ> (-0.02%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2255/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3JvYmVydGEucHk=) | `90.9% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_ctrl\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2255/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2N0cmxfdGVzdC5weQ==) | `95.74% <ΓΈ> (-0.18%)` | :arrow_down: |\n| [transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2255/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fb3BlbmFpLnB5) | `97.22% <ΓΈ> (-0.22%)` | :arrow_down: |\n| [transformers/tests/tokenization\\_xlm\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2255/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl94bG1fdGVzdC5weQ==) | `82.22% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [transformers/commands/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/2255/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbW1hbmRzL19faW5pdF9fLnB5) | `0% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| ... and [237 more](https://codecov.io/gh/huggingface/transformers/pull/2255/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2255?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2255?src=pr&el=footer). Last update [645713e...c11b3e2](https://codecov.io/gh/huggingface/transformers/pull/2255?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"`pyproject.toml` is supposed to be The Future (tm). `setup.cfg`was supposed to be The Future as well. That didn't quite work, but it gained some support.\r\n\r\nUnfortunately, in the Python packaging ecosystem, some futures never became the present. That's why I'm conservative, at least until this changes: https://packaging.python.org/specifications/distribution-formats/\r\n\r\nI'm happy to try converting setup.py to pyproject.toml if you're feeling adventurous. Let me know.\r\n\r\nWe can move the isort configuration there, but not the flake8 configuration until [this PR](https://gitlab.com/pycqa/flake8/issues/428) is merged. I like setup.cfg because we can put both in the same file.",
"Alright thanks for the context!"
] | 1,576 | 1,577 | 1,577 | CONTRIBUTOR | null | Improve source code quality with black, isort & flake8. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2255/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2255/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2255",
"html_url": "https://github.com/huggingface/transformers/pull/2255",
"diff_url": "https://github.com/huggingface/transformers/pull/2255.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2255.patch",
"merged_at": 1577028672000
} |
https://api.github.com/repos/huggingface/transformers/issues/2254 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2254/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2254/comments | https://api.github.com/repos/huggingface/transformers/issues/2254/events | https://github.com/huggingface/transformers/pull/2254 | 541,332,443 | MDExOlB1bGxSZXF1ZXN0MzU2MDA0NzE5 | 2,254 | adding positional embeds masking to TFRoBERTa | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2254?src=pr&el=h1) Report\n> Merging [#2254](https://codecov.io/gh/huggingface/transformers/pull/2254?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/73f6e9817c744caae0b73fa343ceaf95ba76f9f8?src=pr&el=desc) will **increase** coverage by `1.46%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2254?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2254 +/- ##\n==========================================\n+ Coverage 77.28% 78.75% +1.46% \n==========================================\n Files 133 131 -2 \n Lines 19872 19742 -130 \n==========================================\n+ Hits 15358 15547 +189 \n+ Misses 4514 4195 -319\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2254?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2254/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3JvYmVydGEucHk=) | `90.9% <100%> (+0.47%)` | :arrow_up: |\n| [transformers/modeling\\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/2254/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX21tYnQucHk=) | | |\n| [transformers/configuration\\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/2254/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fbW1idC5weQ==) | | |\n| [transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2254/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3BpcGVsaW5lcy5weQ==) | `67.94% <0%> (+0.58%)` | :arrow_up: |\n| [transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2254/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `64.29% <0%> (+0.71%)` | :arrow_up: |\n| [transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2254/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `81.51% <0%> (+1.32%)` | :arrow_up: |\n| [transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2254/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `96.47% <0%> (+2.2%)` | :arrow_up: |\n| [transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2254/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `73.72% <0%> (+2.29%)` | :arrow_up: |\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2254/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `64.42% <0%> (+10.09%)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2254/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `94.39% <0%> (+17.24%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/2254/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2254?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2254?src=pr&el=footer). Last update [73f6e98...77676c2](https://codecov.io/gh/huggingface/transformers/pull/2254?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Yeepee!"
] | 1,576 | 1,651 | 1,576 | MEMBER | null | Adding positional embeds masking to TFRoBERTa following its addition to the PT model in #1764 to fix PT <=> TF equivalence test | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2254/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2254",
"html_url": "https://github.com/huggingface/transformers/pull/2254",
"diff_url": "https://github.com/huggingface/transformers/pull/2254.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2254.patch",
"merged_at": 1576938803000
} |
https://api.github.com/repos/huggingface/transformers/issues/2253 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2253/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2253/comments | https://api.github.com/repos/huggingface/transformers/issues/2253/events | https://github.com/huggingface/transformers/issues/2253 | 541,314,350 | MDU6SXNzdWU1NDEzMTQzNTA= | 2,253 | bias weights not used in T5Model | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, all the layer norms in T5 have no bias (so we keep the default value of 0)"
] | 1,576 | 1,578 | 1,578 | COLLABORATOR | null | ## π Bug
Running the T5Model on v2.3.0 can show an info message saying that the layer norm bias weights are not used:
> Weights from pretrained model not used in T5Model: ['encoder.block.0.layer.0.layer_norm.bias', 'encoder.block.0.layer.1.layer_norm.bias', 'encoder.block.1.layer.0.layer_norm.bias', 'encoder.block.1.layer.1.layer_norm.bias', 'encoder.block.2.layer.0.layer_norm.bias', 'encoder.block.2.layer.1.layer_norm.bias', 'encoder.block.3.layer.0.layer_norm.bias', 'encoder.block.3.layer.1.layer_norm.bias', 'encoder.block.4.layer.0.layer_norm.bias', 'encoder.block.4.layer.1.layer_norm.bias', 'encoder.block.5.layer.0.layer_norm.bias', 'encoder.block.5.layer.1.layer_norm.bias', 'encoder.final_layer_norm.bias', 'decoder.block.0.layer.0.layer_norm.bias', 'decoder.block.0.layer.1.layer_norm.bias', 'decoder.block.0.layer.2.layer_norm.bias', 'decoder.block.1.layer.0.layer_norm.bias', 'decoder.block.1.layer.1.layer_norm.bias', 'decoder.block.1.layer.2.layer_norm.bias', 'decoder.block.2.layer.0.layer_norm.bias', 'decoder.block.2.layer.1.layer_norm.bias', 'decoder.block.2.layer.2.layer_norm.bias', 'decoder.block.3.layer.0.layer_norm.bias', 'decoder.block.3.layer.1.layer_norm.bias', 'decoder.block.3.layer.2.layer_norm.bias', 'decoder.block.4.layer.0.layer_norm.bias', 'decoder.block.4.layer.1.layer_norm.bias', 'decoder.block.4.layer.2.layer_norm.bias', 'decoder.block.5.layer.0.layer_norm.bias', 'decoder.block.5.layer.1.layer_norm.bias', 'decoder.block.5.layer.2.layer_norm.bias', 'decoder.final_layer_norm.bias']
I think these are to be expected (and that this is actually not a bug), but I'm not sure. https://github.com/huggingface/transformers/issues/180#issuecomment-453937845 mentions that in cases like this an additional message could be shown to indicate whether this is expected behaviour or not, but that has not been implemented here. A sketch of a bias-free layer norm of the kind T5 uses follows this record.
```python
from transformers import T5Model
import logging
logging.basicConfig(format='%(asctime)s - [%(levelname)s]: %(message)s',
datefmt='%d-%b %H:%M:%S',
level=logging.INFO)
model = T5Model.from_pretrained('t5-small')
for name, _ in model.named_parameters():
print(name)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2253/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2253/timeline | completed | null | null |
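Consistent with the answer above that all the layer norms in T5 have no bias, the unused `.bias` tensors in the checkpoint simply have no matching parameter to load into. A minimal sketch of a scale-only layer norm of this kind (an illustration under that assumption, not the library's exact class):

```python
import torch
import torch.nn as nn

class BiasFreeLayerNorm(nn.Module):
    """Scale-only normalization: a weight vector but no bias term."""
    def __init__(self, hidden_size, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.eps = eps

    def forward(self, x):
        # Normalize by the root mean square over the last dimension.
        variance = x.pow(2).mean(-1, keepdim=True)
        return self.weight * x / torch.sqrt(variance + self.eps)

x = torch.randn(2, 4, 8)
print(BiasFreeLayerNorm(8)(x).shape)  # torch.Size([2, 4, 8])
```

Because the module defines only `weight`, loading a checkpoint that also carries `bias` keys produces exactly the "weights not used" message quoted above.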
https://api.github.com/repos/huggingface/transformers/issues/2252 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2252/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2252/comments | https://api.github.com/repos/huggingface/transformers/issues/2252/events | https://github.com/huggingface/transformers/issues/2252 | 541,312,616 | MDU6SXNzdWU1NDEzMTI2MTY= | 2,252 | Documentation link broken | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,576 | 1,577 | 1,577 | COLLABORATOR | null | The README shows strange formatting (where 'Documentation' is put between brackets) for the links to documentation. More importantly, the link to the v2.3.0 documentation is broken (404 not found).
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2252/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2252/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2251 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2251/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2251/comments | https://api.github.com/repos/huggingface/transformers/issues/2251/events | https://github.com/huggingface/transformers/issues/2251 | 541,301,296 | MDU6SXNzdWU1NDEzMDEyOTY= | 2,251 | AttributeError: 'Sst2Processor' object has no attribute 'tfds_map' | {
"login": "abb4s",
"id": 7654832,
"node_id": "MDQ6VXNlcjc2NTQ4MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7654832?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abb4s",
"html_url": "https://github.com/abb4s",
"followers_url": "https://api.github.com/users/abb4s/followers",
"following_url": "https://api.github.com/users/abb4s/following{/other_user}",
"gists_url": "https://api.github.com/users/abb4s/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abb4s/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abb4s/subscriptions",
"organizations_url": "https://api.github.com/users/abb4s/orgs",
"repos_url": "https://api.github.com/users/abb4s/repos",
"events_url": "https://api.github.com/users/abb4s/events{/privacy}",
"received_events_url": "https://api.github.com/users/abb4s/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@abb4s A workaround would be downgrading `transformers` to `2.2.0`. It worked for me that way.",
"Indeed, this was an error introduced by #1548. It was patched by 1efc208. Thank you for raising this issue! ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,576 | 1,584 | 1,584 | NONE | null | ## π Bug
<!-- Important information -->
Hey, I just wanted to test BERT on SST-2. I have changed the official example script to this:
```
import tensorflow as tf
import tensorflow_datasets
from transformers import *
# Load dataset, tokenizer, model from pretrained model/vocabulary
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')
data = tensorflow_datasets.load('glue/sst2')
# Prepare dataset for GLUE as a tf.data.Dataset instance
train_dataset = glue_convert_examples_to_features(data['train'], tokenizer, max_length=128, task='sst-2')
valid_dataset = glue_convert_examples_to_features(data['validation'], tokenizer, max_length=128, task='sst-2')
train_dataset = train_dataset.shuffle(100).batch(32).repeat(2)
valid_dataset = valid_dataset.batch(64)
# Prepare training: Compile tf.keras model with optimizer, loss and learning rate schedule
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
# Train and evaluate using tf.keras.Model.fit()
history = model.fit(train_dataset, epochs=2, steps_per_epoch=115,
validation_data=valid_dataset, validation_steps=7)
# Load the TensorFlow model in PyTorch for inspection
model.save_pretrained('./save/')
pytorch_model = BertForSequenceClassification.from_pretrained('./save/', from_tf=True)
# Quickly test a prediction - SST-2 is a sentiment task, let's see if our model learned it
sentence_0 = "I didn't think this was as absolutely horrible."
inputs_0 = tokenizer.encode_plus(sentence_0, add_special_tokens=True, return_tensors='pt')
pred_0 = pytorch_model(inputs_0['input_ids'], token_type_ids=inputs_0['token_type_ids'])[0].argmax().item()
print("sentence_0 prediction is", pred_0)
```
I'm using Google Colab with TensorFlow 2.0. The error is:
AttributeError: 'Sst2Processor' object has no attribute 'tfds_map'
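Until a fix lands, pinning the previous release reportedly works as a stopgap:
```
pip install transformers==2.2.0
```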
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2251/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2251/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2250 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2250/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2250/comments | https://api.github.com/repos/huggingface/transformers/issues/2250/events | https://github.com/huggingface/transformers/issues/2250 | 541,297,134 | MDU6SXNzdWU1NDEyOTcxMzQ= | 2,250 | Four tests fail when running the full test suite | {
"login": "aaugustin",
"id": 788910,
"node_id": "MDQ6VXNlcjc4ODkxMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/788910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaugustin",
"html_url": "https://github.com/aaugustin",
"followers_url": "https://api.github.com/users/aaugustin/followers",
"following_url": "https://api.github.com/users/aaugustin/following{/other_user}",
"gists_url": "https://api.github.com/users/aaugustin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaugustin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaugustin/subscriptions",
"organizations_url": "https://api.github.com/users/aaugustin/orgs",
"repos_url": "https://api.github.com/users/aaugustin/repos",
"events_url": "https://api.github.com/users/aaugustin/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaugustin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The first two are easy fixes. I put fixes in the test parallelization PR.\r\n\r\nThe last two are likely the same bug, but I'm out of my depth there.",
"I guess this has been fixed by now"
] | 1,576 | 1,578 | 1,578 | CONTRIBUTOR | null | ## 🐛 Bug
```
RUN_SLOW=1 python -m unittest discover -s transformers/tests -p '*_test.py' -t . -v
```
```
======================================================================
ERROR: test_model_from_pretrained (transformers.tests.modeling_tf_albert_test.TFAlbertModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File ".../transformers/transformers/configuration_utils.py", line 160, in from_pretrained
config = cls.from_json_file(resolved_config_file)
File ".../transformers/transformers/configuration_utils.py", line 213, in from_json_file
with open(json_file, "r", encoding='utf-8') as reader:
FileNotFoundError: [Errno 2] No such file or directory: '/var/folders/pq/hzv7wgqs5fq0hf1bwzy4mlzr0000gn/T/transformers_test/5b4c66df217ea00b14f607787de616bbff332ae36147a92cd94219160006685a'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File ".../transformers/transformers/tests/modeling_tf_albert_test.py", line 221, in test_model_from_pretrained
model = TFAlbertModel.from_pretrained(model_name, cache_dir=CACHE_DIR)
File ".../transformers/transformers/modeling_tf_utils.py", line 249, in from_pretrained
**kwargs
File ".../transformers/transformers/configuration_utils.py", line 173, in from_pretrained
raise EnvironmentError(msg)
OSError: Model name 'albert-base-uncased' was not found in model name list (albert-xxlarge-v2, albert-large-v1, albert-xlarge-v1, albert-base-v2, albert-base-v1, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v1). We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/albert-base-uncased/config.json' was a path or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url.
======================================================================
ERROR: test_model_from_pretrained (transformers.tests.modeling_tf_xlm_test.TFXLMModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File ".../transformers/transformers/tests/modeling_tf_xlm_test.py", line 255, in test_model_from_pretrained
model = XLMModel.from_pretrained(model_name, cache_dir=CACHE_DIR)
NameError: name 'XLMModel' is not defined
======================================================================
FAIL: test_inference_masked_lm (transformers.tests.modeling_roberta_test.RobertaModelIntegrationTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File ".../transformers/transformers/tests/modeling_roberta_test.py", line 227, in test_inference_masked_lm
torch.allclose(output[:, :3, :3], expected_slice, atol=1e-3)
AssertionError: False is not true
======================================================================
FAIL: test_inference_masked_lm (transformers.tests.modeling_tf_roberta_test.TFRobertaModelIntegrationTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File ".../transformers/transformers/tests/modeling_tf_roberta_test.py", line 220, in test_inference_masked_lm
numpy.allclose(output[:, :3, :3].numpy(), expected_slice.numpy(), atol=1e-3)
AssertionError: False is not true
----------------------------------------------------------------------
```
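For what it's worth, the first two failures look like simple naming slips. A hypothetical repro outside the suite (the shortcut and class names are my guesses from the error messages):
```
from transformers import TFAlbertModel, TFXLMModel

# 'albert-base-uncased' is not a released shortcut; any name from the
# error's list loads fine, e.g.:
albert = TFAlbertModel.from_pretrained("albert-base-v1")

# modeling_tf_xlm_test.py references XLMModel (the PyTorch class), which is
# undefined there; the TF equivalent would be:
xlm = TFXLMModel.from_pretrained("xlm-mlm-en-2048")
```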
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2250/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2250/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2249 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2249/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2249/comments | https://api.github.com/repos/huggingface/transformers/issues/2249/events | https://github.com/huggingface/transformers/pull/2249 | 541,290,042 | MDExOlB1bGxSZXF1ZXN0MzU1OTc1MDk0 | 2,249 | bert+lstm+crf | {
"login": "michael-wzhu",
"id": 35124505,
"node_id": "MDQ6VXNlcjM1MTI0NTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/35124505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michael-wzhu",
"html_url": "https://github.com/michael-wzhu",
"followers_url": "https://api.github.com/users/michael-wzhu/followers",
"following_url": "https://api.github.com/users/michael-wzhu/following{/other_user}",
"gists_url": "https://api.github.com/users/michael-wzhu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michael-wzhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michael-wzhu/subscriptions",
"organizations_url": "https://api.github.com/users/michael-wzhu/orgs",
"repos_url": "https://api.github.com/users/michael-wzhu/repos",
"events_url": "https://api.github.com/users/michael-wzhu/events{/privacy}",
"received_events_url": "https://api.github.com/users/michael-wzhu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2249?src=pr&el=h1) Report\n> Merging [#2249](https://codecov.io/gh/huggingface/transformers/pull/2249?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ac1b449cc938bb34bc9021feff599cfd3b2376ae?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2249?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2249 +/- ##\n=======================================\n Coverage 79.82% 79.82% \n=======================================\n Files 131 131 \n Lines 19496 19496 \n=======================================\n Hits 15562 15562 \n Misses 3934 3934\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2249?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2249?src=pr&el=footer). Last update [ac1b449...ea25498](https://codecov.io/gh/huggingface/transformers/pull/2249?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@michael-wzhu did this increase f1 on CoNLL?",
"> add crf layer for better performance in NER tasks\r\n\r\nHi. How do you solve the tokens with \"##...\" when they are fed into the crf layer?\r\ne.g. De ##duct ##ive reasoning\r\nDo \"##duct\" and \"\"##ive\" are fed into the crf layer? If they are, do they have chance to be transfered in the transition matrix?",
"> e\" are fed into the crf layer? If they are, do they have chance to be transfered in the transition matrix?\r\n\r\nI have used this to develop my own version. To answer your quesiton, ## sub word tokens are treated as padding, as suggested by the original BERT authors. This code relies on The padding label token being \"X\", at the first position (to get0th index) from the output of get_labels function in crf_utils_ner.py \r\n\r\nThe pad token label id might need to be 0 for calculations in CRF, but you should be careful in declaring your mask so your model does not confuse padding with one of the tokens. ",
"If ## sub word tokens are treated as padding, it will break the tag-tag dependencies, so definitely not ideal.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@srush @mezig351 What did you find after your work on this issue? Why didn't you merge it?",
"I think you should add it, it's not trivial, and I myself spent 2 days to make it work...\r\nand then I just found this thread... "
] | 1,576 | 1,638 | 1,597 | NONE | null | add crf layer for better performance in NER tasks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2249/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2249/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2249",
"html_url": "https://github.com/huggingface/transformers/pull/2249",
"diff_url": "https://github.com/huggingface/transformers/pull/2249.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2249.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2248 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2248/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2248/comments | https://api.github.com/repos/huggingface/transformers/issues/2248/events | https://github.com/huggingface/transformers/issues/2248 | 541,283,891 | MDU6SXNzdWU1NDEyODM4OTE= | 2,248 | Extract features aligned to tokens from a BertForQuestionAnswering model | {
"login": "Luvata",
"id": 17178612,
"node_id": "MDQ6VXNlcjE3MTc4NjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/17178612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Luvata",
"html_url": "https://github.com/Luvata",
"followers_url": "https://api.github.com/users/Luvata/followers",
"following_url": "https://api.github.com/users/Luvata/following{/other_user}",
"gists_url": "https://api.github.com/users/Luvata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Luvata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Luvata/subscriptions",
"organizations_url": "https://api.github.com/users/Luvata/orgs",
"repos_url": "https://api.github.com/users/Luvata/repos",
"events_url": "https://api.github.com/users/Luvata/events{/privacy}",
"received_events_url": "https://api.github.com/users/Luvata/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
">My question is how to get feature embedding aligned to each token from a pretrained BertForQuestionAnswering\r\n\r\nCould you elaborate on what this means? I am currently working with BertForQuestionAnswering but haven't encountered this area before. "
] | 1,576 | 1,576 | 1,576 | NONE | null | ## ❓ Questions & Help
I have a fine-tuned BERT model on a custom question answering task (following the original TensorFlow source). I've successfully converted this model and loaded it with `BertForQuestionAnswering`.
My question is how to get feature embedding aligned to each token from a pretrained `BertForQuestionAnswering`
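To make it concrete, here is a minimal sketch of what I'm after — tapping the encoder inside the QA head (`model.bert`) for per-token hidden states; the checkpoint path is a placeholder for my converted model:
```
import torch
from transformers import BertTokenizer, BertForQuestionAnswering

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForQuestionAnswering.from_pretrained('./my-qa-checkpoint')  # placeholder path
model.eval()

input_ids = torch.tensor([tokenizer.encode("Who wrote the book?",
                                           "The book was written by Jane.",
                                           add_special_tokens=True)])
with torch.no_grad():
    # model.bert is the BertModel inside BertForQuestionAnswering;
    # its first output is the last hidden state: one vector per input token
    sequence_output = model.bert(input_ids)[0]  # shape: [1, seq_len, hidden_size]

tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
assert len(tokens) == sequence_output.shape[1]  # features aligned token-by-token
```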
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2248/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2248/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2247 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2247/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2247/comments | https://api.github.com/repos/huggingface/transformers/issues/2247/events | https://github.com/huggingface/transformers/issues/2247 | 541,272,420 | MDU6SXNzdWU1NDEyNzI0MjA= | 2,247 | NER pipeline missing start/end | {
"login": "petulla",
"id": 3466817,
"node_id": "MDQ6VXNlcjM0NjY4MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3466817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/petulla",
"html_url": "https://github.com/petulla",
"followers_url": "https://api.github.com/users/petulla/followers",
"following_url": "https://api.github.com/users/petulla/following{/other_user}",
"gists_url": "https://api.github.com/users/petulla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/petulla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/petulla/subscriptions",
"organizations_url": "https://api.github.com/users/petulla/orgs",
"repos_url": "https://api.github.com/users/petulla/repos",
"events_url": "https://api.github.com/users/petulla/events{/privacy}",
"received_events_url": "https://api.github.com/users/petulla/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
}
] | closed | false | null | [] | [
"That would be a great feature, +1.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,576 | 1,584 | 1,584 | NONE | null | ## 🚀 Feature
2.3 is a great release! Really excited for pipelines.
The feature to add is the start/end positions of the entities.
Additionally, the option to return the recognized entity as a whole word rather than in subword pieces would make the API more user-friendly.
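Something along these lines is what I have in mind (hypothetical output — the exact field names and offsets are just illustrative):
```
from transformers import pipeline

nlp = pipeline('ner')
nlp("Hugging Face is based in New York City")
# desired:
# [{'word': 'Hugging Face', 'entity': 'I-ORG', 'start': 0, 'end': 12},
#  {'word': 'New York City', 'entity': 'I-LOC', 'start': 25, 'end': 38}]
```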
## Motivation
[The release mentions including positions of the entities](https://github.com/huggingface/transformers/releases/tag/v2.3.0), but the start/end positions are not present in the `ner` pipeline output.
## Additional context
This is really exciting and motivated me to use your module. I hope to make a PR in the future.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2247/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2247/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2246 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2246/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2246/comments | https://api.github.com/repos/huggingface/transformers/issues/2246/events | https://github.com/huggingface/transformers/issues/2246 | 541,185,071 | MDU6SXNzdWU1NDExODUwNzE= | 2,246 | Recently added pipelines tests should be marked as slow | {
"login": "aaugustin",
"id": 788910,
"node_id": "MDQ6VXNlcjc4ODkxMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/788910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaugustin",
"html_url": "https://github.com/aaugustin",
"followers_url": "https://api.github.com/users/aaugustin/followers",
"following_url": "https://api.github.com/users/aaugustin/following{/other_user}",
"gists_url": "https://api.github.com/users/aaugustin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaugustin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaugustin/subscriptions",
"organizations_url": "https://api.github.com/users/aaugustin/orgs",
"repos_url": "https://api.github.com/users/aaugustin/repos",
"events_url": "https://api.github.com/users/aaugustin/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaugustin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,576 | 1,582 | 1,582 | CONTRIBUTOR | null | ## 🐛 Bug
Starting today, when running tests, some very large files are downloaded even though I don't set RUN_SLOW=true.
Some tests in pipelines_test.py should be marked with `@slow` so they don't run unless RUN_SLOW=true.
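Concretely, something like this sketch — assuming the `slow` decorator already used by the other test files (the class and method names here are just illustrative):
```
import unittest
from .utils import slow  # gated on the RUN_SLOW environment variable

class NerPipelineTest(unittest.TestCase):
    @slow
    def test_ner(self):
        ...  # anything that downloads a full pretrained model
```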
"url": "https://api.github.com/repos/huggingface/transformers/issues/2246/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2246/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2245 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2245/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2245/comments | https://api.github.com/repos/huggingface/transformers/issues/2245/events | https://github.com/huggingface/transformers/issues/2245 | 541,177,340 | MDU6SXNzdWU1NDExNzczNDA= | 2,245 | Training dataset is not available | {
"login": "wboleksii",
"id": 46055670,
"node_id": "MDQ6VXNlcjQ2MDU1Njcw",
"avatar_url": "https://avatars.githubusercontent.com/u/46055670?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wboleksii",
"html_url": "https://github.com/wboleksii",
"followers_url": "https://api.github.com/users/wboleksii/followers",
"following_url": "https://api.github.com/users/wboleksii/following{/other_user}",
"gists_url": "https://api.github.com/users/wboleksii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wboleksii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wboleksii/subscriptions",
"organizations_url": "https://api.github.com/users/wboleksii/orgs",
"repos_url": "https://api.github.com/users/wboleksii/repos",
"events_url": "https://api.github.com/users/wboleksii/events{/privacy}",
"received_events_url": "https://api.github.com/users/wboleksii/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"paging @LysandreJik ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,576 | 1,587 | 1,587 | NONE | null | ## ❓ Questions & Help
As stated [here](https://github.com/huggingface/transformers/blob/1ab8dc44b3d84ed1894f5b6a6fab58fb39298fc7/examples/distillation/README.md), the model was trained using the Toronto Book Corpus and English Wikipedia. Neither this repository nor the BERT repository provides links to obtain this data. Upon further investigation, the Toronto Book Corpus is no longer public. Please advise on how to get this data.
"url": "https://api.github.com/repos/huggingface/transformers/issues/2245/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2245/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2244 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2244/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2244/comments | https://api.github.com/repos/huggingface/transformers/issues/2244/events | https://github.com/huggingface/transformers/pull/2244 | 541,169,098 | MDExOlB1bGxSZXF1ZXN0MzU1ODc3MDU2 | 2,244 | Fix Camembert and XLM-R `decode` method - Fix NER pipeline alignment | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2244?src=pr&el=h1) Report\n> Merging [#2244](https://codecov.io/gh/huggingface/transformers/pull/2244?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ceae85ad60da38cacb14eca49f752669a4fe31dc?src=pr&el=desc) will **decrease** coverage by `0.04%`.\n> The diff coverage is `58.82%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2244?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2244 +/- ##\n==========================================\n- Coverage 79.92% 79.88% -0.05% \n==========================================\n Files 131 131 \n Lines 19469 19480 +11 \n==========================================\n Hits 15561 15561 \n- Misses 3908 3919 +11\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2244?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [transformers/commands/run.py](https://codecov.io/gh/huggingface/transformers/pull/2244/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbW1hbmRzL3J1bi5weQ==) | `0% <0%> (ΓΈ)` | :arrow_up: |\n| [transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2244/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2ZpbGVfdXRpbHMucHk=) | `67.1% <100%> (-1.76%)` | :arrow_down: |\n| [transformers/tokenization\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2244/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9jYW1lbWJlcnQucHk=) | `36.61% <50%> (+0.79%)` | :arrow_up: |\n| [transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2244/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl94bG1fcm9iZXJ0YS5weQ==) | `37.68% <50%> (+0.75%)` | :arrow_up: |\n| [transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2244/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3BpcGVsaW5lcy5weQ==) | `67.94% <71.42%> (+0.09%)` | :arrow_up: |\n| [transformers/tests/pipelines\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2244/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3BpcGVsaW5lc190ZXN0LnB5) | `98.03% <0%> (-0.99%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2244?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2244?src=pr&el=footer). Last update [ceae85a...655fd06](https://codecov.io/gh/huggingface/transformers/pull/2244?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,576 | 1,651 | 1,576 | MEMBER | null | Fix `decode` method for Camembert and XLM-R
Simplify alignment method for NER pipeline | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2244/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2244/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2244",
"html_url": "https://github.com/huggingface/transformers/pull/2244",
"diff_url": "https://github.com/huggingface/transformers/pull/2244.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2244.patch",
"merged_at": 1576876240000
} |
https://api.github.com/repos/huggingface/transformers/issues/2243 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2243/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2243/comments | https://api.github.com/repos/huggingface/transformers/issues/2243/events | https://github.com/huggingface/transformers/pull/2243 | 541,125,734 | MDExOlB1bGxSZXF1ZXN0MzU1ODQwMTQw | 2,243 | fixing xlm-roberta tokenizer max_length and automodels | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2243?src=pr&el=h1) Report\n> Merging [#2243](https://codecov.io/gh/huggingface/transformers/pull/2243?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/65c75fc58796b278d58b0ce2c8d2031594ef0f64?src=pr&el=desc) will **decrease** coverage by `0.02%`.\n> The diff coverage is `21.42%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2243?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2243 +/- ##\n==========================================\n- Coverage 79.9% 79.88% -0.03% \n==========================================\n Files 131 131 \n Lines 19451 19467 +16 \n==========================================\n+ Hits 15543 15551 +8 \n- Misses 3908 3916 +8\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2243?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2243/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `94.1% <ΓΈ> (+1.07%)` | :arrow_up: |\n| [transformers/commands/run.py](https://codecov.io/gh/huggingface/transformers/pull/2243/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbW1hbmRzL3J1bi5weQ==) | `0% <0%> (ΓΈ)` | :arrow_up: |\n| [transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2243/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl94bG1fcm9iZXJ0YS5weQ==) | `36.92% <0%> (ΓΈ)` | :arrow_up: |\n| [transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2243/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `92.08% <100%> (+0.01%)` | :arrow_up: |\n| [transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2243/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2F1dG8ucHk=) | `31.92% <18.18%> (-1.09%)` | :arrow_down: |\n| [transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2243/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3BpcGVsaW5lcy5weQ==) | `67.65% <27.58%> (+0.49%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2243/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3V0aWxzLnB5) | `93.15% <0%> (-0.53%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2243?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2243?src=pr&el=footer). Last update [65c75fc...bbaaec0](https://codecov.io/gh/huggingface/transformers/pull/2243?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,576 | 1,576 | 1,576 | MEMBER | null | Fix missing max token num in XLM-Roberta | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2243/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2243/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2243",
"html_url": "https://github.com/huggingface/transformers/pull/2243",
"diff_url": "https://github.com/huggingface/transformers/pull/2243.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2243.patch",
"merged_at": 1576866849000
} |
https://api.github.com/repos/huggingface/transformers/issues/2242 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2242/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2242/comments | https://api.github.com/repos/huggingface/transformers/issues/2242/events | https://github.com/huggingface/transformers/issues/2242 | 541,046,723 | MDU6SXNzdWU1NDEwNDY3MjM= | 2,242 | BertTokenizer / CamemBERTokenizer `decode` behaviour ? | {
"login": "auroredea",
"id": 2429626,
"node_id": "MDQ6VXNlcjI0Mjk2MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2429626?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/auroredea",
"html_url": "https://github.com/auroredea",
"followers_url": "https://api.github.com/users/auroredea/followers",
"following_url": "https://api.github.com/users/auroredea/following{/other_user}",
"gists_url": "https://api.github.com/users/auroredea/gists{/gist_id}",
"starred_url": "https://api.github.com/users/auroredea/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/auroredea/subscriptions",
"organizations_url": "https://api.github.com/users/auroredea/orgs",
"repos_url": "https://api.github.com/users/auroredea/repos",
"events_url": "https://api.github.com/users/auroredea/events{/privacy}",
"received_events_url": "https://api.github.com/users/auroredea/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> tokenizer.decode(ids)\r\n\r\nI've made some tests, and this problem occurs also with `BertTokenizer`.\r\n",
"Should be fixed by #2244, can you check that you get the expected behaviour on master?\r\n\r\nThanks!",
"I confirm with the version with the merge commit and new version 2.3.0, it works.\r\nThank you !",
"Thanks for checking!"
] | 1,576 | 1,577 | 1,577 | NONE | null | ## 🐛 Bug
Thank you for CamemBERT, it's great work!
Model I am using (Bert, XLNet....): CamemBERT
Language I am using the model on (English, Chinese....): French
The problem arise when using:
* [ ] the official example scripts:
* [x] my own modified scripts: followed by the official documentation at https://huggingface.co/transformers/main_classes/tokenizer.html#
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task:
* [x] my own task or dataset: Show the tokens from the CamemBERT tokenizer.
## To Reproduce
Steps to reproduce the behavior:
1. Tokenize (`encode`) a sentence.
2. Try to `decode` the ids; this does not work (a `TypeError` is thrown)
```
tokenizer = CamembertTokenizer.from_pretrained('camembert-base')
ids = tokenizer.encode(sentence)
print(tokenizer.decode(ids))
```
I just followed the documentation for `decode`, which explains:

> Converts a sequence of ids (integer) in a string, using the tokenizer and vocabulary with options to remove special tokens and clean up tokenization spaces. Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)).

Traceback:
```
File "/d/Workspaces/camembert-test/main.py", line 51, in <module>
tokens_with_transformers_error(sentence)
File "/d/Workspaces/camembert-test/main.py", line 32, in tokens_with_transformers_error
print(tokenizer.decode(ids))
File "/d/Workspaces/camembert-test/.venv/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 1187, in decode
sub_texts.append(self.convert_tokens_to_string(current_sub_text))
File "/d/Workspaces/camembert-test/.venv/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 1156, in convert_tokens_to_string
return ' '.join(self.convert_ids_to_tokens(tokens))
File "/d/Workspaces/camembert-test/.venv/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 1145, in convert_ids_to_tokens
tokens.append(self._convert_id_to_token(index))
File "/d/Workspaces/camembert-test/.venv/lib/python3.7/site-packages/transformers/tokenization_camembert.py", line 146, in _convert_id_to_token
return self.sp_model.IdToPiece(index - self.fairseq_offset)
TypeError: unsupported operand type(s) for -: 'str' and 'int'
```
Here, `convert_tokens_to_string` seems to call `convert_ids_to_tokens`? Why?
## Expected behavior
Have the same return value (or something similar) as the following code, which seems to work but shouldn't have to...? I am not sure though.
```
tokenizer = CamembertTokenizer.from_pretrained('camembert-base')
ids = tokenizer.encode(sentence)
print(tokenizer.convert_tokens_to_string(ids)) # I give list of ids, not list of tokens
```
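In the meantime I can get readable text back by going through sentencepiece directly (a sketch only; `decode_pieces` is the underlying sentencepiece call):
```
tokens = tokenizer.convert_ids_to_tokens(ids)
tokens = [t for t in tokens if t not in tokenizer.all_special_tokens]
print(tokenizer.sp_model.decode_pieces(tokens))
```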
## Environment
* OS: Linux
* Python version: 3.7
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): 2.2.2
* Using GPU ? no
* Distributed of parallel setup ? no
* Any other relevant information: /
## Additional context
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2242/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2242/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2241 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2241/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2241/comments | https://api.github.com/repos/huggingface/transformers/issues/2241/events | https://github.com/huggingface/transformers/issues/2241 | 541,025,116 | MDU6SXNzdWU1NDEwMjUxMTY= | 2,241 | How to load the finetuned model for retraining from checkpoints in run_squad.py? | {
"login": "Tahsin-Mayeesha",
"id": 17886829,
"node_id": "MDQ6VXNlcjE3ODg2ODI5",
"avatar_url": "https://avatars.githubusercontent.com/u/17886829?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tahsin-Mayeesha",
"html_url": "https://github.com/Tahsin-Mayeesha",
"followers_url": "https://api.github.com/users/Tahsin-Mayeesha/followers",
"following_url": "https://api.github.com/users/Tahsin-Mayeesha/following{/other_user}",
"gists_url": "https://api.github.com/users/Tahsin-Mayeesha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tahsin-Mayeesha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tahsin-Mayeesha/subscriptions",
"organizations_url": "https://api.github.com/users/Tahsin-Mayeesha/orgs",
"repos_url": "https://api.github.com/users/Tahsin-Mayeesha/repos",
"events_url": "https://api.github.com/users/Tahsin-Mayeesha/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tahsin-Mayeesha/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Ah, turns out to run from a pretrained model we have to specify the output_dir as the previous checkpoint. I feel like its quite unintuitive. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,576 | 1,582 | 1,582 | NONE | null | Because of bad internet connection and computational issues its hard for us to train a large number of epochs. We're trying to use the run_squad.py script for bangla QA system training. We have trained the model before and have the checkpoints.
```
!python run_squad.py \
--model_type distilbert \
--model_name_or_path ../data/tmp/attempt_with_dataset_v3/checkpoint-5000/ \
--do_train \
--do_eval \
--do_lower_case \
--train_file ../data/dataset_v_3/train_bangla_samples.json \
--predict_file ../data/dataset_v_3/valid_bangla_samples.json \
--version_2_with_negative \
--per_gpu_train_batch_size 12 \
--learning_rate 5e-5 \
--num_train_epochs 1.0 \
--max_seq_length 384 \
--doc_stride 128 \
--logging_steps 100 \
--save_steps 100 \
--fp16 \
--evaluate_during_training \
--output_dir ../data/mytrial
```
command produces this error :
```
12/20/2019 14:06:45 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: True
12/20/2019 14:06:47 - INFO - transformers.configuration_utils - loading configuration file ../data/tmp/attempt_with_dataset_v3/checkpoint-5000/config.json
12/20/2019 14:06:47 - INFO - transformers.configuration_utils - Model config {
"activation": "gelu",
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"finetuning_task": null,
"hidden_dim": 3072,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"is_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"max_position_embeddings": 512,
"n_heads": 12,
"n_layers": 6,
"num_labels": 2,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pruned_heads": {},
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"tie_weights_": true,
"torchscript": false,
"use_bfloat16": false,
"vocab_size": 30522
}
12/20/2019 14:06:47 - INFO - transformers.tokenization_utils - Model name '../data/tmp/attempt_with_dataset_v3/checkpoint-5000/' not found in model shortcut name list (distilbert-base-uncased, distilbert-base-uncased-distilled-squad, distilbert-base-german-cased, distilbert-base-multilingual-cased). Assuming '../data/tmp/attempt_with_dataset_v3/checkpoint-5000/' is a path or url to a directory containing tokenizer files.
12/20/2019 14:06:47 - INFO - transformers.tokenization_utils - Didn't find file ../data/tmp/attempt_with_dataset_v3/checkpoint-5000/vocab.txt. We won't load it.
12/20/2019 14:06:47 - INFO - transformers.tokenization_utils - Didn't find file ../data/tmp/attempt_with_dataset_v3/checkpoint-5000/added_tokens.json. We won't load it.
12/20/2019 14:06:47 - INFO - transformers.tokenization_utils - Didn't find file ../data/tmp/attempt_with_dataset_v3/checkpoint-5000/special_tokens_map.json. We won't load it.
12/20/2019 14:06:47 - INFO - transformers.tokenization_utils - Didn't find file ../data/tmp/attempt_with_dataset_v3/checkpoint-5000/tokenizer_config.json. We won't load it.
Traceback (most recent call last):
File "run_squad.py", line 614, in <module>
main()
File "run_squad.py", line 528, in main
cache_dir=args.cache_dir if args.cache_dir else None)
File "/content/gdrive/My Drive/huggingfaceattempt/transformers/transformers/tokenization_utils.py", line 302, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "/content/gdrive/My Drive/huggingfaceattempt/transformers/transformers/tokenization_utils.py", line 370, in _from_pretrained
list(cls.vocab_files_names.values())))
OSError: Model name '../data/tmp/attempt_with_dataset_v3/checkpoint-5000/' was not found in tokenizers model name list (distilbert-base-uncased, distilbert-base-uncased-distilled-squad, distilbert-base-german-cased, distilbert-base-multilingual-cased). We assumed '../data/tmp/attempt_with_dataset_v3/checkpoint-5000/' was a path or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
```
But none of the checkpoint folders have any vocabulary stored. Are we supposed to pass the checkpoint folder path as `model_name_or_path` in order to resume training from that checkpoint?
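For now I can get past the tokenizer error by pointing the tokenizer at a name that does ship a vocab (a sketch — only the relevant flags shown):
```
python run_squad.py \
  --model_type distilbert \
  --model_name_or_path ../data/tmp/attempt_with_dataset_v3/checkpoint-5000/ \
  --tokenizer_name distilbert-base-multilingual-cased \  # or whichever base vocab the checkpoint started from
  ...  # remaining arguments as in the command above
```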
"url": "https://api.github.com/repos/huggingface/transformers/issues/2241/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2241/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2240 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2240/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2240/comments | https://api.github.com/repos/huggingface/transformers/issues/2240/events | https://github.com/huggingface/transformers/issues/2240 | 541,021,382 | MDU6SXNzdWU1NDEwMjEzODI= | 2,240 | TFDistilBertModelTest.test_pt_tf_model_equivalence thrown while merging after PR | {
"login": "TheEdoardo93",
"id": 19664571,
"node_id": "MDQ6VXNlcjE5NjY0NTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/19664571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheEdoardo93",
"html_url": "https://github.com/TheEdoardo93",
"followers_url": "https://api.github.com/users/TheEdoardo93/followers",
"following_url": "https://api.github.com/users/TheEdoardo93/following{/other_user}",
"gists_url": "https://api.github.com/users/TheEdoardo93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TheEdoardo93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheEdoardo93/subscriptions",
"organizations_url": "https://api.github.com/users/TheEdoardo93/orgs",
"repos_url": "https://api.github.com/users/TheEdoardo93/repos",
"events_url": "https://api.github.com/users/TheEdoardo93/events{/privacy}",
"received_events_url": "https://api.github.com/users/TheEdoardo93/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I've seen the same error when running the test suite locally.",
"I know how to reproduce/debug this particular failure so I'll take a look on monday (unless someone beats me to it)",
"aaarg, I can't reproduce it locally anymore.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,576 | 1,586 | 1,586 | NONE | null | ## 🐛 Bug
I've seen that several PRs have failed because of the same error, such as #2239 (today) and #2237 (today). If I remember correctly, another PR in the last few days involving @rlouf hit the same error. Even when the submitted changes do not affect `DistilBertModel`, as in #2237, this error occurs!
Question: **Is this a bug in Transformers or a bug in the code we submitted?**
## To Reproduce
Steps to reproduce the behavior: submit a PR to the Transformers library; the CI run then fails with:
```
=================================== FAILURES ===================================
______________ TFDistilBertModelTest.test_pt_tf_model_equivalence ______________
self = <transformers.tests.modeling_tf_distilbert_test.TFDistilBertModelTest testMethod=test_pt_tf_model_equivalence>
def test_pt_tf_model_equivalence(self):
if not is_torch_available():
return
import torch
import transformers
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
for model_class in self.all_model_classes:
pt_model_class_name = model_class.__name__[2:] # Skip the "TF" at the beggining
pt_model_class = getattr(transformers, pt_model_class_name)
config.output_hidden_states = True
tf_model = model_class(config)
pt_model = pt_model_class(config)
# Check we can load pt model in tf and vice-versa with model => model functions
tf_model = transformers.load_pytorch_model_in_tf2_model(tf_model, pt_model, tf_inputs=inputs_dict)
pt_model = transformers.load_tf2_model_in_pytorch_model(pt_model, tf_model)
# Check predictions on first output (logits/hidden-states) are close enought given low-level computational differences
pt_model.eval()
pt_inputs_dict = dict((name, torch.from_numpy(key.numpy()).to(torch.long))
for name, key in inputs_dict.items())
with torch.no_grad():
pto = pt_model(**pt_inputs_dict)
tfo = tf_model(inputs_dict, training=False)
tf_hidden_states = tfo[0].numpy()
pt_hidden_states = pto[0].numpy()
tf_hidden_states[np.isnan(tf_hidden_states)] = 0
pt_hidden_states[np.isnan(pt_hidden_states)] = 0
max_diff = np.amax(np.abs(tf_hidden_states - pt_hidden_states))
> self.assertLessEqual(max_diff, 2e-2)
E AssertionError: 3.107201 not less than or equal to 0.02
transformers/tests/modeling_tf_common_test.py:139: AssertionError
```
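To reproduce just this test locally, something like the following should work (sketch):
```
python -m pytest -sv transformers/tests/modeling_tf_distilbert_test.py -k test_pt_tf_model_equivalence
```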
## Expected behavior
No error should be thrown.
## Environment
* OS: **Ubuntu 16.04**
* Python version: **3.6.9**
* PyTorch version: **1.3.1**
* PyTorch Transformers version (or branch): **master**
* Using GPU ? **Indifferent**
* Distributed of parallel setup ? **Indifferent**
* Any other relevant information: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2240/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2240/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2239 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2239/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2239/comments | https://api.github.com/repos/huggingface/transformers/issues/2239/events | https://github.com/huggingface/transformers/pull/2239 | 540,999,192 | MDExOlB1bGxSZXF1ZXN0MzU1NzMzOTMz | 2,239 | HANS evaluation | {
"login": "ns-moosavi",
"id": 19606435,
"node_id": "MDQ6VXNlcjE5NjA2NDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/19606435?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ns-moosavi",
"html_url": "https://github.com/ns-moosavi",
"followers_url": "https://api.github.com/users/ns-moosavi/followers",
"following_url": "https://api.github.com/users/ns-moosavi/following{/other_user}",
"gists_url": "https://api.github.com/users/ns-moosavi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ns-moosavi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ns-moosavi/subscriptions",
"organizations_url": "https://api.github.com/users/ns-moosavi/orgs",
"repos_url": "https://api.github.com/users/ns-moosavi/repos",
"events_url": "https://api.github.com/users/ns-moosavi/events{/privacy}",
"received_events_url": "https://api.github.com/users/ns-moosavi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks a lot for that Nafise!\r\n\r\nI've started to update the readme.\r\n\r\nDo you think you would have an example of a command to run the script together with associated results?",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2239?src=pr&el=h1) Report\n> Merging [#2239](https://codecov.io/gh/huggingface/transformers/pull/2239?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/dc17f2a1110aed8d1729e77b0619601e3d96b84e?src=pr&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `0%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2239?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2239 +/- ##\n==========================================\n- Coverage 74.67% 74.66% -0.02% \n==========================================\n Files 87 87 \n Lines 14800 14802 +2 \n==========================================\n Hits 11052 11052 \n- Misses 3748 3750 +2\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2239?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/data/metrics/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/2239/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL21ldHJpY3MvX19pbml0X18ucHk=) | `26.66% <0%> (-1.25%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2239?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2239?src=pr&el=footer). Last update [dc17f2a...258ed2e](https://codecov.io/gh/huggingface/transformers/pull/2239?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi Thomas,\r\n\r\nThis is an example of using test_hans.py:\r\n\r\n```\r\nexport HANS_DIR=path-to-hans \r\nexport MODEL_TYPE=type-of-the-model-e.g.-bert-roberta-xlnet-etc\r\nexport MODEL_PATH=path-to-the-model-directory-that-is-trained-on-NLI-e.g.-by-using-run_glue.py\r\n\r\npython examples/test_hans.py \\\r\n --task_name hans \\\r\n --model_type $MODEL_TYPE \\\r\n --do_eval \\\r\n --do_lower_case \\\r\n --data_dir $HANS_DIR \\\r\n --model_name_or_path $MODEL_PATH \\\r\n --max_seq_length 128 \\\r\n -output_dir $MODEL_PATH \\\r\n```\r\n\r\nThis will create the hans_predictions.txt file in MODEL_PATH, which can then be evaluated using hans/evaluate_heur_output.py from the HANS dataset.\r\n\r\nThe results of the BERT-base model that is trained on MNLI using batch size 8 and the random seed 42 on the HANS dataset is as follows:\r\n\r\n\r\n```\r\nHeuristic entailed results:\r\nlexical_overlap: 0.9702\r\nsubsequence: 0.9942\r\nconstituent: 0.9962\r\n\r\nHeuristic non-entailed results:\r\nlexical_overlap: 0.199\r\nsubsequence: 0.0396\r\nconstituent: 0.118\r\n```\r\n",
"Great thanks a lot @ns-moosavi, merging this.\r\nSo happy to welcome HANS in the examples!"
] | 1,576 | 1,579 | 1,579 | NONE | null | Adding the evaluation on the HANS dataset in examples | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2239/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2239/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2239",
"html_url": "https://github.com/huggingface/transformers/pull/2239",
"diff_url": "https://github.com/huggingface/transformers/pull/2239.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2239.patch",
"merged_at": 1579177706000
} |
https://api.github.com/repos/huggingface/transformers/issues/2238 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2238/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2238/comments | https://api.github.com/repos/huggingface/transformers/issues/2238/events | https://github.com/huggingface/transformers/issues/2238 | 540,944,537 | MDU6SXNzdWU1NDA5NDQ1Mzc= | 2,238 | Readme installation/test order can lead to confusion when running example unit tests | {
"login": "internetcoffeephone",
"id": 5096835,
"node_id": "MDQ6VXNlcjUwOTY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5096835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/internetcoffeephone",
"html_url": "https://github.com/internetcoffeephone",
"followers_url": "https://api.github.com/users/internetcoffeephone/followers",
"following_url": "https://api.github.com/users/internetcoffeephone/following{/other_user}",
"gists_url": "https://api.github.com/users/internetcoffeephone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/internetcoffeephone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/internetcoffeephone/subscriptions",
"organizations_url": "https://api.github.com/users/internetcoffeephone/orgs",
"repos_url": "https://api.github.com/users/internetcoffeephone/repos",
"events_url": "https://api.github.com/users/internetcoffeephone/events{/privacy}",
"received_events_url": "https://api.github.com/users/internetcoffeephone/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "aaugustin",
"id": 788910,
"node_id": "MDQ6VXNlcjc4ODkxMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/788910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaugustin",
"html_url": "https://github.com/aaugustin",
"followers_url": "https://api.github.com/users/aaugustin/followers",
"following_url": "https://api.github.com/users/aaugustin/following{/other_user}",
"gists_url": "https://api.github.com/users/aaugustin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaugustin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaugustin/subscriptions",
"organizations_url": "https://api.github.com/users/aaugustin/orgs",
"repos_url": "https://api.github.com/users/aaugustin/repos",
"events_url": "https://api.github.com/users/aaugustin/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaugustin/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "aaugustin",
"id": 788910,
"node_id": "MDQ6VXNlcjc4ODkxMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/788910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaugustin",
"html_url": "https://github.com/aaugustin",
"followers_url": "https://api.github.com/users/aaugustin/followers",
"following_url": "https://api.github.com/users/aaugustin/following{/other_user}",
"gists_url": "https://api.github.com/users/aaugustin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaugustin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaugustin/subscriptions",
"organizations_url": "https://api.github.com/users/aaugustin/orgs",
"repos_url": "https://api.github.com/users/aaugustin/repos",
"events_url": "https://api.github.com/users/aaugustin/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaugustin/received_events",
"type": "User",
"site_admin": false
}
] | [
"related to what @aaugustin is working on",
"Yes I'm planning to rework the contributor documentation. Currently it's a bit haphazard, sorry.",
"This is now clarified. The general README points to the README for examples which is unambiguous."
] | 1,576 | 1,577 | 1,577 | NONE | null | ## ❓ Questions & Help
When following the main readme installation/testing instructions in order, it is not mentioned that the separate examples/requirements.txt must be installed for the examples tests to pass.
Thus, `pip install -r ./examples/requirements.txt` should come before `python -m pytest -sv ./transformers/tests/` in the main readme to avoid confusion.
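For reference, here is a sketch of the full ordering suggested above (run from the repository root; the editable install and the examples pytest invocation are assumptions about the setup):
```bash
pip install -e .                               # install the library itself
pip install -r ./examples/requirements.txt     # extra dependencies needed by the example tests
python -m pytest -sv ./transformers/tests/     # library test suite
python -m pytest -sv ./examples/               # example tests should now pass
```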
Additionally, the readme line `python -m unittest discover -s examples -p "*test.py" -t examples` cannot find any tests and produces the following output:
```
Ran 0 tests in 0.000s
OK
```
I don't think this is the intended behavior - is the line redundant? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2238/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/2238/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2237 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2237/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2237/comments | https://api.github.com/repos/huggingface/transformers/issues/2237/events | https://github.com/huggingface/transformers/pull/2237 | 540,933,855 | MDExOlB1bGxSZXF1ZXN0MzU1Njc4ODg3 | 2,237 | Fix out-of-date comments in Transformers examples directory | {
"login": "TheEdoardo93",
"id": 19664571,
"node_id": "MDQ6VXNlcjE5NjY0NTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/19664571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheEdoardo93",
"html_url": "https://github.com/TheEdoardo93",
"followers_url": "https://api.github.com/users/TheEdoardo93/followers",
"following_url": "https://api.github.com/users/TheEdoardo93/following{/other_user}",
"gists_url": "https://api.github.com/users/TheEdoardo93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TheEdoardo93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheEdoardo93/subscriptions",
"organizations_url": "https://api.github.com/users/TheEdoardo93/orgs",
"repos_url": "https://api.github.com/users/TheEdoardo93/repos",
"events_url": "https://api.github.com/users/TheEdoardo93/events{/privacy}",
"received_events_url": "https://api.github.com/users/TheEdoardo93/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,576 | 1,577 | 1,577 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2237/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2237/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2237",
"html_url": "https://github.com/huggingface/transformers/pull/2237",
"diff_url": "https://github.com/huggingface/transformers/pull/2237.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2237.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2236 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2236/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2236/comments | https://api.github.com/repos/huggingface/transformers/issues/2236/events | https://github.com/huggingface/transformers/issues/2236 | 540,933,848 | MDU6SXNzdWU1NDA5MzM4NDg= | 2,236 | Removing redundant model weights | {
"login": "bacicnikola",
"id": 47123842,
"node_id": "MDQ6VXNlcjQ3MTIzODQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47123842?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bacicnikola",
"html_url": "https://github.com/bacicnikola",
"followers_url": "https://api.github.com/users/bacicnikola/followers",
"following_url": "https://api.github.com/users/bacicnikola/following{/other_user}",
"gists_url": "https://api.github.com/users/bacicnikola/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bacicnikola/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bacicnikola/subscriptions",
"organizations_url": "https://api.github.com/users/bacicnikola/orgs",
"repos_url": "https://api.github.com/users/bacicnikola/repos",
"events_url": "https://api.github.com/users/bacicnikola/events{/privacy}",
"received_events_url": "https://api.github.com/users/bacicnikola/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,576 | 1,582 | 1,582 | NONE | null | ## 🐛 This is not a bug, more like an implementation detail
I am using the BertForTokenClassification model for my binary token classification problem. If my understanding is right, BertForTokenClassification has one layer on top with num_classes output neurons (one for each class), a softmax activation function, and CrossEntropyLoss().
Now, if your problem has >2 classes this is completely fine, but if num_classes=2 you are modeling both P(class = 0) and P(class = 1), and it's easy to see why this is redundant.
num_classes=2 is a special case and it should be implemented with only one output neuron with a sigmoid activation function + binary cross-entropy.
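For illustration, here is a minimal sketch of the two heads being compared; this is not the actual BertForTokenClassification code, and all shapes and names below are assumptions:
```python
import torch
import torch.nn as nn

hidden = torch.randn(8, 128, 768)               # dummy (batch, seq_len, hidden_size) encoder output
labels = torch.randint(0, 2, (8, 128))          # binary token labels

# Current head: two logits per token, softmax cross-entropy
two_neuron_head = nn.Linear(768, 2)
logits2 = two_neuron_head(hidden)               # (8, 128, 2)
loss2 = nn.CrossEntropyLoss()(logits2.view(-1, 2), labels.view(-1))

# Proposed special case: one logit per token, sigmoid + binary cross-entropy
one_neuron_head = nn.Linear(768, 1)
logits1 = one_neuron_head(hidden).squeeze(-1)   # (8, 128)
loss1 = nn.BCEWithLogitsLoss()(logits1.view(-1), labels.view(-1).float())
```
Both objectives are equivalent up to parameterization; the one-neuron head simply has half the output weights.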
Please correct me if I am wrong :)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2236/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/2236/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2235 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2235/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2235/comments | https://api.github.com/repos/huggingface/transformers/issues/2235/events | https://github.com/huggingface/transformers/pull/2235 | 540,912,867 | MDExOlB1bGxSZXF1ZXN0MzU1NjYwODUz | 2,235 | add example for Model2Model in quickstart | {
"login": "rlouf",
"id": 3885044,
"node_id": "MDQ6VXNlcjM4ODUwNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3885044?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rlouf",
"html_url": "https://github.com/rlouf",
"followers_url": "https://api.github.com/users/rlouf/followers",
"following_url": "https://api.github.com/users/rlouf/following{/other_user}",
"gists_url": "https://api.github.com/users/rlouf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rlouf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rlouf/subscriptions",
"organizations_url": "https://api.github.com/users/rlouf/orgs",
"repos_url": "https://api.github.com/users/rlouf/repos",
"events_url": "https://api.github.com/users/rlouf/events{/privacy}",
"received_events_url": "https://api.github.com/users/rlouf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2235?src=pr&el=h1) Report\n> Merging [#2235](https://codecov.io/gh/huggingface/transformers/pull/2235?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ff36e6d8d713901af807719fa604518c451ff2e5?src=pr&el=desc) will **decrease** coverage by `1.09%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2235?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2235 +/- ##\n=========================================\n- Coverage 81.42% 80.32% -1.1% \n=========================================\n Files 122 122 \n Lines 18348 18344 -4 \n=========================================\n- Hits 14940 14735 -205 \n- Misses 3408 3609 +201\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2235?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2235/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `9.15% <0%> (-80.92%)` | :arrow_down: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2235/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `80.17% <0%> (-17.25%)` | :arrow_down: |\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2235/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `59.41% <0%> (-12.36%)` | :arrow_down: |\n| [transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2235/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `72.21% <0%> (-2.33%)` | :arrow_down: |\n| [transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2235/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `94.27% <0%> (-2.21%)` | :arrow_down: |\n| [transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2235/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `80.19% <0%> (-1.33%)` | :arrow_down: |\n| [transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/2235/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2RhdGEvcHJvY2Vzc29ycy9zcXVhZC5weQ==) | `14.18% <0%> (ΓΈ)` | :arrow_up: |\n| [transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2235/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2ZpbGVfdXRpbHMucHk=) | `71.42% <0%> (+0.06%)` | :arrow_up: |\n| [transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2235/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `91.46% <0%> (+1.17%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2235?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2235?src=pr&el=footer). Last update [ff36e6d...a3245dd](https://codecov.io/gh/huggingface/transformers/pull/2235?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,576 | 1,651 | 1,576 | CONTRIBUTOR | null | As discussed with @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2235/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2235/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2235",
"html_url": "https://github.com/huggingface/transformers/pull/2235",
"diff_url": "https://github.com/huggingface/transformers/pull/2235.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2235.patch",
"merged_at": 1576851153000
} |
https://api.github.com/repos/huggingface/transformers/issues/2234 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2234/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2234/comments | https://api.github.com/repos/huggingface/transformers/issues/2234/events | https://github.com/huggingface/transformers/issues/2234 | 540,829,506 | MDU6SXNzdWU1NDA4Mjk1MDY= | 2,234 | Support loading model weights from a single file. | {
"login": "ljch2018",
"id": 22562546,
"node_id": "MDQ6VXNlcjIyNTYyNTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/22562546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ljch2018",
"html_url": "https://github.com/ljch2018",
"followers_url": "https://api.github.com/users/ljch2018/followers",
"following_url": "https://api.github.com/users/ljch2018/following{/other_user}",
"gists_url": "https://api.github.com/users/ljch2018/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ljch2018/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ljch2018/subscriptions",
"organizations_url": "https://api.github.com/users/ljch2018/orgs",
"repos_url": "https://api.github.com/users/ljch2018/repos",
"events_url": "https://api.github.com/users/ljch2018/events{/privacy}",
"received_events_url": "https://api.github.com/users/ljch2018/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"As my best knowledge, you **can't** load a model _directly_ from a file. As said in #2094 by @LysandreJik, if you saved using the `save_pretrained` method, then the directory already should have a `config.json` file specifying the shape of the model, so you can simply load it using:\r\n```\r\n>>> import transformers\r\n>>> from transformers import BertModel\r\n>>> model = BertModel.from_pretrained(\"./test/saved_model/\")\r\n```\r\n\r\n> So far, transformers package supports loading a model from a directory, such as\r\n> \r\n> ```python\r\n> model = BertModel.from_pretrained('./test/saved_model/') # E.g. model was saved using save_pretrained('./test/saved_model/')\r\n> ```\r\n> \r\n> Actually, it loads model weight from $directory/pytorch_model.bin.\r\n> Is it possible to load model weights from a file?\r\n> such as\r\n> \r\n> ```python\r\n> tokenizer = BertTokenizer.from_pretrained('./test/saved_model/my_vocab.txt')\r\n> ```\r\n> \r\n> Because my network is too slow to download models from amazon server.\r\n> I have to download the model files manually and put them to transformers.\r\n> It is more convenient to load the downloaded model weights from a file than a directory.\r\n> \r\n> ```python\r\n> model = BertModel.from_pretrained('./test/model_zoo/bert-base-multilingual-cased-pytorch_model.bin') \r\n> ```\r\n> \r\n> Thank you.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,576 | 1,582 | 1,582 | NONE | null | So far, the transformers package supports loading a model from a directory, such as:
```python
model = BertModel.from_pretrained('./test/saved_model/') # E.g. model was saved using save_pretrained('./test/saved_model/')
```
Actually, it loads the model weights from $directory/pytorch_model.bin.
Is it possible to load model weights from a file?
For example:
```python
tokenizer = BertTokenizer.from_pretrained('./test/saved_model/my_vocab.txt')
```
Because my network is too slow to download models from the Amazon server, I have to download the model files manually and feed them to transformers.
It is more convenient to load the downloaded model weights from a file than from a directory.
```python
model = BertModel.from_pretrained('./test/model_zoo/bert-base-multilingual-cased-pytorch_model.bin')
```
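For what it's worth, one possible workaround, sketched under the assumption that the .bin file is a plain PyTorch state dict saved by save_pretrained (the config identifier and the strict=False handling are assumptions):
```python
import torch
from transformers import BertConfig, BertModel

config = BertConfig.from_pretrained('bert-base-multilingual-cased')  # or point at a local config.json
state_dict = torch.load(
    './test/model_zoo/bert-base-multilingual-cased-pytorch_model.bin',
    map_location='cpu',
)
model = BertModel(config)
model.load_state_dict(state_dict, strict=False)  # strict=False tolerates head/prefix naming mismatches
```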
Thank you. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2234/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2234/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2233 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2233/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2233/comments | https://api.github.com/repos/huggingface/transformers/issues/2233/events | https://github.com/huggingface/transformers/issues/2233 | 540,809,715 | MDU6SXNzdWU1NDA4MDk3MTU= | 2,233 | The code used to be clean... | {
"login": "Borororo",
"id": 42636061,
"node_id": "MDQ6VXNlcjQyNjM2MDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/42636061?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Borororo",
"html_url": "https://github.com/Borororo",
"followers_url": "https://api.github.com/users/Borororo/followers",
"following_url": "https://api.github.com/users/Borororo/following{/other_user}",
"gists_url": "https://api.github.com/users/Borororo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Borororo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Borororo/subscriptions",
"organizations_url": "https://api.github.com/users/Borororo/orgs",
"repos_url": "https://api.github.com/users/Borororo/repos",
"events_url": "https://api.github.com/users/Borororo/events{/privacy}",
"received_events_url": "https://api.github.com/users/Borororo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Updating code by hand seems a very messy way of working. Can't you just make a copy of the (old version of the) examples directory, and make your changes there - regardless of the current installation of `transformers` itself?"
] | 1,576 | 1,576 | 1,576 | NONE | null | Hi, thanks to all contributors!
From my point of view, the recent changes to the example code, especially the run_squad script, are quite confusing.
May I know the reason for deleting the old utils_squad.py and storing all the data processing and evaluation scripts under the transformers/data folder? This makes the code look quite messy. In particular, the example and feature classes have changed a lot.
I had been reading the old scripts for a long, long time, and it was easy for me to change any component in the whole structure, whether data preprocessing or introducing a new LM. I could easily change the input format of the examples, so I could run the code on other datasets with a few lines.
I believe many people have already made task-specific changes in their own code, and sometimes people just want to run the code on newly released LMs for testing, so they come here and check. If a new LM happens to be available, we would prefer to update the code by hand instead of downloading the whole package again.
But anyway, if the changes are here to stay, I am definitely willing to go through them in depth. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2233/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/2233/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2232 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2232/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2232/comments | https://api.github.com/repos/huggingface/transformers/issues/2232/events | https://github.com/huggingface/transformers/pull/2232 | 540,676,584 | MDExOlB1bGxSZXF1ZXN0MzU1NDU1MzY4 | 2,232 | Keep even the first of the special tokens intact while lowercasing | {
"login": "dirkgr",
"id": 920638,
"node_id": "MDQ6VXNlcjkyMDYzOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/920638?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dirkgr",
"html_url": "https://github.com/dirkgr",
"followers_url": "https://api.github.com/users/dirkgr/followers",
"following_url": "https://api.github.com/users/dirkgr/following{/other_user}",
"gists_url": "https://api.github.com/users/dirkgr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dirkgr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dirkgr/subscriptions",
"organizations_url": "https://api.github.com/users/dirkgr/orgs",
"repos_url": "https://api.github.com/users/dirkgr/repos",
"events_url": "https://api.github.com/users/dirkgr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dirkgr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2232?src=pr&el=h1) Report\n> Merging [#2232](https://codecov.io/gh/huggingface/transformers/pull/2232?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a5a06a851e1da79138e53978aa079a093f243dde?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2232?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2232 +/- ##\n=======================================\n Coverage 81.43% 81.43% \n=======================================\n Files 122 122 \n Lines 18338 18338 \n=======================================\n Hits 14933 14933 \n Misses 3405 3405\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2232?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2232/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `90.49% <100%> (ΓΈ)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2232?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2232?src=pr&el=footer). Last update [a5a06a8...06b022d](https://codecov.io/gh/huggingface/transformers/pull/2232?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"> This fixes #2220. The order of `all_special_tokens` is random, and the first one of these will get broken by the lowercasing. There are 5 special tokens, so you have a 1 in 5 chance of hitting the problem.\r\n\r\nYou're right! Good job @dirkgr :-)",
"Great, thanks a lot for that @dirkgr. I've added a test in fb393ad9945f66b081f88b81b90a2974d81e9601 to make sure this doesn't happen again."
] | 1,576 | 1,576 | 1,576 | CONTRIBUTOR | null | This fixes #2220. The order of `all_special_tokens` is random, and the first one of these will get broken by the lowercasing. There are 5 special tokens, so you have a 1 in 5 chance of hitting the problem. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2232/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2232/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2232",
"html_url": "https://github.com/huggingface/transformers/pull/2232",
"diff_url": "https://github.com/huggingface/transformers/pull/2232.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2232.patch",
"merged_at": 1576859384000
} |
https://api.github.com/repos/huggingface/transformers/issues/2231 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2231/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2231/comments | https://api.github.com/repos/huggingface/transformers/issues/2231/events | https://github.com/huggingface/transformers/pull/2231 | 540,639,657 | MDExOlB1bGxSZXF1ZXN0MzU1NDIyNTY2 | 2,231 | [http] customizable requests user-agent | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2231?src=pr&el=h1) Report\n> Merging [#2231](https://codecov.io/gh/huggingface/transformers/pull/2231?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a5a06a851e1da79138e53978aa079a093f243dde?src=pr&el=desc) will **decrease** coverage by `<.01%`.\n> The diff coverage is `77.77%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2231?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2231 +/- ##\n==========================================\n- Coverage 81.43% 81.42% -0.01% \n==========================================\n Files 122 122 \n Lines 18338 18348 +10 \n==========================================\n+ Hits 14933 14940 +7 \n- Misses 3405 3408 +3\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2231?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2231/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2ZpbGVfdXRpbHMucHk=) | `71.36% <77.77%> (-0.07%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2231?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2231?src=pr&el=footer). Last update [a5a06a8...15d897f](https://codecov.io/gh/huggingface/transformers/pull/2231?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,576 | 1,577 | 1,576 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2231/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2231/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2231",
"html_url": "https://github.com/huggingface/transformers/pull/2231",
"diff_url": "https://github.com/huggingface/transformers/pull/2231.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2231.patch",
"merged_at": 1576834090000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2230 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2230/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2230/comments | https://api.github.com/repos/huggingface/transformers/issues/2230/events | https://github.com/huggingface/transformers/issues/2230 | 540,520,796 | MDU6SXNzdWU1NDA1MjA3OTY= | 2,230 | what is the most efficient way to store all hidden layers' weights? | {
"login": "vr25",
"id": 22553367,
"node_id": "MDQ6VXNlcjIyNTUzMzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/22553367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vr25",
"html_url": "https://github.com/vr25",
"followers_url": "https://api.github.com/users/vr25/followers",
"following_url": "https://api.github.com/users/vr25/following{/other_user}",
"gists_url": "https://api.github.com/users/vr25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vr25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vr25/subscriptions",
"organizations_url": "https://api.github.com/users/vr25/orgs",
"repos_url": "https://api.github.com/users/vr25/repos",
"events_url": "https://api.github.com/users/vr25/events{/privacy}",
"received_events_url": "https://api.github.com/users/vr25/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"PyTorch has its [own saving utility](https://pytorch.org/tutorials/beginner/saving_loading_models.html): `torch.save`, which sounds good for your use case as you can easily save/load the tensors. It's based on pickle.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,576 | 1,582 | 1,582 | NONE | null | Hi,
I am following this [post](https://mccormickml.com/2019/05/14/BERT-word-embeddings-tutorial/) for getting all 12 hidden layers' weights for every token in a sentence.
Suppose I have a short text with 2 sentences: `He stole money today. He is fishing on the Mississippi riverbank.`
I want to store, for all 5 + 8 = 13 tokens, the hidden states from all 12 layers, where each tensor's size is 768. So I will have 13 x 12 = 156 tensors.
I want to save all the weights in a file, and I am wondering if I should use the `pickle` or `hdf5` format (I am working with long text documents). I am planning to separate the two sentences by a blank line; please suggest any better ways to do it.
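Following the torch.save suggestion in the replies, here is a minimal sketch (the model name and the output_hidden_states flag are assumptions about the setup):
```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states=True)
model.eval()

text = "He stole money today. He is fishing on the Mississippi riverbank."
input_ids = tokenizer.encode(text, return_tensors='pt')
with torch.no_grad():
    outputs = model(input_ids)
hidden_states = outputs[-1]  # tuple of 13 tensors: embeddings + 12 layers, each (1, seq_len, 768)

torch.save(hidden_states, 'hidden_states.pt')  # save all layers for all tokens in one file
restored = torch.load('hidden_states.pt')      # load them back later
```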
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2230/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2230/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2229 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2229/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2229/comments | https://api.github.com/repos/huggingface/transformers/issues/2229/events | https://github.com/huggingface/transformers/pull/2229 | 540,456,604 | MDExOlB1bGxSZXF1ZXN0MzU1MjYyNzkw | 2,229 | Minor/basic text fixes | {
"login": "aidankierans",
"id": 31550769,
"node_id": "MDQ6VXNlcjMxNTUwNzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/31550769?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aidankierans",
"html_url": "https://github.com/aidankierans",
"followers_url": "https://api.github.com/users/aidankierans/followers",
"following_url": "https://api.github.com/users/aidankierans/following{/other_user}",
"gists_url": "https://api.github.com/users/aidankierans/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aidankierans/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aidankierans/subscriptions",
"organizations_url": "https://api.github.com/users/aidankierans/orgs",
"repos_url": "https://api.github.com/users/aidankierans/repos",
"events_url": "https://api.github.com/users/aidankierans/events{/privacy}",
"received_events_url": "https://api.github.com/users/aidankierans/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2229?src=pr&el=h1) Report\n> Merging [#2229](https://codecov.io/gh/huggingface/transformers/pull/2229?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a1f1dce0ae511ef7766c6b6a8f5ebf9118279e73?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2229?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2229 +/- ##\n=======================================\n Coverage 81.47% 81.47% \n=======================================\n Files 122 122 \n Lines 18344 18344 \n=======================================\n Hits 14946 14946 \n Misses 3398 3398\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2229?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/2229/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2RhdGEvcHJvY2Vzc29ycy9zcXVhZC5weQ==) | `14.18% <0%> (ΓΈ)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2229?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2229?src=pr&el=footer). Last update [a1f1dce...70dbca5](https://codecov.io/gh/huggingface/transformers/pull/2229?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks!"
] | 1,576 | 1,576 | 1,576 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2229/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2229/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2229",
"html_url": "https://github.com/huggingface/transformers/pull/2229",
"diff_url": "https://github.com/huggingface/transformers/pull/2229.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2229.patch",
"merged_at": 1576790599000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2228 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2228/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2228/comments | https://api.github.com/repos/huggingface/transformers/issues/2228/events | https://github.com/huggingface/transformers/issues/2228 | 540,431,341 | MDU6SXNzdWU1NDA0MzEzNDE= | 2,228 | Trouble loading Albert model | {
"login": "jswift24",
"id": 1891204,
"node_id": "MDQ6VXNlcjE4OTEyMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1891204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jswift24",
"html_url": "https://github.com/jswift24",
"followers_url": "https://api.github.com/users/jswift24/followers",
"following_url": "https://api.github.com/users/jswift24/following{/other_user}",
"gists_url": "https://api.github.com/users/jswift24/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jswift24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jswift24/subscriptions",
"organizations_url": "https://api.github.com/users/jswift24/orgs",
"repos_url": "https://api.github.com/users/jswift24/repos",
"events_url": "https://api.github.com/users/jswift24/events{/privacy}",
"received_events_url": "https://api.github.com/users/jswift24/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, that's an error from the docs! I just fixed it with 33adab2. The doc should be updated now, thanks for raising this issue.",
"Wow, that was fast. Thank you!"
] | 1,576 | 1,576 | 1,576 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): Albert
Language I am using the model on (English, Chinese....): English
The problem arises when using:
* [x] the official example scripts: (give details):
* [ ] my own modified scripts: (give details)
The task I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details)
Using example from docs: https://huggingface.co/transformers/model_doc/albert.html
## To Reproduce
Trying to load the Albert model using the code below:
```
import tensorflow as tf
from transformers import AlbertTokenizer
tokenizer = AlbertTokenizer.from_pretrained('bert-base-uncased')
```
Getting the following error:
```
Traceback (most recent call last):
File "<ipython-input-4-56254e5f4b51>", line 3, in <module>
tokenizer = AlbertTokenizer.from_pretrained('bert-base-uncased')
File "c:\users\admin\appdata\local\programs\python\python37\lib\site-packages\transformers\tokenization_utils.py", line 302, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "c:\users\admin\appdata\local\programs\python\python37\lib\site-packages\transformers\tokenization_utils.py", line 437, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "c:\users\admin\appdata\local\programs\python\python37\lib\site-packages\transformers\tokenization_albert.py", line 90, in __init__
self.sp_model.Load(vocab_file)
File "c:\users\admin\appdata\local\programs\python\python37\lib\site-packages\sentencepiece.py", line 118, in Load
return _sentencepiece.SentencePieceProcessor_Load(self, filename)
RuntimeError: Internal: C:\projects\sentencepiece\src\sentencepiece_processor.cc(73) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
```
Note, the same code works for bert instead of albert:
> from transformers import *
> tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
>
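For reference, the corrected call from the fixed docs should point at an ALBERT checkpoint rather than a BERT one; the exact identifier below is an assumption:
```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v1')  # an ALBERT checkpoint, not 'bert-base-uncased'
```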
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
Looking for the ALBERT model to load without errors
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS: Windows 10
* Python version: 3.7
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): pip install transformers (fresh install today 12/19/2019)
* Using GPU ?: Yes
* Distributed or parallel setup?: N/A
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2228/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2228/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2227 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2227/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2227/comments | https://api.github.com/repos/huggingface/transformers/issues/2227/events | https://github.com/huggingface/transformers/pull/2227 | 540,373,915 | MDExOlB1bGxSZXF1ZXN0MzU1MTk1MzEw | 2,227 | Add "Train on Valohai" buttons to README | {
"login": "ruksi",
"id": 2681608,
"node_id": "MDQ6VXNlcjI2ODE2MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2681608?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ruksi",
"html_url": "https://github.com/ruksi",
"followers_url": "https://api.github.com/users/ruksi/followers",
"following_url": "https://api.github.com/users/ruksi/following{/other_user}",
"gists_url": "https://api.github.com/users/ruksi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ruksi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ruksi/subscriptions",
"organizations_url": "https://api.github.com/users/ruksi/orgs",
"repos_url": "https://api.github.com/users/ruksi/repos",
"events_url": "https://api.github.com/users/ruksi/events{/privacy}",
"received_events_url": "https://api.github.com/users/ruksi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2227?src=pr&el=h1) Report\n> Merging [#2227](https://codecov.io/gh/huggingface/transformers/pull/2227?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/62c1fc3c1ecdfab787ee3c34d1ec1eba65c18877?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2227?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2227 +/- ##\n=======================================\n Coverage 81.47% 81.47% \n=======================================\n Files 122 122 \n Lines 18344 18344 \n=======================================\n Hits 14946 14946 \n Misses 3398 3398\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2227?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2227?src=pr&el=footer). Last update [62c1fc3...6bfc181](https://codecov.io/gh/huggingface/transformers/pull/2227?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Big fan of the images explaining the usage :)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,576 | 1,583 | 1,583 | NONE | null | This pull request adds two "Train on Valohai" buttons to README:

and...

When clicked, it will automatically create a project on Valohai and let you run the project examples without any further setup. Effectively the same as the "Deploy to Heroku" button, if you are familiar with that.
## The flow looks like this (after login):




| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2227/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2227/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2227",
"html_url": "https://github.com/huggingface/transformers/pull/2227",
"diff_url": "https://github.com/huggingface/transformers/pull/2227.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2227.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2226 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2226/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2226/comments | https://api.github.com/repos/huggingface/transformers/issues/2226/events | https://github.com/huggingface/transformers/pull/2226 | 540,345,360 | MDExOlB1bGxSZXF1ZXN0MzU1MTcxMzk0 | 2,226 | [REVIEW] Updated an out-of-date comment in run_lm_finetuning.py | {
"login": "TheEdoardo93",
"id": 19664571,
"node_id": "MDQ6VXNlcjE5NjY0NTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/19664571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheEdoardo93",
"html_url": "https://github.com/TheEdoardo93",
"followers_url": "https://api.github.com/users/TheEdoardo93/followers",
"following_url": "https://api.github.com/users/TheEdoardo93/following{/other_user}",
"gists_url": "https://api.github.com/users/TheEdoardo93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TheEdoardo93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheEdoardo93/subscriptions",
"organizations_url": "https://api.github.com/users/TheEdoardo93/orgs",
"repos_url": "https://api.github.com/users/TheEdoardo93/repos",
"events_url": "https://api.github.com/users/TheEdoardo93/events{/privacy}",
"received_events_url": "https://api.github.com/users/TheEdoardo93/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,576 | 1,576 | 1,576 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2226/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2226/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2226",
"html_url": "https://github.com/huggingface/transformers/pull/2226",
"diff_url": "https://github.com/huggingface/transformers/pull/2226.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2226.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2225 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2225/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2225/comments | https://api.github.com/repos/huggingface/transformers/issues/2225/events | https://github.com/huggingface/transformers/pull/2225 | 540,332,806 | MDExOlB1bGxSZXF1ZXN0MzU1MTYwODEx | 2,225 | [REVIEW] Updated comments in run_lm_finetuning.py | {
"login": "TheEdoardo93",
"id": 19664571,
"node_id": "MDQ6VXNlcjE5NjY0NTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/19664571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheEdoardo93",
"html_url": "https://github.com/TheEdoardo93",
"followers_url": "https://api.github.com/users/TheEdoardo93/followers",
"following_url": "https://api.github.com/users/TheEdoardo93/following{/other_user}",
"gists_url": "https://api.github.com/users/TheEdoardo93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TheEdoardo93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheEdoardo93/subscriptions",
"organizations_url": "https://api.github.com/users/TheEdoardo93/orgs",
"repos_url": "https://api.github.com/users/TheEdoardo93/repos",
"events_url": "https://api.github.com/users/TheEdoardo93/events{/privacy}",
"received_events_url": "https://api.github.com/users/TheEdoardo93/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2225?src=pr&el=h1) Report\n> Merging [#2225](https://codecov.io/gh/huggingface/transformers/pull/2225?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8efc6dd544bf1a30d99d4b5abfc5e214699eab2b?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2225?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2225 +/- ##\n=======================================\n Coverage 81.47% 81.47% \n=======================================\n Files 122 122 \n Lines 18344 18344 \n=======================================\n Hits 14946 14946 \n Misses 3398 3398\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2225?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2225?src=pr&el=footer). Last update [8efc6dd...035bfd9](https://codecov.io/gh/huggingface/transformers/pull/2225?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,576 | 1,576 | 1,576 | NONE | null | I've added the DistilBERT and CamemBERT models to the description of the models that can be used for fine-tuning an LM model on a custom dataset. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2225/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2225/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2225",
"html_url": "https://github.com/huggingface/transformers/pull/2225",
"diff_url": "https://github.com/huggingface/transformers/pull/2225.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2225.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2224 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2224/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2224/comments | https://api.github.com/repos/huggingface/transformers/issues/2224/events | https://github.com/huggingface/transformers/pull/2224 | 540,325,818 | MDExOlB1bGxSZXF1ZXN0MzU1MTU0OTA2 | 2,224 | [REVIEW] Removed duplicate XLMConfig, XLMForQuestionAnswering and XLMTokenizer in run_squad.py | {
"login": "TheEdoardo93",
"id": 19664571,
"node_id": "MDQ6VXNlcjE5NjY0NTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/19664571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheEdoardo93",
"html_url": "https://github.com/TheEdoardo93",
"followers_url": "https://api.github.com/users/TheEdoardo93/followers",
"following_url": "https://api.github.com/users/TheEdoardo93/following{/other_user}",
"gists_url": "https://api.github.com/users/TheEdoardo93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TheEdoardo93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheEdoardo93/subscriptions",
"organizations_url": "https://api.github.com/users/TheEdoardo93/orgs",
"repos_url": "https://api.github.com/users/TheEdoardo93/repos",
"events_url": "https://api.github.com/users/TheEdoardo93/events{/privacy}",
"received_events_url": "https://api.github.com/users/TheEdoardo93/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2224?src=pr&el=h1) Report\n> Merging [#2224](https://codecov.io/gh/huggingface/transformers/pull/2224?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8efc6dd544bf1a30d99d4b5abfc5e214699eab2b?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2224?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2224 +/- ##\n=======================================\n Coverage 81.47% 81.47% \n=======================================\n Files 122 122 \n Lines 18344 18344 \n=======================================\n Hits 14946 14946 \n Misses 3398 3398\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2224?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2224?src=pr&el=footer). Last update [8efc6dd...e6a7670](https://codecov.io/gh/huggingface/transformers/pull/2224?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great, thanks @TheEdoardo93 !"
] | 1,576 | 1,576 | 1,576 | NONE | null | Before this PR, in run_squad.py at lines 58-65 were the following:
```python
MODEL_CLASSES = {
    'bert': (BertConfig, BertForQuestionAnswering, BertTokenizer),
    'xlnet': (XLNetConfig, XLNetForQuestionAnswering, XLNetTokenizer),
    'xlm': (XLMConfig, XLMForQuestionAnswering, XLMTokenizer),
    'distilbert': (DistilBertConfig, DistilBertForQuestionAnswering, DistilBertTokenizer),
    'albert': (AlbertConfig, AlbertForQuestionAnswering, AlbertTokenizer),
    'xlm': (XLMConfig, XLMForQuestionAnswering, XLMTokenizer)
}
```
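For context, a duplicate key in a Python dict literal is not an error; the later entry silently overwrites the earlier one, which is why this went unnoticed:
```python
d = {'xlm': 1, 'xlm': 2}
print(d)  # {'xlm': 2}: the first 'xlm' entry is silently discarded
```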
After this PR, I've **removed** the last (key, value) pair in the `MODEL_CLASSES` dictionary because it contains _xlm_ as a key **twice**. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2224/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2224/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2224",
"html_url": "https://github.com/huggingface/transformers/pull/2224",
"diff_url": "https://github.com/huggingface/transformers/pull/2224.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2224.patch",
"merged_at": 1576767057000
} |
https://api.github.com/repos/huggingface/transformers/issues/2223 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2223/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2223/comments | https://api.github.com/repos/huggingface/transformers/issues/2223/events | https://github.com/huggingface/transformers/issues/2223 | 540,280,594 | MDU6SXNzdWU1NDAyODA1OTQ= | 2,223 | Need pretrained XLNet on SQuAD which can be loaded with from_pretrained | {
"login": "karthik19967829",
"id": 35610230,
"node_id": "MDQ6VXNlcjM1NjEwMjMw",
"avatar_url": "https://avatars.githubusercontent.com/u/35610230?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/karthik19967829",
"html_url": "https://github.com/karthik19967829",
"followers_url": "https://api.github.com/users/karthik19967829/followers",
"following_url": "https://api.github.com/users/karthik19967829/following{/other_user}",
"gists_url": "https://api.github.com/users/karthik19967829/gists{/gist_id}",
"starred_url": "https://api.github.com/users/karthik19967829/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/karthik19967829/subscriptions",
"organizations_url": "https://api.github.com/users/karthik19967829/orgs",
"repos_url": "https://api.github.com/users/karthik19967829/repos",
"events_url": "https://api.github.com/users/karthik19967829/events{/privacy}",
"received_events_url": "https://api.github.com/users/karthik19967829/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,576 | 1,582 | 1,582 | NONE | null | ## ❓ Questions & Help
It would be great if someone could share their username/model for XLNet pretrained on SQuAD.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2223/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2222 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2222/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2222/comments | https://api.github.com/repos/huggingface/transformers/issues/2222/events | https://github.com/huggingface/transformers/issues/2222 | 540,160,475 | MDU6SXNzdWU1NDAxNjA0NzU= | 2,222 | Can we remove force_download=True from tests? | {
"login": "aaugustin",
"id": 788910,
"node_id": "MDQ6VXNlcjc4ODkxMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/788910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaugustin",
"html_url": "https://github.com/aaugustin",
"followers_url": "https://api.github.com/users/aaugustin/followers",
"following_url": "https://api.github.com/users/aaugustin/following{/other_user}",
"gists_url": "https://api.github.com/users/aaugustin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaugustin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaugustin/subscriptions",
"organizations_url": "https://api.github.com/users/aaugustin/orgs",
"repos_url": "https://api.github.com/users/aaugustin/repos",
"events_url": "https://api.github.com/users/aaugustin/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaugustin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"After further research, I'm seeing two possible issues.\r\n\r\n**transformers gets files from S3 with https:// URLs, not s3:// URLs.**\r\n\r\nI think we want to preserve the capability to use the library offline; we don't want to require an Internet connection to check the ETag in all cases. So we have two possibilities here:\r\n\r\n1. Accept that, on a machine without access to the Internet, tests may use stale files. I think that's acceptable.\r\n2. Add a `force_check` option; if it's `True` and we can't fetch the ETag, raise an exception instead of using a possibly stale file. I don't think that's worth the effort.\r\n\r\n**files may be truncated if shutil.copyfileobj is interrupted**\r\n\r\nThe whole point of downloading to a temp file and then moving to the final location is to prevent truncation. By using shutil.copyfileobj, you're reducing the risk of truncation, but not eliminating it.\r\n\r\nHere's what I would do:\r\n\r\n- download to a temp file with a random name in the destination folder β to ensure it's on the same disk partition and to prevent an expensive copy\r\n- rename it β renaming is atomic in a practical sense in this context\r\n\r\nI'm not proposing to fetch the Content-Length and compare it with the file on disk because I can't see a situation where we'd get a truncated file.\r\n\r\n----\r\n\r\n**Summary of proposal**\r\n\r\n- harden the \"download to temp file then move\" process as described just above\r\n- remove the `force_download=True` option from tests",
"see related https://github.com/huggingface/transformers/issues/1678"
] | 1,576 | 1,576 | 1,576 | CONTRIBUTOR | null | ## π Feature
I would like to remove `force_download=True` from tests and rely on the cache to keep cached files up to date.
## Motivation
Currently, several tests use `force_download=True` on large model files, e.g.:
https://github.com/huggingface/transformers/blob/9c58b236ef5fbbe5d0cbde4932eb342a73eaa0dc/transformers/tests/modeling_tf_auto_test.py#L49
This prevents caching large models within a test run and across test runs, which is very painful when working on these tests in a local environment. This is the main reason why I'm filing this issue.
In fact, it's so painful that these tests are marked as slow and skipped by default. As a consequence, we're not getting as much value from them as we could. If we downloaded each model only once, perhaps we could run them in CI.
I assume that `force_download=True` was added for robustness, to make sure the cache doesn't contain stale files.
If that's correct, then I believe it can be safely removed, because the current implementation of the cache in `file_utils.py` is sufficiently robust to keep the cache up to date.
The entry point is here:
https://github.com/huggingface/transformers/blob/8efc6dd544bf1a30d99d4b5abfc5e214699eab2b/transformers/configuration_utils.py#L157-L158
which goes here:
https://github.com/huggingface/transformers/blob/8efc6dd544bf1a30d99d4b5abfc5e214699eab2b/transformers/file_utils.py#L192-L196
which always gets an ETag (and, I guess, fails without an Internet connection):
https://github.com/huggingface/transformers/blob/8efc6dd544bf1a30d99d4b5abfc5e214699eab2b/transformers/file_utils.py#L288-L289
As a consequence, you never hit the only situation where the cache may use stale files: if it cannot get an ETag (either because there's no Internet connection, or because the HTTP server doesn't provide an ETag).
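To make that mechanism concrete, here is a minimal sketch of the ETag-based freshness check (illustrative only — the function name and the `.etag` metadata file are simplifications, not the actual `file_utils.py` code):

```python
import os
import requests

def cache_is_fresh(url, cache_file):
    # HEAD request to fetch the current ETag; this is the step that
    # requires an Internet connection, as noted above
    etag = requests.head(url, allow_redirects=True).headers.get("ETag")
    meta_file = cache_file + ".etag"
    if etag is None or not os.path.exists(meta_file):
        return False
    with open(meta_file) as f:
        return f.read().strip() == etag
```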
(As a side note, file_utils.py is needlessly complex, given that all transformers files are stored on S3.)
Am I missing a reason for `force_download=True`? Did you add it because you encountered another issue? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2222/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2222/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2221 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2221/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2221/comments | https://api.github.com/repos/huggingface/transformers/issues/2221/events | https://github.com/huggingface/transformers/pull/2221 | 540,150,776 | MDExOlB1bGxSZXF1ZXN0MzU1MDA3MzAz | 2,221 | Updated typo on the link | {
"login": "ejarkm",
"id": 20713251,
"node_id": "MDQ6VXNlcjIwNzEzMjUx",
"avatar_url": "https://avatars.githubusercontent.com/u/20713251?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ejarkm",
"html_url": "https://github.com/ejarkm",
"followers_url": "https://api.github.com/users/ejarkm/followers",
"following_url": "https://api.github.com/users/ejarkm/following{/other_user}",
"gists_url": "https://api.github.com/users/ejarkm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ejarkm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ejarkm/subscriptions",
"organizations_url": "https://api.github.com/users/ejarkm/orgs",
"repos_url": "https://api.github.com/users/ejarkm/repos",
"events_url": "https://api.github.com/users/ejarkm/events{/privacy}",
"received_events_url": "https://api.github.com/users/ejarkm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2221?src=pr&el=h1) Report\n> Merging [#2221](https://codecov.io/gh/huggingface/transformers/pull/2221?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8efc6dd544bf1a30d99d4b5abfc5e214699eab2b?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2221?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2221 +/- ##\n=======================================\n Coverage 81.47% 81.47% \n=======================================\n Files 122 122 \n Lines 18344 18344 \n=======================================\n Hits 14946 14946 \n Misses 3398 3398\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2221?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2221?src=pr&el=footer). Last update [8efc6dd...f9dbf62](https://codecov.io/gh/huggingface/transformers/pull/2221?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great, nice catch!"
] | 1,576 | 1,576 | 1,576 | CONTRIBUTOR | null | Updated documentation due to typo | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2221/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2221",
"html_url": "https://github.com/huggingface/transformers/pull/2221",
"diff_url": "https://github.com/huggingface/transformers/pull/2221.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2221.patch",
"merged_at": 1576766205000
} |
https://api.github.com/repos/huggingface/transformers/issues/2220 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2220/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2220/comments | https://api.github.com/repos/huggingface/transformers/issues/2220/events | https://github.com/huggingface/transformers/issues/2220 | 540,022,572 | MDU6SXNzdWU1NDAwMjI1NzI= | 2,220 | tokenizer of bert-base-uncased gives an incorrect split | {
"login": "kangxin",
"id": 4372924,
"node_id": "MDQ6VXNlcjQzNzI5MjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4372924?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kangxin",
"html_url": "https://github.com/kangxin",
"followers_url": "https://api.github.com/users/kangxin/followers",
"following_url": "https://api.github.com/users/kangxin/following{/other_user}",
"gists_url": "https://api.github.com/users/kangxin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kangxin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kangxin/subscriptions",
"organizations_url": "https://api.github.com/users/kangxin/orgs",
"repos_url": "https://api.github.com/users/kangxin/repos",
"events_url": "https://api.github.com/users/kangxin/events{/privacy}",
"received_events_url": "https://api.github.com/users/kangxin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"As you can see, I'm not able to reproduce your bug, therefore **in my environment it works as expected without bugs**. Here the source code I've used (the same as you):\r\n```\r\nPython 3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) \r\n[GCC 7.3.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import transformers\r\n>>> from transformers import BertTokenizer\r\n>>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n>>> text = \"[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]\"\r\n>>> tokenized_text = tokenizer.tokenize(text)\r\n>>> tokenized_text\r\n['[CLS]', 'who', 'was', 'jim', 'henson', '?', '[SEP]', 'jim', 'henson', 'was', 'a', 'puppet', '##eer', '[SEP]']\r\n>>> ' '.join(tokenized_text)\r\n'[CLS] who was jim henson ? [SEP] jim henson was a puppet ##eer [SEP]' \r\n```\r\n\r\nMy environment specifications are the following:\r\n- Python: **3.6.9**\r\n- OS: **Ubuntu 16.04**\r\n- Transformers: **2.2.2** (installed with `pip install --upgrade transformers`)\r\n- PyTorch: **1.3.1**\r\n- TensorFlow: **2.0**\r\n\r\nPlease specify your environment in order to understand why in your case it doesn't work.\r\n\r\nUPDATE: I've just tested also with Transformers **2.1.1** and **2.0.0** and it works! Are you using `pytorch-transformers` or even `pytorch-pretrained-bert`?\r\n\r\n> I found the following code gives an incorrect split\r\n> \r\n> `tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')`\r\n> `text = \"[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]\"`\r\n> `tokenized_text = tokenizer.tokenize(text)`\r\n> `print(' '.join(tokenized_text))`\r\n> \r\n> > [ cl ##s ] who was jim henson ? [ sep ] jim henson was a puppet ##eer [ sep ]\r\n> \r\n> The correct one, based on the Quick tour example, seems to be\r\n> \r\n> > [CLS] who was jim henson ? [SEP] jim heson was a puppet ##eer [SEP]",
"This is an issue that is fixed in 2.2.2. It was present in earlier 2.2.1. If you update to the latest versions, it should be fixed. See https://github.com/huggingface/transformers/issues/2155",
"I am actually seeing that the behavior is now non-deterministic:\r\n```\r\n$ python -c 'import transformers; print(len(transformers.BertTokenizer.from_pretrained(\"bert-base-uncased\").tokenize(\"A, [MASK] AllenNLP sentence.\")))'\r\n8\r\n$ python -c 'import transformers; print(len(transformers.BertTokenizer.from_pretrained(\"bert-base-uncased\").tokenize(\"A, [MASK] AllenNLP sentence.\")))'\r\n8\r\n$ python -c 'import transformers; print(len(transformers.BertTokenizer.from_pretrained(\"bert-base-uncased\").tokenize(\"A, [MASK] AllenNLP sentence.\")))'\r\n10\r\n$ pip freeze | fgrep transformer\r\ntransformers==2.2.2\r\n```",
"I proposed a fix in #2232."
] | 1,576 | 1,576 | 1,576 | NONE | null | I found that the following code gives an incorrect split:
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = tokenizer.tokenize(text)
print(' '.join(tokenized_text))
```
> [ cl ##s ] who was jim henson ? [ sep ] jim henson was a puppet ##eer [ sep ]
The correct output, based on the Quick Tour example, seems to be:
> [CLS] who was jim henson ? [SEP] jim henson was a puppet ##eer [SEP] | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2220/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2220/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2219 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2219/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2219/comments | https://api.github.com/repos/huggingface/transformers/issues/2219/events | https://github.com/huggingface/transformers/issues/2219 | 540,013,521 | MDU6SXNzdWU1NDAwMTM1MjE= | 2,219 | When i run the script run_tf_ner.py, i got ValueError: Expected floating point type, got <dtype: 'int32'>. | {
"login": "zhipengChen",
"id": 13817269,
"node_id": "MDQ6VXNlcjEzODE3MjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/13817269?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhipengChen",
"html_url": "https://github.com/zhipengChen",
"followers_url": "https://api.github.com/users/zhipengChen/followers",
"following_url": "https://api.github.com/users/zhipengChen/following{/other_user}",
"gists_url": "https://api.github.com/users/zhipengChen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhipengChen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhipengChen/subscriptions",
"organizations_url": "https://api.github.com/users/zhipengChen/orgs",
"repos_url": "https://api.github.com/users/zhipengChen/repos",
"events_url": "https://api.github.com/users/zhipengChen/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhipengChen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"> I have tried tf 2.0.0a0, 2.0.0b0, 2.0.0b1, but the same error was reported.\r\n> \r\n> ## Questions & Help\r\n> Traceback (most recent call last):\r\n> File \"run_tf_ner.py\", line 615, in \r\n> app.run(main)\r\n> File \"/home/zpchen/anaconda7/lib/python3.6/site-packages/absl/app.py\", line 299, in run\r\n> _run_main(main, args)\r\n> File \"/home/zpchen/anaconda7/lib/python3.6/site-packages/absl/app.py\", line 250, in _run_main\r\n> sys.exit(main(argv))\r\n> File \"run_tf_ner.py\", line 517, in main\r\n> cache_dir=args['cache_dir'] if args['cache_dir'] else None)\r\n> File \"/home/zpchen/transformers-master/transformers/modeling_tf_utils.py\", line 303, in from_pretrained\r\n> ret = model(model.dummy_inputs, training=False) # build the network with dummy inputs\r\n> File \"/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 712, in **call**\r\n> outputs = self.call(inputs, *args, **kwargs)\r\n> File \"/home/zpchen/transformers-master/transformers/modeling_tf_bert.py\", line 1011, in call\r\n> outputs = self.bert(inputs, **kwargs)\r\n> File \"/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 712, in **call**\r\n> outputs = self.call(inputs, *args, **kwargs)\r\n> File \"/home/zpchen/transformers-master/transformers/modeling_tf_bert.py\", line 547, in call\r\n> embedding_output = self.embeddings([input_ids, position_ids, token_type_ids, inputs_embeds], training=training)\r\n> File \"/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 709, in **call**\r\n> self._maybe_build(inputs)\r\n> File \"/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 1966, in _maybe_build\r\n> self.build(input_shapes)\r\n> File \"/home/zpchen/transformers-master/transformers/modeling_tf_bert.py\", line 122, in build\r\n> initializer=get_initializer(self.initializer_range))\r\n> File \"/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 389, in add_weight\r\n> aggregation=aggregation)\r\n> File \"/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/training/tracking/base.py\", line 713, in _add_variable_with_custom_getter\r\n> **kwargs_for_getter)\r\n> File \"/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer_utils.py\", line 154, in make_variable\r\n> shape=variable_shape if variable_shape else None)\r\n> File \"/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/ops/variables.py\", line 260, in **call**\r\n> return cls._variable_v1_call(*args, **kwargs)\r\n> File \"/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/ops/variables.py\", line 221, in _variable_v1_call\r\n> shape=shape)\r\n> File \"/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/ops/variables.py\", line 60, in getter\r\n> return captured_getter(captured_previous, **kwargs)\r\n> File \"/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/distribute/distribute_lib.py\", line 1250, in creator_with_resource_vars\r\n> return self._create_variable(*args, **kwargs)\r\n> File \"/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/distribute/one_device_strategy.py\", line 76, in _create_variable\r\n> return next_creator(*args, **kwargs)\r\n> File \"/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/ops/variables.py\", line 199, in \r\n> 
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)\r\n> File \"/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py\", line 2502, in default_variable_creator\r\n> shape=shape)\r\n> File \"/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/ops/variables.py\", line 264, in **call**\r\n> return super(VariableMetaclass, cls).**call**(*args, **kwargs)\r\n> File \"/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py\", line 464, in **init**\r\n> shape=shape)\r\n> File \"/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py\", line 608, in _init_from_args\r\n> initial_value() if init_from_fn else initial_value,\r\n> File \"/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer_utils.py\", line 134, in \r\n> init_val = lambda: initializer(shape, dtype=dtype)\r\n> File \"/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/ops/init_ops_v2.py\", line 341, in **call**\r\n> dtype = _assert_float_dtype(dtype)\r\n> File \"/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/ops/init_ops_v2.py\", line 769, in _assert_float_dtype\r\n> raise ValueError(\"Expected floating point type, got %s.\" % dtype)\r\n> ValueError: Expected floating point type, got <dtype: 'int32'>.\r\n\r\nDo you try to look into [this StackOverflow question](https://stackoverflow.com/questions/43798817/tensorflow-valueerror-expected-non-integer-got-dtype-int32) and #1780 ?",
"Yes, But I also get this error 'AttributeError: module 'tensorflow.python.keras.api._v2.keras.layers' has no attribute 'LayerNormalization'. ",
"> Yes, But I also get this error 'AttributeError: module 'tensorflow.python.keras.api._v2.keras.layers' has no attribute 'LayerNormalization'.\r\n\r\n[Here](https://github.com/bryanlimy/tf2-transformer-chatbot/issues/4) they say to update TensorFlow 2.0 version from `alpha-0` to `beta-0`. But I remember that when you update TF you encounter another problem. Can you give a try? `pip install tensorflow==2.0.0-beta0`",
"> > Yes, But I also get this error 'AttributeError: module 'tensorflow.python.keras.api._v2.keras.layers' has no attribute 'LayerNormalization'.\r\n> \r\n> [Here](https://github.com/bryanlimy/tf2-transformer-chatbot/issues/4) they say to update TensorFlow 2.0 version from `alpha-0` to `beta-0`. But I remember that when you update TF you encounter another problem. Can you give a try? `pip install tensorflow==2.0.0-beta0`\r\n\r\nI can try. But I got \"ValueError: Expected floating point type, got <dtype: 'int32'>.\" previously when i use 2.0.0-beta0.",
"You can try with the **latest** version of TensorFlow 2.X. You can install it through `pip install tensorflow==2.1.0-rc1`. Keep us updated\r\n\r\n> > > Yes, But I also get this error 'AttributeError: module 'tensorflow.python.keras.api._v2.keras.layers' has no attribute 'LayerNormalization'.\r\n> > \r\n> > \r\n> > [Here](https://github.com/bryanlimy/tf2-transformer-chatbot/issues/4) they say to update TensorFlow 2.0 version from `alpha-0` to `beta-0`. But I remember that when you update TF you encounter another problem. Can you give a try? `pip install tensorflow==2.0.0-beta0`\r\n> \r\n> I can try. But I got \"ValueError: Expected floating point type, got <dtype: 'int32'>.\" previously when i use 2.0.0-beta0.",
"Thank you. I will try.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> Thank you. I will try.\r\n\r\nhave you solved this \"ValueError: Expected floating point type, got <dtype: 'int32'>.\" problem?"
] | 1,576 | 1,598 | 1,582 | NONE | null | I have tried TF 2.0.0a0, 2.0.0b0, and 2.0.0b1, but the same error is reported each time.
## β Questions & Help
```
Traceback (most recent call last):
File "run_tf_ner.py", line 615, in <module>
app.run(main)
File "/home/zpchen/anaconda7/lib/python3.6/site-packages/absl/app.py", line 299, in run
_run_main(main, args)
File "/home/zpchen/anaconda7/lib/python3.6/site-packages/absl/app.py", line 250, in _run_main
sys.exit(main(argv))
File "run_tf_ner.py", line 517, in main
cache_dir=args['cache_dir'] if args['cache_dir'] else None)
File "/home/zpchen/transformers-master/transformers/modeling_tf_utils.py", line 303, in from_pretrained
ret = model(model.dummy_inputs, training=False) # build the network with dummy inputs
File "/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 712, in __call__
outputs = self.call(inputs, *args, **kwargs)
File "/home/zpchen/transformers-master/transformers/modeling_tf_bert.py", line 1011, in call
outputs = self.bert(inputs, **kwargs)
File "/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 712, in __call__
outputs = self.call(inputs, *args, **kwargs)
File "/home/zpchen/transformers-master/transformers/modeling_tf_bert.py", line 547, in call
embedding_output = self.embeddings([input_ids, position_ids, token_type_ids, inputs_embeds], training=training)
File "/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 709, in __call__
self._maybe_build(inputs)
File "/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1966, in _maybe_build
self.build(input_shapes)
File "/home/zpchen/transformers-master/transformers/modeling_tf_bert.py", line 122, in build
initializer=get_initializer(self.initializer_range))
File "/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 389, in add_weight
aggregation=aggregation)
File "/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/training/tracking/base.py", line 713, in _add_variable_with_custom_getter
**kwargs_for_getter)
File "/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 154, in make_variable
shape=variable_shape if variable_shape else None)
File "/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 260, in __call__
return cls._variable_v1_call(*args, **kwargs)
File "/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 221, in _variable_v1_call
shape=shape)
File "/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 60, in getter
return captured_getter(captured_previous, **kwargs)
File "/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/distribute/distribute_lib.py", line 1250, in creator_with_resource_vars
return self._create_variable(*args, **kwargs)
File "/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/distribute/one_device_strategy.py", line 76, in _create_variable
return next_creator(*args, **kwargs)
File "/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 199, in <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 2502, in default_variable_creator
shape=shape)
File "/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 264, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 464, in __init__
shape=shape)
File "/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 608, in _init_from_args
initial_value() if init_from_fn else initial_value,
File "/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 134, in <lambda>
init_val = lambda: initializer(shape, dtype=dtype)
File "/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/ops/init_ops_v2.py", line 341, in __call__
dtype = _assert_float_dtype(dtype)
File "/home/zpchen/anaconda7/lib/python3.6/site-packages/tensorflow/python/ops/init_ops_v2.py", line 769, in _assert_float_dtype
raise ValueError("Expected floating point type, got %s." % dtype)
ValueError: Expected floating point type, got <dtype: 'int32'>.
```
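As discussed in the comments above, this error shows up on TF 2.0 alpha/beta pre-releases (which also lack pieces such as `keras.layers.LayerNormalization`), and the suggested fix is to upgrade, e.g. `pip install tensorflow==2.1.0-rc1`. A quick version sanity check, as a sketch:

```python
import tensorflow as tf

# a 2.0 alpha/beta build here is the likely culprit
print(tf.__version__)
```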
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2219/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2219/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2218 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2218/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2218/comments | https://api.github.com/repos/huggingface/transformers/issues/2218/events | https://github.com/huggingface/transformers/pull/2218 | 540,002,809 | MDExOlB1bGxSZXF1ZXN0MzU0ODg0NDcz | 2,218 | corrected typo in example for t5 model input argument | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2218?src=pr&el=h1) Report\n> Merging [#2218](https://codecov.io/gh/huggingface/transformers/pull/2218?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8efc6dd544bf1a30d99d4b5abfc5e214699eab2b?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2218?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2218 +/- ##\n=======================================\n Coverage 81.47% 81.47% \n=======================================\n Files 122 122 \n Lines 18344 18344 \n=======================================\n Hits 14946 14946 \n Misses 3398 3398\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2218?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2218/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3Q1LnB5) | `96.55% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2218/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3Q1LnB5) | `81.22% <ΓΈ> (ΓΈ)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2218?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2218?src=pr&el=footer). Last update [8efc6dd...e280aa8](https://codecov.io/gh/huggingface/transformers/pull/2218?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Indeed, nice catch!"
] | 1,576 | 1,576 | 1,576 | MEMBER | null | For the T5Model the argument name of the input has to be specified explicitly since the forward function is defined as
`def forward(self, **kwargs):`
and can therefore only handle keyword arguments such as `input_ids=input_ids`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2218/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2218/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2218",
"html_url": "https://github.com/huggingface/transformers/pull/2218",
"diff_url": "https://github.com/huggingface/transformers/pull/2218.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2218.patch",
"merged_at": 1576766096000
} |
https://api.github.com/repos/huggingface/transformers/issues/2217 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2217/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2217/comments | https://api.github.com/repos/huggingface/transformers/issues/2217/events | https://github.com/huggingface/transformers/pull/2217 | 539,946,892 | MDExOlB1bGxSZXF1ZXN0MzU0ODM2OTEw | 2,217 | Support running tests in parallel | {
"login": "aaugustin",
"id": 788910,
"node_id": "MDQ6VXNlcjc4ODkxMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/788910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaugustin",
"html_url": "https://github.com/aaugustin",
"followers_url": "https://api.github.com/users/aaugustin/followers",
"following_url": "https://api.github.com/users/aaugustin/following{/other_user}",
"gists_url": "https://api.github.com/users/aaugustin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaugustin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaugustin/subscriptions",
"organizations_url": "https://api.github.com/users/aaugustin/orgs",
"repos_url": "https://api.github.com/users/aaugustin/repos",
"events_url": "https://api.github.com/users/aaugustin/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaugustin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2217?src=pr&el=h1) Report\n> Merging [#2217](https://codecov.io/gh/huggingface/transformers/pull/2217?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ac1b449cc938bb34bc9021feff599cfd3b2376ae?src=pr&el=desc) will **increase** coverage by `0.15%`.\n> The diff coverage is `59.43%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2217?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2217 +/- ##\n==========================================\n+ Coverage 79.82% 79.97% +0.15% \n==========================================\n Files 131 131 \n Lines 19496 19427 -69 \n==========================================\n- Hits 15562 15537 -25 \n+ Misses 3934 3890 -44\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2217?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [transformers/tests/utils.py](https://codecov.io/gh/huggingface/transformers/pull/2217/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3V0aWxzLnB5) | `94.11% <100%> (+0.36%)` | :arrow_up: |\n| [transformers/tests/modeling\\_distilbert\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2217/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2Rpc3RpbGJlcnRfdGVzdC5weQ==) | `99.21% <100%> (ΓΈ)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_distilbert\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2217/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2Rpc3RpbGJlcnRfdGVzdC5weQ==) | `99.08% <100%> (ΓΈ)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_auto\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2217/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2F1dG9fdGVzdC5weQ==) | `40.67% <11.11%> (ΓΈ)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_albert\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2217/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2FsYmVydF90ZXN0LnB5) | `96.19% <33.33%> (+1.74%)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_transfo\\_xl\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2217/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX3RyYW5zZm9feGxfdGVzdC5weQ==) | `95.87% <50%> (+1.87%)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_openai\\_gpt\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2217/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX29wZW5haV9ncHRfdGVzdC5weQ==) | `96.39% <50%> (+1.65%)` | :arrow_up: |\n| [transformers/tests/modeling\\_transfo\\_xl\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2217/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RyYW5zZm9feGxfdGVzdC5weQ==) | `96.26% <50%> (+1.71%)` | :arrow_up: |\n| [transformers/tests/modeling\\_xlm\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2217/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3hsbV90ZXN0LnB5) | `97.36% <50%> (+1.23%)` | :arrow_up: |\n| [transformers/tests/modeling\\_roberta\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2217/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3JvYmVydGFfdGVzdC5weQ==) | `76.15% <50%> (+0.96%)` | :arrow_up: |\n| ... 
and [16 more](https://codecov.io/gh/huggingface/transformers/pull/2217/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2217?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2217?src=pr&el=footer). Last update [ac1b449...b8e924e](https://codecov.io/gh/huggingface/transformers/pull/2217?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I'm getting an unexpected result on Circle CI:\r\n\r\n- TensorFlow tests are about 4 times faster, which makes sense\r\n- PyTorch tests are about 4 times **slower?!**\r\n\r\nI'm talking about the time spent in pytest here, which is shown at the bottom of each test run.\r\n\r\n**Before parallelization**\r\n\r\nhttps://circleci.com/workflow-run/4ca98875-2d5c-438b-951f-4939d2f3cfc9\r\n\r\n04:08 build_py3_tf\r\n00:59 build_py3_torch\r\n\r\n05:21 build_py3_torch_and_tf\r\n\r\n04:30 build_py2_tf\r\n01:12 build_py2_torch\r\n\r\n**After parallelization**\r\n\r\nhttps://circleci.com/workflow-run/5d273bdb-0b4d-4e71-b0f1-272d3f9f72da\r\n\r\n00:56 build_py3_tf\r\n04:02 build_py3_torch\r\n\r\n04:53 build_py3_torch_and_tf\r\n\r\n01:25 build_py2_tf\r\n05:39 build_py2_torch\r\n\r\n----\r\n\r\nEDIT - I thought this might happen if the PyTorch tests do expensive calculations in setUp / setUpClass. Due to how pytest-xdist works, setUp / setUpClass may run multiple times on different CPUs. However, the `--dist=loadfile` option of pytest-xdist doesn't seem to help, so it must be something else.\r\n\r\n----\r\n\r\nEDIT 2 - setting OMP_NUM_THREADS=1 solves this, thanks @mfuntowicz!",
"I'm also facing an issue with `hf_api_test.py`. I'm hitting a HTTP 401 error when running tests in parallel, and so does Circle CI. Are we blocked by a security rule protecting against excessive login requests?\r\n\r\nI'm not a big fan of tests that depend on network services. Often they're flaky.\r\n\r\nEDIT -- this is solved by the `--dist=loadfile` option of pytest-xdist.",
"For future reference, I logged all filesystem read/writes with:\r\n\r\n```\r\nsudo opensnoop -F -n python \\\r\n | grep -v \"$HOME/\\.pyenv\" \\\r\n | grep -v \"$HOME/\\.virtualenvs\" \\\r\n | grep -v \"__pycache__\" \\\r\n | grep -v \"$TMPDIR\" \\\r\n | grep -v \"0x00000000 \\.\" \\\r\n > open.log\r\n```\r\n\r\nwhile running the full test suite:\r\n\r\n```\r\nRUN_SLOW=1 python -m unittest discover -s transformers/tests -p '*_test.py' -t . -v\r\n```\r\n\r\nThis doesn't reveal anything that could collide between tests."
] | 1,576 | 1,576 | 1,576 | CONTRIBUTOR | null | At this point, after `pip install pytest-xdist`, I'm getting a **2.5x speedup** running tests locally on my 2016 MBP (2.9 GHz Quad-Core Intel Core i7):
- `python -m pytest -n auto -s -v ./transformers/tests/` runs in slightly less than 2 minutes
- `python -m pytest -s -v ./transformers/tests/` takes slightly more than 5 minutes
Furthermore, Circle CI gets a **2.15x speedup**, going from [7:30 minutes](https://circleci.com/workflow-run/4ca98875-2d5c-438b-951f-4939d2f3cfc9) to [3:30 minutes](https://circleci.com/workflow-run/ad7094bb-0a0b-404d-ba3f-bd18b37f98bd).
The bottleneck is now the examples, which take a bit less than 3:30 to run, even with parallelization.
This PR adds a new dependency: filelock. You'll need to `pip install -e .` for local development again after it's merged.
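For context, here is a minimal sketch of how filelock can serialize cache writes across pytest-xdist workers (illustrative only; `download_to` is a hypothetical helper, and this is not necessarily the exact code in this PR):

```python
import os
from filelock import FileLock

def fetch_cached(url, cache_path):
    # only one worker downloads; the others block on the lock, then
    # find the file already present and skip the download
    with FileLock(cache_path + ".lock"):
        if not os.path.exists(cache_path):
            download_to(url, cache_path)  # hypothetical download helper
    return cache_path
```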
This is now ready for review.
----
EDIT - test run time jumped up after I rebased on top of master mostly because of #2246. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2217/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2217",
"html_url": "https://github.com/huggingface/transformers/pull/2217",
"diff_url": "https://github.com/huggingface/transformers/pull/2217.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2217.patch",
"merged_at": 1576925664000
} |
https://api.github.com/repos/huggingface/transformers/issues/2216 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2216/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2216/comments | https://api.github.com/repos/huggingface/transformers/issues/2216/events | https://github.com/huggingface/transformers/issues/2216 | 539,855,885 | MDU6SXNzdWU1Mzk4NTU4ODU= | 2,216 | Error while loading Pretrained Enocder and Decoder transformers | {
"login": "anandhperumal",
"id": 12907396,
"node_id": "MDQ6VXNlcjEyOTA3Mzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/12907396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anandhperumal",
"html_url": "https://github.com/anandhperumal",
"followers_url": "https://api.github.com/users/anandhperumal/followers",
"following_url": "https://api.github.com/users/anandhperumal/following{/other_user}",
"gists_url": "https://api.github.com/users/anandhperumal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anandhperumal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anandhperumal/subscriptions",
"organizations_url": "https://api.github.com/users/anandhperumal/orgs",
"repos_url": "https://api.github.com/users/anandhperumal/repos",
"events_url": "https://api.github.com/users/anandhperumal/events{/privacy}",
"received_events_url": "https://api.github.com/users/anandhperumal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"How we can solve this problem you've highlighted? Do you want to open a separate PR? If you want, we can work together on this problem!",
"Sure. Drop me a mail at [email protected]",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,576 | 1,582 | 1,582 | NONE | null | ## π Bug
Model I am using (Bert, XLNet....):
Language I am using the model on (English, Chinese....):
The problem arises when using:
* [ ] the official example scripts: (give details)
* [X ] my own modified scripts: (give details)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ X] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
1. Define a model and save it.
When you first try to save the model, you'll get the error described in #2196 (as a workaround, manually create the `encoder` and `decoder` folders).
2. Try to load the model.
You'll get the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/guest_1/anaconda3/envs/my_env/lib/python3.6/site-packages/transformers/modeling_encoder_decoder.py", line 146, in from_pretrained
encoder_pretrained_model_name_or_path, *model_args, **kwargs_encoder
File "/home/guest_1/anaconda3/envs/my_env/lib/python3.6/site-packages/transformers/modeling_auto.py", line 159, in from_pretrained
"'xlm', 'roberta, 'ctrl'".format(pretrained_model_name_or_path))
ValueError: Unrecognized model identifier in /models/m2m/. Should contains one of 'bert', 'openai-gpt', 'gpt2', 'transfo-xl', 'xlnet', 'xlm', 'roberta, 'ctrl'
```
This is because [here](https://github.com/huggingface/transformers/blob/8efc6dd544bf1a30d99d4b5abfc5e214699eab2b/transformers/modeling_auto.py#L170) the pretrained model name or path is checked for a model-type substring when loading. So rather than saving the subdirectories simply as `encoder` and `decoder`, save them with the model prefix included.
You could do something like this:
```
encoder_dir = self.encoder.base_model_prefix + "encoder"
decoder_dir = self.decoder.base_model_prefix + "decoder"
self.encoder.save_pretrained(os.path.join(save_directory, encoder_dir))
self.decoder.save_pretrained(os.path.join(save_directory, decoder_dir))
```
But another potential bug with the above code is that many of the transformer models share the same `base_model_prefix` (several use `transformer`, for example), so the resulting directory names would still not identify the architecture. You would need to give each model a prefix that names its architecture, as in the sketch below.
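A hedged sketch of one possible fix (the naming scheme here is hypothetical, not an existing API):

```python
import os

# derive a prefix from the class name so that the substring check in
# AutoModel.from_pretrained can recognize it, e.g. BertModel -> "bert"
encoder_dir = self.encoder.__class__.__name__.lower().replace("model", "") + "-encoder"
decoder_dir = self.decoder.__class__.__name__.lower().replace("model", "") + "-decoder"
self.encoder.save_pretrained(os.path.join(save_directory, encoder_dir))
self.decoder.save_pretrained(os.path.join(save_directory, decoder_dir))
```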
## Environment
* OS: ubuntu
* Python version: 3.6
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): 2.2.0
* Using GPU ? doesn't matter
* Distributed of parallel setup ? doesn't matter
* Any other relevant information:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2216/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/2216/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2215 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2215/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2215/comments | https://api.github.com/repos/huggingface/transformers/issues/2215/events | https://github.com/huggingface/transformers/issues/2215 | 539,802,157 | MDU6SXNzdWU1Mzk4MDIxNTc= | 2,215 | Return overflowing tokens if max_length is not given | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,576 | 1,582 | 1,582 | COLLABORATOR | null | ## π Feature
The tokenizer can return overflowing tokens in the `encode_plus` method if 1. `return_overflowing_tokens=True`; and 2. `max_length` is given. I imagine that it could also be useful to allow returning the overflowing tokens when a `max_length` is not given but the input is longer than the model's max_seq_len. As an example: if the input is 600 tokens long and the model supports up to 512, the tokenizer will cut the input down to 512 anyway, so the superfluous 88 tokens could then be returned. A sketch of the behaviour that already works with an explicit `max_length` follows.
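For comparison, a minimal sketch of the existing behaviour when `max_length` is passed explicitly (the value 512 is illustrative):

```python
encoded = tokenizer.encode_plus(text,
                                max_length=512,
                                return_overflowing_tokens=True)
# 'overflowing_tokens' holds whatever was trimmed beyond max_length
print(len(encoded['overflowing_tokens']))
```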
By contrast, the example below shows that, without `max_length`, the overflowing tokens are not returned, even though the input is trimmed. I would expect the trimmed tokens to be returned in the `overflowing_tokens` field.
```python
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
text = 'I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas. I like bananas.'
encoded_inputs = tokenizer.encode_plus(text,
return_tensors='pt',
pad_to_max_length=True,
return_overflowing_tokens=True,
return_special_tokens_mask=True)
print(encoded_inputs.keys())
print(encoded_inputs['input_ids'].size())
# dict_keys(['special_tokens_mask', 'input_ids', 'token_type_ids', 'attention_mask'])
# torch.Size([1, 512])
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2215/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 4
} | https://api.github.com/repos/huggingface/transformers/issues/2215/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2214 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2214/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2214/comments | https://api.github.com/repos/huggingface/transformers/issues/2214/events | https://github.com/huggingface/transformers/issues/2214 | 539,739,617 | MDU6SXNzdWU1Mzk3Mzk2MTc= | 2,214 | XLM run_squad errors with size mismatch | {
"login": "waalge",
"id": 47293755,
"node_id": "MDQ6VXNlcjQ3MjkzNzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/47293755?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/waalge",
"html_url": "https://github.com/waalge",
"followers_url": "https://api.github.com/users/waalge/followers",
"following_url": "https://api.github.com/users/waalge/following{/other_user}",
"gists_url": "https://api.github.com/users/waalge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/waalge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/waalge/subscriptions",
"organizations_url": "https://api.github.com/users/waalge/orgs",
"repos_url": "https://api.github.com/users/waalge/repos",
"events_url": "https://api.github.com/users/waalge/events{/privacy}",
"received_events_url": "https://api.github.com/users/waalge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I've tried different `XLM*` models with and I've obtained the same error you've. I suspect it's broken something into the implementation of `XLM*` models or the .bin file uploaded to AWS S3.\r\nN.B: I've tried to load `xlm-mlm-17-1280` with the usual procedure (i.e. by using `from_pretrained` method) which works as expected in #2043 (about 15 days ago), but now it doesn't work (same error). Therefore, there's something broken for sure.\r\nN.B: **it's not a download problem** itself, I've tried also with `force_download=True` parameter.\r\n\r\nThe stack trace is the following:\r\n```\r\nPython 3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) \r\n[GCC 7.3.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import transformers\r\n/home/<user>/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\r\n/home/<user>/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\r\n/home/<user>/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\r\n/home/<user>/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\r\n/home/<user>/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\r\n/home/<user>/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\r\n>>> from transformers import XLMTokenizer, XLMWithLMHeadModel\r\n>>> tokenizer = XLMTokenizer.from_pretrained('xlm-mlm-ende-1024')\r\nDownloading: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1.44M/1.44M [00:00<00:00, 2.06MB/s]\r\nDownloading: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1.00M/1.00M [00:00<00:00, 1.71MB/s]\r\n>>> model = 
XLMWithLMHeadModel.from_pretrained('xlm-mlm-ende-1024')\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 396/396 [00:00<00:00, 177kB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 835M/835M [01:13<00:00, 11.3MB/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/<user>/Desktop/transformers/transformers/transformers/modeling_utils.py\", line 486, in from_pretrained\r\n model.__class__.__name__, \"\\n\\t\".join(error_msgs)))\r\nRuntimeError: Error(s) in loading state_dict for XLMWithLMHeadModel:\r\n\tsize mismatch for transformer.embeddings.weight: copying a param with shape torch.Size([64699, 1024]) from checkpoint, the shape in current model is torch.Size([30145, 1024]).\r\n\tsize mismatch for pred_layer.proj.weight: copying a param with shape torch.Size([64699, 1024]) from checkpoint, the shape in current model is torch.Size([30145, 1024]).\r\n\tsize mismatch for pred_layer.proj.bias: copying a param with shape torch.Size([64699]) from checkpoint, the shape in current model is torch.Size([30145]).\r\n>>> from transformers import XLMTokenizer, XLMForQuestionAnswering\r\n>>> model = XLMForQuestionAnswering.from_pretrained('xlm-mlm-ende-1024')\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/<user>/Desktop/transformers/transformers/transformers/modeling_utils.py\", line 486, in from_pretrained\r\n model.__class__.__name__, \"\\n\\t\".join(error_msgs)))\r\nRuntimeError: Error(s) in loading state_dict for XLMForQuestionAnswering:\r\n\tsize mismatch for transformer.embeddings.weight: copying a param with shape torch.Size([64699, 1024]) from checkpoint, the shape in current model is torch.Size([30145, 1024]).\r\n>>> model = XLMForQuestionAnswering.from_pretrained('xlm-clm-ende-1024')\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 396/396 [00:00<00:00, 164kB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 835M/835M [01:11<00:00, 11.7MB/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/<user>/Desktop/transformers/transformers/transformers/modeling_utils.py\", line 486, in from_pretrained\r\n model.__class__.__name__, \"\\n\\t\".join(error_msgs)))\r\nRuntimeError: Error(s) in loading state_dict for XLMForQuestionAnswering:\r\n\tsize mismatch for transformer.embeddings.weight: copying a param with shape torch.Size([64699, 1024]) from checkpoint, the shape in current model is torch.Size([30145, 1024]).\r\n```\r\n\r\n> ## Bug\r\n> Model: XLM_mlm_ende_1024\r\n> \r\n> Language: English\r\n> \r\n> Using: the official example scripts: `run_squad.py`",
"Indeed, there seems to be an error that was introduced by #2164. I'm looking into it now. Thanks for raising an issue!",
"Please let me know if 8efc6dd fixes this issue!",
"I've installed Transformers from source (`master` branch) with `pip install git+https://github.com/huggingface/transformers.git` right now, but **it seems to be the same bug**. Is it possible? The stack trace is the same as before. @LysandreJik \r\n\r\n> Please let me know if [8efc6dd](https://github.com/huggingface/transformers/commit/8efc6dd544bf1a30d99d4b5abfc5e214699eab2b) fixes this issue!",
"Hmm could you post a short snippet to reproduce? Running your initial script in my environment doesn't raise any error:\r\n\r\n```py\r\nfrom transformers import XLMWithLMHeadModel\r\nXLMWithLMHeadModel.from_pretrained(\"xlm-mlm-17-1280\")\r\n```\r\n\r\nThe error seems to be fixed on my side",
"I'm trying to use `XLMForQuestionAnswering` model, is it right for `run_squad.py` correct?\r\n```\r\n>>> import transformers\r\n>>> from transformers import XLMForQuestionAnswering\r\n>>> model = XLMForQuestionAnswering.from_pretrained('xlm-mlm-ende-1024', force_download=True)\r\nDownloading: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 396/396 [00:00<00:00, 146kB/s]\r\nDownloading: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 835M/835M [01:16<00:00, 10.9MB/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/vidiemme/Desktop/transformers/transformers/transformers/modeling_utils.py\", line 486, in from_pretrained\r\n model.__class__.__name__, \"\\n\\t\".join(error_msgs)))\r\nRuntimeError: Error(s) in loading state_dict for XLMForQuestionAnswering:\r\n\tsize mismatch for transformer.embeddings.weight: copying a param with shape torch.Size([64699, 1024]) from checkpoint, the shape in current model is torch.Size([30145, 1024]).\r\n```\r\n\r\nN.B: I've tried also your piece of code in my environment but it doesn't work (the same bug as before). How is it possible? I'm using Python 3.6.9, OS Ubuntu 16.04, PyTorch 1.3.1 and TensorFlow 2.0.\r\n\r\n> Hmm could you post a short snippet to reproduce? Running your initial script in my environment doesn't raise any error:\r\n> \r\n> ```python\r\n> from transformers import XLMWithLMHeadModel\r\n> XLMWithLMHeadModel.from_pretrained(\"xlm-mlm-17-1280\")\r\n> ```\r\n> \r\n> The error seems to be fixed on my side",
"Indeed, it doesn't fail on my side either. Is there any way you could go check in your environment, I guess (according to your error trace) following the path:\r\n\r\n```\r\n/home/vidiemme/Desktop/transformers/transformers/transformers/configuration_xlm.py\r\n```\r\n\r\nand telling me if the following lines:\r\n\r\n```py\r\n if \"n_words\" in kwargs:\r\n self.n_words = kwargs[\"n_words\"]\r\n```\r\n\r\nAre on lines 147-148? Just to make sure the install from source worked correctly. Thank you @TheEdoardo93 ",
 N.B: I've tried also">
"> N.B: I've also tried your piece of code in my environment, but it doesn't work (the same bug as before). How is that possible? I'm using Python 3.6.9, OS Ubuntu 16.04, PyTorch 1.3.1 and TensorFlow 2.0.\r\n\r\nHmm okay, I'm looking into it.",
"In the file you've said to me at line 147-148 I've got the following lines:\r\n```\r\n@property\r\n def n_words(self): # For backward compatibility\r\n return self.vocab_size\r\n```\r\nI don't have the lines you've posted above. Therefore, I can say that I haven't installed the Transformers library correctly. How can I do (i.e. install from master after your fix)? Usually I do the following: `pip install git+https://github.com/huggingface/transformers.git` \r\n\r\n> Indeed, it doesn't fail on my side either. Is there any way you could go check in your environment, I guess (according to your error trace) following the path:\r\n> \r\n> ```\r\n> /home/vidiemme/Desktop/transformers/transformers/transformers/configuration_xlm.py\r\n> ```\r\n> \r\n> and telling me if the following lines:\r\n> \r\n> ```python\r\n> if \"n_words\" in kwargs:\r\n> self.n_words = kwargs[\"n_words\"]\r\n> ```\r\n> \r\n> Are on lines 147-148? Just to make sure the install from source worked correctly. Thank you @TheEdoardo93 ",
"Hmm it seems your install from source didn't work. I don't exactly know how your environment is setup, but it looks like you've cloned the repository and the code is running from this clone rather than from the library installed in your environment/virtual environment.\r\n\r\nIf you did clone it in `/home/vidiemme/Desktop/transformers/`, I would just do a `git pull` to update it.",
"**Now it works as expected**! Your [fix](https://github.com/huggingface/transformers/commit/8efc6dd544bf1a30d99d4b5abfc5e214699eab2b) fixes the bug! Great work! You can close this issue for me ;)\r\n\r\nNow we can import both `XLMForQuestionAnswering.from_pretrained('xlm-mlm-ende-1024')` and `XLMWithLMHeadModel.from_pretrained(\"xlm-mlm-17-1280\")` correctly.\r\n\r\n> Hmm it seems your install from source didn't work. I don't exactly know how your environment is setup, but it looks like you've cloned the repository and the code is running from this clone rather than from the library installed in your environment/virtual environment.\r\n> \r\n> If you did clone it in `/home/vidiemme/Desktop/transformers/`, I would just do a `git pull` to update it.",
"Glad to hear that @TheEdoardo93 !",
"A bit late to the party, but I can provide a second confirmation that this error no longer appears.\r\nThanks!",
"PS I don't know where is a useful place to put this but for anyone training XLM on squad....\r\n\r\nThe command above now runs to completion.\r\nIts score is underwhelming but demonstrates some training has been achieved\r\n\r\n```\r\nResults: {'exact': 56.9441816461684, 'f1': 67.90690126118979, 'total': 10570, 'HasAns_exact': 56.9441816461684, 'HasAns_f1': 67.90690126118979, 'HasAns_total': 10570, 'best_exact': 56.9441816461684, 'best_exact_thresh': 0.0, 'best_f1': 67.90690126118979, 'best_f1_thresh': 0.0}\r\n```\r\n"
] | 1,576 | 1,576 | 1,576 | NONE | null | ## π Bug
<!-- Important information -->
Model: XLM_mlm_ende_1024
Language: English
Using: the official example scripts: ``run_squad.py``
## To Reproduce
Steps to reproduce the behavior:
1. install dependencies and download squad v1.1 data; pull, install transformers from github master.
2. run ``run_squad.py`` with the following args
```
--model_type xlm --model_name_or_path xlm-mlm-ende-1024 --do_train --do_eval --train_file ./squad_data/train-v1.1.json --predict_file ./squad_data/dev-v1.1.json --per_gpu_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2.0 --max_seq_length 384 --doc_stride 128 --output_dir ./debug_xlm
```
Ultimate error:
```
size mismatch for transformer.embeddings.weight: copying a param with shape torch.Size([64699, 1024]) from checkpoint, the shape in current model is torch.Size([30145, 1024]).
```
Full input/output below.
Expected behavior: finetune XLM for SQuAD.
## Environment
* OS: OpenSuse 15.0
* Python version: 3.6
* PyTorch version: torch.__version__ = '1.3.1+cpu'
* PyTorch Transformers version (or branch): (just transformers now?) 2.2.2
* Using GPU ? nope
* Distributed or parallel setup? Ummm n/a
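
A quick way to double-check the versions above, and in particular which `transformers` copy the interpreter actually imports (a small illustrative sketch; relevant because a local clone can shadow a pip install):

```python
import torch
import transformers

# If __file__ points into a local clone rather than site-packages,
# Python is not running the version installed by pip.
print(transformers.__version__)  # e.g. 2.2.2
print(transformers.__file__)
print(torch.__version__)         # e.g. 1.3.1+cpu
```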
Relates to previous issues (possibly):
* [I've had with XLM](https://github.com/huggingface/transformers/issues/2038)
* Similar looking [error](https://github.com/huggingface/transformers/issues/594)
## Additional context
```
python ./transformers/examples/run_squad.py --model_type xlm --model_name_or_path xlm-mlm-ende-1024 --do_train --do_eval --train_file ./squad_data/train-v1.1.json --predict_file ./squad_data/dev-v1.1.json --per_gpu_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2.0 --max_seq_length 384 --doc_stride 128 --output_dir ./debug_xlm
12/18/2019 15:47:23 - WARNING - __main__ - Process rank: -1, device: cpu, n_gpu: 0, distributed training: False, 16-bits training: False
12/18/2019 15:47:23 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/xlm-mlm-ende-1024-config.json from cache at /HOME/.cache/torch/transformers/8f689e7cdf34bbebea67ad44ad6a142c9c5144e5c19d989839139e0d47d1ed74.0038e5c2b48fc777632fc95c3d3422203693750b1d0845a511b3bb84ad6d8c29
12/18/2019 15:47:23 - INFO - transformers.configuration_utils - Model config {
"asm": false,
"attention_dropout": 0.1,
"bos_index": 0,
"causal": false,
"dropout": 0.1,
"emb_dim": 1024,
"embed_init_std": 0.02209708691207961,
"end_n_top": 5,
"eos_index": 1,
"finetuning_task": null,
"gelu_activation": true,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"id2lang": {
"0": "de",
"1": "en"
},
"init_std": 0.02,
"is_decoder": false,
"is_encoder": true,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"lang2id": {
"de": 0,
"en": 1
},
"layer_norm_eps": 1e-12,
"mask_index": 5,
"max_position_embeddings": 512,
"max_vocab": -1,
"min_count": 0,
"n_heads": 8,
"n_langs": 2,
"n_layers": 6,
"num_labels": 2,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_index": 2,
"pruned_heads": {},
"same_enc_dec": true,
"share_inout_emb": true,
"sinusoidal_embeddings": false,
"start_n_top": 5,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "first",
"summary_use_proj": true,
"torchscript": false,
"unk_index": 3,
"use_bfloat16": false,
"use_lang_emb": true,
"vocab_size": 30145
}
12/18/2019 15:47:24 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/xlm-mlm-ende-1024-vocab.json from cache at /HOME/.cache/torch/transformers/6771b710c1daf9d51643260fdf576f6353369c3563bf0fb12176c692778dca3f.2c29a4b393decdd458e6a9744fa1d6b533212e4003a4012731d2bc2261dc35f3
12/18/2019 15:47:24 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/xlm-mlm-ende-1024-merges.txt from cache at /HOME/.cache/torch/transformers/85d878ffb1bc2c3395b785d10ce7fc91452780316140d7a26201d7a912483e44.42fa32826c068642fdcf24adbf3ef8158b3b81e210a3d03f3102cf5a899f92a0
12/18/2019 15:47:25 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/xlm-mlm-ende-1024-pytorch_model.bin from cache at /HOME/.cache/torch/transformers/ea4c0bbee310b490decb2b608a4dbc8ed9f2e4a103dd729ce183770b0fef698b.119d74257b953e5d50d73555a430ced11b1c149a7c17583219935ec1bd37d948
12/18/2019 15:47:28 - INFO - transformers.modeling_utils - Weights of XLMForQuestionAnswering not initialized from pretrained model: ['qa_outputs.start_logits.dense.weight', 'qa_outputs.start_logits.dense.bias', 'qa_outputs.end_logits.dense_0.weight', 'qa_outputs.end_logits.dense_0.bias', 'qa_outputs.end_logits.LayerNorm.weight', 'qa_outputs.end_logits.LayerNorm.bias', 'qa_outputs.end_logits.dense_1.weight', 'qa_outputs.end_logits.dense_1.bias', 'qa_outputs.answer_class.dense_0.weight', 'qa_outputs.answer_class.dense_0.bias', 'qa_outputs.answer_class.dense_1.weight']
12/18/2019 15:47:28 - INFO - transformers.modeling_utils - Weights from pretrained model not used in XLMForQuestionAnswering: ['pred_layer.proj.weight', 'pred_layer.proj.bias']
Traceback (most recent call last):
File "./transformers/examples/run_squad.py", line 614, in <module>
main()
File "./transformers/examples/run_squad.py", line 532, in main
cache_dir=args.cache_dir if args.cache_dir else None)
File "/HOME/sandpit/transformers/transformers/modeling_utils.py", line 486, in from_pretrained
model.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for XLMForQuestionAnswering:
size mismatch for transformer.embeddings.weight: copying a param with shape torch.Size([64699, 1024]) from checkpoint, the shape in current model is torch.Size([30145, 1024]).
```
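
For reference, the size mismatch above can be observed directly by comparing the checkpoint's embedding matrix with the vocabulary size declared in the config (an illustrative sketch; the `.bin` path is a placeholder for the cached file shown in the log):

```python
import torch
from transformers import XLMConfig

config = XLMConfig.from_pretrained("xlm-mlm-ende-1024")

# Placeholder path: point this at the cached pytorch_model.bin from the log above.
state_dict = torch.load("/path/to/xlm-mlm-ende-1024-pytorch_model.bin", map_location="cpu")

# The checkpoint holds a 64699-row embedding matrix, while the config reports
# vocab_size 30145 -- exactly the mismatch in the traceback.
print(state_dict["transformer.embeddings.weight"].shape)
print(config.vocab_size)
```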
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2214/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2213 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2213/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2213/comments | https://api.github.com/repos/huggingface/transformers/issues/2213/events | https://github.com/huggingface/transformers/issues/2213 | 539,670,826 | MDU6SXNzdWU1Mzk2NzA4MjY= | 2,213 | T5 - Finetuning of an EncoderDecoder Model | {
"login": "f-lng",
"id": 26275863,
"node_id": "MDQ6VXNlcjI2Mjc1ODYz",
"avatar_url": "https://avatars.githubusercontent.com/u/26275863?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/f-lng",
"html_url": "https://github.com/f-lng",
"followers_url": "https://api.github.com/users/f-lng/followers",
"following_url": "https://api.github.com/users/f-lng/following{/other_user}",
"gists_url": "https://api.github.com/users/f-lng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/f-lng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/f-lng/subscriptions",
"organizations_url": "https://api.github.com/users/f-lng/orgs",
"repos_url": "https://api.github.com/users/f-lng/repos",
"events_url": "https://api.github.com/users/f-lng/events{/privacy}",
"received_events_url": "https://api.github.com/users/f-lng/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"As I know, there is **no** Python scripts for fine-tuning T5 model, **at the moment**.\r\nBesides the source code you can see in this library, you can see the PR #1739 which implements T5 model.\r\n\r\n> Hello,\r\n> \r\n> I know that the T5 implementation is quite new, but is there already code to finetune and lateron decode from the T5 model?\r\n> \r\n> As I understand most of your models are no EncoderDecoder models, so I guess that the default pipeline / code is not working for T5, is that right?\r\n> \r\n> Could you point me to a script / command / piece of code for finetuning T5?",
"The same question. #1739 was merged. First of all, In T5_INPUTS_DOCSTRING is said:\r\n```\r\n To match pre-training, T5 input sequence should be formatted with [CLS] and [SEP] tokens as follows:\r\n (a) For sequence pairs:\r\n tokens: [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]\r\n (b) For single sequences:\r\n tokens: [CLS] the dog is hairy . [SEP]\r\n\r\n```\r\n\r\nAt second, it looks like T5Model can work in encoder mode only. So, it's possible to treat it as usual LM:\r\n\r\n```\r\n tokenizer = T5Tokenizer.from_pretrained('t5-small')\r\n model = T5Model.from_pretrained('t5-small')\r\n input_ids = torch.tensor(tokenizer.encode(\"Hello, my dog is cute\")).unsqueeze(0) # Batch size 1\r\n outputs = model(input_ids)\r\n last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple\r\n```\r\n\r\nMaybe @thomwolf can clarify how better to fine-tune T5 for classification tasks",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,576 | 1,582 | 1,582 | NONE | null | Hello,
I know that the T5 implementation is quite new, but is there already code to finetune and later on decode from the T5 model?
As I understand, most of your models are not EncoderDecoder models, so I guess that the default pipeline / code is not working for T5, is that right?
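
To make the ask concrete, here is the kind of training step I have in mind (a sketch only; I'm assuming a seq2seq LM head along the lines of `T5ForConditionalGeneration` with a `labels` argument, so the class and argument names here are illustrative and may not match the current release):

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration  # class name assumed

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = torch.optim.Adam(model.parameters(), lr=3e-5)

# Toy source/target pair; real fine-tuning would loop over a dataset.
source = tokenizer.encode("translate English to German: The house is wonderful.", return_tensors="pt")
target = tokenizer.encode("Das Haus ist wunderbar.", return_tensors="pt")

model.train()
outputs = model(input_ids=source, labels=target)  # labels are shifted internally for teacher forcing
loss = outputs[0]
loss.backward()
optimizer.step()
```
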
Could you point me to a script / command / piece of code for finetuning T5? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2213/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2213/timeline | completed | null | null |