url (string, len 62–66) | repository_url (1 class) | labels_url (string, len 76–80) | comments_url (string, len 71–75) | events_url (string, len 69–73) | html_url (string, len 50–56) | id (int64, 377M–2.15B) | node_id (string, len 18–32) | number (int64, 1–29.2k) | title (string, len 1–487) | user (dict) | labels (list) | state (2 classes) | locked (bool) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64, 1.54k–1.71k) | updated_at (int64, 1.54k–1.71k) | closed_at (int64, 1.54k–1.71k, nullable) | author_association (4 classes) | active_lock_reason (2 classes) | body (string, len 0–234k, nullable) | reactions (dict) | timeline_url (string, len 71–75) | state_reason (3 classes) | draft (bool) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/3313 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3313/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3313/comments | https://api.github.com/repos/huggingface/transformers/issues/3313/events | https://github.com/huggingface/transformers/issues/3313 | 583,159,176 | MDU6SXNzdWU1ODMxNTkxNzY= | 3,313 | KeyError in GLUE data tokenization with RoBERTa | {
"login": "ethanjperez",
"id": 6402205,
"node_id": "MDQ6VXNlcjY0MDIyMDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6402205?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ethanjperez",
"html_url": "https://github.com/ethanjperez",
"followers_url": "https://api.github.com/users/ethanjperez/followers",
"following_url": "https://api.github.com/users/ethanjperez/following{/other_user}",
"gists_url": "https://api.github.com/users/ethanjperez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ethanjperez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ethanjperez/subscriptions",
"organizations_url": "https://api.github.com/users/ethanjperez/orgs",
"repos_url": "https://api.github.com/users/ethanjperez/repos",
"events_url": "https://api.github.com/users/ethanjperez/events{/privacy}",
"received_events_url": "https://api.github.com/users/ethanjperez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I also have this issue when i run run_multiple_choice.py in RACE data with RoBERTA.",
"I get the same error when I try to fine-tune Squad",
"Tagging @LysandreJik ",
"> \r\n> \r\n> I also have this issue when i run run_multiple_choice.py in RACE data with RoBERTA.\r\n\r\nSame here. Any solution?",
"@nielingyun @orena1 @Onur90 maybe try pulling again from the latest version of the repo and see if it works? The error went away after I pulled recently, not sure if that fixed it or something else I did - let me know if that worked",
"@ethanjperez by latest version you mean **latest commit** or the **latest release** (v2.6.0)? It is still not working with the **latest commit**."
] | 1,584 | 1,585 | 1,585 | CONTRIBUTOR | null | # 🐛 Bug
I'm getting a KeyError [here](https://github.com/huggingface/transformers/blob/master/src/transformers/data/processors/glue.py#L94) when using RoBERTa in [examples/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/run_glue.py): the preprocessing code tries to access `'token_type_ids'`, which [this commit](https://github.com/huggingface/transformers/commit/5164ea91a7b4d35cb03867233527fa383a651775) appears to have removed from RoBERTa (and DistilBERT) tokenizer outputs.
I get the error when fine-tuning RoBERTa on CoLA and RTE. I haven't tried other tasks, but I expect they would fail the same way.
I don't get the error when fine-tuning XLNet (presumably because XLNet does use `'token_type_ids'`), and I don't get it when I `pip install transformers` instead of `pip install .`, which suggests the issue comes from a recent commit.
Here's the full error message:
```bash
03/17/2020 11:53:58 - INFO - transformers.data.processors.glue - Writing example 0/13997
Traceback (most recent call last):
File "examples/run_glue.py", line 731, in <module>
main()
File "examples/run_glue.py", line 679, in main
train_dataset = load_and_cache_examples(args, args.task_name, tokenizer, evaluate=False)
File "examples/run_glue.py", line 419, in load_and_cache_examples
pad_token_segment_id=4 if args.model_type in ["xlnet"] else 0,
File "/home/ejp416/cmv/transformers/src/transformers/data/processors/glue.py", line 94, in glue_convert_examples_to_features
input_ids, token_type_ids = inputs["input_ids"], inputs["token_type_ids"]
KeyError: 'token_type_ids'
```
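For anyone blocked on this, a minimal guard avoids the crash when a tokenizer returns no segment ids. This is my own workaround sketch, not the official fix; the all-zero fallback is an assumption:
```python
# Hypothetical patch around glue.py line 94: fall back to all-zero
# segment ids when the tokenizer (e.g. RoBERTa, DistilBERT) omits them.
input_ids = inputs["input_ids"]
token_type_ids = inputs.get("token_type_ids", [0] * len(input_ids))
```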
## Information
Model I am using (Bert, XLNet ...): RoBERTa. I think DistilBERT may run into the same issue as well.
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
I've made slight modifications to the training loop in the official [examples/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/run_glue.py), but I did not touch the data pre-processing, which is where the error occurs (before any training).
The task I am working on is:
* [x] an official GLUE/SQuAD task: (give the name)
* [ ] my own task or dataset: (give details below)
I've run into the error on CoLA and RTE, though I think the error should happen on all GLUE tasks.
## To reproduce
Steps to reproduce the behavior:
1. Install `transformers` using the latest clone (use `pip install .` not `pip install transformers`)
2. Download the RTE data (e.g., into `data/RTE` using the GLUE download scripts in this repo)
3. Run a command to train RoBERTa (base or large). I'm using:
```bash
python examples/run_glue.py --model_type roberta --model_name_or_path roberta-base --output_dir models/debug --task_name rte --do_train --evaluate_during_training --data_dir data/RTE --max_seq_length 32 --max_grad_norm inf --adam_epsilon 1e-6 --adam_beta_2 0.98 --weight_decay 0.1 --logging_steps 874 --save_steps 874 --num_train_epochs 10 --warmup_steps 874 --per_gpu_train_batch_size 1 --per_gpu_eval_batch_size 2 --learning_rate 1e-5 --seed 12 --gradient_accumulation_steps 16 --overwrite_output_dir
```
## Expected behavior
`load_and_cache_examples` (and specifically, the call to `convert_examples_to_features`) in `examples/run_glue.py` should run without error, to load, preprocess, and tokenize the dataset.
## Environment info
- `transformers` version: 2.5.1
- Platform: Linux-3.10.0-1062.12.1.el7.x86_64-x86_64-with-centos-7.7.1908-Core
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Error happens with both GPU and CPU
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3313/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3313/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3312 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3312/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3312/comments | https://api.github.com/repos/huggingface/transformers/issues/3312/events | https://github.com/huggingface/transformers/issues/3312 | 583,143,521 | MDU6SXNzdWU1ODMxNDM1MjE= | 3,312 | GPT2Tokenizer doesn't include BOS or EOS token | {
"login": "moinnadeem",
"id": 813367,
"node_id": "MDQ6VXNlcjgxMzM2Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/813367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moinnadeem",
"html_url": "https://github.com/moinnadeem",
"followers_url": "https://api.github.com/users/moinnadeem/followers",
"following_url": "https://api.github.com/users/moinnadeem/following{/other_user}",
"gists_url": "https://api.github.com/users/moinnadeem/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moinnadeem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moinnadeem/subscriptions",
"organizations_url": "https://api.github.com/users/moinnadeem/orgs",
"repos_url": "https://api.github.com/users/moinnadeem/repos",
"events_url": "https://api.github.com/users/moinnadeem/events{/privacy}",
"received_events_url": "https://api.github.com/users/moinnadeem/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Accidental double post -- closing this in favour of #3311 "
] | 1,584 | 1,584 | 1,584 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): GPT-2
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQuAD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Script:
```python
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
encoded_dict = tokenizer.encode_plus(text="Hello I am Moin", add_special_tokens=True, \
max_length=512, truncation_strategy="longest_first", pad_to_max_length=False, \
return_tensors=None, return_token_type_ids=True, return_attention_mask=True, \
return_overflowing_tokens=False, return_special_tokens_mask=False)
print(tokenizer.bos_token_id)
print(encoded_dict['input_ids'])
```
You should see that the `input_ids` do not include the `bos_token_id`. Shouldn't `encode_plus` be doing this?
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The `<|endoftext|>` token should appear, since I enabled `add_special_tokens`.
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Linux-4.15.0-54-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.2
- PyTorch version (GPU?): 1.3.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3312/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3312/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3311 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3311/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3311/comments | https://api.github.com/repos/huggingface/transformers/issues/3311/events | https://github.com/huggingface/transformers/issues/3311 | 583,143,221 | MDU6SXNzdWU1ODMxNDMyMjE= | 3,311 | GPT2 -- build_inputs_with_special_tokens lacking BOS and EOS tokens. | {
"login": "moinnadeem",
"id": 813367,
"node_id": "MDQ6VXNlcjgxMzM2Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/813367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moinnadeem",
"html_url": "https://github.com/moinnadeem",
"followers_url": "https://api.github.com/users/moinnadeem/followers",
"following_url": "https://api.github.com/users/moinnadeem/following{/other_user}",
"gists_url": "https://api.github.com/users/moinnadeem/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moinnadeem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moinnadeem/subscriptions",
"organizations_url": "https://api.github.com/users/moinnadeem/orgs",
"repos_url": "https://api.github.com/users/moinnadeem/repos",
"events_url": "https://api.github.com/users/moinnadeem/events{/privacy}",
"received_events_url": "https://api.github.com/users/moinnadeem/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @moinnadeem, \r\n\r\nThanks for posting this! \r\nAs it is implemented in the moment, you are right, GPT2 Tokenizer does not add the BOS in the beginning nor the EOS token in the end. \r\nYou can see e.g. that the XLNet tokenizer has a method that adds special tokens to the encoded input string (see https://github.com/huggingface/transformers/blob/4e4403c9b44324671cb795df2ef30e70fe3b606e/src/transformers/tokenization_xlnet.py#L241), whereas the GPT2 tokenizer does not have such a function and thus uses the default one which does not add any special tokens. \r\n\r\nAs far as I can see this could be a feature request, where a `build_inputs_with_special_tokens()` would be added to `tokenization_gpt2.py`. \r\n\r\nThe expected behavior could be:\r\ninput_string -> BOS + encoded(input_string) + EOS in the case of GPT2. \r\n\r\nFeel free to open a PR to include this feature :-) In the meantime you can obviously just manually add the BOS and EOS token before encoding. \r\n\r\n@mfuntowicz do you think such a PR would make sense? ",
"I don't think this has been fixed, right?",
"It's not really a bug because the default behavior of GPT2 is to just not add bos or eos tokens. GPT2 is mainly used to generate text so it would not make a lot of sense to add a EOS of a input prompt. If one wants he could just manually add `gpt2_tokenizer.eos_token` to the input and the eos_token_id will be added",
"> It's not really a bug because the default behavior of GPT2 is to just not add bos or eos tokens. GPT2 is mainly used to generate text so it would not make a lot of sense to add a EOS of a input prompt. If one wants he could just manually add `gpt2_tokenizer.eos_token` to the input and the eos_token_id will be added\r\n\r\nI think in the original GPT2 model, there *are* special tokens for bos and eos, both of which are `<|endoftext|>`, right? So if I want to finetune it, we should do the same thing -- add both bos and eos to the corpus for finetune, right?",
"@zhujl1991 - yes this is correct. \r\nWe also set bos and eos token to `<|endoftet|>` for GPT2 as you can verify as follows:\r\n\r\n```python\r\nfrom transformers import GPT2Tokenizer\r\ntok = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\nprint(tok.eos_token)\r\nprint(tok.bos_token)\r\n```\r\n\r\nHowever, I don't think we plan on adding these tokens automatically when tokenizing an input string because the main use case for GPT2 is open-domain text generation where these tokens should not be added.\r\nI agree that they could /should be added for fine-tuning. \r\n\r\nSo I'm not sure if we want to add any special \"fine-tune\" behavior to the GPT2Tokenizer. @LysandreJik - what do you think?",
"\r\n\r\n> @zhujl1991 - yes this is correct.\r\n> We also set bos and eos token to `<|endoftet|>` for GPT2 as you can verify as follows:\r\n> \r\n> ```python\r\n> from transformers import GPT2Tokenizer\r\n> tok = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\n> print(tok.eos_token)\r\n> print(tok.bos_token)\r\n> ```\r\n> \r\n> However, I don't think we plan on adding these tokens automatically when tokenizing an input string because the main use case for GPT2 is open-domain text generation where these tokens should not be added.\r\n> I agree that they could /should be added for fine-tuning.\r\n> \r\n> So I'm not sure if we want to add any special \"fine-tune\" behavior to the GPT2Tokenizer. @LysandreJik - what do you think?\r\n\r\nThe behavior of \"set add_special_tokens to True but no special tokens are added while there are special tokens in the tokenizer\" looks like a bug to me anyway. If the user doesn't want to add special tokens when tokenizing, e.g., as you said, when generating text, the user should set add_special_tokens to False.",
"I see what you mean @zhujl1991 -> Thinking about backwards compatibility and that by default `add_special_tokens` is set to `True`, I still do not think that we should add this feature to the `__call__` or `encode_plus` functions for GPT2. On the other hand such a functionality would be very useful for training/fine-tuning.\r\n\r\nI see three options:\r\n\r\n1) overwrite the __call__ method in GPT2 to have add_special_tokens=`False` by default and append BOS and EOS if set to `True` => I don't like this option as it's quite hacky and would still not be 100% backward compatible\r\n\r\n2) Add a new method `prepare_for_training` where the input is prepared for fine-tuning / training as you said.\r\n\r\n3) Don't do anything about it and let the user overwrite such a method himself. \r\n\r\nI would be fine with option 2), but also don't think it's that important of a feature (option 3))....let's see what @LysandreJik @sgugger, @thomwolf and @sshleifer think",
"IMO this is something that should be written by the user for their specific needs (option 3). We can document more that the tokenizers are pre-set for the most common tasks the corresponding models are used for, to avoid any user being too surprised.\r\n\r\nI feel that if we add a method, it will cover some use cases but not all and it will either be overly too complex or only used by a small percentage of the users.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Ran into this too – this seems like a bug to me, or at the least not intuitive behaviour.\r\n\r\nIf there's a tokeniser that has an EOS token, and I encode with `add_special_tokens=True`, I'd expect it to include the eos token at the end of sentence. ",
"+1 on this.\r\n\r\nThe main issue here is really how opaque and unintuitive this has been for me. Here's my thought process:\r\n\r\n```\r\nfrom transformers import GPT2TokenizerFast\r\ngpt2_tok = GPT2TokenizerFast.from_pretrained(\"gpt2\")\r\ngpt2_tok(\"Mary had a little lamb\")[\"input_ids\"]\r\n# prints [24119, 550, 257, 1310, 19343]\r\n```\r\nMh, weird, no special tokens? I've used HF before and I thought the default was to add them?\r\n\r\nThen I went and looked up the function, and indeed the default is to have them on. Bah, whatevs. Let's explicitly pass it as on:\r\n\r\n```\r\ngpt2_tok(\"Mary had a little lamb\", add_special_tokens=True)[\"input_ids\"]\r\n# No difference :D\r\n```\r\nThis part is what got me massively confused. I think it's entirely fine to change the default behavior for GPT-2 if the majority of the users don't care/want those tokens, but it would be more intuitive to change the default to add_special_tokens=False, and actually add the special tokens when the option is passed explicitly! :)\r\n",
"I had some thoughts over this question too.\r\nIn the end, I realized that the model has been trained using \"full paragraphs/articles of text\", which means that spaces and new line symbols were part of the training. The <|endoftext|> token was added between paragraphs/articles.\r\nSo the <|endoftext|> should only be added at the beginning and end of text paragraphs/articles for fine-tuning, but it seems to be a detail since in fact, it is just a kind of text formatting.\r\nFor text generation, usually you think of the \"end of text\" as a punctuation mark or newline character not as the <|endoftext|> token which denotes the end of a paragraph/article.\r\nSo I think that the code is perfectly right.\r\n",
"@patrickvonplaten \r\n\r\nHi, I also believe that BOS should be prepended before an input sentence (w1, w2, ...) for two reasons:\r\n\r\n1. Without BOS, the model cannot calculate the probability of generating the first token, i.e. P(w1|BOS).\r\n2. BOS also affects the probability of generating the following words, e.g. P(w2|w1) != P(w2|w1, BOS).\r\n\r\nFor the second point, see the following example:\r\n\r\n```\r\nimport torch\r\nfrom transformers import GPT2Tokenizer, GPT2LMHeadModel\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\nmodel = GPT2LMHeadModel.from_pretrained('gpt2')\r\ninputs = tokenizer(\"<|endoftext|>This\", return_tensors=\"pt\")\r\n# inputs: {'input_ids': tensor([[50256, 1212]]), 'attention_mask': tensor([[1, 1]])}\r\noutputs = model(**inputs, labels=inputs[\"input_ids\"])\r\ntokenizer.convert_ids_to_tokens(outputs.logits[0][1].topk(20)[1])\r\n# ['Ġis', 'Ġarticle', 'Ġpost', 'Ġweek', 'Ġpage', 'Ġstory', 'Ġyear', 'Ġwas', 'Ġmonth', 'Ġsite', 'Ġbook', 'Ġpast', 'Ġitem', 'Ġproject', 'Ġblog', 'Ġstudy', 'Ġsection', 'Ġmorning', 'Ġvideo', 'Ġgame']\r\n\r\ninputs = tokenizer(\"This\", return_tensors=\"pt\")\r\n# {'input_ids': tensor([[1212]]), 'attention_mask': tensor([[1]])}\r\noutputs = model(**inputs, labels=inputs[\"input_ids\"])\r\ntokenizer.convert_ids_to_tokens(outputs.logits[0][0].topk(20)[1])\r\n# ['Ġis', ',', '.', 'Ċ', \"'s\", 'Ġwas', 'Ġto', 'Ġand', 'Ġthe', 'Ġin', 'Ġhas', 'Ġof', 'Ġwill', 'Ġa', ':', 'Ġare', 'Ġcan', 'Ġ(', '-', 'Ġfor']\r\n```\r\nComparing these two generations, the prediction with \"<|endoftext|>\" seems more accurate (e.g. Without BOS, some punctuations are predicted as the next word of \"This\").\r\n\r\nDue to the lack of documentation, I am not entirely sure if the \"<|endoftext|>\" token is actually used as a BOS token during training, but the following example suggests it may be the case.\r\n\r\n```\r\ninputs = tokenizer(\"<|endoftext|>\", return_tensors=\"pt\")\r\noutputs = model(**inputs, labels=inputs[\"input_ids\"])\r\ntokenizer.convert_ids_to_tokens(outputs.logits[0][0].topk(20)[1])\r\n\r\n# ['Ċ', 'The', '\"', 'A', 'I', 'In', '.', 'It', 'S', 'This', 'B', '-', 'C', 'We', '1', 'T', \"'\", 'P', '(', 'G']\r\n```\r\n\r\nEven if you opt not to prepend BOS, I believe these things should be clarified more in the documentation.",
"To add confirmation that `<|endoftext|>` is also a BOS token, the official repo uses it for inference as well: https://github.com/openai/gpt-2/blob/a74da5d99abaaba920de8131d64da2862a8f213b/src/generate_unconditional_samples.py#L60",
"Would be nice to add some documentation on this in the GPT2Tokenizer [docs](https://huggingface.co/docs/transformers/model_doc/gpt2#transformers.GPT2Tokenizer).\r\n\r\n(Self-note for future PR) ",
"@patrickvonplaten \r\n> However, I don't think we plan on adding these tokens automatically when tokenizing an input string because the main use case for GPT2 is open-domain text generation where these tokens should not be added.\r\n\r\nHi.\r\nI'd like to ask you a question.\r\nCould you explain how the model want to stop generation if there's no EOS token?",
"> @patrickvonplaten\r\n> \r\n> > However, I don't think we plan on adding these tokens automatically when tokenizing an input string because the main use case for GPT2 is open-domain text generation where these tokens should not be added.\r\n> \r\n> Hi. I'd like to ask you a question. Could you explain how the model want to stop generation if there's no EOS token?\r\n\r\nI'm trying to train the model with EOS tokens at the end. Let's see if that works...\r\n\r\nShouldn't the EOS tokens be set by default when we use `DataCollatorForLanguageModeling(..., mlm=False)`? It makes sense to me that they should. If not, **at least** [this documentation](https://huggingface.co/docs/transformers/tasks/language_modeling) should be changed and the EOS token should be added at the end of each raw text.\r\n\r\n",
"> > @patrickvonplaten\r\n> > > However, I don't think we plan on adding these tokens automatically when tokenizing an input string because the main use case for GPT2 is open-domain text generation where these tokens should not be added.\r\n> > \r\n> > \r\n> > Hi. I'd like to ask you a question. Could you explain how the model want to stop generation if there's no EOS token?\r\n> \r\n> I'm trying to train the model with EOS tokens at the end. Let's see if that works...\r\n> \r\n> Shouldn't the EOS tokens be set by default when we use `DataCollatorForLanguageModeling(..., mlm=False)`? It makes sense to me that they should. If not, **at least** [this documentation](https://huggingface.co/docs/transformers/tasks/language_modeling) should be changed and the EOS token should be added at the end of each raw text.\r\n\r\nI agree with you. I really don't understand how CLM train without EOS token.",
"> I'm trying to train the model with EOS tokens at the end. Let's see if that works...\r\n\r\nthis worked for me, but I really had to make sure that the EOS token was always at the end of each sequence",
"> this worked for me, but I really had to make sure that the EOS token was always at the end of each sequence\r\n\r\nDo you mean the inference is working? How the model decide to stop generate if there's no EOS token or there're multiple EOS tokens when they concat sequences as mentioned in [this documentation](https://huggingface.co/learn/nlp-course/chapter7/6#preparing-the-dataset)?\r\n",
"> > this worked for me, but I really had to make sure that the EOS token was always at the end of each sequence\r\n> \r\n> Do you mean the inference is working? How the model decide to stop generate if there's no EOS token or there're multiple EOS tokens when they concat sequences as mentioned in [this documentation](https://huggingface.co/learn/nlp-course/chapter7/6#preparing-the-dataset)?\r\n\r\nTokenizers used in causal models don't append the EOS token by default, while the ones in encoder-decoder (like T5) do.\r\n```\r\nt5_tokenizer(\"My name is Sarah and I live in London\")\r\nOut[7]: {'input_ids': [499, 564, 19, 8077, 11, 27, 619, 16, 1524, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}\r\nt5_tokenizer.eos_token_id\r\nOut[8]: 1\r\ngpt2_tokenizer(\"My name is Sarah and I live in London\")\r\nOut[9]: {'input_ids': [3666, 1438, 318, 10490, 290, 314, 2107, 287, 3576], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1]}\r\ngpt2_tokenizer.eos_token_id\r\nOut[10]: 50256\r\n```\r\nTo make the model generate EOS tokens at inference time, I had to tokenize my texts and then add the EOS token at the end, like: `tokenized_texts = tokenizer([t + tokenizer.eos_token for t in texts])`\r\n\r\nIf the tokenizer doesn't even have an EOS token, then you may have to create a new one, or rely on some heuristics to stop the generation.",
"Expect to see more GPT2-related fine-tuning cases in transformers doc.\r\n"
] | 1,584 | 1,705 | 1,606 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): GPT-2
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQuAD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Script:
```python
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
encoded_dict = tokenizer.encode_plus(text="Hello I am Moin", add_special_tokens=True, \
max_length=512, truncation_strategy="longest_first", pad_to_max_length=False, \
return_tensors=None, return_token_type_ids=True, return_attention_mask=True, \
return_overflowing_tokens=False, return_special_tokens_mask=False)
print(tokenizer.bos_token_id)
print(encoded_dict['input_ids'])
```
You should see that the `input_ids` do not include the `bos_token_id`. Shouldn't `encode_plus` be doing this?
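In the meantime, a manual workaround is to add the special tokens by hand before encoding. This is my own sketch, not an official recommendation; GPT-2's vocab maps `<|endoftext|>` (both BOS and EOS) to id 50256:
```python
# Hypothetical workaround: prepend/append the special tokens manually.
text = tokenizer.bos_token + "Hello I am Moin" + tokenizer.eos_token
encoded_dict = tokenizer.encode_plus(text=text, add_special_tokens=False)
print(encoded_dict["input_ids"])  # first and last ids are now 50256
```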
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The `<|endoftext|>` token should appear, since I enabled `add_special_tokens`.
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Linux-4.15.0-54-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.2
- PyTorch version (GPU?): 1.3.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3311/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3311/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3310 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3310/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3310/comments | https://api.github.com/repos/huggingface/transformers/issues/3310/events | https://github.com/huggingface/transformers/issues/3310 | 583,087,221 | MDU6SXNzdWU1ODMwODcyMjE= | 3,310 | Add sample softmax possibility to TransfoXL model for TransfoXL training | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,590 | 1,590 | MEMBER | null | # 🚀 Feature request
TransfoXL samples the logits during training if required. At the moment, TransfoXL can only be used without sampling from the logits during training. A partly finished implementation can be found on the branch `add_sampling_and_training_to_transfo_xl_models`.
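For reference, the core of a sampled softmax looks roughly like this (a generic PyTorch sketch of the technique, not the branch code; it omits the log-probability correction an unbiased sampler would apply):
```python
import torch
import torch.nn.functional as F

def sampled_softmax_loss(hidden, targets, weight, bias, n_samples):
    """hidden: (batch, d_model); targets: (batch,); weight: (vocab, d_model)."""
    vocab_size = weight.size(0)
    # Draw negative classes uniformly; real implementations (and the
    # original Transformer-XL code) use a log-uniform sampler instead.
    negatives = torch.randint(0, vocab_size, (n_samples,), device=hidden.device)
    classes = torch.cat([targets, negatives])   # true classes first, then negatives
    logits = hidden @ weight[classes].t() + bias[classes]
    # Row i's true class sits at column i by construction.
    labels = torch.arange(targets.size(0), device=hidden.device)
    return F.cross_entropy(logits, labels)
```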
## Motivation
To be able to train TransfoXL correctly.
## Your contribution
Already looked into the issue. Could try to implement it correctly with help from @thomwolf and @LysandreJik. Not a priority at the moment, though.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3310/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3309 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3309/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3309/comments | https://api.github.com/repos/huggingface/transformers/issues/3309/events | https://github.com/huggingface/transformers/pull/3309 | 583,048,379 | MDExOlB1bGxSZXF1ZXN0Mzg5ODY2MDY1 | 3,309 | Create model card for CodeBERTaPy | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3309?src=pr&el=h1) Report\n> Merging [#3309](https://codecov.io/gh/huggingface/transformers/pull/3309?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2187c49f5cde57306c3fd1eb67dbc68fab9c6403&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3309?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3309 +/- ##\n==========================================\n+ Coverage 76.92% 76.93% +0.01% \n==========================================\n Files 100 100 \n Lines 16953 16953 \n==========================================\n+ Hits 13041 13043 +2 \n+ Misses 3912 3910 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3309?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.09% <0.00%> (+0.26%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3309?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3309?src=pr&el=footer). Last update [2187c49...003d51b](https://codecov.io/gh/huggingface/transformers/pull/3309?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3309/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3309/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3309",
"html_url": "https://github.com/huggingface/transformers/pull/3309",
"diff_url": "https://github.com/huggingface/transformers/pull/3309.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3309.patch",
"merged_at": 1584462551000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3308 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3308/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3308/comments | https://api.github.com/repos/huggingface/transformers/issues/3308/events | https://github.com/huggingface/transformers/issues/3308 | 582,915,553 | MDU6SXNzdWU1ODI5MTU1NTM= | 3,308 | Loading DistilBertModel with AutoModel gives 12 layers | {
"login": "sasaadi",
"id": 7882383,
"node_id": "MDQ6VXNlcjc4ODIzODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7882383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sasaadi",
"html_url": "https://github.com/sasaadi",
"followers_url": "https://api.github.com/users/sasaadi/followers",
"following_url": "https://api.github.com/users/sasaadi/following{/other_user}",
"gists_url": "https://api.github.com/users/sasaadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sasaadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sasaadi/subscriptions",
"organizations_url": "https://api.github.com/users/sasaadi/orgs",
"repos_url": "https://api.github.com/users/sasaadi/repos",
"events_url": "https://api.github.com/users/sasaadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sasaadi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,590 | 1,590 | NONE | null | I am using ``AutoModel`` to load ``distilbert-base-uncased`` and save the fine-tuned model after training using ``model.save_pretrained('path_to_save')``. However, when I load the fine-tuned model using ``AutoModel.from_pretrained('path_to_the_saved_model')``, it extracts 12 layers instead of 6 layers. I also checked the ``config.json`` file that was saved automatically and the number of layers is still 6. When I load the model with ``DistilBertModel.from_pretrained()`` it extracts 6 layers. In the following, I copied the ``config.json`` file. Does anyone know why this happens? am I lacking some packages or files when loading/saving the model?
```json
{
  "activation": "gelu",
  "architectures": [
    "DistilBertModel"
  ],
  "attention_dropout": 0.1,
  "bos_token_id": 0,
  "dim": 768,
  "do_sample": false,
  "dropout": 0.1,
  "eos_token_ids": 0,
  "finetuning_task": null,
  "hidden_dim": 3072,
  "id2label": {
    "0": "LABEL_0",
    "1": "LABEL_1"
  },
  "initializer_range": 0.02,
  "is_decoder": false,
  "label2id": {
    "LABEL_0": 0,
    "LABEL_1": 1
  },
  "length_penalty": 1.0,
  "max_length": 20,
  "max_position_embeddings": 512,
  "model_type": "distilbert",
  "n_heads": 12,
  "n_layers": 6,
  "num_beams": 1,
  "num_labels": 2,
  "num_return_sequences": 1,
  "output_attentions": true,
  "output_hidden_states": false,
  "output_past": true,
  "pad_token_id": 0,
  "pruned_heads": {},
  "qa_dropout": 0.1,
  "repetition_penalty": 1.0,
  "seq_classif_dropout": 0.2,
  "sinusoidal_pos_embds": false,
  "temperature": 1.0,
  "tie_weights_": true,
  "top_k": 50,
  "top_p": 1.0,
  "torchscript": false,
  "use_bfloat16": false,
  "vocab_size": 40000
}
```
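A quick way to check which architecture actually gets instantiated (a diagnostic sketch; the path is a placeholder and the attribute names assume the current DistilBERT implementation):
```python
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained("path_to_the_saved_model")
model = AutoModel.from_pretrained("path_to_the_saved_model")
print(config.model_type, config.n_layers)  # expected: distilbert 6
print(type(model).__name__)                # expected: DistilBertModel
print(len(model.transformer.layer))        # layers actually built
```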
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3308/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3308/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3307 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3307/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3307/comments | https://api.github.com/repos/huggingface/transformers/issues/3307/events | https://github.com/huggingface/transformers/pull/3307 | 582,897,039 | MDExOlB1bGxSZXF1ZXN0Mzg5NzM5Nzgz | 3,307 | Make sacremoses dependency optional due to GPL license. | {
"login": "f11r",
"id": 7826519,
"node_id": "MDQ6VXNlcjc4MjY1MTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7826519?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/f11r",
"html_url": "https://github.com/f11r",
"followers_url": "https://api.github.com/users/f11r/followers",
"following_url": "https://api.github.com/users/f11r/following{/other_user}",
"gists_url": "https://api.github.com/users/f11r/gists{/gist_id}",
"starred_url": "https://api.github.com/users/f11r/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/f11r/subscriptions",
"organizations_url": "https://api.github.com/users/f11r/orgs",
"repos_url": "https://api.github.com/users/f11r/repos",
"events_url": "https://api.github.com/users/f11r/events{/privacy}",
"received_events_url": "https://api.github.com/users/f11r/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3307?src=pr&el=h1) Report\n> Merging [#3307](https://codecov.io/gh/huggingface/transformers/pull/3307?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fa5423b1695cd24856bcff47214172e0f540d924&el=desc) will **decrease** coverage by `0.94%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3307?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3307 +/- ##\n==========================================\n- Coverage 77.79% 76.85% -0.95% \n==========================================\n Files 145 145 \n Lines 25355 25356 +1 \n==========================================\n- Hits 19726 19488 -238 \n- Misses 5629 5868 +239 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3307?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/3307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `83.00% <100.00%> (+0.06%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/3307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/3307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-1.76%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3307?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3307?src=pr&el=footer). Last update [fa5423b...e371b9a](https://codecov.io/gh/huggingface/transformers/pull/3307?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I agree with this, but will let others chime in.\r\n\r\nHowever as discussed in https://github.com/huggingface/transformers/issues/2453#issuecomment-656103152 I think `sacremoses` is MIT-licensed",
"Now that `sacremoses` has changed the license (link from @julien-c and https://github.com/alvations/sacremoses/commit/90376dfaf0f41399a090e7620feb3c2494f865a6) the original reason for this pull request is gone.\r\n\r\nFeel free to simply close this if you currently don't want to use this to reduce the default dependencies. For this use case it would probably make sense to also make the `xlnet` and `gpt` dependencies optional.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,600 | 1,600 | NONE | null | Closes #2453. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3307/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3307/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3307",
"html_url": "https://github.com/huggingface/transformers/pull/3307",
"diff_url": "https://github.com/huggingface/transformers/pull/3307.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3307.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3306 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3306/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3306/comments | https://api.github.com/repos/huggingface/transformers/issues/3306/events | https://github.com/huggingface/transformers/pull/3306 | 582,833,548 | MDExOlB1bGxSZXF1ZXN0Mzg5Njg4ODI2 | 3,306 | Create README.md | {
"login": "jannesgg",
"id": 36601086,
"node_id": "MDQ6VXNlcjM2NjAxMDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/36601086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jannesgg",
"html_url": "https://github.com/jannesgg",
"followers_url": "https://api.github.com/users/jannesgg/followers",
"following_url": "https://api.github.com/users/jannesgg/following{/other_user}",
"gists_url": "https://api.github.com/users/jannesgg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jannesgg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jannesgg/subscriptions",
"organizations_url": "https://api.github.com/users/jannesgg/orgs",
"repos_url": "https://api.github.com/users/jannesgg/repos",
"events_url": "https://api.github.com/users/jannesgg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jannesgg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3306?src=pr&el=h1) Report\n> Merging [#3306](https://codecov.io/gh/huggingface/transformers/pull/3306?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/68ef0a111f8740f06ca4e5a00374ec4e2adb0a6d&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3306?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3306 +/- ##\n==========================================\n- Coverage 77.48% 77.47% -0.02% \n==========================================\n Files 99 99 \n Lines 16799 16799 \n==========================================\n- Hits 13017 13015 -2 \n- Misses 3782 3784 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3306?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3306/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.68% <0.00%> (-0.54%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3306/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.84% <0.00%> (+0.13%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3306?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3306?src=pr&el=footer). Last update [68ef0a1...6053b4e](https://codecov.io/gh/huggingface/transformers/pull/3306?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks for contributing @jannesgg – if this is a Swedish LM, could you add the language tag to the top of the model card:\r\n\r\n```\r\n---\r\nlanguage: swedish\r\n---\r\n```",
"Thanks! [`Model page`](https://huggingface.co/jannesg/bertsson)"
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3306/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3306/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3306",
"html_url": "https://github.com/huggingface/transformers/pull/3306",
"diff_url": "https://github.com/huggingface/transformers/pull/3306.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3306.patch",
"merged_at": 1584450312000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3305 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3305/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3305/comments | https://api.github.com/repos/huggingface/transformers/issues/3305/events | https://github.com/huggingface/transformers/pull/3305 | 582,716,900 | MDExOlB1bGxSZXF1ZXN0Mzg5NTkyNDc2 | 3,305 | Update examples/ner/run_ner.py to use AutoModel | {
"login": "lifefeel",
"id": 38556,
"node_id": "MDQ6VXNlcjM4NTU2",
"avatar_url": "https://avatars.githubusercontent.com/u/38556?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lifefeel",
"html_url": "https://github.com/lifefeel",
"followers_url": "https://api.github.com/users/lifefeel/followers",
"following_url": "https://api.github.com/users/lifefeel/following{/other_user}",
"gists_url": "https://api.github.com/users/lifefeel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lifefeel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lifefeel/subscriptions",
"organizations_url": "https://api.github.com/users/lifefeel/orgs",
"repos_url": "https://api.github.com/users/lifefeel/repos",
"events_url": "https://api.github.com/users/lifefeel/events{/privacy}",
"received_events_url": "https://api.github.com/users/lifefeel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3305?src=pr&el=h1) Report\n> Merging [#3305](https://codecov.io/gh/huggingface/transformers/pull/3305?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b2028cc26b61a9dad960274d427e261af7c9bdc8&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3305?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3305 +/- ##\n==========================================\n- Coverage 77.47% 77.46% -0.01% \n==========================================\n Files 99 99 \n Lines 16799 16799 \n==========================================\n- Hits 13015 13014 -1 \n- Misses 3784 3785 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3305?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.68% <0.00%> (-0.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.70% <0.00%> (+0.13%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3305?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3305?src=pr&el=footer). Last update [b2028cc...85e70d9](https://codecov.io/gh/huggingface/transformers/pull/3305?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | This PR updates `run_ner.py` to use the AutoModel implementation. It follows #3290 and is simpler than before.
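For context, a minimal sketch of the kind of loading this enables (a hypothetical illustration, not the exact diff; the model name and `num_labels` are assumptions):
```python
# Hedged sketch of AutoModel-based loading for run_ner.py; the checkpoint name
# and num_labels below are illustrative, not taken from the actual PR.
from transformers import AutoConfig, AutoModelForTokenClassification, AutoTokenizer

config = AutoConfig.from_pretrained("bert-base-cased", num_labels=9)
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained("bert-base-cased", config=config)
```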
Maybe @srush can review this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3305/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3305/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3305",
"html_url": "https://github.com/huggingface/transformers/pull/3305",
"diff_url": "https://github.com/huggingface/transformers/pull/3305.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3305.patch",
"merged_at": 1584462610000
} |
https://api.github.com/repos/huggingface/transformers/issues/3304 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3304/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3304/comments | https://api.github.com/repos/huggingface/transformers/issues/3304/events | https://github.com/huggingface/transformers/issues/3304 | 582,709,029 | MDU6SXNzdWU1ODI3MDkwMjk= | 3,304 | Error in loading albert-base-v2 | {
"login": "WenxiongLiao",
"id": 25845940,
"node_id": "MDQ6VXNlcjI1ODQ1OTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/25845940?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WenxiongLiao",
"html_url": "https://github.com/WenxiongLiao",
"followers_url": "https://api.github.com/users/WenxiongLiao/followers",
"following_url": "https://api.github.com/users/WenxiongLiao/following{/other_user}",
"gists_url": "https://api.github.com/users/WenxiongLiao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WenxiongLiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WenxiongLiao/subscriptions",
"organizations_url": "https://api.github.com/users/WenxiongLiao/orgs",
"repos_url": "https://api.github.com/users/WenxiongLiao/repos",
"events_url": "https://api.github.com/users/WenxiongLiao/events{/privacy}",
"received_events_url": "https://api.github.com/users/WenxiongLiao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@anjubaoGDUT If you can provide the code in text (instead of image) that can copy and paste, it is easy to test. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,590 | 1,590 | NONE | null | 

Help~~
The above errors occur when loading the pre-trained ALBERT model with transformers. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3304/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3304/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3303 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3303/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3303/comments | https://api.github.com/repos/huggingface/transformers/issues/3303/events | https://github.com/huggingface/transformers/issues/3303 | 582,706,766 | MDU6SXNzdWU1ODI3MDY3NjY= | 3,303 | Error in loading Albert model | {
"login": "WenxiongLiao",
"id": 25845940,
"node_id": "MDQ6VXNlcjI1ODQ1OTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/25845940?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WenxiongLiao",
"html_url": "https://github.com/WenxiongLiao",
"followers_url": "https://api.github.com/users/WenxiongLiao/followers",
"following_url": "https://api.github.com/users/WenxiongLiao/following{/other_user}",
"gists_url": "https://api.github.com/users/WenxiongLiao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WenxiongLiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WenxiongLiao/subscriptions",
"organizations_url": "https://api.github.com/users/WenxiongLiao/orgs",
"repos_url": "https://api.github.com/users/WenxiongLiao/repos",
"events_url": "https://api.github.com/users/WenxiongLiao/events{/privacy}",
"received_events_url": "https://api.github.com/users/WenxiongLiao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,590 | 1,590 | NONE | null | 
Help~
The above errors occur when loading the pre-trained ALBERT model with transformers. The ALBERT pre-trained model can be downloaded from https://drive.google.com/file/d/1byZQmWDgyhrLpj8oXtxBG6AA52c8IHE-/view | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3303/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3303/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3302 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3302/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3302/comments | https://api.github.com/repos/huggingface/transformers/issues/3302/events | https://github.com/huggingface/transformers/pull/3302 | 582,694,463 | MDExOlB1bGxSZXF1ZXN0Mzg5NTc0Nzcy | 3,302 | [BART] Delete redundant unit test | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3302/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3302/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3302",
"html_url": "https://github.com/huggingface/transformers/pull/3302",
"diff_url": "https://github.com/huggingface/transformers/pull/3302.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3302.patch",
"merged_at": 1584414551000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3301 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3301/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3301/comments | https://api.github.com/repos/huggingface/transformers/issues/3301/events | https://github.com/huggingface/transformers/pull/3301 | 582,654,131 | MDExOlB1bGxSZXF1ZXN0Mzg5NTQxNTY4 | 3,301 | Add model card for Google AI's BERT Miniatures | {
"login": "iuliaturc-google",
"id": 61293507,
"node_id": "MDQ6VXNlcjYxMjkzNTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/61293507?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iuliaturc-google",
"html_url": "https://github.com/iuliaturc-google",
"followers_url": "https://api.github.com/users/iuliaturc-google/followers",
"following_url": "https://api.github.com/users/iuliaturc-google/following{/other_user}",
"gists_url": "https://api.github.com/users/iuliaturc-google/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iuliaturc-google/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iuliaturc-google/subscriptions",
"organizations_url": "https://api.github.com/users/iuliaturc-google/orgs",
"repos_url": "https://api.github.com/users/iuliaturc-google/repos",
"events_url": "https://api.github.com/users/iuliaturc-google/events{/privacy}",
"received_events_url": "https://api.github.com/users/iuliaturc-google/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3301?src=pr&el=h1) Report\n> Merging [#3301](https://codecov.io/gh/huggingface/transformers/pull/3301?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/47591763137f17021928e686ef171f25c240f076&el=desc) will **decrease** coverage by `0.35%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3301?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3301 +/- ##\n==========================================\n- Coverage 77.68% 77.33% -0.36% \n==========================================\n Files 99 99 \n Lines 16799 16799 \n==========================================\n- Hits 13051 12991 -60 \n- Misses 3748 3808 +60 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3301?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/3301/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.89% <0.00%> (-6.50%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3301/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `82.46% <0.00%> (-5.91%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3301/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.01% <0.00%> (-0.99%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3301?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3301?src=pr&el=footer). Last update [4759176...46b9c45](https://codecov.io/gh/huggingface/transformers/pull/3301?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I just symlinked all 24 under Google's namespace to this one in 68ef0a111f8740f06ca4e5a00374ec4e2adb0a6d.\r\n\r\nThanks for uploading the models @iuliaturc-google and @srush!\r\n\r\nExample model page: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2",
"Thanks Sasha and Julien!"
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | This model card is intended to be shared among all models under google/bert_uncased_*
(We'll need some support from HuggingFace to get this card cross-linked from all models) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3301/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3301/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3301",
"html_url": "https://github.com/huggingface/transformers/pull/3301",
"diff_url": "https://github.com/huggingface/transformers/pull/3301.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3301.patch",
"merged_at": 1584409906000
} |
https://api.github.com/repos/huggingface/transformers/issues/3300 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3300/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3300/comments | https://api.github.com/repos/huggingface/transformers/issues/3300/events | https://github.com/huggingface/transformers/issues/3300 | 582,552,153 | MDU6SXNzdWU1ODI1NTIxNTM= | 3,300 | ImportError: cannot import name 'BartForConditionalGeneration' | {
"login": "NinaHristozovaTR",
"id": 56268343,
"node_id": "MDQ6VXNlcjU2MjY4MzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/56268343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NinaHristozovaTR",
"html_url": "https://github.com/NinaHristozovaTR",
"followers_url": "https://api.github.com/users/NinaHristozovaTR/followers",
"following_url": "https://api.github.com/users/NinaHristozovaTR/following{/other_user}",
"gists_url": "https://api.github.com/users/NinaHristozovaTR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NinaHristozovaTR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NinaHristozovaTR/subscriptions",
"organizations_url": "https://api.github.com/users/NinaHristozovaTR/orgs",
"repos_url": "https://api.github.com/users/NinaHristozovaTR/repos",
"events_url": "https://api.github.com/users/NinaHristozovaTR/events{/privacy}",
"received_events_url": "https://api.github.com/users/NinaHristozovaTR/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I saw that to use the examples, it has to be installed from source."
] | 1,584 | 1,584 | 1,584 | NONE | null |
## Information
Hi, I am trying to use the BART model to summarize a text snippet.
The problem arises when using:
* from transformers import BartTokenizer, BartConfig, BartForConditionalGeneration
## To reproduce
Steps to reproduce the behavior:
1. Installed Tensorflow 2.0 and Pytorch
2. pip install transformers
3. from transformers import BartTokenizer, BartConfig, BartForConditionalGeneration
<!-- ImportError: cannot import name 'BartForConditionalGeneration'-->
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: (2.5.1)
- Platform:
- Python version: Python 3.6.5 :: Anaconda, Inc.
- PyTorch version (GPU?): (1.3.1)
- Tensorflow version (GPU?): (2.0.0)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3300/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3300/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3299 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3299/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3299/comments | https://api.github.com/repos/huggingface/transformers/issues/3299/events | https://github.com/huggingface/transformers/pull/3299 | 582,522,705 | MDExOlB1bGxSZXF1ZXN0Mzg5NDMwNzA3 | 3,299 | add camembert for Question answering for examples | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Wow that was quick :D "
] | 1,584 | 1,584 | 1,584 | MEMBER | null | This one might have been accidently deleted in PR #2700 I think. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3299/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3299/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3299",
"html_url": "https://github.com/huggingface/transformers/pull/3299",
"diff_url": "https://github.com/huggingface/transformers/pull/3299.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3299.patch",
"merged_at": 1584384133000
} |
https://api.github.com/repos/huggingface/transformers/issues/3298 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3298/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3298/comments | https://api.github.com/repos/huggingface/transformers/issues/3298/events | https://github.com/huggingface/transformers/pull/3298 | 582,244,939 | MDExOlB1bGxSZXF1ZXN0Mzg5MTkyOTk1 | 3,298 | [generate] do_sample default back to False | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Not 100% sure how this results in a \"Prettier API,\" but agree this isn't a big deal to fix downstream. (my current code explicitly sets `do_sample=True` just in case something like this happened.)\r\n\r\nIf you are creating any demo generation notebooks/tooling like Write With Transformer, I recommend explicitly noting this behavior."
] | 1,584 | 1,584 | 1,584 | MEMBER | null | This somewhat reverts the commit:
https://github.com/huggingface/transformers/commit/6c1b23554f8bb5b5e1f6c80969acab764c755678
and the decision taken in #2696
and sets the default generation behavior of `generate()` to greedy / beam search decoding.
Pros:
- `False` is the more natural default value
- Prettier API (especially for encoder_decoder models which will mostly only use beam search generate())
Cons:
- Some people might already be used to the `do_sample=True` default value, and this commit might break the logic of their code (though that would be trivial for them to change; see the snippet below)
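A minimal sketch of how downstream code can pin the old behavior regardless of this default change (model and prompt are illustrative):
```python
# Sketch: pass do_sample explicitly so the default flip cannot silently
# switch a script from sampling to greedy/beam search.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
input_ids = tokenizer.encode("The default changed, so", return_tensors="pt")
output_ids = model.generate(input_ids, do_sample=True, max_length=50, top_k=50)
print(tokenizer.decode(output_ids[0]))
```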
I'm somewhat indifferent whether this PR should be merged, but I think @thomwolf and @sshleifer are in favor of it.
@LysandreJik @thomwolf @sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3298/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3298/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3298",
"html_url": "https://github.com/huggingface/transformers/pull/3298",
"diff_url": "https://github.com/huggingface/transformers/pull/3298.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3298.patch",
"merged_at": 1584456757000
} |
https://api.github.com/repos/huggingface/transformers/issues/3297 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3297/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3297/comments | https://api.github.com/repos/huggingface/transformers/issues/3297/events | https://github.com/huggingface/transformers/issues/3297 | 582,180,364 | MDU6SXNzdWU1ODIxODAzNjQ= | 3,297 | Getting output of any hidden layer | {
"login": "katarina-cavar",
"id": 32822047,
"node_id": "MDQ6VXNlcjMyODIyMDQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/32822047?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/katarina-cavar",
"html_url": "https://github.com/katarina-cavar",
"followers_url": "https://api.github.com/users/katarina-cavar/followers",
"following_url": "https://api.github.com/users/katarina-cavar/following{/other_user}",
"gists_url": "https://api.github.com/users/katarina-cavar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/katarina-cavar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/katarina-cavar/subscriptions",
"organizations_url": "https://api.github.com/users/katarina-cavar/orgs",
"repos_url": "https://api.github.com/users/katarina-cavar/repos",
"events_url": "https://api.github.com/users/katarina-cavar/events{/privacy}",
"received_events_url": "https://api.github.com/users/katarina-cavar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Yes, this will be quite hard and is not a feature that is implemented at the moment nor a feature that we plan on implementing soon. \r\n\r\nAn easy way to get what you want though, will be to clone the repo and adapt the code. You can easily add the layer outputs (e.g. `ffn_output`) you want to the `return` functions of the different Albert layers (you will probalby have to return and retrieve it multiple times until you have it in the `AlbertForSequenceClassification.formard() `function",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,589 | 1,584 | NONE | null | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
Is there a way to get output from any hidden layer of the model?
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
I'm working on the ALBERT transformer (more specifically `AlbertForSequenceClassification`), and when I print the model, this is the model's architecture:
```py
AlbertForSequenceClassification(
(albert): AlbertModel(
(embeddings): AlbertEmbeddings(
(word_embeddings): Embedding(30000, 128, padding_idx=0)
(position_embeddings): Embedding(512, 128)
(token_type_embeddings): Embedding(2, 128)
(LayerNorm): LayerNorm((128,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0, inplace=False)
)
(encoder): AlbertTransformer(
(embedding_hidden_mapping_in): Linear(in_features=128, out_features=768, bias=True)
(albert_layer_groups): ModuleList(
(0): AlbertLayerGroup(
(albert_layers): ModuleList(
(0): AlbertLayer(
(full_layer_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(attention): AlbertAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0, inplace=False)
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
(ffn): Linear(in_features=768, out_features=3072, bias=True)
(ffn_output): Linear(in_features=3072, out_features=768, bias=True)
)
)
)
)
)
(pooler): Linear(in_features=768, out_features=768, bias=True)
(pooler_activation): Tanh()
)
(dropout): Dropout(p=0, inplace=False)
(classifier): Linear(in_features=768, out_features=2, bias=True)
)
```
I would like to get the outputs of middle / hidden layers, for example of layers `ffn_output` or `pooler`, but I'm not sure if that option exists. I've tried extracting `hidden_states` by setting `output_hidden_states` to True in AlbertConfig, but that doesn't bring me the result that I want.
I believe tf-hub/Keras models have a method for this, called via `model.get_layer(layer_name)`.
Is there a way to extract hidden layers?
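One generic way to capture any intermediate module's output (separate from `output_hidden_states`, and not ALBERT-specific) is a PyTorch forward hook; a minimal sketch, using the module path from the architecture printed above:
```python
import torch
from transformers import AlbertForSequenceClassification, AlbertTokenizer

model = AlbertForSequenceClassification.from_pretrained("albert-base-v2")
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")

captured = {}
# Module path taken from the printed architecture above.
ffn_output = model.albert.encoder.albert_layer_groups[0].albert_layers[0].ffn_output
ffn_output.register_forward_hook(lambda mod, inp, out: captured.update(ffn=out))

input_ids = tokenizer.encode("an example sentence", return_tensors="pt")
with torch.no_grad():
    model(input_ids)
print(captured["ffn"].shape)  # (batch_size, seq_len, hidden_size)
```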
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3297/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3297/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3296 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3296/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3296/comments | https://api.github.com/repos/huggingface/transformers/issues/3296/events | https://github.com/huggingface/transformers/issues/3296 | 582,158,024 | MDU6SXNzdWU1ODIxNTgwMjQ= | 3,296 | Installation error: can not find Rust compiler | {
"login": "jiyanloveyou",
"id": 9956312,
"node_id": "MDQ6VXNlcjk5NTYzMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9956312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiyanloveyou",
"html_url": "https://github.com/jiyanloveyou",
"followers_url": "https://api.github.com/users/jiyanloveyou/followers",
"following_url": "https://api.github.com/users/jiyanloveyou/following{/other_user}",
"gists_url": "https://api.github.com/users/jiyanloveyou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiyanloveyou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiyanloveyou/subscriptions",
"organizations_url": "https://api.github.com/users/jiyanloveyou/orgs",
"repos_url": "https://api.github.com/users/jiyanloveyou/repos",
"events_url": "https://api.github.com/users/jiyanloveyou/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiyanloveyou/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can you please open an issue on https://github.com/huggingface/tokenizers?\r\n\r\nThanks!\r\ncc @n1t0 @mfuntowicz "
] | 1,584 | 1,584 | 1,584 | NONE | null | I used pip to install transformers like this:
pip install transformers
in the end, I got the error:
Can not find Rust compiler
However, I have already installed Rust on my Apple computer. Please tell me how to deal with this problem, thank you!
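For reference, when `pip install transformers` builds the `tokenizers` wheel from source, the Rust compiler has to be visible to the exact shell/virtualenv that runs pip. A hedged checklist, assuming a rustup-based install: check that `rustc --version` prints a version in that shell; if it does not, run `source $HOME/.cargo/env` (or restart the terminal) so cargo's bin directory is on `PATH`, then retry `pip install transformers`.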
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3296/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3296/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3295 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3295/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3295/comments | https://api.github.com/repos/huggingface/transformers/issues/3295/events | https://github.com/huggingface/transformers/pull/3295 | 582,128,023 | MDExOlB1bGxSZXF1ZXN0Mzg5MDkzMTUw | 3,295 | Create CodeBERTaJS model card | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3295?src=pr&el=h1) Report\n> Merging [#3295](https://codecov.io/gh/huggingface/transformers/pull/3295?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/af471ce5e8ca7c19183e70bb998561170addc276?src=pr&el=desc) will **increase** coverage by `0.19%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3295?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3295 +/- ##\n==========================================\n+ Coverage 77.82% 78.02% +0.19% \n==========================================\n Files 98 98 \n Lines 16666 16666 \n==========================================\n+ Hits 12970 13003 +33 \n+ Misses 3696 3663 -33\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3295?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3295/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.58% <0%> (-0.14%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3295/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.4% <0%> (+0.4%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3295/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.37% <0%> (+5.9%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3295?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3295?src=pr&el=footer). Last update [af471ce...62cb24d](https://codecov.io/gh/huggingface/transformers/pull/3295?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3295/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3295/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3295",
"html_url": "https://github.com/huggingface/transformers/pull/3295",
"diff_url": "https://github.com/huggingface/transformers/pull/3295.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3295.patch",
"merged_at": 1584375782000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3294 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3294/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3294/comments | https://api.github.com/repos/huggingface/transformers/issues/3294/events | https://github.com/huggingface/transformers/issues/3294 | 582,088,134 | MDU6SXNzdWU1ODIwODgxMzQ= | 3,294 | BertForPreTraining should compute only <MASKED> prediction_scores | {
"login": "songsuoyuan",
"id": 1378976,
"node_id": "MDQ6VXNlcjEzNzg5NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1378976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/songsuoyuan",
"html_url": "https://github.com/songsuoyuan",
"followers_url": "https://api.github.com/users/songsuoyuan/followers",
"following_url": "https://api.github.com/users/songsuoyuan/following{/other_user}",
"gists_url": "https://api.github.com/users/songsuoyuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/songsuoyuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songsuoyuan/subscriptions",
"organizations_url": "https://api.github.com/users/songsuoyuan/orgs",
"repos_url": "https://api.github.com/users/songsuoyuan/repos",
"events_url": "https://api.github.com/users/songsuoyuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/songsuoyuan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I had little modification on the source code and solve the problem. (But I may changed the behavior and the return format of class BertForPretraining)\r\n\r\n```python\r\ndef gather_indexes_auto(sequence_tensor, masked_lm_labels):\r\n \"\"\"Gathers the vectors according to masked_lm_labels over a minibatch.\r\n Input \r\n sequence_tensor: (batch_size, sequence_length, hidden_size)\r\n masked_lm_labels: (batch_size, sequence_length)\r\n Output\r\n output_tensor: (-1, hidden_size)\r\n output_lm_labels: (-1, )\r\n \"\"\"\r\n batch_size = sequence_tensor.size(0)\r\n sequence_length = sequence_tensor.size(1)\r\n hidden_size = sequence_tensor.size(2)\r\n # Flatten sequence_tensor into (-1, hidden_size)\r\n # Flatten masked_lm_labels into (-1, )\r\n sequence_tensor_flat = sequence_tensor.view(batch_size*sequence_length, hidden_size)\r\n masked_lm_labels_flat = masked_lm_labels.view(-1)\r\n # Get non -100 index \r\n # Note: the input index of torch.index_select is 1-D tensor\r\n masked_lm_location = masked_lm_labels_flat.ge(0).nonzero().view(-1)\r\n # Select corresponding values \r\n output_tensor = torch.index_select(sequence_tensor_flat, dim=0, index=masked_lm_location)\r\n output_lm_labels = torch.index_select(masked_lm_labels_flat, dim=0, index=masked_lm_location)\r\n return output_tensor, output_lm_labels\r\n```\r\nAnd in class `BertForPreTraining`\r\n```\r\nclass BertForPreTraining(BertPreTrainedModel):\r\n ...\r\n def forward(...):\r\n outputs = self.bert(\r\n input_ids,\r\n attention_mask=attention_mask,\r\n token_type_ids=token_type_ids,\r\n position_ids=position_ids,\r\n head_mask=head_mask,\r\n inputs_embeds=inputs_embeds,\r\n )\r\n\r\n sequence_output, pooled_output = outputs[:2]\r\n sequence_output, output_lm_labels = gather_indexes_auto(sequence_output, masked_lm_labels)\r\n\r\n prediction_scores, seq_relationship_score = self.cls(sequence_output, pooled_output)\r\n\r\n outputs = (prediction_scores, seq_relationship_score,) + outputs[\r\n 2:\r\n ] # add hidden states and attention if they are here\r\n\r\n if masked_lm_labels is not None and next_sentence_label is not None:\r\n loss_fct = CrossEntropyLoss()\r\n masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), output_lm_labels.view(-1))\r\n next_sentence_loss = loss_fct(seq_relationship_score.view(-1, 2), next_sentence_label.view(-1))\r\n total_loss = masked_lm_loss + next_sentence_loss\r\n outputs = (total_loss,) \r\n\r\n return outputs # (loss), \r\n```\r\nNote that this code only compute and return the loss on <MASKED> labels and thus save a lot computations and GPU memories.\r\nBefore this change, I can only run BERT-MEDIUM (L8 H512 A8) on batch_size = 64 using P100 (16G RAM), after this change, I can run the pretraining using batch_size = 128.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,590 | 1,590 | NONE | null | # 🚀 Feature request
In the class `transformers.BertForPreTraining`, the forward pass computes `prediction_scores` for all tokens. In fact, we could calculate the prediction_scores only on the <MASKED> tokens to save some computational cost.
## Motivation
In the source code of class `BertForPreTraining`:
```python
outputs = self.bert(input_ids, attention_mask, ...)
sequence_output, pooled_output = outputs[:2]
prediction_scores, seq_relationship_score = self.cls(sequence_output, pooled_output)
```
It computes `prediction_scores` using `sequence_output` as the input to `self.cls()`. I was wondering if we could gather the MASKED indexes (labels not equal to -100), pass only the MASKED sequence_output into `self.cls`, and return only the MASKED prediction_scores. These would then be passed into `CrossEntropyLoss()` to compute the `masked_lm_loss`. In this way, we can save some computational cost and partially relieve the OOM problem on GPU.
## My suggestion
We may change the code to
```python
outputs = self.bert(input_ids, attention_mask, ...)
sequence_output, pooled_output = outputs[:2]
# before gather_indexes, size of sequence_output: (batch_size, sequence_length, hidden_size)
# after gather_indexes, size of sequence_output: (batch_size, masked_lm_nums, hidden_size)
sequence_output = gather_indexes(sequence_output, masked_lm_labels)
prediction_scores, seq_relationship_score = self.cls(sequence_output, pooled_output)
```
Then pass `prediction_scores` and the corresponding `masked_lm_labels` (which also need to be gathered) into `CrossEntropyLoss` to compute the `masked_lm_loss`.
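A minimal sketch of that gathering step with plain boolean indexing (assuming, as in this library's convention, that non-masked positions carry the label -100):
```python
def gather_masked(sequence_output, masked_lm_labels, hidden_size):
    # Keep only masked positions (label != -100) before the vocab projection.
    active = masked_lm_labels.view(-1) != -100              # (batch*seq_len,)
    flat = sequence_output.view(-1, hidden_size)            # (batch*seq_len, hidden)
    return flat[active], masked_lm_labels.view(-1)[active]  # (num_masked, ...)
```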
Note: if we adopt this method, the return of BertForPreTraining is changed since we will not compute prediction_scores on all tokens. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3294/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3294/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3293 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3293/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3293/comments | https://api.github.com/repos/huggingface/transformers/issues/3293/events | https://github.com/huggingface/transformers/pull/3293 | 582,081,904 | MDExOlB1bGxSZXF1ZXN0Mzg5MDU0Mjky | 3,293 | Create model card for spanbert-finetuned-squadv2 | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3293/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3293/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3293",
"html_url": "https://github.com/huggingface/transformers/pull/3293",
"diff_url": "https://github.com/huggingface/transformers/pull/3293.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3293.patch",
"merged_at": 1584376367000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3292 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3292/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3292/comments | https://api.github.com/repos/huggingface/transformers/issues/3292/events | https://github.com/huggingface/transformers/issues/3292 | 581,958,138 | MDU6SXNzdWU1ODE5NTgxMzg= | 3,292 | NER Pipeline returns null | {
"login": "Realvincentyuan",
"id": 26101303,
"node_id": "MDQ6VXNlcjI2MTAxMzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/26101303?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Realvincentyuan",
"html_url": "https://github.com/Realvincentyuan",
"followers_url": "https://api.github.com/users/Realvincentyuan/followers",
"following_url": "https://api.github.com/users/Realvincentyuan/following{/other_user}",
"gists_url": "https://api.github.com/users/Realvincentyuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Realvincentyuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Realvincentyuan/subscriptions",
"organizations_url": "https://api.github.com/users/Realvincentyuan/orgs",
"repos_url": "https://api.github.com/users/Realvincentyuan/repos",
"events_url": "https://api.github.com/users/Realvincentyuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Realvincentyuan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"By \"it returns null\" you mean it returns an empty array? That's because it didn't identify any named entity in your sequence."
] | 1,584 | 1,585 | 1,585 | NONE | null | # 🐛 Bug
## Information
Model I am using: NER pipeline
Language I am using the model on: English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
from transformers import pipeline
# Allocate a pipeline for named entity recognition
nlp = pipeline('ner')
nlp(['We are very happy to include pipeline into the transformers repository.'])
```
**returns null.**
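For comparison, a hedged sanity check (the sentence and the exact output format are illustrative): the same pipeline does return entries when the text contains named entities, so an empty list may simply mean nothing was detected in the input above.
```python
nlp(["Hugging Face Inc. is a company based in New York City."])
# Expected (illustrative): a non-empty list of dicts such as
# {'word': 'New', 'entity': 'I-LOC', 'score': 0.99...}, ...
```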
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I expect to get a named entity label for each token, but it returns null.
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.1
- Platform: Jupyter lab
- Python version: 3.6.1
- PyTorch version (GPU?): CPU-1.4.0
- Tensorflow version (GPU?): CPU-1.15.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3292/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3292/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3291 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3291/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3291/comments | https://api.github.com/repos/huggingface/transformers/issues/3291/events | https://github.com/huggingface/transformers/issues/3291 | 581,951,591 | MDU6SXNzdWU1ODE5NTE1OTE= | 3,291 | a lot of examples in doc can't run successful | {
"login": "xiongma",
"id": 30991932,
"node_id": "MDQ6VXNlcjMwOTkxOTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/30991932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiongma",
"html_url": "https://github.com/xiongma",
"followers_url": "https://api.github.com/users/xiongma/followers",
"following_url": "https://api.github.com/users/xiongma/following{/other_user}",
"gists_url": "https://api.github.com/users/xiongma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiongma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiongma/subscriptions",
"organizations_url": "https://api.github.com/users/xiongma/orgs",
"repos_url": "https://api.github.com/users/xiongma/repos",
"events_url": "https://api.github.com/users/xiongma/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiongma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patrickvonplaten my OS is mac OS 10.14.6, Python 3.6.10, tensorflow 2.0, transformers version is the resoruce code in github",
"Hi @policeme, which example did you use? ",
"\r\nlike this",
"@patrickvonplaten ",
"your example like this\r\n\r\n",
"Please don't paste screenshots of your code on issues. Copy and Paste the code in code format. We can't copy the code this way. Typing your code from a screenshot is very time-consuming. \r\n\r\nConsidering the problem you have. How did you train the model that was saved in `...ckpt.index`? Did you use this library? From your error message, it seems like your Bert TF model saved in `.ckpt.index` does not have the correct form. \r\n\r\nThe example you mention should be used if you trained your model with this library.",
"sorry about it, this is my code\r\n`config = AutoConfig.from_pretrained(r'/Users/maxiong/Workpace/Code/transformers/pre_model/bert_config.json')\r\ntokenizer = AutoTokenizer.from_pretrained(r'/Users/maxiong/Workpace/Code/transformers/pre_model/bert_config.json')\r\nmodel = AutoModel.from_pretrained(r'/Users/maxiong/Workpace/Code/transformers/pre_model/bert_model.ckpt.index', from_tf=True, config=config)\r\n`\r\nthis bert model is from google official chinese bert base model, in your answer, the model can be used with this libary, which was trained by this library, if that, if I want use own pretraning bert model, how to use it with this library"
] | 1,584 | 1,585 | 1,585 | NONE | null | I used the examples in the docs, but a lot of them do not run successfully. What's wrong? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3291/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3291/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3290 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3290/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3290/comments | https://api.github.com/repos/huggingface/transformers/issues/3290/events | https://github.com/huggingface/transformers/pull/3290 | 581,892,856 | MDExOlB1bGxSZXF1ZXN0Mzg4ODkxNTcz | 3,290 | [WIP] Lightning glue example | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@srush Can you please take a look?",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3290?src=pr&el=h1) Report\n> Merging [#3290](https://codecov.io/gh/huggingface/transformers/pull/3290?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8320feec09309a94f673e1e7ce2a93da81eb3366&el=desc) will **increase** coverage by `0.18%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3290?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3290 +/- ##\n==========================================\n+ Coverage 77.81% 77.99% +0.18% \n==========================================\n Files 98 98 \n Lines 16666 16666 \n==========================================\n+ Hits 12969 12999 +30 \n+ Misses 3697 3667 -30 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3290?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3290/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.84% <0.00%> (+0.27%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3290/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.47% <0.00%> (+5.00%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3290?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3290?src=pr&el=footer). Last update [8320fee...dd1b783](https://codecov.io/gh/huggingface/transformers/pull/3290?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Looks excellent. I will let @LysandreJik merge tomorrow, and confirm multi-gpu / TPU work. \r\n\r\nWant to try SQuAD next?",
"> Want to try SQuAD next?\r\n\r\nSure, I'll give it a go."
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | This PR adds an example of using PyTorch Lightning to run the GLUE benchmark. Additionally, I altered `transformer_base.py` to use auto models and moved it to the examples directory so it can be copied in by any script that wishes to use it.
Preferably, the base transformer would have subclasses for the different types of tasks, but I just used a dictionary with a key passed on init instead. (i.e. NER uses `AutoModelForTokenClassification` and GLUE uses `AutoModelForSequenceClassification`). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3290/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3290/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3290",
"html_url": "https://github.com/huggingface/transformers/pull/3290",
"diff_url": "https://github.com/huggingface/transformers/pull/3290.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3290.patch",
"merged_at": 1584460003000
} |
https://api.github.com/repos/huggingface/transformers/issues/3289 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3289/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3289/comments | https://api.github.com/repos/huggingface/transformers/issues/3289/events | https://github.com/huggingface/transformers/issues/3289 | 581,775,502 | MDU6SXNzdWU1ODE3NzU1MDI= | 3,289 | GPT-2 attention_mask reshaping uses input_ids first dimension | {
"login": "lazarevskiVsg",
"id": 55826248,
"node_id": "MDQ6VXNlcjU1ODI2MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/55826248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lazarevskiVsg",
"html_url": "https://github.com/lazarevskiVsg",
"followers_url": "https://api.github.com/users/lazarevskiVsg/followers",
"following_url": "https://api.github.com/users/lazarevskiVsg/following{/other_user}",
"gists_url": "https://api.github.com/users/lazarevskiVsg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lazarevskiVsg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lazarevskiVsg/subscriptions",
"organizations_url": "https://api.github.com/users/lazarevskiVsg/orgs",
"repos_url": "https://api.github.com/users/lazarevskiVsg/repos",
"events_url": "https://api.github.com/users/lazarevskiVsg/events{/privacy}",
"received_events_url": "https://api.github.com/users/lazarevskiVsg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @lazarevskiVsg, thanks a lot for pointing this out!\r\nIt should be fixed now :-) "
] | 1,584 | 1,584 | 1,584 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): GPT-2
## To reproduce
Pass attention_mask to **GPT2LMHeadModel** while feeding **inputs_embeds** instead of **input_ids**.
The code fails because line 427 of modeling_gpt2.py uses the first dimension of input_ids to reshape the mask:
```
if attention_mask is not None:
    batch_size = input_ids.shape[0]
    attention_mask = attention_mask.view(batch_size, -1)
```
I fixed it by changing **input_ids.shape[0]** to **attention_mask.shape[0]**, but I think it would be more correct to obtain a single batch_size from whichever input format is available.
**Update**
I think `batch_size = input_shape[0]` is the best way.
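A minimal sketch of that fix (not the exact upstream diff; `input_shape` is the shape the model already derives from either `input_ids` or `inputs_embeds`, so it stays valid when `input_ids` is None):

```python
if attention_mask is not None:
    # input_shape covers both the input_ids and the inputs_embeds paths.
    batch_size = input_shape[0]
    attention_mask = attention_mask.view(batch_size, -1)
```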
- `transformers` version: **master**
- Python version: 3.7
- PyTorch version (GPU?): 1.4
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3289/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3289/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3288 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3288/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3288/comments | https://api.github.com/repos/huggingface/transformers/issues/3288/events | https://github.com/huggingface/transformers/issues/3288 | 581,701,674 | MDU6SXNzdWU1ODE3MDE2NzQ= | 3,288 | Dockerhub images huggingface/transformers_cpu for version 2.5.1 has version 2.5.0 installed | {
"login": "edwardcqian",
"id": 26368837,
"node_id": "MDQ6VXNlcjI2MzY4ODM3",
"avatar_url": "https://avatars.githubusercontent.com/u/26368837?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/edwardcqian",
"html_url": "https://github.com/edwardcqian",
"followers_url": "https://api.github.com/users/edwardcqian/followers",
"following_url": "https://api.github.com/users/edwardcqian/following{/other_user}",
"gists_url": "https://api.github.com/users/edwardcqian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/edwardcqian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/edwardcqian/subscriptions",
"organizations_url": "https://api.github.com/users/edwardcqian/orgs",
"repos_url": "https://api.github.com/users/edwardcqian/repos",
"events_url": "https://api.github.com/users/edwardcqian/events{/privacy}",
"received_events_url": "https://api.github.com/users/edwardcqian/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"Should be fixed, I'll push updated images for all the others Dockerfile in the next hours. \r\n\r\nThanks for reporting @edwardcqian ",
"I'm closing, feel free to reopen if I missed something 👍 "
] | 1,584 | 1,584 | 1,584 | NONE | null | # 🐛 Bug
## Information
Model I am using: any model introduced in 2.5.1
The problem arises when using:
pulling `huggingface/transformers_cpu:2.5.1` from Docker Hub
## To reproduce
Steps to reproduce the behavior:
1. pull docker image from dockerhub
2. run docker container
3. run `pip freeze` to see `transformers==2.5.0`
## Expected behavior
`pip freeze` in the docker container should show: `transformers==2.5.1` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3288/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3288/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3287 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3287/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3287/comments | https://api.github.com/repos/huggingface/transformers/issues/3287/events | https://github.com/huggingface/transformers/issues/3287 | 581,684,320 | MDU6SXNzdWU1ODE2ODQzMjA= | 3,287 | Unexpected output from feature extraction pipeline | {
"login": "Stuffooh",
"id": 50005268,
"node_id": "MDQ6VXNlcjUwMDA1MjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/50005268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Stuffooh",
"html_url": "https://github.com/Stuffooh",
"followers_url": "https://api.github.com/users/Stuffooh/followers",
"following_url": "https://api.github.com/users/Stuffooh/following{/other_user}",
"gists_url": "https://api.github.com/users/Stuffooh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Stuffooh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Stuffooh/subscriptions",
"organizations_url": "https://api.github.com/users/Stuffooh/orgs",
"repos_url": "https://api.github.com/users/Stuffooh/repos",
"events_url": "https://api.github.com/users/Stuffooh/events{/privacy}",
"received_events_url": "https://api.github.com/users/Stuffooh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The tokenizer adds special tokens (here, specific to BERT) at the beginning and end of the sentence. \r\n\r\nYou can check that with:\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')\r\nlen(tokenizer.encode(TEXT)) == 18\r\n```",
"@julien-c is there any way to avoid the special tokens when extracting the features? I expected \"add_special_tokens=False\" would prevent this from happening?",
"I would suggest not using the Pipeline and just doing `tokenizer.encode()` then `outputs = model(input_ids)`"
] | 1,584 | 1,584 | 1,584 | NONE | null | Hi everyone,
I'm not sure if it is a bug or if I am simply overlooking something, so I did not want to submit a bug report yet. I have the following example code:
```python
from transformers import pipeline, AutoTokenizer
import numpy as np

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased',
                                          add_special_tokens=False)
# initialize the pipeline
nlp = pipeline('feature-extraction', model='bert-base-uncased', config='bert-base-uncased', tokenizer=tokenizer, device=1)
features = nlp("Why is Howard asking questions about the food after Leonard gives him a carton ?")
features = np.squeeze(features)
print(features.shape)
```
I expect the output shape (15, 768), but I receive (18, 768). I think there are only 15 tokens, yet the first dimension is 18. What am I missing here? (A quick check is sketched below.)
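A quick way to see where the extra positions come from (a minimal check; the token list printed here is whatever BERT's word-piece tokenizer produces):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
text = ("Why is Howard asking questions about the food after "
        "Leonard gives him a carton ?")

# encode() adds [CLS] and [SEP] by default, and rare words may split into
# several word pieces, so the count can exceed the 15 whitespace tokens.
ids = tokenizer.encode(text)
print(len(ids))                              # 18 for this sentence
print(tokenizer.convert_ids_to_tokens(ids))  # shows [CLS]/[SEP] and any splits
```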
Is this output expected and am I simply missing something or is there more to it? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3287/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3287/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3286 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3286/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3286/comments | https://api.github.com/repos/huggingface/transformers/issues/3286/events | https://github.com/huggingface/transformers/pull/3286 | 581,650,752 | MDExOlB1bGxSZXF1ZXN0Mzg4Njg1MzEw | 3,286 | Adding LM Head to Transfo-XL and first step to fixing problem with Adaptive Embeddings in TransfoXL | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3286?src=pr&el=h1) Report\n> Merging [#3286](https://codecov.io/gh/huggingface/transformers/pull/3286?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/68ef0a111f8740f06ca4e5a00374ec4e2adb0a6d&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `68.29%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3286?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3286 +/- ##\n==========================================\n+ Coverage 77.48% 77.50% +0.01% \n==========================================\n Files 99 99 \n Lines 16799 16768 -31 \n==========================================\n- Hits 13017 12996 -21 \n+ Misses 3782 3772 -10 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3286?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/3286/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.91% <ø> (ø)` | |\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/3286/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `64.61% <ø> (+11.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3286/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `89.15% <55.00%> (-2.04%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3286/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `77.00% <80.95%> (+1.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3286/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `82.46% <0.00%> (-3.76%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3286?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3286?src=pr&el=footer). Last update [68ef0a1...f2cc11a](https://codecov.io/gh/huggingface/transformers/pull/3286?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"> Ok good job.\r\n> \r\n> I feel like we could remove all the dead code related to sampling softmax (see my comments)\r\n\r\nSound good, will do that then!",
"Dead code is now removed. This removed a lot of code. To not re-invent the wheel the code is kept in the branch `add_sampling_and_training_to_transfo_xl_models` and documented by the feature request: #3310 , if someone wants to pick up implementing sample softmax again. \r\n\r\nThis PR still adds language modeling capabilities to TF transfoXL."
] | 1,584 | 1,584 | 1,584 | MEMBER | null | This PR adds LM generation capabilities to the TF transfo-xl model. The integration tests for language generation pass, so generation from a pretrained model now works in TF as well.
What definitely does not work yet is running both the PT and TF models with `self.sample_softmax > 0`:
- Transfo-XL uses adaptive word embeddings: the word embeddings are broken down into four embeddings of different shapes, `[20000, 1024], [20000, 1024], [160000, 64]` and `[67735, 16]`. When `self.sample_softmax > 0` though, it seems like the model expects `normal` word embeddings with just a single weight matrix. When then trying to tie the weights as done in line 831 (see comment below), the logic breaks.
This problem seems to be more complex, though, and I'd suggest solving it in another PR (and possibly having a call beforehand to make things clear). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3286/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3286",
"html_url": "https://github.com/huggingface/transformers/pull/3286",
"diff_url": "https://github.com/huggingface/transformers/pull/3286.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3286.patch",
"merged_at": 1584537868000
} |
https://api.github.com/repos/huggingface/transformers/issues/3285 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3285/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3285/comments | https://api.github.com/repos/huggingface/transformers/issues/3285/events | https://github.com/huggingface/transformers/issues/3285 | 581,614,153 | MDU6SXNzdWU1ODE2MTQxNTM= | 3,285 | Is there a way to evaluate GPT-2 model during fine-tuning process for accuracy and fluency? | {
"login": "D-i-l-r-u-k-s-h-i",
"id": 47185867,
"node_id": "MDQ6VXNlcjQ3MTg1ODY3",
"avatar_url": "https://avatars.githubusercontent.com/u/47185867?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i",
"html_url": "https://github.com/D-i-l-r-u-k-s-h-i",
"followers_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/followers",
"following_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/following{/other_user}",
"gists_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/gists{/gist_id}",
"starred_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/subscriptions",
"organizations_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/orgs",
"repos_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/repos",
"events_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/events{/privacy}",
"received_events_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"A common way of evaluating LMs is to measure their Perplexity. \r\nSay you want to finetune GPT2 on your dataset D.\r\nDefine train, val and test datasets (maybe something around 75%, 10%, 15%.\r\nMeasure the [perplexity](https://towardsdatascience.com/perplexity-intuition-and-derivation-105dd481c8f3) on train and val after each epoch. Compare train and eval curves for overfitting. \r\n\r\nThere are a ton of other evaluation measures that might be better for your task - Google will be your best friend :-) ",
"@patrickvonplaten can you provide example/code implementation? "
] | 1,584 | 1,650 | 1,584 | NONE | null | # ❓ Questions & Help
I'm trying to evaluate a GPT-2 model during the fine-tuning process. I'm able to calculate the loss at each epoch, but I do not know how accuracy can be calculated or how to score the model. I would appreciate some suggestions.
## Details
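Following the perplexity suggestion in the comments, a minimal sketch (the `eval_texts` list and the checkpoint are placeholders; per-example mean losses are averaged rather than token-weighted, which is a simplification):

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # or your fine-tuned checkpoint
model.eval()

eval_texts = ["A held-out validation sentence."]  # placeholder split
losses = []
with torch.no_grad():
    for text in eval_texts:
        input_ids = torch.tensor([tokenizer.encode(text)])
        # With labels=input_ids the model returns the mean LM loss first.
        loss = model(input_ids, labels=input_ids)[0]
        losses.append(loss.item())

print("perplexity:", math.exp(sum(losses) / len(losses)))
```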
**A link to original question on Stack Overflow**: https://stackoverflow.com/questions/60483956/how-to-perform-accuracy-testing-on-text-generation-task | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3285/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3285/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3284 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3284/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3284/comments | https://api.github.com/repos/huggingface/transformers/issues/3284/events | https://github.com/huggingface/transformers/issues/3284 | 581,613,915 | MDU6SXNzdWU1ODE2MTM5MTU= | 3,284 | Return token span from NerPipeline | {
"login": "EmilStenstrom",
"id": 224130,
"node_id": "MDQ6VXNlcjIyNDEzMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/224130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EmilStenstrom",
"html_url": "https://github.com/EmilStenstrom",
"followers_url": "https://api.github.com/users/EmilStenstrom/followers",
"following_url": "https://api.github.com/users/EmilStenstrom/following{/other_user}",
"gists_url": "https://api.github.com/users/EmilStenstrom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EmilStenstrom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EmilStenstrom/subscriptions",
"organizations_url": "https://api.github.com/users/EmilStenstrom/orgs",
"repos_url": "https://api.github.com/users/EmilStenstrom/repos",
"events_url": "https://api.github.com/users/EmilStenstrom/events{/privacy}",
"received_events_url": "https://api.github.com/users/EmilStenstrom/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Oh, I guess a workaround is to pass in `ignore_labels=[]` when creating the pipeline. This makes the nlp call return all tokens, including the ones that are not part of NER. Then I can just chunk two tokens together if their label is the same and they are nearby. Does this make sense, or am I missing something fundamental?",
"Hi again, would you be open to a PR fixing this?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,590 | 1,590 | NONE | null | # 🚀 Feature request
I would like to suggest that NerPipeline should return the span in the original text where the matched entity exists.
Instead of:
```python
[
{"word": "New", "score": 0.9995751976966858, "entity": "LOC"},
{"word": "York", "score": 0.9996403455734253, "entity": "LOC"}
]
```
I would like to see this:
```python
[
{"word": "New", "score": 0.9995751976966858, "entity": "LOC", "span": (0, 3)},
{"word": "York", "score": 0.9996403455734253, "entity": "LOC", "span": (4, 8)}
]
```
## Motivation
I'm trying to use transformers for NER, and I specifically want to return multi-word entities as one phrase.
With the above example, I would like to return "New York". With spans added, I would be able to merge nearby tokens into one.
This makes it possible to differentiate the result of "A place called New, and a place called York" from "A place called New York". With the current scheme, they both return the same thing.
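For illustration, the merge could look roughly like this (a sketch against the proposed output format above, not the current pipeline API; it assumes each entity dict already carries a `span` and that entities arrive in span order):

```python
def merge_adjacent_entities(entities):
    """Merge consecutive tokens of the same entity type whose spans touch or
    are separated by at most one character, so "New" + "York" -> "New York"."""
    merged = []
    for ent in entities:
        prev = merged[-1] if merged else None
        if (prev is not None
                and prev["entity"] == ent["entity"]
                and ent["span"][0] - prev["span"][1] <= 1):
            prev["word"] += " " + ent["word"]
            prev["span"] = (prev["span"][0], ent["span"][1])
            prev["score"] = min(prev["score"], ent["score"])
        else:
            merged.append(dict(ent))
    return merged
```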
## Your contribution
I think I understand NerPipeline enough to make a PR, if this is something you would be open to. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3284/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3284/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3283 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3283/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3283/comments | https://api.github.com/repos/huggingface/transformers/issues/3283/events | https://github.com/huggingface/transformers/issues/3283 | 581,590,631 | MDU6SXNzdWU1ODE1OTA2MzE= | 3,283 | What is the most effective way to use BERT , ROBERTA , GPT-2 architectures as frozen feature extractors ? | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,591 | 1,591 | CONTRIBUTOR | null | We use pretrained self-supervised learning (SSL) models for NLP as feature extractors for downstream tasks like sentiment analysis. In most such cases, we add a simple new classification layer and **fine-tune the whole model**. With SSL models getting bigger and the amount of unsupervised training data growing huge, it would be nice to exploit the problem-agnostic behavior of SSL embeddings. In other words, if we use them as **frozen feature extractors**, we can save a lot of time and computational cost.
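For reference, freezing the backbone in PyTorch is just a matter of disabling gradients (a minimal sketch, assuming a BERT encoder feeding a small, hypothetical classification head):

```python
import torch
from transformers import BertModel

bert = BertModel.from_pretrained("bert-base-uncased")
for param in bert.parameters():
    param.requires_grad = False  # the backbone stays frozen

# Only the new head is trained; features can even be precomputed once.
classifier = torch.nn.Linear(bert.config.hidden_size, 2)  # hypothetical 2-class head
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
```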
**Has anyone seen a good review on using SSL networks as frozen feature extractors?** | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3283/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3283/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3282 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3282/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3282/comments | https://api.github.com/repos/huggingface/transformers/issues/3282/events | https://github.com/huggingface/transformers/issues/3282 | 581,578,692 | MDU6SXNzdWU1ODE1Nzg2OTI= | 3,282 | Install error , Win10,anaconda3,python3.5,pytorch | {
"login": "caucsunjiahui",
"id": 52264790,
"node_id": "MDQ6VXNlcjUyMjY0Nzkw",
"avatar_url": "https://avatars.githubusercontent.com/u/52264790?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/caucsunjiahui",
"html_url": "https://github.com/caucsunjiahui",
"followers_url": "https://api.github.com/users/caucsunjiahui/followers",
"following_url": "https://api.github.com/users/caucsunjiahui/following{/other_user}",
"gists_url": "https://api.github.com/users/caucsunjiahui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/caucsunjiahui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/caucsunjiahui/subscriptions",
"organizations_url": "https://api.github.com/users/caucsunjiahui/orgs",
"repos_url": "https://api.github.com/users/caucsunjiahui/repos",
"events_url": "https://api.github.com/users/caucsunjiahui/events{/privacy}",
"received_events_url": "https://api.github.com/users/caucsunjiahui/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"seems to be a sentencepiece issue, please open an issue at https://github.com/google/sentencepiece"
] | 1,584 | 1,584 | 1,584 | NONE | null |
When I `pip install transformers`, it is not successful. My environment is Win10, Anaconda3, Python 3.5. The error is as follows; what is wrong with it? Thank you!
## Details
Complete output from command `python setup.py egg_info`:
```
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\sjh\AppData\Local\Temp\pip-install-fu05dcfq\sentencepiece\setup.py", line 29, in <module>
    with codecs.open(os.path.join('..', 'VERSION'), 'r', 'utf-8') as f:
  File "C:\Users\sjh\Anaconda3\Lib\codecs.py", line 895, in open
    file = builtins.open(filename, mode, buffering)
FileNotFoundError: [Errno 2] No such file or directory: '..\\VERSION'
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in C:\Users\sjh\AppData\Local\Temp\pip-install-fu05dcfq\sentencepiece\
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3282/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3282/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3281 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3281/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3281/comments | https://api.github.com/repos/huggingface/transformers/issues/3281/events | https://github.com/huggingface/transformers/issues/3281 | 581,532,887 | MDU6SXNzdWU1ODE1MzI4ODc= | 3,281 | how to use TFBertModel to load a Bert, which the path is from own computer. | {
"login": "xiongma",
"id": 30991932,
"node_id": "MDQ6VXNlcjMwOTkxOTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/30991932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiongma",
"html_url": "https://github.com/xiongma",
"followers_url": "https://api.github.com/users/xiongma/followers",
"following_url": "https://api.github.com/users/xiongma/following{/other_user}",
"gists_url": "https://api.github.com/users/xiongma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiongma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiongma/subscriptions",
"organizations_url": "https://api.github.com/users/xiongma/orgs",
"repos_url": "https://api.github.com/users/xiongma/repos",
"events_url": "https://api.github.com/users/xiongma/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiongma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"and how to use AutoTokenizer to load a local vocab file, instead of download a vocab file from server"
] | 1,584 | 1,585 | 1,585 | NONE | null | My model's path is on my own computer, so I want to load it from there, but when I use TFBertModel to load it, this error appears.
`model = TFBertModel.from_pretrained('/Users/maxiong/Workpace/Code/transformers/pre_model',config=config)`
error:

These are my model files:

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3281/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3281/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3280 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3280/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3280/comments | https://api.github.com/repos/huggingface/transformers/issues/3280/events | https://github.com/huggingface/transformers/issues/3280 | 581,527,835 | MDU6SXNzdWU1ODE1Mjc4MzU= | 3,280 | how to finetune with PreTrainedEncoderDecoder | {
"login": "vanh17",
"id": 10501538,
"node_id": "MDQ6VXNlcjEwNTAxNTM4",
"avatar_url": "https://avatars.githubusercontent.com/u/10501538?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vanh17",
"html_url": "https://github.com/vanh17",
"followers_url": "https://api.github.com/users/vanh17/followers",
"following_url": "https://api.github.com/users/vanh17/following{/other_user}",
"gists_url": "https://api.github.com/users/vanh17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vanh17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vanh17/subscriptions",
"organizations_url": "https://api.github.com/users/vanh17/orgs",
"repos_url": "https://api.github.com/users/vanh17/repos",
"events_url": "https://api.github.com/users/vanh17/events{/privacy}",
"received_events_url": "https://api.github.com/users/vanh17/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1843738573,
"node_id": "MDU6TGFiZWwxODQzNzM4NTcz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Encoder-Decoder",
"name": "Core: Encoder-Decoder",
"color": "ef536d",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,591 | 1,591 | NONE | null | # ❓ Questions & Help
## Details
I am trying to run seq2seq within the same language (say English). I tried PreTrainedEncoderDecoder with (BERT, BERT), and also with BERT and GPT-2; however, it looks like the latter combination is not supported yet.
I am trying to understand what the forward function does in the PreTrainedEncoderDecoder class, and how we can use it to train on my dataset.
Also, how can we use it at prediction time, since forward needs both decoder_input_ids and encoder_input_ids? I do not think we will have decoder_input_ids at prediction time. Thank you. (A sketch of the training call is below.)
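For training, the call looks roughly like this (a sketch against the 2.x API as I understand it; the example sentences are placeholders and the `decoder_lm_labels` kwarg should be double-checked against your installed version):

```python
import torch
from transformers import BertTokenizer, PreTrainedEncoderDecoder

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = PreTrainedEncoderDecoder.from_pretrained("bert-base-uncased",
                                                 "bert-base-uncased")

source = torch.tensor([tokenizer.encode("The long source sentence.")])
target = torch.tensor([tokenizer.encode("The target sentence.")])

# Teacher forcing: the decoder consumes the gold target while
# decoder_lm_labels supplies the tokens to predict at each position.
outputs = model(source, target, decoder_lm_labels=target)
loss = outputs[0]
```

At prediction time, the usual workaround is to start the decoder from just the start token and decode step by step, appending each predicted token to decoder_input_ids.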
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3280/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3280/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3279 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3279/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3279/comments | https://api.github.com/repos/huggingface/transformers/issues/3279/events | https://github.com/huggingface/transformers/pull/3279 | 581,411,626 | MDExOlB1bGxSZXF1ZXN0Mzg4NDc1Mzgy | 3,279 | [BART] Remove unused kwargs | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3279?src=pr&el=h1) Report\n> Merging [#3279](https://codecov.io/gh/huggingface/transformers/pull/3279?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3814e167d99c4b2e135b250d73deaa3f63ebef0c&el=desc) will **decrease** coverage by `0.07%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3279?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3279 +/- ##\n==========================================\n- Coverage 78.02% 77.94% -0.08% \n==========================================\n Files 98 98 \n Lines 16670 16666 -4 \n==========================================\n- Hits 13007 12991 -16 \n- Misses 3663 3675 +12 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3279?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3279/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.26% <100.00%> (-0.04%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3279/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.22% <0.00%> (-1.97%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3279/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.72% <0.00%> (-0.14%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3279?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3279?src=pr&el=footer). Last update [3814e16...1b8aa30](https://codecov.io/gh/huggingface/transformers/pull/3279?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I found one more `forward` in bertabs, these others are not obviously wrong.\r\n\r\n`git grep \"\\.forward(\"`\r\n\r\n```\r\nexamples/ner/run_pl_ner.py: outputs = self.forward(**inputs)\r\nexamples/ner/run_pl_ner.py: outputs = self.forward(**inputs)\r\n```\r\non a lightning module so OK\r\n\r\n```\r\nexamples/summarization/bertabs/modeling_bertabs.py: See :obj:`onmt.modules.RNNDecoderBase.forward()` \r\n``` \r\nIn documentation so OK\r\n\r\n```\r\nsrc/transformers/modeling_bart.py: return super().forward(positions)\r\nsrc/transformers/modeling_roberta.py: return super().forward(\r\n```\r\ntried to change and got \"super() is not callable\".\r\n\r\nMerging!\r\n"
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | This doesn't change anything,
- `k_dim` and `v_dim` kwargs are there for other models in fairseq, but we don't need them.
- attention weights are returned by the AttentionModule (and ignored later) no matter what.
"url": "https://api.github.com/repos/huggingface/transformers/issues/3279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3279/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3279",
"html_url": "https://github.com/huggingface/transformers/pull/3279",
"diff_url": "https://github.com/huggingface/transformers/pull/3279.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3279.patch",
"merged_at": 1584327644000
} |
https://api.github.com/repos/huggingface/transformers/issues/3278 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3278/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3278/comments | https://api.github.com/repos/huggingface/transformers/issues/3278/events | https://github.com/huggingface/transformers/pull/3278 | 581,397,782 | MDExOlB1bGxSZXF1ZXN0Mzg4NDYyNzAz | 3,278 | [BART] generation_mode as a kwarg not a class attribute | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3278?src=pr&el=h1) Report\n> Merging [#3278](https://codecov.io/gh/huggingface/transformers/pull/3278?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3814e167d99c4b2e135b250d73deaa3f63ebef0c?src=pr&el=desc) will **decrease** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3278?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3278 +/- ##\n==========================================\n- Coverage 78.02% 78.02% -0.01% \n==========================================\n Files 98 98 \n Lines 16670 16667 -3 \n==========================================\n- Hits 13007 13004 -3 \n Misses 3663 3663\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3278?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.7% <ø> (-0.16%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.28% <100%> (-0.01%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.37% <0%> (+0.17%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3278?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3278?src=pr&el=footer). Last update [3814e16...473dab8](https://codecov.io/gh/huggingface/transformers/pull/3278?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"### Summary:\r\n`generation_mode` is a flag that\r\n - tells the decoder NOT to make decoder_attn_mask (ignored pad tokens and causal tokens)\r\n - keeps position_embeds correct even though we only decode one new token at a time.\r\n - tells the decoder \r\n\r\n\r\nThe easiest way to get rid of it in `modeling_utils.py`: \r\n\tpass a kwarg from `BartModel.prepare_inputs_from_generation` (then it never needs to be unset, and modeling_utils.py doesn't need to know about it)\r\n\tI don't know how to get rid of the logic entirely. It's tough to know whether you're in generation mode at step 0 because the cache is empty.",
"I updated this PR to implement the solution I proposed.",
"Merging, but feel free to ask further questions!"
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | Currently, we set it to `BartModel.decoder.generation_mode = True` and then never unset it, which is confusing in the rare case where you try to finetune or extract features after generating.
We can encapsulate BART-specific logic in modeling_bart.py by just using a kwarg.
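Roughly, the flag then travels with each generation step instead of living on the module (a hypothetical sketch of the shape of the change, not the exact diff):

```python
# Sketch: BartForConditionalGeneration supplies the flag per step, so
# modeling_utils.py never learns about it and nothing is left set afterwards.
def prepare_inputs_for_generation(self, input_ids, past=None, **kwargs):
    return {
        "input_ids": input_ids,
        "decoder_cached_states": past,
        "generation_mode": True,  # consumed as a kwarg by the decoder forward
    }
```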
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3278/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3278/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3278",
"html_url": "https://github.com/huggingface/transformers/pull/3278",
"diff_url": "https://github.com/huggingface/transformers/pull/3278.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3278.patch",
"merged_at": 1584377274000
} |
https://api.github.com/repos/huggingface/transformers/issues/3277 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3277/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3277/comments | https://api.github.com/repos/huggingface/transformers/issues/3277/events | https://github.com/huggingface/transformers/pull/3277 | 581,297,842 | MDExOlB1bGxSZXF1ZXN0Mzg4Mzc2MDc3 | 3,277 | Add missing token classification for XLM | {
"login": "sakares",
"id": 1306031,
"node_id": "MDQ6VXNlcjEzMDYwMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1306031?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sakares",
"html_url": "https://github.com/sakares",
"followers_url": "https://api.github.com/users/sakares/followers",
"following_url": "https://api.github.com/users/sakares/following{/other_user}",
"gists_url": "https://api.github.com/users/sakares/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sakares/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sakares/subscriptions",
"organizations_url": "https://api.github.com/users/sakares/orgs",
"repos_url": "https://api.github.com/users/sakares/repos",
"events_url": "https://api.github.com/users/sakares/events{/privacy}",
"received_events_url": "https://api.github.com/users/sakares/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,584 | 1,585 | 1,585 | CONTRIBUTOR | null | The current [modeling_xlm.py](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_xlm.py) does not have a `ForTokenClassification` class like the other models, which helps with NER task comparison across all existing models.
Now `XLMForTokenClassification` can be called via:
```python
from transformers import XLMForTokenClassification
model = XLMForTokenClassification.from_pretrained('xlm-mlm-100-1280')
``` | {
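For context, the `ForTokenClassification` heads in this repo generally follow the same pattern: dropout plus a per-token linear layer (a sketch of that pattern, not the exact diff in this PR):

```python
import torch.nn as nn

class SketchTokenClassificationHead(nn.Module):
    """Hypothetical illustration of the usual ForTokenClassification head."""

    def __init__(self, hidden_size, num_labels, dropout=0.1):
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, sequence_output):
        # sequence_output: (batch, seq_len, hidden_size) from the transformer
        return self.classifier(self.dropout(sequence_output))  # per-token logits
```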
"url": "https://api.github.com/repos/huggingface/transformers/issues/3277/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3277/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3277",
"html_url": "https://github.com/huggingface/transformers/pull/3277",
"diff_url": "https://github.com/huggingface/transformers/pull/3277.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3277.patch",
"merged_at": 1585232533000
} |
https://api.github.com/repos/huggingface/transformers/issues/3276 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3276/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3276/comments | https://api.github.com/repos/huggingface/transformers/issues/3276/events | https://github.com/huggingface/transformers/issues/3276 | 581,255,883 | MDU6SXNzdWU1ODEyNTU4ODM= | 3,276 | Model fail to revert to generation_mode=False after generation | {
"login": "AOZMH",
"id": 49521559,
"node_id": "MDQ6VXNlcjQ5NTIxNTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/49521559?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AOZMH",
"html_url": "https://github.com/AOZMH",
"followers_url": "https://api.github.com/users/AOZMH/followers",
"following_url": "https://api.github.com/users/AOZMH/following{/other_user}",
"gists_url": "https://api.github.com/users/AOZMH/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AOZMH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AOZMH/subscriptions",
"organizations_url": "https://api.github.com/users/AOZMH/orgs",
"repos_url": "https://api.github.com/users/AOZMH/repos",
"events_url": "https://api.github.com/users/AOZMH/events{/privacy}",
"received_events_url": "https://api.github.com/users/AOZMH/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"I don't follow your workflow. Why do you want to run forward after you generate using the generated_ids as decoder_input_ids?\r\n\r\nIt would be easier to follow if you:\r\n- checked whether the behavior you expect is achieved by the authors' implementation in fairseq\r\n- made the example smaller\r\n\r\nThanks for contributing!\r\n",
"@sshleifer Thanks for the reply! \r\nYour latest PR already fixed the generation_mode, thanks! The rest of the problem (that you probably failed to follow) is like a sanity check, but the result failed to meet the expectation for me.\r\n\r\nBasically, if I feed sequence s1 to the encoder to let the model to generate, suppose sequence s2 is generated, then if I directly feed <encoder_input_ids=s1, decoder_input_ids=s2> to the model, the output of the decoder would be something that resembles s2. However, as shown in the code below, the actual output of the decoder given input <s1,s2> is a sequence \"and and ... and\", which is largly different from s2, and that's where I'm confused.\r\n\r\n```\r\nwith torch.no_grad():\r\n result = model.generate(input_ids=input_ids, eos_token_ids=tokenizer.eos_token_id, num_beams=4, max_length=20)\r\n\r\nmodel.model.decoder.generation_mode=False\r\nnew_result = model(input_ids, decoder_input_ids=result)\r\n\r\nprint(tokenizer.decode(torch.argmax(new_result[0][0], dim=1)))\r\n# ' and and and and and and and and and and and and and and and and and and and'\r\n```\r\n\r\nOf course we won't use this code in practice (since it's just a sanity check), but I post this because I'm wondering if it's my incorrect way of using `forward` function when training that caused this confusion. As for me, if I want to train the summarization model using <paragraph=s1, summary=s2> pairs, I formerly feed <encoder_input=s1, decoder_input=< bos >+s2> and train the model with decoder_output=s2+< eos >. May I ask if it is the correct way?\r\n\r\nThanks again for the kind help!",
"For summarization finetuning, I'd recommend:\r\n- prepend a space to s1 and s2,\r\n- then use `tokenizer.batch_encode_plus(s1, max_length=1024)` for `input_ids` and `attention_mask`\r\n- then `tokenizer.batch_encode_plus(s2, max_length=1024)['input_ids']` to get `decoder_input_ids`\r\n",
"@sshleifer Thanks for the update! I'll try out you method and report the result I get for the problem above tomorrow.",
"The underlying issue here is fixed, closing!"
] | 1,584 | 1,587 | 1,587 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BART
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name): CNN/DM
* [ ] my own task or dataset: (give details below)
## To reproduce
Hi @sshleifer,
Thanks for the amazing model!
I found a bug when alternating between using **forward** to train BartForConditionalGeneration and using **generate** to run inference and evaluate the trained model. As shown in the code below, if I first use the generate function and then call the forward function, the generation_mode attribute of the decoder remains set to True, and the shape of the decoder_output is incorrect.
```
model = BartForConditionalGeneration.from_pretrained('bart-large-cnn')
tokenizer = BartTokenizer.from_pretrained('bart-large-cnn')
# input sequence
input_seq = "Bart is a deep-learning pretrained model implemented in pytorch. It can smoothly handle summarization. It is a big model pretrained for generation tasks, especially summarizaition."
input_ids = torch.LongTensor([tokenizer.encode(input_seq)])
# expected output sequence
decoder_input_seq = "Bart is a big pretrained deep model in pytorch for summarization."
decoder_input_ids = torch.LongTensor([tokenizer.encode(decoder_input_seq)])
# using generate method to inference
with torch.no_grad():
result = model.generate(input_ids=input_ids, eos_token_ids=tokenizer.eos_token_id, num_beams=4, max_length=20)
print(tokenizer.decode(result[0]))
# 'B. It is a big model pretrained for generation tasks, especially summarizaition. It'
# NOW use forward to train
result = model(input_ids, decoder_input_ids=decoder_input_ids)
# the shape of decoder_output and encoder_output
# what expected is: <1, 18, 50264> and <40, 1, 1024>
# but actual output is: torch.Size([1, 1, 50264]) torch.Size([40, 1, 1024])
print(result[0].shape, result[2].shape)
```
This issue can **seemingly** be addressed by manually setting the generation_mode.
```
# manually setting the generation mode to False **seemingly** fixes the issue
model.model.decoder.generation_mode=False
result = model(input_ids, decoder_input_ids=decoder_input_ids)
print(result[0].shape, result[2].shape)
# output is: torch.Size([1, 18, 50264]) torch.Size([40, 1, 1024]), which makes sense
```
However, now the output of the forward function doesn't make sense, as shown below.
```
with torch.no_grad():
result = model.generate(input_ids=input_ids, eos_token_ids=tokenizer.eos_token_id, num_beams=4, max_length=20)
model.model.decoder.generation_mode=False
new_result = model(input_ids, decoder_input_ids=result)
print(tokenizer.decode(torch.argmax(new_result[0][0], dim=1)))
# ' and and and and and and and and and and and and and and and and and and and'
```
In expectation, when we feed the generated sequence back to the decoder and keep the encoder input unchanged, the output of the decoder should at least resemble the decoder_input. However, the actual output of the model is `' and and and and and and and and and and and and and and and and and and and'`, which is definitely not a reasonable output of a trained decoder.
To go deeper, I tested different samples, each time feeding the generated output for an encoder_input back to the decoder and inspecting the decoder_output (which "should" resemble the decoder_input, i.e. the generated sequence). The same thing happens every time for me: most outputs are a single token duplicated many times, like "the the the ...", "and and and...", ". . . .".
Given all these results, I think there is a bug in either the implementation of **generate** or that of **forward**. Reverting generation_mode is simple, but I don't think the output of forward is currently a reasonable result. Could you please look into the issue?
@sshleifer If I'm not using those functions in a correct manner, any advice or instructions are welcome! Many thanks for the help!
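Following up on the batch_encode_plus recipe suggested in the comments, here is a minimal sketch of that data preparation (the `articles`/`summaries` lists are placeholder data, and the exact kwargs may differ slightly across library versions):
```
from transformers import BartTokenizer

tokenizer = BartTokenizer.from_pretrained('bart-large-cnn')

# placeholder toy data; note the prepended space suggested in the comments
articles = [" Bart is a pretrained seq2seq model implemented in pytorch."]
summaries = [" Bart is a pretrained seq2seq model."]

# source side: token ids (and, depending on version, the attention mask) for the encoder
batch = tokenizer.batch_encode_plus(articles, max_length=1024, return_tensors='pt')
input_ids = batch['input_ids']
# depending on the library version, the attention mask may need to be requested explicitly
attention_mask = batch.get('attention_mask')

# target side: token ids only, fed to the decoder during training
decoder_input_ids = tokenizer.batch_encode_plus(summaries, max_length=1024, return_tensors='pt')['input_ids']
```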
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: master-branch
- Platform: windows 10
- Python version: 3.7.0
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?): /
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3276/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3276/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3275 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3275/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3275/comments | https://api.github.com/repos/huggingface/transformers/issues/3275/events | https://github.com/huggingface/transformers/issues/3275 | 581,251,417 | MDU6SXNzdWU1ODEyNTE0MTc= | 3,275 | Cannot Achieve Reproducibility with Tensorflow Transformer Models | {
"login": "nategr03",
"id": 55382292,
"node_id": "MDQ6VXNlcjU1MzgyMjky",
"avatar_url": "https://avatars.githubusercontent.com/u/55382292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nategr03",
"html_url": "https://github.com/nategr03",
"followers_url": "https://api.github.com/users/nategr03/followers",
"following_url": "https://api.github.com/users/nategr03/following{/other_user}",
"gists_url": "https://api.github.com/users/nategr03/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nategr03/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nategr03/subscriptions",
"organizations_url": "https://api.github.com/users/nategr03/orgs",
"repos_url": "https://api.github.com/users/nategr03/repos",
"events_url": "https://api.github.com/users/nategr03/events{/privacy}",
"received_events_url": "https://api.github.com/users/nategr03/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I have the same issue and I get (sometimes wildly) different results from run to run. Here's my code:\r\n\r\nhttps://github.com/dmitriydligach/Thyme/blob/master/RelKeras/et.py\r\n\r\nDoes anybody have a solution yet?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,593 | 1,593 | NONE | null | I've been experimenting with the roberta-large model with both PyTorch and TensorFlow for a sentiment analysis task. With the PyTorch model, I am able to achieve 100% reproducibility; however, this is not the case with the TensorFlow model.
I have set all the necessary seeds as follows:
```
seed_val = 3
os.environ['PYTHONHASHSEED'] = str(seed_val)
np.random.seed(seed_val)
random.seed(seed_val)
tf.random.set_seed(seed_val)
```
and I am even using the fix described at https://github.com/NVIDIA/tensorflow-determinism:
```os.environ['TF_DETERMINISTIC_OPS'] = '1'```
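For reference, here is everything collected into a single helper (a sketch of my setup; it gathers the seeds and flags above but does not by itself guarantee determinism):
```
import os
import random

import numpy as np
import tensorflow as tf

def set_global_determinism(seed_val):
    # collect every seed/flag from above in one call, run before building the model
    os.environ['PYTHONHASHSEED'] = str(seed_val)
    os.environ['TF_DETERMINISTIC_OPS'] = '1'  # from the tensorflow-determinism project
    random.seed(seed_val)
    np.random.seed(seed_val)
    tf.random.set_seed(seed_val)

set_global_determinism(3)
```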
I am not sure if reproducibility is fully supported with the TensorFlow transformer models yet, or if I am doing something wrong.
Here is the link to the **CoLab notebook** containing my code:
https://drive.google.com/open?id=1xPTYPl8LyRrMgkiXUNtSbxxJ7pAslD2x
Thanks in advance | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3275/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3275/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3274 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3274/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3274/comments | https://api.github.com/repos/huggingface/transformers/issues/3274/events | https://github.com/huggingface/transformers/issues/3274 | 581,246,388 | MDU6SXNzdWU1ODEyNDYzODg= | 3,274 | Tremendous slowdown in multi-node distributed training | {
"login": "Genius1237",
"id": 15867363,
"node_id": "MDQ6VXNlcjE1ODY3MzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/15867363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Genius1237",
"html_url": "https://github.com/Genius1237",
"followers_url": "https://api.github.com/users/Genius1237/followers",
"following_url": "https://api.github.com/users/Genius1237/following{/other_user}",
"gists_url": "https://api.github.com/users/Genius1237/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Genius1237/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Genius1237/subscriptions",
"organizations_url": "https://api.github.com/users/Genius1237/orgs",
"repos_url": "https://api.github.com/users/Genius1237/repos",
"events_url": "https://api.github.com/users/Genius1237/events{/privacy}",
"received_events_url": "https://api.github.com/users/Genius1237/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Did you find an answer to what the cause is?",
"Well, I guess that you need infiniband between the nodes to train a model as large as bert. The ethernet interface seems to be a bottleneck during the gradient synchronization process.\r\n\r\nTo test this out, I tried out a much smaller bert model on the existing setup without infiniband. Less gradient information to exchange, so I was able to observe a speedup. I tried out a normal sized model on a setup with infiniband and I was observing some speedup (albeit not perfect scaling, only like a 50-60% improvement).\r\n\r\nI concluded that not having infiniband was the issue. Maybe this could be updated in the readme where the instructions for `run_language_modeling.py` are given.",
"I also encounter this. And I found the `fairseq` code base scales linearly on the same ethernet hardware I have than `run_language_modeling.py`",
"Could you share some details on what model you trained on `fairseq` and that model's size?",
"Since `run_language_modeling.py` uses only 1 GPU per node in the code, could you share what changes need to be made to the file in order to work with Multi-GPU, Multi-Node settings like `Azure NC24s_v3 nodes`?\r\n\r\nIn the code in `examples/run_language_modeling.py`, `1` GPU per node is hard-coded (in the last line)\r\n\r\n\r\n \r\n if args.local_rank == -1 or args.no_cuda:\r\n device = torch.device(\"cuda\" if torch.cuda.is_available() and not args.no_cuda else \"cpu\")\r\n args.n_gpu = 0 if args.no_cuda else torch.cuda.device_count()\r\n else: # Initializes the distributed backend which will take care of sychronizing nodes/GPUs\r\n torch.cuda.set_device(args.local_rank)\r\n device = torch.device(\"cuda\", args.local_rank)\r\n torch.distributed.init_process_group(backend=\"nccl\")\r\n args.n_gpu = 1\r\n",
"You do not need any modifications. You will have to launch the script via `torch.distributed.launch`. It'll be something like this\r\n```\r\npython -m torch.distributed.launch --nproc_per_node 4 --nnodes $NODE_COUNT --node_rank $RANK --master_addr $MASTER_ADDR run_lm_finetuning.py ....\r\n```\r\n`$NODE_COUNT` will be your number of nodes. You'll have to find a way to obtain `$RANK` and `$MASTER_ADDR` depending on your cluster configuration.\r\n\r\nSince when you run via DistributedDataParallel, one process has only one gpu associated with it, that line does something to ensure that.",
"Is there any update on this? I suppose that not having infiniband interconnect was the only limiting factor? ",
"Yes. Ran faster on a system with infiniband.\n\nOn Thu, Nov 19, 2020 at 6:08 PM gvijqb <[email protected]> wrote:\n\n> Is there any update on this? I suppose that not having infiniband\n> interconnect was the only limiting factor?\n>\n> —\n> You are receiving this because you modified the open/close state.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/3274#issuecomment-730347349>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ADZB3Y3PKZBAOFOP5SSA3BLSQUGUXANCNFSM4LJHN2QA>\n> .\n>\n\n\n-- \nRegards\nAnirudh Srinivasan\nResearch Fellow\nMicrosoft Research, India\n"
] | 1,584 | 1,605 | 1,584 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Finetuning a bert-base model on language modeling for a particular domain
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Training is happening on Azure NC24s_v3 nodes (4 V100s each) with NCCL as the backend. I'm comparing the performance in a single-node scenario vs a 2-node scenario. Note that there is no infiniband networking between the nodes, only 40Gbps ethernet.
2. Use torch.distributed.launch to launch `run_language_modeling.py` in single-node (multi-gpu) and multi-node (multi-gpu) scenarios
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
In the single-node scenario, I'm getting about 2 iterations/sec during training. In the multi-node scenario, it drops to 4 sec/iteration.
Theorizing that the network was the issue, I reduced the model size significantly (down to 1 layer from 12 and the other hyperparameters also scaled down appropriately) and ran the test again. The same slowdown in performance was observed even then.
Am I missing something here? Is it possible to perform multi-node training of bert models without infiniband?
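For reference, here is a sketch of the per-process setup each launched worker runs in my case (mirroring what `run_language_modeling.py` does; model and data construction are elided):
```
import argparse

import torch
import torch.distributed as dist

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=-1)  # injected by torch.distributed.launch
args = parser.parse_args()

# one process per GPU; NCCL over plain ethernet in this setup, no infiniband
torch.cuda.set_device(args.local_rank)
dist.init_process_group(backend="nccl")

device = torch.device("cuda", args.local_rank)
# model = build_model().to(device)  # elided
# model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank])
```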
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.1
- Platform: Ubuntu 18.04
- Python version: 3.6.10
- PyTorch version (GPU?): 1.5.0
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3274/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3273 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3273/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3273/comments | https://api.github.com/repos/huggingface/transformers/issues/3273/events | https://github.com/huggingface/transformers/pull/3273 | 581,241,930 | MDExOlB1bGxSZXF1ZXN0Mzg4MzMwNzU5 | 3,273 | add XLMForTokenClassification | {
"login": "sakares",
"id": 1306031,
"node_id": "MDQ6VXNlcjEzMDYwMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1306031?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sakares",
"html_url": "https://github.com/sakares",
"followers_url": "https://api.github.com/users/sakares/followers",
"following_url": "https://api.github.com/users/sakares/following{/other_user}",
"gists_url": "https://api.github.com/users/sakares/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sakares/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sakares/subscriptions",
"organizations_url": "https://api.github.com/users/sakares/orgs",
"repos_url": "https://api.github.com/users/sakares/repos",
"events_url": "https://api.github.com/users/sakares/events{/privacy}",
"received_events_url": "https://api.github.com/users/sakares/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | First, I want to experiment with the NER task across all available architectures with a Thai pretrained model.
It turned out there is no TokenClassification class for XLM yet. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3273/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3273",
"html_url": "https://github.com/huggingface/transformers/pull/3273",
"diff_url": "https://github.com/huggingface/transformers/pull/3273.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3273.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3272 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3272/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3272/comments | https://api.github.com/repos/huggingface/transformers/issues/3272/events | https://github.com/huggingface/transformers/issues/3272 | 581,233,650 | MDU6SXNzdWU1ODEyMzM2NTA= | 3,272 | how can I distill the xlm-roberta model, just like the distilled roberta model? any suggestions? thanks a lot | {
"login": "ciel-zhang",
"id": 18700473,
"node_id": "MDQ6VXNlcjE4NzAwNDcz",
"avatar_url": "https://avatars.githubusercontent.com/u/18700473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ciel-zhang",
"html_url": "https://github.com/ciel-zhang",
"followers_url": "https://api.github.com/users/ciel-zhang/followers",
"following_url": "https://api.github.com/users/ciel-zhang/following{/other_user}",
"gists_url": "https://api.github.com/users/ciel-zhang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ciel-zhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ciel-zhang/subscriptions",
"organizations_url": "https://api.github.com/users/ciel-zhang/orgs",
"repos_url": "https://api.github.com/users/ciel-zhang/repos",
"events_url": "https://api.github.com/users/ciel-zhang/events{/privacy}",
"received_events_url": "https://api.github.com/users/ciel-zhang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Any answer on this? "
] | 1,584 | 1,649 | 1,589 | NONE | null | # ❓ Questions & Help
How can I distill the XLM-RoBERTa model? Maybe it can be distilled just like the distilled RoBERTa model. Any suggestions? Thanks a lot. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3272/reactions",
"total_count": 4,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/3272/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3271 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3271/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3271/comments | https://api.github.com/repos/huggingface/transformers/issues/3271/events | https://github.com/huggingface/transformers/issues/3271 | 581,171,540 | MDU6SXNzdWU1ODExNzE1NDA= | 3,271 | Finetuning before feature extraction | {
"login": "Stuffooh",
"id": 50005268,
"node_id": "MDQ6VXNlcjUwMDA1MjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/50005268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Stuffooh",
"html_url": "https://github.com/Stuffooh",
"followers_url": "https://api.github.com/users/Stuffooh/followers",
"following_url": "https://api.github.com/users/Stuffooh/following{/other_user}",
"gists_url": "https://api.github.com/users/Stuffooh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Stuffooh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Stuffooh/subscriptions",
"organizations_url": "https://api.github.com/users/Stuffooh/orgs",
"repos_url": "https://api.github.com/users/Stuffooh/repos",
"events_url": "https://api.github.com/users/Stuffooh/events{/privacy}",
"received_events_url": "https://api.github.com/users/Stuffooh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The script has been renamed [`run_language_modeling.py`](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) to better reflect the fact that it can also be used to train a new model from scratch.\r\n\r\nLet us know if it works.",
"Hi @julien-c thanks for the quick reply!\r\n\r\nThat makes sense, I must have missed the fact it got renamed. I have one more question about the \"new\" script:\r\n\r\nHow difficult is it make to script compatible for finetuning a new model like BART? Is it as simple as adding the model to the list inside the script or will it need a lot of workarounds to get it working?"
] | 1,584 | 1,588 | 1,588 | NONE | null | Hi,
Currently I am using the feature-extraction pipeline to extract features with my own dataset as input. I was wondering if it is possible to finetune different models on my own dataset before using the pipeline for feature extraction and, if so, what would be the easiest way to do so?
In the past there used to be an lm_finetuning example script, but it is no longer available. I cannot find any examples or guides on how to finetune different models on a personal dataset.
tl;dr: is it possible to finetune different models on a personal dataset in just a few lines of code and, if so, how?
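For context, the kind of flow I am hoping for is roughly this (a sketch, assuming a checkpoint fine-tuned with something like the old script was saved to the hypothetical directory `./my-finetuned-model`):
```
from transformers import pipeline

# load the fine-tuned checkpoint from disk and reuse it for feature extraction
nlp = pipeline("feature-extraction", model="./my-finetuned-model", tokenizer="./my-finetuned-model")
features = nlp("A sentence from my own dataset.")
print(len(features[0]))  # one hidden-state vector per token
```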
Thanks in advance,
A clueless person trying to learn more about the world of NLP.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3271/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3271/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3270 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3270/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3270/comments | https://api.github.com/repos/huggingface/transformers/issues/3270/events | https://github.com/huggingface/transformers/issues/3270 | 581,133,096 | MDU6SXNzdWU1ODExMzMwOTY= | 3,270 | train model from scratch with big data | {
"login": "jorgtied",
"id": 614718,
"node_id": "MDQ6VXNlcjYxNDcxOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/614718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jorgtied",
"html_url": "https://github.com/jorgtied",
"followers_url": "https://api.github.com/users/jorgtied/followers",
"following_url": "https://api.github.com/users/jorgtied/following{/other_user}",
"gists_url": "https://api.github.com/users/jorgtied/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jorgtied/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jorgtied/subscriptions",
"organizations_url": "https://api.github.com/users/jorgtied/orgs",
"repos_url": "https://api.github.com/users/jorgtied/repos",
"events_url": "https://api.github.com/users/jorgtied/events{/privacy}",
"received_events_url": "https://api.github.com/users/jorgtied/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,589 | 1,589 | CONTRIBUTOR | null | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
I would like to train a new model from scratch on some big data sets, and the tutorial suggests loading and tokenizing examples on the fly. It would be great if that feature were readily integrated into the example scripts (e.g. `run_language_modeling.py`). It is not entirely clear to me how to implement this in the most efficient way. My data set does not fit into memory and I cannot train out of the box with the existing scripts.
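A rough sketch of the direction I have in mind (not working code in the current script; the tokenizer and file path are placeholders): stream the corpus with an `IterableDataset` so only the current examples are tokenized in memory:
```
import torch
from torch.utils.data import IterableDataset

class LazyTextDataset(IterableDataset):
    # reads and tokenizes one line at a time, so the corpus never has to fit into memory
    def __init__(self, file_path, tokenizer, block_size=512):
        self.file_path = file_path
        self.tokenizer = tokenizer
        self.block_size = block_size

    def __iter__(self):
        with open(self.file_path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if line:
                    ids = self.tokenizer.encode(line, max_length=self.block_size)
                    yield torch.tensor(ids, dtype=torch.long)
```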
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3270/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3270/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3269 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3269/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3269/comments | https://api.github.com/repos/huggingface/transformers/issues/3269/events | https://github.com/huggingface/transformers/issues/3269 | 581,023,842 | MDU6SXNzdWU1ODEwMjM4NDI= | 3,269 | when I install transformers, this error appears | {
"login": "xiongma",
"id": 30991932,
"node_id": "MDQ6VXNlcjMwOTkxOTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/30991932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiongma",
"html_url": "https://github.com/xiongma",
"followers_url": "https://api.github.com/users/xiongma/followers",
"following_url": "https://api.github.com/users/xiongma/following{/other_user}",
"gists_url": "https://api.github.com/users/xiongma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiongma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiongma/subscriptions",
"organizations_url": "https://api.github.com/users/xiongma/orgs",
"repos_url": "https://api.github.com/users/xiongma/repos",
"events_url": "https://api.github.com/users/xiongma/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiongma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You should add more information about your environment like OS, Python Version etc?\r\nI just installed it on Windows 10, WSL etc Python 3.6.9 :: Anaconda, Inc.; It worked..",
"I installed it on Mac, Python 3.6.10 Anaconda"
] | 1,584 | 1,585 | 1,585 | NONE | null | ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3269/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3269/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3268 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3268/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3268/comments | https://api.github.com/repos/huggingface/transformers/issues/3268/events | https://github.com/huggingface/transformers/pull/3268 | 580,820,138 | MDExOlB1bGxSZXF1ZXN0Mzg3OTU5NzU1 | 3,268 | add gpt2-xl for tf | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Good to merge for me. Tested whether model can generate text and everything seems fine.\r\n@julien-c ",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3268?src=pr&el=h1) Report\n> Merging [#3268](https://codecov.io/gh/huggingface/transformers/pull/3268?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cc4c37952a961f2d13e83f3d5ba6dab811d0bbfd&el=desc) will **increase** coverage by `0.19%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3268?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3268 +/- ##\n==========================================\n+ Coverage 77.82% 78.01% +0.19% \n==========================================\n Files 98 98 \n Lines 16666 16666 \n==========================================\n+ Hits 12970 13002 +32 \n+ Misses 3696 3664 -32 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3268?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3268/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `96.16% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3268/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.19% <0.00%> (+5.72%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3268?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3268?src=pr&el=footer). Last update [cc4c379...501291a](https://codecov.io/gh/huggingface/transformers/pull/3268?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,584 | 1,584 | 1,584 | MEMBER | null | TF GPT2-XL is now added to AWS and can be loaded via:
```
from transformers import TFGPT2LMHeadModel
model = TFGPT2LMHeadModel.from_pretrained('gpt2-xl')
```
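As a quick smoke test of the new weights (a sketch; greedy next-token prediction only, no sampling):
```
import tensorflow as tf
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')
model = TFGPT2LMHeadModel.from_pretrained('gpt2-xl')

input_ids = tf.constant([tokenizer.encode("The Transformers library is")])
logits = model(input_ids)[0]  # (batch, sequence_length, vocab_size)
next_token_id = int(tf.argmax(logits[0, -1]))
print(tokenizer.decode([next_token_id]))
```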
Thanks @bkkaggle for pointing this out! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3268/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3268/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3268",
"html_url": "https://github.com/huggingface/transformers/pull/3268",
"diff_url": "https://github.com/huggingface/transformers/pull/3268.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3268.patch",
"merged_at": 1584132036000
} |
https://api.github.com/repos/huggingface/transformers/issues/3267 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3267/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3267/comments | https://api.github.com/repos/huggingface/transformers/issues/3267/events | https://github.com/huggingface/transformers/pull/3267 | 580,768,852 | MDExOlB1bGxSZXF1ZXN0Mzg3OTE2MzEw | 3,267 | removing torch.cuda.empty_cache() from TF function | {
"login": "keskarnitish",
"id": 5945552,
"node_id": "MDQ6VXNlcjU5NDU1NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5945552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/keskarnitish",
"html_url": "https://github.com/keskarnitish",
"followers_url": "https://api.github.com/users/keskarnitish/followers",
"following_url": "https://api.github.com/users/keskarnitish/following{/other_user}",
"gists_url": "https://api.github.com/users/keskarnitish/gists{/gist_id}",
"starred_url": "https://api.github.com/users/keskarnitish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keskarnitish/subscriptions",
"organizations_url": "https://api.github.com/users/keskarnitish/orgs",
"repos_url": "https://api.github.com/users/keskarnitish/repos",
"events_url": "https://api.github.com/users/keskarnitish/events{/privacy}",
"received_events_url": "https://api.github.com/users/keskarnitish/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3267?src=pr&el=h1) Report\n> Merging [#3267](https://codecov.io/gh/huggingface/transformers/pull/3267?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cc4c37952a961f2d13e83f3d5ba6dab811d0bbfd&el=desc) will **not change** coverage by `%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3267?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3267 +/- ##\n=======================================\n Coverage 77.82% 77.82% \n=======================================\n Files 98 98 \n Lines 16666 16666 \n=======================================\n Hits 12970 12970 \n Misses 3696 3696 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3267?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3267?src=pr&el=footer). Last update [cc4c379...4bb2bc3](https://codecov.io/gh/huggingface/transformers/pull/3267?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Indeed, thanks!"
] | 1,584 | 1,585 | 1,584 | CONTRIBUTOR | null | torch.cuda.empty_cache() was being called from a TF function (even when torch is unavailable)
I'm not sure any replacement is needed if TF OOMs.
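If the call should be kept for torch runs, one option (a sketch, assuming the library's existing `is_torch_available` helper) would be to guard it:
```
from transformers.file_utils import is_torch_available

def empty_cuda_cache_if_possible():
    # only touch torch when it is actually installed; pure-TF benchmark runs skip this
    if is_torch_available():
        import torch
        torch.cuda.empty_cache()
```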
Simply running the benchmarks on a GPU with less HBM will reproduce this error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3267/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3267/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3267",
"html_url": "https://github.com/huggingface/transformers/pull/3267",
"diff_url": "https://github.com/huggingface/transformers/pull/3267.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3267.patch",
"merged_at": 1584656731000
} |
https://api.github.com/repos/huggingface/transformers/issues/3266 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3266/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3266/comments | https://api.github.com/repos/huggingface/transformers/issues/3266/events | https://github.com/huggingface/transformers/pull/3266 | 580,720,844 | MDExOlB1bGxSZXF1ZXN0Mzg3ODc2MzY2 | 3,266 | [BART] FP16 testing fixes | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I have a suspicion that this will also fix the GPU test runner. "
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | closes #3249: the fp16 forward pass was failing when no `decoder_attention_mask` was provided. Adds test coverage.
closes #3265: test_generate_fp16 had been failing since #3140 (fixed by sending proper kwargs to `BartForConditionalGeneration.generate`)
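A sketch of the previously failing case now covered by tests (fp16 forward with no `decoder_attention_mask`; needs a CUDA device, and the toy input is illustrative only):
```
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained('bart-large')
model = BartForConditionalGeneration.from_pretrained('bart-large').half().to('cuda')

input_ids = tokenizer.batch_encode_plus(["Hello world"], return_tensors='pt')['input_ids'].to('cuda')
outputs = model(input_ids)  # no decoder_attention_mask passed: this path used to fail under fp16
```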
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3266/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3266/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3266",
"html_url": "https://github.com/huggingface/transformers/pull/3266",
"diff_url": "https://github.com/huggingface/transformers/pull/3266.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3266.patch",
"merged_at": 1584143307000
} |
https://api.github.com/repos/huggingface/transformers/issues/3265 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3265/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3265/comments | https://api.github.com/repos/huggingface/transformers/issues/3265/events | https://github.com/huggingface/transformers/issues/3265 | 580,685,392 | MDU6SXNzdWU1ODA2ODUzOTI= | 3,265 | [BART] test_generate_fp16 fails after PR#3140 | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Interesting - will investigate. Probably something with the `device` of `unfinished_sents` then!\r\n\r\n@julien-c ",
"Caused by previously unexposed kwargs changing behavior:\r\npassing\r\n```\r\ndo_sample=False, early_stopping=True\r\n```\r\nin the unit test fixes them.",
"I'm getting this error @sshleifer ",
"https://github.com/huggingface/transformers/issues/5221",
"I have early stopping = False, but do_sample = True"
] | 1,584 | 1,592 | 1,584 | CONTRIBUTOR | null | passes after `git checkout d6de6423` (commit preceding #3140)
Traceback:
```
unfinished_sents.mul_((~eos_in_sents).long())
# stop when there is a </s> in each sentence, or if we exceed the maximul length
> if unfinished_sents.max() == 0:
E RuntimeError: cuda runtime error (716) : misaligned address at /pytorch/aten/src/THC/THCReduceAll.cuh:327
src/transformers/modeling_utils.py:992: RuntimeError
```
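For reference, a sketch of a call with the kwargs discussed in the comments (`do_sample=False, early_stopping=True`), which avoids the crash; the toy input is illustrative only:
```
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained('bart-large')
model = BartForConditionalGeneration.from_pretrained('bart-large').half().to('cuda')

input_ids = tokenizer.batch_encode_plus(["Hello world"], return_tensors='pt')['input_ids'].to('cuda')
# explicit kwargs, as noted in the comments, keep the fp16 beam search from crashing
generated = model.generate(input_ids, do_sample=False, early_stopping=True)
```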
@thomwolf @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3265/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3265/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3264 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3264/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3264/comments | https://api.github.com/repos/huggingface/transformers/issues/3264/events | https://github.com/huggingface/transformers/pull/3264 | 580,636,961 | MDExOlB1bGxSZXF1ZXN0Mzg3ODA2ODkx | 3,264 | Clean special token init in modeling_....py | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Is good to merge for me if you guys are fine with it. Can open a new PR for a test proposal for this one and then also adapt the dutch bert model config on AWS. ",
"Isn't this breaking other hosted configs than the ones you're listing?\r\n\r\nLike https://huggingface.co/microsoft/DialoGPT-large for instance? (more generally, we don't control which configs users use the lib with, so adding keys is fine, but renaming keys – or worse, changing their types – is costly)",
"PS: I do agree that having a `eos_token_id` is cleaner than having `eos_token_ids`",
"> Isn't this breaking other hosted configs than the ones you're listing?\r\n> \r\n> Like https://huggingface.co/microsoft/DialoGPT-large for instance? (more generally, we don't control which configs users use the lib with, so adding keys is fine, but renaming keys – or worse, changing their types – is costly)\r\n\r\nThat's a very good point! There is one scenario, where it could break other hosted configs:\r\n\r\n- The user defined an `eos_token_ids` that is different from the default `eos_token_id` the model has. \r\n\r\nBut most of the time (just from browsing through some `config.json` files) the model was trained with HF and then saved and uploaded which means that the `eos_token_ids` was saved and is included in the `config.json`. In this case the values are the same as is the case for `https://huggingface.co/microsoft/DialoGPT-large`. In this case we have still have a dead parameter in the config which should be removed. \r\n\r\nI propose the following: \r\nI can write a script that checks the following for each configs:\r\n\r\n1) does eos_token_ids exist ?\r\n2) is eos_token_ids == default config.eos_token_id ?\r\n\r\nIf there are a lot of 1) and 2) then would write a bash script that simply replaces the line \"`eos_token_ids` = [ ... ]\" with `eos_token_ids`=...\r\n\r\nWill report the results here",
"If you need an exhaustive list of all hosted models (with their config + files), you can do\r\n```python\r\napi = HfApi()\r\nmodels = api.model_list()\r\n```\r\n\r\n",
"Okey here is my analysis of the 308 (not bad actually! ) added community models:\r\n\r\n1. **66** can't load either their config (n)or their tokenizer (including 3 facebook bart models because we call them directly by `bart-large-cnn` and not by `facebook/bart-large-cnn` -> should maybe add a new link or change model name online)\r\n 2. **79** currently have wrong `pad_token_id`, `eos_token_id`, `bos_token_id` in their configs. IMPORTANT: The reason for this is that we used to have the wrong defaults saved in `PretrainedConfig()` - see e.g. [here](https://github.com/huggingface/transformers/pull/2885/commits/77d958ac7f0b008df17656e3652246f602aef095)\r\nthe default value for **any** model for `pad_token_id` was 0. People trained a model with the lib, saved it and the resulting config.json now had a `pad_token_id = 0` saved. This was then uploaded. But it's wrong and should be corrected regardless of this PR. \r\n3. For **68** after changing `eos_token_ids` to `eos_token_id` we will have to remove the `eos_token_ids` parameter and possibly adapt the `eos_token_id` parameter - almost all of which we have to change anyway (1 exception) \r\n4. For **162** models everything is fine! \r\n\r\nHere the full analysis log [here](https://github.com/patrickvonplaten/files_to_link_to/blob/master/results.txt)\r\nHere the code that created this log (simple comparison of loaded tokenizer and config with default config): [here](https://github.com/patrickvonplaten/files_to_link_to/blob/master/test_all_community_models.py)\r\nHere the 308 models I checked: [here](https://github.com/patrickvonplaten/files_to_link_to/blob/master/all_community_models.txt)\r\n\r\n**First conclusion:** \r\n- I think we can merge this PR as all models for which this PR would change to a \"wrong\" behavior already have a \"wrong\" behavior that should be fixed. The sooner we merge the sooner we have the correct API. \r\n\r\n**Second conclusion:** \r\nI think besides the FB models 1) is not really our job to fix. \r\nBut 2) and 3) I think should be fixed on AWS. I'm happy to do this using some automated bash/python scripting. I would try out that it work on 1,2 community models and then apply it to all other cases (to not screw something up on AWS). \r\n\r\nWould that be good for you @julien-c @thomwolf @LysandreJik ?\r\n\r\nIn a future PR we could think about some automated testing that tokenizer configs are equal to model configs. ",
"> If you need an exhaustive list of all hosted models (with their config + files), you can do\r\n> \r\n> ```python\r\n> api = HfApi()\r\n> models = api.model_list()\r\n> ```\r\n\r\nGreat, this will be very helpful when writing an automated script to corret the config.json of the community models! ",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3264?src=pr&el=h1) Report\n> Merging [#3264](https://codecov.io/gh/huggingface/transformers/pull/3264?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8becb732931bbab5dd75cca5f5e7c75b2516d10b&el=desc) will **decrease** coverage by `0.09%`.\n> The diff coverage is `97.91%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3264?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3264 +/- ##\n==========================================\n- Coverage 77.64% 77.55% -0.10% \n==========================================\n Files 100 100 \n Lines 16979 16970 -9 \n==========================================\n- Hits 13184 13161 -23 \n- Misses 3795 3809 +14 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3264?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/3264/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.92% <ø> (ø)` | |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3264/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3264/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.55% <ø> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3264/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.58% <94.11%> (+0.21%)` | :arrow_up: |\n| [src/transformers/configuration\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/3264/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/configuration\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3264/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JlcnQucHk=) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/configuration\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/3264/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/configuration\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/3264/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2ZsYXViZXJ0LnB5) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/configuration\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3264/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.29% <100.00%> (ø)` | |\n| [src/transformers/configuration\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3264/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JvYmVydGEucHk=) | `100.00% <100.00%> (ø)` | |\n| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/3264/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3264?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3264?src=pr&el=footer). 
Last update [8becb73...8296647](https://codecov.io/gh/huggingface/transformers/pull/3264?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"More in-detail analysis why **67** (actually one more now) models can't be loaded - log files are updated and use `api = HfApi(); models = api.model_list()` now. \r\n\r\nA) **34** models can't even load their config file. The reasons for this are either: \r\n\r\n1. **11/34**: Model identifier is wrong, e.g. `albert-large` does not exist anymore, it seems like it was renamed to `albert-large-v1`. These models have saved the wrong name online that how it is saved on AWS. Can easily be corrected. *e.g.*\r\nThe model_identifier: b`ertabs-finetuned-xsum-extractive-abstractive-summarization` does not exist, but `remi/bertabs-finetuned-xsum-extractive-abstractive-summarization` does exist -> wrong model identifier. Just 11 cases, so easy to correct.\r\n\r\n2. **23/34**: There is an unrecognized `model_type` in the config.json, `e.g.` \r\n\r\n> \"Error: Message: Unrecognized model in hfl/rbtl3. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: t5, distilbert, albert, camembert, xlm-roberta, bart, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl\r\n> \"\r\n\r\nHere I think we should add a `model_type` to the config (probably `bert` most of the time)\r\n\r\nB) **33** models can load their config, but cannot load their tokenizers. The error message is almost always the same **32/33**: \r\n\r\n> TOK ERROR: clue/roberta_chinese_base tokenizer can not be loaded\r\n> Message: Model name 'clue/roberta_chinese_base' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-larg\r\n> e-openai-detector). We assumed 'clue/roberta_chinese_base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocab\r\n> ulary files at this path or url.\r\n\r\nHere: the model has neither of: \r\n\r\n- `vocab_file`\r\n- `added_tokens_file`\r\n- `special_tokens_map_file`\r\n- `tokenizer_config_file`\r\n\r\n-> So we would have to upload one of those files to make it work. Not sure how time-consuming this is!\r\n\r\nand we got one tokenizer which does not even have a path: `Message: stat: path should be string, bytes, os.PathLike or integer, not NoneType`\r\n\r\nSo I think it's mostly just renaming the model identifiers to their correct names and adding some tokenizer names.\r\n\r\n@julien-c "
] | 1,584 | 1,584 | 1,584 | MEMBER | null | #### INTRO:
This PR is a follow-up from PR #3011.
After discussion with @thomwolf today, we decided that the variable `eos_token_ids` in all models causes more confusion and ugly code than it helps.
#### BACKGROUND:
All models now have `pad_token_id`, `bos_token_id` and `eos_token_id` as default values. The reasons are discussed and explained in #3011.
Originally, we had the `list` variable `eos_token_ids`. The idea behind it was that a model could have multiple `eos_token_ids` if the user wants to finish at certain tokens besides the standard EOS token. But this caused a lot of unclean code AND is not consistent with `tokenizers`, which all have a `tokenizer.eos_token_id` int variable. So, we return to `eos_token_id` for models as well and might in the future have a variable `forbidden_tokens` or `special_stop_tokens`.
#### THIS PR DOES:
- Replace all list `eos_token_ids` with `eos_token_id`
- Add default `eos_token_id, pad_token_id, bos_token_id` to all models
#### TESTS:
I tested that the pretrained `Config` now has the same special tokens as the pretrained `Tokenizer` for all model identifier names (e.g. `gpt2-large`) with the following code:
```
for model_id_name in ALL_PRETRAINED_MODEL_ARCHIVE_MAP.keys():
    tok = AutoTokenizer.from_pretrained(model_id_name)
    conf = AutoConfig.from_pretrained(model_id_name)
    pad_equal = tok.pad_token_id == conf.pad_token_id
    eos_equal = tok.eos_token_id == conf.eos_token_id
    bos_equal = tok.bos_token_id == conf.bos_token_id
    if not pad_equal:
        print("PAD not equal for {}!".format(model_id_name))
        print("TOK: {} | CONF: {}".format(tok.pad_token_id, conf.pad_token_id))
    if not eos_equal:
        print("EOS not equal for {}!".format(model_id_name))
        print("TOK: {} | CONF: {}".format(tok.eos_token_id, conf.eos_token_id))
    if not bos_equal:
        print("BOS not equal for {}!".format(model_id_name))
        print("TOK: {} | CONF: {}".format(tok.bos_token_id, conf.bos_token_id))
```
which gives the following result:
```
PAD not equal for bert-base-dutch-cased!
TOK: 3 | CONF: 0
BOS not equal for distilbert-base-cased!
TOK: None | CONF: 0
BOS not equal for distilbert-base-cased-distilled-squad!
TOK: None | CONF: 0
```
This means that:
- `bert-base-dutch-cased` has a different `pad_token_id` in its tokenizer config than the `pad_token_id` in the default BERT tokenizer, so we will have to update the `bert-base-dutch-cased-config.json` file on AWS (the best option in my opinion).
- `distilbert-base-cased` and `distilbert-base-cased-distilled-squad` have a hard-coded `bos_token_id` in their config.json files on AWS (I checked), but the DistilBERT tokenizer doesn't even have one -> is that correct? @VictorSanh
#### TODO:
- [x] Is the approach good for you? @thomwolf @julien-c @LysandreJik @mfuntowicz @sshleifer
- [ ] Should we also check all community models whether their tokenizer differs from the default one?
- [ ] I think the test I wrote is quite useful, but it uses Config and Tokenizer Classes in the same file, which is not in line with the current test files, which is why I didn't add it. Should we add a test like this? If yes, how?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3264/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3264/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3264",
"html_url": "https://github.com/huggingface/transformers/pull/3264",
"diff_url": "https://github.com/huggingface/transformers/pull/3264.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3264.patch",
"merged_at": 1584736865000
} |
https://api.github.com/repos/huggingface/transformers/issues/3263 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3263/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3263/comments | https://api.github.com/repos/huggingface/transformers/issues/3263/events | https://github.com/huggingface/transformers/pull/3263 | 580,561,650 | MDExOlB1bGxSZXF1ZXN0Mzg3NzQ0NDIx | 3,263 | Create camembert-base-README.md | {
"login": "benjamin-mlr",
"id": 17753315,
"node_id": "MDQ6VXNlcjE3NzUzMzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/17753315?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/benjamin-mlr",
"html_url": "https://github.com/benjamin-mlr",
"followers_url": "https://api.github.com/users/benjamin-mlr/followers",
"following_url": "https://api.github.com/users/benjamin-mlr/following{/other_user}",
"gists_url": "https://api.github.com/users/benjamin-mlr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/benjamin-mlr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benjamin-mlr/subscriptions",
"organizations_url": "https://api.github.com/users/benjamin-mlr/orgs",
"repos_url": "https://api.github.com/users/benjamin-mlr/repos",
"events_url": "https://api.github.com/users/benjamin-mlr/events{/privacy}",
"received_events_url": "https://api.github.com/users/benjamin-mlr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3263?src=pr&el=h1) Report\n> Merging [#3263](https://codecov.io/gh/huggingface/transformers/pull/3263?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/afea70c01c7d2a844662a4d66b9f9d933cc6449c?src=pr&el=desc) will **increase** coverage by `0.11%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3263?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3263 +/- ##\n==========================================\n+ Coverage 77.82% 77.93% +0.11% \n==========================================\n Files 98 98 \n Lines 16666 16666 \n==========================================\n+ Hits 12970 12989 +19 \n+ Misses 3696 3677 -19\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3263?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3263/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.84% <0%> (+0.13%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3263/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.68% <0%> (+3.22%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3263?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3263?src=pr&el=footer). Last update [afea70c...14537eb](https://codecov.io/gh/huggingface/transformers/pull/3263?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Merged it (not sure it was ready yet, but feel free to update in another PR).\r\n\r\n**[Model page](https://huggingface.co/camembert-base)**\r\n\r\nLet us know if we can help in any way @benjamin-mlr @louismartin @pjox\r\n\r\nYou can also add \r\n```\r\n---\r\nlanguage: french\r\n---\r\n```\r\n\r\non top of the README for the model to pop up when looking for FR models"
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | First version of our model card for the originally uploaded CamemBERT.
@louismartin @pjox | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3263/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3263/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3263",
"html_url": "https://github.com/huggingface/transformers/pull/3263",
"diff_url": "https://github.com/huggingface/transformers/pull/3263.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3263.patch",
"merged_at": 1584106555000
} |
https://api.github.com/repos/huggingface/transformers/issues/3262 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3262/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3262/comments | https://api.github.com/repos/huggingface/transformers/issues/3262/events | https://github.com/huggingface/transformers/issues/3262 | 580,503,244 | MDU6SXNzdWU1ODA1MDMyNDQ= | 3,262 | TFAlbertMainLayer cannot be imported from the transformers library. | {
"login": "shreyansh05s",
"id": 22441463,
"node_id": "MDQ6VXNlcjIyNDQxNDYz",
"avatar_url": "https://avatars.githubusercontent.com/u/22441463?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shreyansh05s",
"html_url": "https://github.com/shreyansh05s",
"followers_url": "https://api.github.com/users/shreyansh05s/followers",
"following_url": "https://api.github.com/users/shreyansh05s/following{/other_user}",
"gists_url": "https://api.github.com/users/shreyansh05s/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shreyansh05s/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shreyansh05s/subscriptions",
"organizations_url": "https://api.github.com/users/shreyansh05s/orgs",
"repos_url": "https://api.github.com/users/shreyansh05s/repos",
"events_url": "https://api.github.com/users/shreyansh05s/events{/privacy}",
"received_events_url": "https://api.github.com/users/shreyansh05s/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"This seems like a valid point, what do you think @LysandreJik?"
] | 1,584 | 1,584 | 1,584 | NONE | null | Unlike the `TFBertMainLayer` class, which can be imported from `transformers`, `TFAlbertMainLayer` cannot be imported.
Locally I made the change to `__init__.py` to import `TFAlbertMainLayer` from `modeling_tf_albert`, as in the sketch below.
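A minimal sketch of that local change (assuming the usual layout of `transformers/__init__.py`; the exact TensorFlow import guard may differ):

```python
# inside the TensorFlow section of src/transformers/__init__.py
from .modeling_tf_albert import TFAlbertMainLayer
```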
Similarly to the TensorFlow version, a PyTorch version should also be implemented. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3262/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3262/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3261 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3261/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3261/comments | https://api.github.com/repos/huggingface/transformers/issues/3261/events | https://github.com/huggingface/transformers/pull/3261 | 580,479,370 | MDExOlB1bGxSZXF1ZXN0Mzg3Njc4ODUx | 3,261 | Update examples/ner/run_ner.py | {
"login": "lifefeel",
"id": 38556,
"node_id": "MDQ6VXNlcjM4NTU2",
"avatar_url": "https://avatars.githubusercontent.com/u/38556?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lifefeel",
"html_url": "https://github.com/lifefeel",
"followers_url": "https://api.github.com/users/lifefeel/followers",
"following_url": "https://api.github.com/users/lifefeel/following{/other_user}",
"gists_url": "https://api.github.com/users/lifefeel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lifefeel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lifefeel/subscriptions",
"organizations_url": "https://api.github.com/users/lifefeel/orgs",
"repos_url": "https://api.github.com/users/lifefeel/repos",
"events_url": "https://api.github.com/users/lifefeel/events{/privacy}",
"received_events_url": "https://api.github.com/users/lifefeel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3261?src=pr&el=h1) Report\n> Merging [#3261](https://codecov.io/gh/huggingface/transformers/pull/3261?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/afea70c01c7d2a844662a4d66b9f9d933cc6449c&el=desc) will **increase** coverage by `0.12%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3261?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3261 +/- ##\n==========================================\n+ Coverage 77.82% 77.94% +0.12% \n==========================================\n Files 98 98 \n Lines 16666 16666 \n==========================================\n+ Hits 12970 12990 +20 \n+ Misses 3696 3676 -20 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3261?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3261/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.56% <0.00%> (-0.14%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3261/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.22% <0.00%> (+3.75%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3261?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3261?src=pr&el=footer). Last update [afea70c...547efb9](https://codecov.io/gh/huggingface/transformers/pull/3261?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Well, I don't think the `SequenceClassification\" head is not the right one, as it is supposed for sequence classification/regression tasks (see https://huggingface.co/transformers/model_doc/albert.html#transformers.AlbertForSequenceClassification).\r\n\r\nNER requires a per token classification implementation: https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_albert.py#L791 ",
"Will this be fixed by switching to AutoModel? I think we are doing that here for `run_pl_ner.py` https://github.com/huggingface/transformers/pull/3290",
" I knew what the problem was. In version 2.5.1, there is no definition for `AlbertForTokenClassification` in `run_ner.py`\r\nHowever, it is included in the master branch. I'll close this request.",
"@srush I think using AutoModelForTokenClassification is better than calling each model class. How about making a change to `run_ner.py`?",
"Yes, if you can send that PR and add me as a reviewers"
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | Update the example file by changing the name of AlbertForTokenClassification to AlbertForSequenceClassification.
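For reference, a hedged sketch of the `AutoModelForTokenClassification` approach discussed in the comments above (the model identifier and label count here are illustrative):

```python
from transformers import AutoConfig, AutoModelForTokenClassification

num_labels = 9  # e.g. the CoNLL-2003 NER label set
config = AutoConfig.from_pretrained("albert-base-v2", num_labels=num_labels)
model = AutoModelForTokenClassification.from_pretrained("albert-base-v2", config=config)
```
| {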
"url": "https://api.github.com/repos/huggingface/transformers/issues/3261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3261/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3261",
"html_url": "https://github.com/huggingface/transformers/pull/3261",
"diff_url": "https://github.com/huggingface/transformers/pull/3261.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3261.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3260 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3260/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3260/comments | https://api.github.com/repos/huggingface/transformers/issues/3260/events | https://github.com/huggingface/transformers/issues/3260 | 580,454,121 | MDU6SXNzdWU1ODA0NTQxMjE= | 3,260 | Model name 'distilbert-base-german-cased' was not found in model name list. | {
"login": "woiza",
"id": 7392237,
"node_id": "MDQ6VXNlcjczOTIyMzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7392237?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/woiza",
"html_url": "https://github.com/woiza",
"followers_url": "https://api.github.com/users/woiza/followers",
"following_url": "https://api.github.com/users/woiza/following{/other_user}",
"gists_url": "https://api.github.com/users/woiza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/woiza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/woiza/subscriptions",
"organizations_url": "https://api.github.com/users/woiza/orgs",
"repos_url": "https://api.github.com/users/woiza/repos",
"events_url": "https://api.github.com/users/woiza/events{/privacy}",
"received_events_url": "https://api.github.com/users/woiza/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @woiza,\r\n\r\ntry to use: `DistilBertForSequenceClassification` instead of `BertForSequenceClassification` :)\r\n\r\n",
"@stefan-it \r\n\r\nthat worked, thank you!"
] | 1,584 | 1,584 | 1,584 | NONE | null | This code is ok:
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-german-cased")
model = AutoModel.from_pretrained("distilbert-base-german-cased")
```
But the following fails:
```python
from transformers import BertForSequenceClassification, AdamW, BertConfig

model = BertForSequenceClassification.from_pretrained(
    "distilbert-base-german-cased",
    num_labels = 2,
    output_attentions = False,
    output_hidden_states = False,
)
```

```
OSError: Model name 'distilbert-base-german-cased' was not found in model name list. We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-german-cased/config.json' was a path, a model identifier, or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url.
```
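A minimal sketch of the fix suggested in the comments above: this checkpoint is a DistilBERT model, so it needs the DistilBERT head (or `AutoModelForSequenceClassification`) rather than the BERT one:

```python
from transformers import DistilBertForSequenceClassification

# "distilbert-base-german-cased" is a DistilBERT checkpoint, so
# BertForSequenceClassification cannot resolve it; the DistilBERT class can.
model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-german-cased",
    num_labels=2,
)
```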
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3260/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3259 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3259/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3259/comments | https://api.github.com/repos/huggingface/transformers/issues/3259/events | https://github.com/huggingface/transformers/issues/3259 | 580,329,711 | MDU6SXNzdWU1ODAzMjk3MTE= | 3,259 | Great job advice | {
"login": "DenceChen",
"id": 11643704,
"node_id": "MDQ6VXNlcjExNjQzNzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/11643704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DenceChen",
"html_url": "https://github.com/DenceChen",
"followers_url": "https://api.github.com/users/DenceChen/followers",
"following_url": "https://api.github.com/users/DenceChen/following{/other_user}",
"gists_url": "https://api.github.com/users/DenceChen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DenceChen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DenceChen/subscriptions",
"organizations_url": "https://api.github.com/users/DenceChen/orgs",
"repos_url": "https://api.github.com/users/DenceChen/repos",
"events_url": "https://api.github.com/users/DenceChen/events{/privacy}",
"received_events_url": "https://api.github.com/users/DenceChen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@DenceChen thanks for sharing :-) Feel free trying to add a model if you need it!"
] | 1,584 | 1,584 | 1,584 | NONE | null | First of all, this is very impressive work; I shared it with multiple colleagues. Also, please provide an implementation of the ELMo model and more TensorFlow examples.
thank you very much~ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3259/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3259/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3258 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3258/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3258/comments | https://api.github.com/repos/huggingface/transformers/issues/3258/events | https://github.com/huggingface/transformers/issues/3258 | 580,317,715 | MDU6SXNzdWU1ODAzMTc3MTU= | 3,258 | very slow performance on transformer 2.5.0 versus 2.3.0 | {
"login": "yes1234man",
"id": 59166627,
"node_id": "MDQ6VXNlcjU5MTY2NjI3",
"avatar_url": "https://avatars.githubusercontent.com/u/59166627?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yes1234man",
"html_url": "https://github.com/yes1234man",
"followers_url": "https://api.github.com/users/yes1234man/followers",
"following_url": "https://api.github.com/users/yes1234man/following{/other_user}",
"gists_url": "https://api.github.com/users/yes1234man/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yes1234man/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yes1234man/subscriptions",
"organizations_url": "https://api.github.com/users/yes1234man/orgs",
"repos_url": "https://api.github.com/users/yes1234man/repos",
"events_url": "https://api.github.com/users/yes1234man/events{/privacy}",
"received_events_url": "https://api.github.com/users/yes1234man/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I met the same issue when running the run_squad.py using the 2.5.0 version. ",
"Which PyTorch version are you using for your benchmarks? Is it the same for both, or is it different?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,591 | 1,591 | NONE | null | Hi
I am running run_glue.py with the latest version of transformers (2.5.0, Python 3.5), and it is at least 10 times slower than running the same code with transformers 2.3.0 (Python 3.6.9). I use the BERT model. The difference in speed between these versions is extremely large; could you have a look and check why the performance of the latest version is so low? Thank you. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3258/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3258/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3257 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3257/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3257/comments | https://api.github.com/repos/huggingface/transformers/issues/3257/events | https://github.com/huggingface/transformers/pull/3257 | 580,266,986 | MDExOlB1bGxSZXF1ZXN0Mzg3NTEwMzU5 | 3,257 | ELECTRA | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3257?src=pr&el=h1) Report\n> Merging [#3257](https://codecov.io/gh/huggingface/transformers/pull/3257?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/012d775b14d1ab673aab7eae151823a74a8525a6&el=desc) will **not change** coverage by `%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3257?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3257 +/- ##\n=======================================\n Coverage 77.54% 77.54% \n=======================================\n Files 103 103 \n Lines 17268 17268 \n=======================================\n Hits 13390 13390 \n Misses 3878 3878 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3257?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3257?src=pr&el=footer). Last update [012d775...012d775](https://codecov.io/gh/huggingface/transformers/pull/3257?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Does this add the ability to train a language model using method in ELECTRA? (Thanks!)",
"It doesn't, we're currently working on a pre-training script incorporating the ELECTRA method but it's still a few weeks out.",
"> It doesn't, we're currently working on a pre-training script incorporating the ELECTRA method but it's still a few weeks out.\r\n\r\nIs this an active branch/issue? I'm interested in contributing if so, but I can't find it",
"Not public yet, will let you know when it is @shoarora!"
] | 1,584 | 1,586 | 1,585 | MEMBER | null | Adds ELECTRA to the library.
The script I'm using to compare the different models is this [Github gist](https://gist.github.com/LysandreJik/db4c948f6b4483960de5cbac598ad4ed), coupled with [a modified version of the ELECTRA repository](https://github.com/LysandreJik/electra).
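For reference, a hedged sketch of what loading the converted checkpoints should look like once merged (the model identifiers here are the ones eventually published, e.g. `google/electra-small-discriminator`):

```python
from transformers import ElectraModel, ElectraTokenizer

tokenizer = ElectraTokenizer.from_pretrained("google/electra-small-discriminator")
model = ElectraModel.from_pretrained("google/electra-small-discriminator")

input_ids = tokenizer.encode("ELECTRA learns from replaced token detection.", return_tensors="pt")
hidden_states = model(input_ids)[0]  # (batch_size, seq_len, hidden_size)
```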
- [x] add model/configuration/tokenization classes
- [x] add conversion scripts
- [x] add tests
- [x] finalize
Let's detail what should be done at each step
## Adding model/configuration/tokenization classes
Here is the workflow for adding model/configuration/tokenization classes:
- [x] copy the python files from the present folder to the main folder and rename them, replacing `xxx` with your model name,
- [x] edit the files to replace `XXX` (with various casing) with your model name
- [x] copy-paste or create a simple configuration class for your model in the `configuration_...` file
- [x] copy-paste or create the code for your model in the `modeling_...` files (PyTorch)
- [x] copy-paste or create the code for your model in the `modeling_...` files (TF 2.0)
- [x] copy-paste or create a tokenizer class for your model in the `tokenization_...` file
# Adding conversion scripts
Here is the workflow for the conversion scripts:
- [x] copy the conversion script (`convert_...`) from the present folder to the main folder.
- [x] edit this script to convert your original checkpoint weights to the current pytorch ones.
# Adding tests:
Here is the workflow for the adding tests:
- [x] copy the python files from the `tests` sub-folder of the present folder to the `tests` subfolder of the main folder and rename them, replacing `xxx` with your model name,
- [x] edit the tests files to replace `XXX` (with various casing) with your model name
- [x] edit the tests code as needed
# Final steps
You can then finish the addition step by adding imports for your classes in the common files:
- [x] add import for all the relevant classes in `__init__.py`
- [x] add your configuration in `configuration_auto.py`
- [x] add your PyTorch and TF 2.0 model respectively in `modeling_auto.py` and `modeling_tf_auto.py`
- [x] add your tokenizer in `tokenization_auto.py`
- [x] add your models and tokenizer to `pipeline.py`
- [x] add a link to your conversion script in the main conversion utility (in `commands/convert.py`)
- [x] edit the PyTorch to TF 2.0 conversion script to add your model in the `convert_pytorch_checkpoint_to_tf2.py` file
- [x] add a mention of your model in the doc: `README.md` and the documentation itself at `docs/source/pretrained_models.rst`.
- [x] upload the pretrained weights, configurations and vocabulary files.
"url": "https://api.github.com/repos/huggingface/transformers/issues/3257/reactions",
"total_count": 31,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 18,
"confused": 0,
"heart": 5,
"rocket": 8,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3257/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3257",
"html_url": "https://github.com/huggingface/transformers/pull/3257",
"diff_url": "https://github.com/huggingface/transformers/pull/3257.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3257.patch",
"merged_at": 1585937455000
} |
https://api.github.com/repos/huggingface/transformers/issues/3256 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3256/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3256/comments | https://api.github.com/repos/huggingface/transformers/issues/3256/events | https://github.com/huggingface/transformers/issues/3256 | 580,235,096 | MDU6SXNzdWU1ODAyMzUwOTY= | 3,256 | Implement Electra | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"We are currently working on it :-) \r\nCheck out PR: #3257"
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | # 🚀 Feature request
https://github.com/google-research/electra
## Motivation
Getting better results, for example in QA tasks.
## Your contribution
I have no idea how to implement this, given my experience with this library. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3256/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3256/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3255 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3255/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3255/comments | https://api.github.com/repos/huggingface/transformers/issues/3255/events | https://github.com/huggingface/transformers/pull/3255 | 580,140,231 | MDExOlB1bGxSZXF1ZXN0Mzg3NDA4MjY5 | 3,255 | add BART to README | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3255/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3255/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3255",
"html_url": "https://github.com/huggingface/transformers/pull/3255",
"diff_url": "https://github.com/huggingface/transformers/pull/3255.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3255.patch",
"merged_at": 1584056285000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3254 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3254/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3254/comments | https://api.github.com/repos/huggingface/transformers/issues/3254/events | https://github.com/huggingface/transformers/pull/3254 | 580,130,186 | MDExOlB1bGxSZXF1ZXN0Mzg3Mzk5NTMx | 3,254 | Bump psutil from 5.6.3 to 5.6.6 in /examples/distillation | {
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
} | [
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3254?src=pr&el=h1) Report\n> Merging [#3254](https://codecov.io/gh/huggingface/transformers/pull/3254?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2e81b9d8d76a4d41a13f74eb5e0f4a65d8143cab?src=pr&el=desc) will **decrease** coverage by `1.1%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3254?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3254 +/- ##\n==========================================\n- Coverage 77.93% 76.82% -1.11% \n==========================================\n Files 98 98 \n Lines 16666 16666 \n==========================================\n- Hits 12988 12804 -184 \n- Misses 3678 3862 +184\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3254?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `82.46% <0%> (-3.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96% <0%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.56% <0%> (-0.14%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3254?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3254?src=pr&el=footer). Last update [2e81b9d...28fce2c](https://codecov.io/gh/huggingface/transformers/pull/3254?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,584 | 1,584 | 1,584 | CONTRIBUTOR | null | Bumps [psutil](https://github.com/giampaolo/psutil) from 5.6.3 to 5.6.6.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/giampaolo/psutil/blob/master/HISTORY.rst">psutil's changelog</a>.</em></p>
<blockquote>
<h1>5.6.6</h1>
<p>2019-11-25</p>
<p><strong>Bug fixes</strong></p>
<ul>
<li>1179_: [Linux] Process cmdline() now takes into account misbehaving processes
renaming the command line and using inappropriate chars to separate args.</li>
<li>1616_: use of Py_DECREF instead of Py_CLEAR will result in double free and
segfault
(<code>CVE-2019-18874 <https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18874></code>__).
(patch by Riccardo Schirone)</li>
<li>1619_: [OpenBSD] compilation fails due to C syntax error. (patch by Nathan
Houghton)</li>
</ul>
<h1>5.6.5</h1>
<p>2019-11-06</p>
<p><strong>Bug fixes</strong></p>
<ul>
<li>1615_: remove pyproject.toml as it was causing installation issues.</li>
</ul>
<h1>5.6.4</h1>
<p>2019-11-04</p>
<p><strong>Enhancements</strong></p>
<ul>
<li>1527_: [Linux] added Process.cpu_times().iowait counter, which is the time
spent waiting for blocking I/O to complete.</li>
<li>1565_: add PEP 517/8 build backend and requirements specification for better
pip integration. (patch by Bernát Gábor)</li>
</ul>
<p><strong>Bug fixes</strong></p>
<ul>
<li>875_: [Windows] Process' cmdline(), environ() or cwd() may occasionally fail
with ERROR_PARTIAL_COPY which now gets translated to AccessDenied.</li>
<li>1126_: [Linux] cpu_affinity() segfaults on CentOS 5 / manylinux.
cpu_affinity() support for CentOS 5 was removed.</li>
<li>1528_: [AIX] compilation error on AIX 7.2 due to 32 vs 64 bit differences.
(patch by Arnon Yaari)</li>
<li>1535_: 'type' and 'family' fields returned by net_connections() are not
always turned into enums.</li>
<li>1536_: [NetBSD] process cmdline() erroneously raise ZombieProcess error if
cmdline has non encodable chars.</li>
<li>1546_: usage percent may be rounded to 0 on Python 2.</li>
</ul>
</tr></table> ... (truncated)
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/giampaolo/psutil/commit/c6cd256da95ffe9599792759b1c2586ba24fa047"><code>c6cd256</code></a> pre release</li>
<li><a href="https://github.com/giampaolo/psutil/commit/b2414b83d3d728ec34ea0e35bfb21517ee231401"><code>b2414b8</code></a> revert <a href="https://github-redirect.dependabot.com/giampaolo/psutil/issues/1595">#1595</a></li>
<li><a href="https://github.com/giampaolo/psutil/commit/c63369e999b458ecbd559bdde895c344b4db2841"><code>c63369e</code></a> updat HISTORY</li>
<li><a href="https://github.com/giampaolo/psutil/commit/edb20f664f28653dcdd24f0bf0191984738dca6e"><code>edb20f6</code></a> linux, cmdline(), fix for <a href="https://github-redirect.dependabot.com/giampaolo/psutil/issues/1179">#1179</a>, comment 552984549: sometimes string ends wit...</li>
<li><a href="https://github.com/giampaolo/psutil/commit/d739cbb1a5b207212d467b219dfc25b017911530"><code>d739cbb</code></a> use PROCESS_QUERY_LIMITED_INFORMATION</li>
<li><a href="https://github.com/giampaolo/psutil/commit/f7e898b0987f97352c7551bdd9b29b594e1236f6"><code>f7e898b</code></a> <a href="https://github-redirect.dependabot.com/giampaolo/psutil/issues/1595">#1595</a>: use psutil_pid_is_running() instead of GetExitCodeProcess</li>
<li><a href="https://github.com/giampaolo/psutil/commit/72c84cb4edb5c0968a83c1f45ad5cc51235e0af3"><code>72c84cb</code></a> #fix <a href="https://github-redirect.dependabot.com/giampaolo/psutil/issues/1595">#1595</a> / windows: kill() may not raise AccessDenied</li>
<li><a href="https://github.com/giampaolo/psutil/commit/1f8d432db12a907544ac533b66a5a61ba25321fb"><code>1f8d432</code></a> Merge branch 'master' of github.com:giampaolo/psutil</li>
<li><a href="https://github.com/giampaolo/psutil/commit/e6faebcd7adaa327d1ce57385cbebe7724d02350"><code>e6faebc</code></a> release gil around users()/BSD (<a href="https://github-redirect.dependabot.com/giampaolo/psutil/issues/1425">#1425</a>)</li>
<li><a href="https://github.com/giampaolo/psutil/commit/5cb1b0b526765720253fdb2e8eff0bf380bbe0a8"><code>5cb1b0b</code></a> Merge branch 'master' of github.com:giampaolo/psutil</li>
<li>Additional commits viewable in <a href="https://github.com/giampaolo/psutil/compare/release-5.6.3...release-5.6.6">compare view</a></li>
</ul>
</details>
<br />
[](https://help.github.com/articles/configuring-automated-security-fixes)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3254/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3254",
"html_url": "https://github.com/huggingface/transformers/pull/3254",
"diff_url": "https://github.com/huggingface/transformers/pull/3254.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3254.patch",
"merged_at": 1584062097000
} |
https://api.github.com/repos/huggingface/transformers/issues/3253 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3253/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3253/comments | https://api.github.com/repos/huggingface/transformers/issues/3253/events | https://github.com/huggingface/transformers/issues/3253 | 580,057,898 | MDU6SXNzdWU1ODAwNTc4OTg= | 3,253 | bug in run_glue | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"If you could include your environment and steps to replicate the issue, that would help. Its working for me on version 2.5.1 (from master branch).",
"Hi. Sorry then I must have used the old version of transformer, I will close the issue and reopen if needed. thanks. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,589 | 1,584 | NONE | null | Hi, I got this error when running run_glue.py:
```
from transformers import glue_compute_metrics as compute_metrics
ImportError: cannot import name 'glue_compute_metrics'
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3253/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3253/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3252 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3252/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3252/comments | https://api.github.com/repos/huggingface/transformers/issues/3252/events | https://github.com/huggingface/transformers/issues/3252 | 580,052,979 | MDU6SXNzdWU1ODAwNTI5Nzk= | 3,252 | batch_encode_plus cannot work properly | {
"login": "PosoSAgapo",
"id": 33200481,
"node_id": "MDQ6VXNlcjMzMjAwNDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/33200481?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PosoSAgapo",
"html_url": "https://github.com/PosoSAgapo",
"followers_url": "https://api.github.com/users/PosoSAgapo/followers",
"following_url": "https://api.github.com/users/PosoSAgapo/following{/other_user}",
"gists_url": "https://api.github.com/users/PosoSAgapo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PosoSAgapo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PosoSAgapo/subscriptions",
"organizations_url": "https://api.github.com/users/PosoSAgapo/orgs",
"repos_url": "https://api.github.com/users/PosoSAgapo/repos",
"events_url": "https://api.github.com/users/PosoSAgapo/events{/privacy}",
"received_events_url": "https://api.github.com/users/PosoSAgapo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is quite a weird way to encode your data with `ds[0][0:3]`. There are two cases:\r\n\r\n1) You have a single string you want to encode:\r\n```\r\ninput_str = 'hello, what time is it?'\r\ninput_ids_dict = tokenizer.encode_plus(input_str)\r\n```\r\n\r\n2) You have a batch of input string data that you want to encode:\r\n\r\n```\r\ninput_str_batch = ['hello what time is it' , 'hello, how are you?', 'Hey, I'm Peter']\r\ninput_ids_dict = tokenizer.batch_encode_plus(input_str_batch, pad_to_max_length=True)\r\n```\r\n\r\nAlso see #3237"
] | 1,584 | 1,584 | 1,584 | NONE | null | I have data like below:
```
>>> ds[0][0:3]
["John was writing lyrics for his new album.He started experiencing writer 's block.He tried to force himself to write but it would n't do anythingHe tried to force himself to write but it would n't do anything.He took a walk , hung out with some friends , and looked at natureHe took a walk , hung out with some friends , and looked at natureHe took a walk , hung out with some friends , and looked at nature", 'Franny did not particularly like all of the immigration happening.She thought immigrants were coming to cause social problemsShe thought immigrants were coming to cause social problems.Franny was upset when an immigrant moved in next door.The immigrant , Sal , was kind and became friends with FrannyThe immigrant , Sal , was kind and became friends with Franny', 'Ari spends $ 20 a day on pickles.He decides to make his own to save money.He puts the pickles in brine.Ari waits 2 weeks for his pickles to get sour']
>>> ds[1][0:3]
['He felt inspiration and then went back home to write', 'When he finished his paper he went to bedWhen he finished his paper he went to bed', 'Trudey hoped self-publishing would be more profitable']
```
I was trying to do next sentence prediction with a BERT model, which requires encoding text pairs. However, when I tried to encode ds[0] and ds[1] as a batched text pair, I ran into the following problem.
```
>>> input_wsrq1ids=tokenizer.batch_encode_plus([(ds[0][0:3],ds[1][0:3])],add_special_tokens=True,return_tensors='pt')
>>> input_wsrq1ids
{'input_ids': tensor([[101, 100, 100, 100, 102, 100, 100, 100, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 1, 1, 1, 1]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1]])}
```
The output is definitely not the right encoding for this data. I did not quite understand the documentation of the `batch_encode_plus` method: for `text_pair` it says to look at `encode_plus` for details. However, `encode_plus` only works for non-batched data, and there is no example code showing how to use `batch_encode_plus` properly.
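For reference, a minimal sketch of what I believe the intended call looks like, passing the pairs as a list of `(text, text_pair)` tuples (assumes `tokenizer` and `ds` as above):

```python
# one (text, text_pair) tuple per example, instead of a single tuple of two lists
pairs = list(zip(ds[0][0:3], ds[1][0:3]))
enc = tokenizer.batch_encode_plus(
    pairs, add_special_tokens=True, pad_to_max_length=True, return_tensors='pt'
)
```
| {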
"url": "https://api.github.com/repos/huggingface/transformers/issues/3252/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3252/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3251 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3251/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3251/comments | https://api.github.com/repos/huggingface/transformers/issues/3251/events | https://github.com/huggingface/transformers/issues/3251 | 580,049,804 | MDU6SXNzdWU1ODAwNDk4MDQ= | 3,251 | Why is the seq_len dimension hard coded to be the first dimension of BERT's input? | {
"login": "entslscheia",
"id": 15921425,
"node_id": "MDQ6VXNlcjE1OTIxNDI1",
"avatar_url": "https://avatars.githubusercontent.com/u/15921425?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/entslscheia",
"html_url": "https://github.com/entslscheia",
"followers_url": "https://api.github.com/users/entslscheia/followers",
"following_url": "https://api.github.com/users/entslscheia/following{/other_user}",
"gists_url": "https://api.github.com/users/entslscheia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/entslscheia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/entslscheia/subscriptions",
"organizations_url": "https://api.github.com/users/entslscheia/orgs",
"repos_url": "https://api.github.com/users/entslscheia/repos",
"events_url": "https://api.github.com/users/entslscheia/events{/privacy}",
"received_events_url": "https://api.github.com/users/entslscheia/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"My wild guess is maybe there is no specific reason for this. It's just an implementation issue. If I really want do feed a tensor of different number of dimensions I can just do reshape twice (i.e., one before forward one after)."
] | 1,584 | 1,586 | 1,586 | NONE | null | According to your [code](https://github.com/huggingface/transformers/blob/2e81b9d8d76a4d41a13f74eb5e0f4a65d8143cab/src/transformers/modeling_bert.py#L164), the `seq_len` dimension always corresponds to the first dimension of the input tensor. I found this problem when I tried to feed a tensor of shape `(batch_size, num_seq, seq_len)` and an error occurred. Is there a specific reason for adopting this setting? Given that a PyTorch `Embedding` layer can normally take an input of any shape, this restriction does not look natural to me. Any suggestions? Many thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3251/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3251/timeline | completed | null | null |
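For issue #3251 above, a minimal sketch of the reshape workaround mentioned in the comment. This is an illustration only: the random ids, the shapes, and the tuple-style return value (first element = hidden states, as in the 2.x API) are assumptions.
```
import torch
from transformers import BertModel

model = BertModel.from_pretrained('bert-base-uncased')
input_ids = torch.randint(0, 30522, (8, 4, 32))   # (batch_size, num_seq, seq_len)

bsz, num_seq, seq_len = input_ids.shape
flat = input_ids.view(bsz * num_seq, seq_len)     # merge the leading dims before forward
hidden = model(flat)[0]                           # (bsz * num_seq, seq_len, hidden_size)
hidden = hidden.view(bsz, num_seq, seq_len, -1)   # restore the extra dim after
```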
https://api.github.com/repos/huggingface/transformers/issues/3250 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3250/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3250/comments | https://api.github.com/repos/huggingface/transformers/issues/3250/events | https://github.com/huggingface/transformers/issues/3250 | 580,029,285 | MDU6SXNzdWU1ODAwMjkyODU= | 3,250 | UnicodeDecodeError when loading BART from fairseq checkpoint | {
"login": "marmg",
"id": 25741926,
"node_id": "MDQ6VXNlcjI1NzQxOTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/25741926?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marmg",
"html_url": "https://github.com/marmg",
"followers_url": "https://api.github.com/users/marmg/followers",
"following_url": "https://api.github.com/users/marmg/following{/other_user}",
"gists_url": "https://api.github.com/users/marmg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marmg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marmg/subscriptions",
"organizations_url": "https://api.github.com/users/marmg/orgs",
"repos_url": "https://api.github.com/users/marmg/repos",
"events_url": "https://api.github.com/users/marmg/events{/privacy}",
"received_events_url": "https://api.github.com/users/marmg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Unfortunately, I don't think the conversion you are describing is currently supported. \r\nIf I were to attempt this on my own, I would try to modify https://github.com/huggingface/transformers/blob/master/src/transformers/convert_bart_original_pytorch_checkpoint_to_pytorch.py to take in a path to a checkpoint, rather than a `torch.hub` alias.",
"This function converts from saved fairseq checkpoints: https://github.com/huggingface/transformers/blob/7a7fdf71f80452fcae064bd016f06e9a0f0f19ed/src/transformers/convert_bart_original_pytorch_checkpoint_to_pytorch.py#L81\r\n\r\nLet me know if that helps!"
] | 1,584 | 1,589 | 1,589 | NONE | null | # 🐛 Bug
When trying to load a checkpoint from the fairseq library, I'm getting a UnicodeDecodeError.
## Information
Model I am using (Bert, XLNet ...): BART
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Train a checkpoint with the fairseq library
2. Load it using BartForMaskedLM.from_pretrained
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Get the checkpoint loaded into a BART model.
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: master
- Platform: linux
- Python version: 3.6.7
- PyTorch version (GPU?): 1.4.0 / Yes
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?:
- `fairseq` version: master | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3250/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3250/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3249 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3249/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3249/comments | https://api.github.com/repos/huggingface/transformers/issues/3249/events | https://github.com/huggingface/transformers/issues/3249 | 580,008,556 | MDU6SXNzdWU1ODAwMDg1NTY= | 3,249 | Using FP16 on BartModel | {
"login": "AOZMH",
"id": 49521559,
"node_id": "MDQ6VXNlcjQ5NTIxNTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/49521559?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AOZMH",
"html_url": "https://github.com/AOZMH",
"followers_url": "https://api.github.com/users/AOZMH/followers",
"following_url": "https://api.github.com/users/AOZMH/following{/other_user}",
"gists_url": "https://api.github.com/users/AOZMH/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AOZMH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AOZMH/subscriptions",
"organizations_url": "https://api.github.com/users/AOZMH/orgs",
"repos_url": "https://api.github.com/users/AOZMH/repos",
"events_url": "https://api.github.com/users/AOZMH/events{/privacy}",
"received_events_url": "https://api.github.com/users/AOZMH/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"@sshleifer May I ask could you reproduce the error in your machine? I ran the same code on a Linux machine with master-branch of transformers, but still got the same error. I'm planning to use BartModel these days so please notify me at your earliest convenience if there're any updates. Many thanks!",
"Yes, will try to fix it today! Thanks for reporting!",
"> Yes, will try to fix it today! Thanks for reporting!\r\n\r\nThanks Sam,\r\n\r\nThe code works well this time! Thanks again for the contribution."
] | 1,584 | 1,608 | 1,584 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BART
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [x] an official GLUE/SQUaD task: CNN/DM
* [ ] my own task or dataset: (give details below)
## To reproduce
I've installed the master branch of transformers, but I still encountered the same issue as #3117 when using an FP16 BartModel. I just initialized the model without loading the pretrained weights, but I expect the model should still be able to correctly forward an input LongTensor of shape (batch, seq_length). The code is shown below; it simply initializes a model and forwards an input:
```
import torch
from transformers import BartModel, BartConfig

model = BartModel(BartConfig())                    # randomly initialized, no pretrained weights
model = model.cuda().half()                        # cast the model to FP16
cur_inputs = torch.zeros(4, 16, dtype=torch.long).cuda()
cur_res = model(cur_inputs)                        # raises the dtype error below
```
The error is:
>~\Anaconda3\envs\pytorch\lib\site-packages\transformers\modeling_bart.py in forward(self, query, key, value, key_padding_mask, layer_state, need_weights, static_kv, attn_mask)
assert v is not None
--> attn_output = torch.bmm(attn_probs, v)
assert attn_output.size() == (bsz * self.num_heads, tgt_len, self.head_dim)
attn_output = attn_output.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim)
RuntimeError: Expected object of scalar type Float but got scalar type Half for argument #2 'mat2' in call to _th_bmm
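For context, a minimal standalone illustration of the kind of dtype mismatch the traceback points at. This is an assumption about the mechanism (one bmm operand staying float32 while the other is float16), not a quote from the library, and it requires a CUDA device like the setup in this issue:
```
import torch

attn_probs = torch.softmax(torch.randn(12, 16, 16, device='cuda'), dim=-1)  # float32
v = torch.randn(12, 16, 64, device='cuda').half()                           # float16
torch.bmm(attn_probs, v)  # RuntimeError: expected Float but got Half for 'mat2'
```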
@sshleifer The model is quite new to me, so am I using it incorrectly, or is there still a bug in the BartModel class? Thanks in advance for the help!
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: master branch
- Platform: Windows
- Python version: 3.7.0
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?): /
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3249/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3249/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3248 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3248/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3248/comments | https://api.github.com/repos/huggingface/transformers/issues/3248/events | https://github.com/huggingface/transformers/pull/3248 | 579,962,693 | MDExOlB1bGxSZXF1ZXN0Mzg3MjYzMTQ5 | 3,248 | [model_cards] polbert: simplify usage example with pipelines | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,584 | 1,584 | 1,584 | MEMBER | null | Co-Authored-By: Darek Kłeczek | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3248/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3248/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3248",
"html_url": "https://github.com/huggingface/transformers/pull/3248",
"diff_url": "https://github.com/huggingface/transformers/pull/3248.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3248.patch",
"merged_at": 1584021941000
} |
https://api.github.com/repos/huggingface/transformers/issues/3247 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3247/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3247/comments | https://api.github.com/repos/huggingface/transformers/issues/3247/events | https://github.com/huggingface/transformers/pull/3247 | 579,923,877 | MDExOlB1bGxSZXF1ZXN0Mzg3MjMwNzg1 | 3,247 | Improved Error message when loading config/model with .from_pretrained() | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks to @mariamabarham for pointing this out :-) ",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3247?src=pr&el=h1) Report\n> Merging [#3247](https://codecov.io/gh/huggingface/transformers/pull/3247?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/dc848c29944265e04f1473cd0312eeffc1842276&el=desc) will **decrease** coverage by `0.36%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3247?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3247 +/- ##\n==========================================\n- Coverage 78.32% 77.95% -0.37% \n==========================================\n Files 98 98 \n Lines 16665 16665 \n==========================================\n- Hits 13053 12992 -61 \n- Misses 3612 3673 +61 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3247?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3247/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.82% <ø> (ø)` | |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/3247/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.89% <0.00%> (-27.60%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3247/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.84% <0.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3247/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.40% <0.00%> (+4.11%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3247?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3247?src=pr&el=footer). Last update [dc848c2...cd5998e](https://codecov.io/gh/huggingface/transformers/pull/3247?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,584 | 1,584 | 1,584 | MEMBER | null | Given the previous error message, it can be quite time-consuming to find out that the only problem was that the /path/to/model/dir was incorrect :D | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3247/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3247/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3247",
"html_url": "https://github.com/huggingface/transformers/pull/3247",
"diff_url": "https://github.com/huggingface/transformers/pull/3247.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3247.patch",
"merged_at": 1584348511000
} |
https://api.github.com/repos/huggingface/transformers/issues/3246 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3246/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3246/comments | https://api.github.com/repos/huggingface/transformers/issues/3246/events | https://github.com/huggingface/transformers/issues/3246 | 579,825,303 | MDU6SXNzdWU1Nzk4MjUzMDM= | 3,246 | How do you do inference in production? | {
"login": "ZhuoranLyu",
"id": 8801452,
"node_id": "MDQ6VXNlcjg4MDE0NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8801452?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhuoranLyu",
"html_url": "https://github.com/ZhuoranLyu",
"followers_url": "https://api.github.com/users/ZhuoranLyu/followers",
"following_url": "https://api.github.com/users/ZhuoranLyu/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhuoranLyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhuoranLyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhuoranLyu/subscriptions",
"organizations_url": "https://api.github.com/users/ZhuoranLyu/orgs",
"repos_url": "https://api.github.com/users/ZhuoranLyu/repos",
"events_url": "https://api.github.com/users/ZhuoranLyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhuoranLyu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Did you try out to just use this `save_...` function: https://github.com/huggingface/transformers/blob/2e81b9d8d76a4d41a13f74eb5e0f4a65d8143cab/src/transformers/modeling_tf_utils.py#L232 ? \r\n\r\n-> \r\n```\r\ntf_model = TFGPT2LMHeadModel.from_pretrained(\"tmp/\", from_pt=True)\r\ntf_model.save_pretrained(\"./tf_model\")\r\ntf_model = TFGPT2LMHeadModel.from_pretrained(\"./tf_model\")\r\n```",
"> Did you try out to just use this `save_...` function:\r\n> \r\n> https://github.com/huggingface/transformers/blob/2e81b9d8d76a4d41a13f74eb5e0f4a65d8143cab/src/transformers/modeling_tf_utils.py#L232\r\n> \r\n> ?\r\n> ->\r\n> \r\n> ```\r\n> tf_model = TFGPT2LMHeadModel.from_pretrained(\"tmp/\", from_pt=True)\r\n> tf_model.save_pretrained(\"./tf_model\")\r\n> tf_model = TFGPT2LMHeadModel.from_pretrained(\"./tf_model\")\r\n> ```\r\n\r\nHi, thanks for the reply. But what I want to do is to save it as a pb file in order to serve the model using tensorflow-serving.",
"Can we re-open this? It's still an issue.",
"> Can we re-open this? It's still an issue.\r\n\r\nHow to open this issue?",
"Sure, sorry I guess I closed this too early!",
"Any progress on this issue? \r\nHow to save the model for production? ",
"Hmm, I am not really familiar with tensorflow protobuf saving -> @LysandreJik @jplu do you know more about this maybe?",
"Hello !\r\n\r\nTo create a saved model you have to run something like the following lines:\r\n\r\n```python\r\nimport tensorflow as tf\r\nfrom transformers import TFXXXModel, XXXTokenizer\r\n\r\nhf_model = TFXXXModel.from_pretrained('model/location/path')\r\ntokenizer = XXXTokenizer.from_pretrained(\"tokenizer/location/path\")\r\nfeatures = tokenizer.encode_plus(\"Sentence to featurize\", add_special_tokens=True, return_tensors=\"tf\")\r\nhf_model._set_inputs(features)\r\ntf.saved_model.save(hf_model, \"saved_model/location/path\")\r\n```\r\n\r\nReplace XXX by the model name you plan to save.\r\n\r\nIt is also planed to add a `to_saved_model()` method in the trainer, to allow anybody to autimatically create a saved model without to run those lines.",
"Hi! \r\n\r\nSorry. I misunderstood it. I thought all TF models were saved by TF Trainer and all TF trainer saved models would have a hard time with inference in production. So I thought this post is similar to mine: https://github.com/huggingface/transformers/issues/4758\r\n\r\nAfter finishing with the sample code and sample data, I checked the \"output_dir/saved_model\" folder, it is empty. Then I restarted the code to save the model to a new directory.\r\n\r\n```\r\nmodel = TFAutoModelForTokenClassification.from_pretrained(\r\n model_args.model_name_or_path,\r\n from_pt=bool(\".bin\" in model_args.model_name_or_path),\r\n config=config,\r\n cache_dir=model_args.cache_dir,\r\n )\r\n\r\nmodel.save('saved_model/my_model')\r\nnewmodel = tf.keras.models.load_model('saved_model/my_model')\r\n```\r\n\r\nI get the message that the model is not compiled:\r\n`WARNING:tensorflow:No training configuration found in save file, so the model was *not* compiled. Compile it manually.`\r\n\r\nI am wondering how to extract the fine-tuned local model for inference. Thanks.",
"Look at the piece of code I have done, it is totally different :) Also you are not using the load and save from the lib, the error message is normal.",
"```\r\nhf_model = TFXXXModel.from_pretrained('model/location/path')\r\ntokenizer = XXXTokenizer.from_pretrained(\"tokenizer/location/path\")\r\n```\r\nAre these the official models like 'bert-base-uncased'? If yes, then it's not trained.\r\nIf it is local model, I don't know where the local model is because the \"saved_model\" folder is empty.\r\n\r\n",
"'you are not using the load and save from the lib, the error message is normal.' \r\n--- which lib are you referring? I followed only the official tensorflow manual: https://www.tensorflow.org/guide/saved_model",
"Ok then sorry I didn't get what you meant. If I recall well, what you are looking for is to load a trained model and run an inference with it? Right?",
"Right. \r\nI also wish to serve the model through TF serving. ",
"Ok then at first try the following piece of code and tell me if it works for you:\r\n\r\n```python\r\nfrom transformers import BertTokenizer, TFBertForTokenClassification\r\nimport tensorflow as tf\r\n\r\nmodel = TFBertForTokenClassification.from_pretrained(\"bert-base-uncased\")\r\ntf.saved_model.save(model, \"saved_model\")\r\n\r\nloaded_model = tf.saved_model.load(\"saved_model\")\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\nfeatures = {\"input_ids\": tokenizer.encode(\"it is me\", add_special_tokens=True, return_tensors=\"tf\")}\r\nprint(loaded_model(features, training=False))\r\n```\r\n\r\nIf this works you can do the same for your trained model, just specify your output dir in `.from_pretrained()` function. If you want to create a more elaborate signature than the default one, you have to follow this part of the [documentation](https://www.tensorflow.org/guide/saved_model#specifying_signatures_during_export)\r\n\r\nLater the TF Trainer will create a saved model in same time than the usual h5 file. Therefore it will be more user friendly to have its own saved model and then use it in production with TF serving.",
"Yes, the above code works.\r\n\r\nI still have some doubts on how TFTrainer loads the saved model. When it is set to the prediction mode, even if I changed the output_dir to nonsense, it still can do the prediction. I also noticed the output_dir/saved_model folder is empty. If so, how can TF Trainer load the model? I asked these still with the intention to make sure I save my fine-tuned model to a right place, then load, and serve it.\r\n\r\n`python3 run_tf_ner.py --data_dir ./ \\ \r\n--labels ./labels.txt \\ \r\n--model_name_or_path $BERT_MODEL \\ \r\n--output_dir $OUTPUT_DIR \\ \r\n--max_seq_length $MAX_LENGTH \\ \r\n--num_train_epochs $NUM_EPOCHS \\ \r\n--per_device_train_batch_size $BATCH_SIZE \\ \r\n--save_steps $SAVE_STEPS \\ \r\n--seed $SEED \\ \r\n--do_predict`\r\n\r\nIf I train my model this way and would like to save the model, I need to set the code to prediction mode, with the trainer initialized, save the model through `tf.saved_model.save(model, \"saved_model\")`. correct?",
"I tested it. That way would not be able to save the model. \r\nhttps://colab.research.google.com/drive/1uPCpR31U5VRMT3dArGyDK9WT6hKQa0bv?usp=sharing\r\n\r\nThen I am still wondering how to save the pb model through TF Trainer trained model. ",
"> If I train my model this way and would like to save the model, I need to set the code to prediction mode, with the trainer initialized, save the model through tf.saved_model.save(model, \"saved_model\"). correct?\r\n\r\nNo, you have just have to open your Python prompt and run these three lines:\r\n1. ```from transformers import TFAutoModelForTokenClassification```\r\n2. ```model = TFAutoModelForTokenClassification.from_pretrained(\"<OUTPUT_DIR>\")```\r\n3. ```tf.saved_model.save(model, \"saved_model\")```\r\n\r\nAnd of course replace `<OUTPUT_DIR>` with the propoer localtion of where your model is.\r\n\r\nThe trainer is only here to train a model and not to serve a model :) That's why it is called trainer ;)\r\n\r\nIf you want a saved model you have to create it yourself with the piece of code I gave you. I suggest you to create also your own signature (as indicated in the TF documentation linked above) and then run it as detailed in this [documentation section](https://www.tensorflow.org/guide/saved_model#details_of_the_savedmodel_command_line_interface).\r\n\r\nFor now the models saved by the TF trainer are not compliant with served models, you have to do it yourself manually but this will change in a near future.",
"1. If trainer is just used for training, why in _run_tf_ner.py_ line 246, there is a prediction done with the trainer: \r\n`predictions, label_ids, metrics = trainer.predict(test_dataset.get_dataset())`\r\n\r\nIf I set the mode to prediction, initialize the trainer with a nonsense output_dir, replace `test_dataset.get_dataset()`, with my own data, I can actually get the predictions. I guess it is initiated through checkpoints dir. \r\n\r\nIt seems that rather than `model.predict(sentence)`, with the logic written in _run_tf_ner.py,_ we need to do prediction through Trainer `trainer.predict(sentence)`. I am not sure if I am right, but line 246 is there, and I can succeed in getting predicted results with the initiated trainer in prediction mode. \r\n\r\n2. If I use the code discussed in this post to save and load the model, the _loaded model_ would not convert the sentence to features.\r\n\r\n```\r\nfrom transformers import TFAutoModelForTokenClassification, BertTokenizer, TFBertForTokenClassification\r\nimport tensorflow as tf\r\n\r\noutput_dir = \"model\"\r\nsaved_model_dir = \"tf2_0606_german\"\r\n\r\nmodel = TFAutoModelForTokenClassification.from_pretrained(output_dir)\r\ntf.saved_model.save(model, saved_model_dir)\r\nloaded_nodel = tf.saved_model.load(saved_model_dir)\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-multilingual-cased\")\r\nsentence = \"1951 bis 1953 wurde der nördliche Teil als Jugendburg des Kolpingwerkes gebaut .\"\r\nfeatures = {\"input_ids\": tokenizer.encode(sentence, add_special_tokens=True, return_tensors=\"tf\")}\r\n\r\nprint(model(features, training=False))\r\nprint(loaded_model(features, training=False))\r\n\r\n```\r\nError message can be found \r\nhttps://colab.research.google.com/drive/1uPCpR31U5VRMT3dArGyDK9WT6hKQa0bv?usp=sharing#scrollTo=SBCchEi-qlnA\r\n\r\n\r\nMy suspicion is \"output_dir\" does not save all the information it needs, and \"checkpoint\" directory is where the trainer get initialized when it is set to the prediction mode. But I am not sure how to recover the model information for production with these two directories.\r\n\r\n```\r\n06/06/2020 07:53:52 - INFO - transformers.trainer_tf - Saving checkpoint for step 1500 at checkpoint/ckpt-3\r\n06/06/2020 07:53:55 - INFO - transformers.trainer_tf - Saving model in model\r\n06/06/2020 07:53:55 - INFO - transformers.trainer_tf - Saving model in model/saved_model\r\n```\r\n\r\n\r\n",
"I also found one more complication. The code you showed works only for sentences containing three words or less. If \"it is me\" is changed to \"it is me again\", the code will return the same argument error message I mentioned in the last response. \r\n\r\n```\r\nfrom transformers import BertTokenizer, TFBertForTokenClassification\r\nimport tensorflow as tf\r\n\r\nmodel = TFBertForTokenClassification.from_pretrained(\"bert-base-uncased\")\r\ntf.saved_model.save(model, \"saved_model\")\r\n\r\nloaded_model = tf.saved_model.load(\"saved_model\")\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\nfeatures = {\"input_ids\": tokenizer.encode(\"it is me again\", add_special_tokens=True, return_tensors=\"tf\")}\r\nprint(loaded_model(features, training=False))\r\n```",
"> If trainer is just used for training, why in run_tf_ner.py line 246, there is a prediction done with the trainer:\r\n\r\nThis part is only here to evaluate the model and output the predictions on the test set into a file and not for inference in production. It is two distinct cases.\r\n\r\n> If I set the mode to prediction, initialize the trainer with a nonsense output_dir, replace test_dataset.get_dataset(), with my own data, I can actually get the predictions. I guess it is initiated through checkpoints dir.\r\n\r\nYes, it is normal because the predict is just here to evaluate your model on a dataset, and it is not initatied from the checkpoint dir but from the `.h5` file in your model folder only.\r\n\r\n> If I use the code discussed in this post to save and load the model, the saved model can convert the sentence to features, but it cannot do any prediction; the loaded model would not convert the sentence to features.\r\n\r\nThis is normal because your input doesn't correspond to the signature. The big picture is that from the `loaded_model(...)` line you don't get features, you get the real output of the model, this is what does a saved model. A tensor of values for each token where each value is the prob of the corresponding label.\r\n\r\nHence once you get your saved model, run the command:\r\n\r\n```\r\ntensorflow_model_server \\\r\n --rest_api_port=8501 \\\r\n --model_name=ner \\\r\n --model_base_path=\"tf2_0606_german\" >server.log 2>&1\r\n```\r\n\r\nNow, you have an API that wraps your model. Finally, in a Python script you can do:\r\n\r\n```python\r\nimport json\r\nimport numpy\r\nimport requests\r\nmy_features = # call here the tokenizer\r\ndata = json.dumps({\"signature_name\": \"serving_default\",\r\n \"instances\": my_features})\r\nheaders = {\"content-type\": \"application/json\"}\r\njson_response = requests.post('http://localhost:8501/v1/models/ner:predict',\r\n data=data, headers=headers)\r\npredictions = numpy.array(json.loads(json_response.text)[\"predictions\"])\r\n```\r\n\r\nFinally, you get your predictions and you have to code the translation preds -> text.\r\n\r\n> I also found one more complication. The code you showed works only for sentences containing three words or less. If \"it is me\" is changed to \"it is me again\", the code will return the same argument error message I mentioned in the last response.\r\n\r\nThis is totally normal, as I told you, you have to code your own signature as it is showed in the TF documentation that I linked you in my previous post.\r\n\r\nFor now, nothing is implemented in the `transformers` lib to do what you are looking for with a saved model. It means that, to do inference in production with a saved model you have to code all the logic I explained above by yourself. It is planned to integrate this part in a near future, it is even an ongoing work, but far to be finished.",
"Thanks so much for your elaborate response! I did not fully appreciate what signature means... Thanks!!! ",
"@jplu thanks for the great answer. I was wondering if it is possible to include the tokenizer inside the saved model (or something similar in order to make the tokenization inside TF serving ) ? Or do we have to use the tokenizer before doing the request ?",
"It is currently not possible to integrate the tokenizers in a saved model as preprocessing, you have to do that by yourself before to use the saved model.",
"@jplu Thanks for your great answer. But I have a question, in this part \r\n```\r\nimport json\r\nimport numpy\r\nimport requests\r\nmy_features = # call here the tokenizer\r\ndata = json.dumps({\"signature_name\": \"serving_default\",\r\n \"instances\": my_features})\r\nheaders = {\"content-type\": \"application/json\"}\r\njson_response = requests.post('http://localhost:8501/v1/models/ner:predict',\r\n data=data, headers=headers)\r\npredictions = numpy.array(json.loads(json_response.text)[\"predictions\"])\r\n```\r\ncan you give an example about how to do `# call here the tokenizer` part?",
"You have plenty of examples on how to use the tokenizers, such as in the examples [folder](https://github.com/huggingface/transformers/tree/master/examples) or inside the [source code](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_bert.py#L800).",
"hi, @jplu thank you for your answer. I forgot to remove `return_tensor=tf` in tokenizer before so it is failing. I have been working based on your answer on this issue and this [reference](https://colab.research.google.com/drive/1kEg0SnYNtw_IJwu_kl5y3qRVs-BKBmNO#scrollTo=9wilS_mw6wPk) to do inference with Tensorflow Serving Saved Model on Sentiment Analysis task. Please see here for my complete attempt [link to the collab](https://colab.research.google.com/drive/1cQx28aD2GpR_GUwzQfbdZyZSuUz-vh7W?usp=sharing)\r\n\r\n> This is totally normal, as I told you, you have to code your own signature as it is showed in the TF documentation that I linked you in my previous post.\r\nI try to do this by making it like this\r\n```\r\nimport tensorflow as tf\r\nfrom transformers import *\r\ntf.config.optimizer.set_jit(True)\r\n\r\nclass WrappedModel(tf.Module):\r\n\tdef __init__(self):\r\n\t\tsuper(WrappedModel, self).__init__()\r\n\t\tself.model = TFAutoModelForSequenceClassification.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english')\r\n\[email protected]\r\n\tdef __call__(self, x):\r\n\t\treturn self.model(x)\r\n\r\nmodel = WrappedModel()\r\n\r\ncall = model.__call__.get_concrete_function(tf.TensorSpec([None, None], tf.int32, name='input_ids'))\r\ntf.saved_model.save(model, saved_model_path, signatures=call, )\r\n```\r\n\r\n\r\nit is working fine I try to predict one example or couple examples with the same length of sequences\r\n```\r\nimport json\r\nimport numpy as np\r\nimport requests\r\nmy_features = {\"input_ids\": tokenizer.encode(\"it is really great, I don't think I will use this\", add_special_tokens=True)}\r\nmy_instances = [my_features, my_features]\r\nprint(my_instances)\r\ndata = json.dumps({\"signature_name\": \"serving_default\",\r\n \"instances\": [my_features, my_features]})\r\nheaders = {\"content-type\": \"application/json\"}\r\njson_response = requests.post('http://localhost:8503/v1/models/sentiment_analysis2:predict',\r\n data=data, headers=headers)\r\nprint(json_response)\r\npredictions = numpy.array(json.loads(json_response.text)[\"predictions\"])\r\nfor prediction in predictions:\r\n print(np.argmax(prediction))\r\n```\r\nbut when there is more than 1 variation of sequence length, it is not working. So I think this is because the tensor shape for every example must be the same so I try to do padding into `max_seq_length`. But something weird happens, the prediction result for the same sentence are different between the [padding](https://colab.research.google.com/drive/1cQx28aD2GpR_GUwzQfbdZyZSuUz-vh7W?authuser=1#scrollTo=bRnLQlyPyTPo&line=2&uniqifier=1) and the [non-padding version](https://colab.research.google.com/drive/1cQx28aD2GpR_GUwzQfbdZyZSuUz-vh7W?authuser=1#scrollTo=jgYV1TJ3jQeV&line=16&uniqifier=1). The more padding tokens added the more model thinks that the sentence is having negative sentiment (probability for label 0 is increasing and for label 1 is decreasing).\r\n\r\nCan you please tell me what that I did wrong? \r\nAlso, I am looking to integrate the preprocessing step, inference into Tensorflow Serving and prediction result in step so it can be done automatically instead of manually running separate code. Can you please tell me what option I have regarding this? \r\nThank you in advance! @jplu ",
"> Can you please tell me what that I did wrong?\r\n\r\nNothing, the results depends of the model itself, so you should ask to the person who has uploaded the model.\r\n\r\n> Can you please tell me what option I have regarding this?\r\n\r\nCurrently no options, you cannot do this.",
"@jplu Thank you very much for your quick reply.\r\n\r\n> Nothing, the results depends of the model itself, so you should ask to the person who has uploaded the model.\r\n\r\nSo if I understand correctly there is no mistake in my code but it is because of the model I use right? I will try with other models then, thank you.\r\n\r\n> Currently no options, you cannot do this.\r\n\r\nOk, thank you.",
"@jplu @kevin-yauris to be able to perform the same task with batch_encoding_plus, \r\nhow should we modify the callback function to achieve that?\r\n\r\nwith existing piece of code, \r\nfor an instance, input to the model looks like\r\n<tf.Tensor: shape=(1, 8), dtype=int32, numpy=array([[ 101, 7592, 1010, 2049, 1037, 4408, 2154, 102]], dtype=int32)>\r\n\r\nwith batch encoding, \r\nit might look something like \r\n{'input_ids': <tf.Tensor: shape=(2, 6), dtype=int32, numpy=\r\narray([[ 101, 7592, 102, 0, 0, 0],\r\n [ 101, 2054, 1037, 2204, 2154, 102]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(2, 6), dtype=int32, numpy=\r\narray([[1, 1, 1, 0, 0, 0],\r\n [1, 1, 1, 1, 1, 1]], dtype=int32)>}\r\n\r\nIn which case how should this call function look like?\r\ncall = model.__call__.get_concrete_function(tf.TensorSpec([None, None], tf.int32, name='input_ids'))\r\n\r\nThanks in advance"
] | 1,584 | 1,674 | 1,588 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
I was wondering how you all do inference in production. I tried to convert this model to a TensorFlow model but failed.
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
This is what I tried:
```
import tensorflow as tf
from transformers import TFGPT2LMHeadModel

tf_model = TFGPT2LMHeadModel.from_pretrained("tmp/", from_pt=True)  # convert PyTorch weights
tf.saved_model.save(tf_model, "tmp/saved")
loaded = tf.saved_model.load("tmp/saved")
print(list(loaded.signatures.keys()))  # prints [] here
```
And it returns an empty list of signatures.
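One way around the empty signature list is to export with an explicit serving signature, so the SavedModel keeps a usable, variable-length entry point. This is a sketch only; the signature shape, the output name, and the paths are illustrative assumptions, not the library's documented export path:
```
import tensorflow as tf
from transformers import TFGPT2LMHeadModel

model = TFGPT2LMHeadModel.from_pretrained("tmp/", from_pt=True)

# Unbounded batch and sequence dims so any padded length is accepted.
@tf.function(input_signature=[tf.TensorSpec((None, None), tf.int32, name="input_ids")])
def serving(input_ids):
    logits = model(input_ids)[0]    # tuple-style return in the 2.x API
    return {"logits": logits}

tf.saved_model.save(model, "tmp/saved", signatures={"serving_default": serving})
loaded = tf.saved_model.load("tmp/saved")
print(list(loaded.signatures.keys()))  # expected: ['serving_default']
```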
**A link to original question on Stack Overflow**: https://stackoverflow.com/questions/52826134/keras-model-subclassing-examples | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3246/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3246/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3245 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3245/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3245/comments | https://api.github.com/repos/huggingface/transformers/issues/3245/events | https://github.com/huggingface/transformers/issues/3245 | 579,760,631 | MDU6SXNzdWU1Nzk3NjA2MzE= | 3,245 | pad error in BertTokenizer.batch_encode_plus | {
"login": "PosoSAgapo",
"id": 33200481,
"node_id": "MDQ6VXNlcjMzMjAwNDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/33200481?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PosoSAgapo",
"html_url": "https://github.com/PosoSAgapo",
"followers_url": "https://api.github.com/users/PosoSAgapo/followers",
"following_url": "https://api.github.com/users/PosoSAgapo/following{/other_user}",
"gists_url": "https://api.github.com/users/PosoSAgapo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PosoSAgapo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PosoSAgapo/subscriptions",
"organizations_url": "https://api.github.com/users/PosoSAgapo/orgs",
"repos_url": "https://api.github.com/users/PosoSAgapo/repos",
"events_url": "https://api.github.com/users/PosoSAgapo/events{/privacy}",
"received_events_url": "https://api.github.com/users/PosoSAgapo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This will greatly effect the output of BERT model,I do not quite know whether this is the problem of the code or the problem of the package.\r\nd[0] and d[1] are lists of sentences,which is used to train the model.",
"When I was trying to decode the input_ids, the result shows that the text_pair is not encoded properly.\r\n`tokenizer.encode(input_wsrq1ids['input_ids'][0])`\r\n`\"[CLS] john was writing lyrics for his new album. he started experiencing writer's block. he tried to force himself to write but it wouldn't do anythinghe tried to force himself to write but it wouldn't do anything. he took a walk, hung out with some friends, and looked at naturehe took a walk, hung out with some friends, and looked at naturehe took a walk, hung out with some friends, and looked at nature [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]\"`\r\nwhich I should expect the following answer:\r\n`[CLS] john was writing lyrics for his new album. he started experiencing writer's block. he tried to force himself to write but it wouldn't do anythinghe tried to force himself to write but it wouldn't do anything. he took a walk, hung out with some friends, and looked at naturehe took a walk, hung out with some friends, and looked at naturehe took a walk, hung out with some friends, and looked at nature [SEP] He felt inspiration and then went back home to write [PAD]`\r\nWhich seems did not concatenate the sentences in d[0] and d[1]",
"The `encode_plus` methods creates the attention mask according to the length of the passed input and the max length you're asking it to encode to. It doesn't look into the list to see which tokens are padding tokens, as it expects to perform the padding itself.\r\n\r\nInstead of padding the sequences yourself, you could use a combination of the `pad_to_max_length` and `max_length` flags for `encode_plus`/`batch_encode_pus`. The attention mask will be correct then.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,584 | 1,590 | 1,590 | NONE | null | As the documentation states, positions that BERT should not attend to should have an attention value of 0. However, when I use the BertTokenizer, the result is different.
This is my code:
```
input_wsrq1ids=tokenizer.batch_encode_plus(d[0],text_pair=d[1],add_special_tokens=True,return_tensors='pt')
>>> input_wsrq1ids['input_ids'][0]
tensor([ 101, 2198, 2001, 3015, 4581, 2005, 2010, 2047, 2201, 1012,
2002, 2318, 13417, 3213, 1005, 1055, 3796, 1012, 2002, 2699,
2000, 2486, 2370, 2000, 4339, 2021, 2009, 2052, 1050, 1005,
1056, 2079, 2505, 5369, 2699, 2000, 2486, 2370, 2000, 4339,
2021, 2009, 2052, 1050, 1005, 1056, 2079, 2505, 1012, 2002,
2165, 1037, 3328, 1010, 5112, 2041, 2007, 2070, 2814, 1010,
1998, 2246, 2012, 3267, 5369, 2165, 1037, 3328, 1010, 5112,
2041, 2007, 2070, 2814, 1010, 1998, 2246, 2012, 3267, 5369,
2165, 1037, 3328, 1010, 5112, 2041, 2007, 2070, 2814, 1010,
1998, 2246, 2012, 3267, 102, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0])
>>> input_wsrq1ids['attention_mask'][0]
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
```
The attention_mask is all ones; however, the input_ids are padded with zeros, which does not seem to match the attention mask.
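Following the suggestion in the comments above, a minimal sketch that lets the tokenizer perform the padding itself so the mask lines up. It assumes the tokenizer and the d[0]/d[1] lists from this issue; max_length is an illustrative value and pad_to_max_length is the flag name in the 2.x API:
```
enc = tokenizer.batch_encode_plus(
    list(zip(d[0], d[1])),       # one (text, text_pair) tuple per example
    add_special_tokens=True,
    max_length=128,              # illustrative value
    pad_to_max_length=True,
    return_tensors='pt',
)
# enc['attention_mask'] now has 0s exactly where enc['input_ids'] holds pad ids
```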
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3245/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3245/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3244 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3244/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3244/comments | https://api.github.com/repos/huggingface/transformers/issues/3244/events | https://github.com/huggingface/transformers/issues/3244 | 579,751,004 | MDU6SXNzdWU1Nzk3NTEwMDQ= | 3,244 | Get word seperator char for tokenization | {
"login": "Ricocotam",
"id": 9447752,
"node_id": "MDQ6VXNlcjk0NDc3NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9447752?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ricocotam",
"html_url": "https://github.com/Ricocotam",
"followers_url": "https://api.github.com/users/Ricocotam/followers",
"following_url": "https://api.github.com/users/Ricocotam/following{/other_user}",
"gists_url": "https://api.github.com/users/Ricocotam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ricocotam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ricocotam/subscriptions",
"organizations_url": "https://api.github.com/users/Ricocotam/orgs",
"repos_url": "https://api.github.com/users/Ricocotam/repos",
"events_url": "https://api.github.com/users/Ricocotam/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ricocotam/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834052129,
"node_id": "MDU6TGFiZWwxODM0MDUyMTI5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/High-Level%20feature",
"name": "High-Level feature",
"color": "f7c9a3",
"default": false,
"description": ""
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I don't think you should delete the issue since it's kinda useful to have and not having it may be embarassing. But do as you want !",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,595 | 1,595 | NONE | null | # 🚀 Feature request
When using some tokenization models (e.g., CamemBERT), you can only find the char used to separate words after some investigation ([start](https://github.com/huggingface/transformers/blob/a4c75f149269099a98613f51b76cd0b579a109ee/src/transformers/tokenization_camembert.py#L274), [first jump](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_camembert.py#L27), [last jump](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_xlnet.py#L43)). Exposing it as a general, documented attribute seems worthwhile.
## Motivation
With this you can detect whether a token is a subword or the start of a word (and then use this info for masking). But this is inconsistent across models. After several hours I couldn't find this information for RoBERTa (or GPT-2) in the code. We can obviously determine this character experimentally, but that is not robust to new models or even different versions of a model.
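For illustration, a minimal sketch of the experimental approach (the marker characters below are assumptions based on the usual SentencePiece and byte-level BPE conventions; they are not exposed by the library, which is the point of this request):
```
# A sketch, assuming "▁" (U+2581) marks word starts in SentencePiece models
# (CamemBERT, XLNet) and "Ġ" (U+0120) does so in byte-level BPE models
# (GPT-2, RoBERTa); neither constant is exposed as a tokenizer attribute.
from transformers import CamembertTokenizer, GPT2Tokenizer

def starts_new_word(token, marker):
    # Tokens prefixed with the marker were preceded by whitespace,
    # i.e. they begin a new word instead of continuing a subword.
    return token.startswith(marker)

sp_tok = CamembertTokenizer.from_pretrained("camembert-base")
bpe_tok = GPT2Tokenizer.from_pretrained("gpt2")

print([(t, starts_new_word(t, "\u2581")) for t in sp_tok.tokenize("J'aime le camembert")])
print([(t, starts_new_word(t, "\u0120")) for t in bpe_tok.tokenize("I like camembert")])
```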
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3244/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3244/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3243 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3243/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3243/comments | https://api.github.com/repos/huggingface/transformers/issues/3243/events | https://github.com/huggingface/transformers/issues/3243 | 579,750,540 | MDU6SXNzdWU1Nzk3NTA1NDA= | 3,243 | seems TFBertForSequenceClassification cannot load tf1.x model? | {
"login": "fword",
"id": 2551601,
"node_id": "MDQ6VXNlcjI1NTE2MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2551601?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fword",
"html_url": "https://github.com/fword",
"followers_url": "https://api.github.com/users/fword/followers",
"following_url": "https://api.github.com/users/fword/following{/other_user}",
"gists_url": "https://api.github.com/users/fword/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fword/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fword/subscriptions",
"organizations_url": "https://api.github.com/users/fword/orgs",
"repos_url": "https://api.github.com/users/fword/repos",
"events_url": "https://api.github.com/users/fword/events{/privacy}",
"received_events_url": "https://api.github.com/users/fword/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"All of our TensorFlow models are TF2+ only."
] | 1,583 | 1,584 | 1,584 | NONE | null | For TFPreTrainedModel,
model.load_weights(resolved_archive_file, by_name=True)
is called, so by_name seems to always be true, while in "tensorflow_core/python/keras/engine/network.py" I see:
if save_format == 'tf':
  status = self._trackable_saver.restore(filepath)
  if by_name:
    raise NotImplementedError(
        'Weights may only be loaded based on topology into Models when '
        'loading TensorFlow-formatted weights (got by_name=True to '
        'load_weights).')
So, it will always raise NotImplementedError. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3243/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3243/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3242 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3242/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3242/comments | https://api.github.com/repos/huggingface/transformers/issues/3242/events | https://github.com/huggingface/transformers/pull/3242 | 579,722,755 | MDExOlB1bGxSZXF1ZXN0Mzg3MDY0OTU3 | 3,242 | Update examples/ner/run_ner.py | {
"login": "lifefeel",
"id": 38556,
"node_id": "MDQ6VXNlcjM4NTU2",
"avatar_url": "https://avatars.githubusercontent.com/u/38556?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lifefeel",
"html_url": "https://github.com/lifefeel",
"followers_url": "https://api.github.com/users/lifefeel/followers",
"following_url": "https://api.github.com/users/lifefeel/following{/other_user}",
"gists_url": "https://api.github.com/users/lifefeel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lifefeel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lifefeel/subscriptions",
"organizations_url": "https://api.github.com/users/lifefeel/orgs",
"repos_url": "https://api.github.com/users/lifefeel/repos",
"events_url": "https://api.github.com/users/lifefeel/events{/privacy}",
"received_events_url": "https://api.github.com/users/lifefeel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3242?src=pr&el=h1) Report\n> Merging [#3242](https://codecov.io/gh/huggingface/transformers/pull/3242?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a4c75f149269099a98613f51b76cd0b579a109ee?src=pr&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3242?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3242 +/- ##\n=========================================\n- Coverage 77.82% 77.8% -0.02% \n=========================================\n Files 98 98 \n Lines 16665 16665 \n=========================================\n- Hits 12970 12967 -3 \n- Misses 3695 3698 +3\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3242?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3242/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.42% <0%> (-0.42%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3242?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3242?src=pr&el=footer). Last update [a4c75f1...340f2a7](https://codecov.io/gh/huggingface/transformers/pull/3242?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I found that it is not suitable for guideline. \r\nClose this request."
] | 1,583 | 1,584 | 1,584 | CONTRIBUTOR | null | Update the example file by changing the name of AlbertForTokenClassification to AlbertForSequenceClassification. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3242/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3242/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3242",
"html_url": "https://github.com/huggingface/transformers/pull/3242",
"diff_url": "https://github.com/huggingface/transformers/pull/3242.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3242.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3241 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3241/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3241/comments | https://api.github.com/repos/huggingface/transformers/issues/3241/events | https://github.com/huggingface/transformers/pull/3241 | 579,692,911 | MDExOlB1bGxSZXF1ZXN0Mzg3MDQwMzkz | 3,241 | simplify polbert usage example with pipelines | {
"login": "kldarek",
"id": 15803781,
"node_id": "MDQ6VXNlcjE1ODAzNzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/15803781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kldarek",
"html_url": "https://github.com/kldarek",
"followers_url": "https://api.github.com/users/kldarek/followers",
"following_url": "https://api.github.com/users/kldarek/following{/other_user}",
"gists_url": "https://api.github.com/users/kldarek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kldarek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kldarek/subscriptions",
"organizations_url": "https://api.github.com/users/kldarek/orgs",
"repos_url": "https://api.github.com/users/kldarek/repos",
"events_url": "https://api.github.com/users/kldarek/events{/privacy}",
"received_events_url": "https://api.github.com/users/kldarek/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Squashed into #3248 ",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3241?src=pr&el=h1) Report\n> Merging [#3241](https://codecov.io/gh/huggingface/transformers/pull/3241?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a4c75f149269099a98613f51b76cd0b579a109ee?src=pr&el=desc) will **increase** coverage by `0.18%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3241?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3241 +/- ##\n==========================================\n+ Coverage 77.82% 78.01% +0.18% \n==========================================\n Files 98 98 \n Lines 16665 16665 \n==========================================\n+ Hits 12970 13001 +31 \n+ Misses 3695 3664 -31\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3241?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3241/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.56% <0%> (-0.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3241/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.37% <0%> (+5.9%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3241?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3241?src=pr&el=footer). Last update [a4c75f1...1fd6564](https://codecov.io/gh/huggingface/transformers/pull/3241?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,584 | 1,584 | CONTRIBUTOR | null | Indeed this is much simpler now! Pipelines look great :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3241/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3241/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3241",
"html_url": "https://github.com/huggingface/transformers/pull/3241",
"diff_url": "https://github.com/huggingface/transformers/pull/3241.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3241.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3240 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3240/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3240/comments | https://api.github.com/repos/huggingface/transformers/issues/3240/events | https://github.com/huggingface/transformers/pull/3240 | 579,687,742 | MDExOlB1bGxSZXF1ZXN0Mzg3MDM2MDI5 | 3,240 | Minor Bug Fix for Running Roberta on Glue | {
"login": "skarakulak",
"id": 8154783,
"node_id": "MDQ6VXNlcjgxNTQ3ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8154783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skarakulak",
"html_url": "https://github.com/skarakulak",
"followers_url": "https://api.github.com/users/skarakulak/followers",
"following_url": "https://api.github.com/users/skarakulak/following{/other_user}",
"gists_url": "https://api.github.com/users/skarakulak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skarakulak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skarakulak/subscriptions",
"organizations_url": "https://api.github.com/users/skarakulak/orgs",
"repos_url": "https://api.github.com/users/skarakulak/repos",
"events_url": "https://api.github.com/users/skarakulak/events{/privacy}",
"received_events_url": "https://api.github.com/users/skarakulak/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,583 | 1,584 | 1,584 | CONTRIBUTOR | null | Since `RobertaTokenizer` does not generate `return_type_ids`, running Glue with Roberta throws errors. This fix overwrites the default behaviour of the tokenizers, and forces them to generate `return_type_ids`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3240/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3240/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3240",
"html_url": "https://github.com/huggingface/transformers/pull/3240",
"diff_url": "https://github.com/huggingface/transformers/pull/3240.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3240.patch",
"merged_at": 1584634111000
} |
https://api.github.com/repos/huggingface/transformers/issues/3239 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3239/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3239/comments | https://api.github.com/repos/huggingface/transformers/issues/3239/events | https://github.com/huggingface/transformers/pull/3239 | 579,683,455 | MDExOlB1bGxSZXF1ZXN0Mzg3MDMyNDY4 | 3,239 | Minor Bug Fix for Running Roberta on Glue | {
"login": "skarakulak",
"id": 8154783,
"node_id": "MDQ6VXNlcjgxNTQ3ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8154783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skarakulak",
"html_url": "https://github.com/skarakulak",
"followers_url": "https://api.github.com/users/skarakulak/followers",
"following_url": "https://api.github.com/users/skarakulak/following{/other_user}",
"gists_url": "https://api.github.com/users/skarakulak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skarakulak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skarakulak/subscriptions",
"organizations_url": "https://api.github.com/users/skarakulak/orgs",
"repos_url": "https://api.github.com/users/skarakulak/repos",
"events_url": "https://api.github.com/users/skarakulak/events{/privacy}",
"received_events_url": "https://api.github.com/users/skarakulak/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | Since `RobertaTokenizer` does not generate `return_type_ids`, running Glue with Roberta throws errors. This fix overwrites the default behaviour of the tokenizers, and forces them to generate `return_type_ids`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3239/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3239/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3239",
"html_url": "https://github.com/huggingface/transformers/pull/3239",
"diff_url": "https://github.com/huggingface/transformers/pull/3239.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3239.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3238 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3238/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3238/comments | https://api.github.com/repos/huggingface/transformers/issues/3238/events | https://github.com/huggingface/transformers/pull/3238 | 579,650,593 | MDExOlB1bGxSZXF1ZXN0Mzg3MDA2MTQz | 3,238 | add output_past option to BERT class | {
"login": "asahi417",
"id": 17395980,
"node_id": "MDQ6VXNlcjE3Mzk1OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17395980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asahi417",
"html_url": "https://github.com/asahi417",
"followers_url": "https://api.github.com/users/asahi417/followers",
"following_url": "https://api.github.com/users/asahi417/following{/other_user}",
"gists_url": "https://api.github.com/users/asahi417/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asahi417/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asahi417/subscriptions",
"organizations_url": "https://api.github.com/users/asahi417/orgs",
"repos_url": "https://api.github.com/users/asahi417/repos",
"events_url": "https://api.github.com/users/asahi417/events{/privacy}",
"received_events_url": "https://api.github.com/users/asahi417/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,594 | 1,594 | NONE | null | I need the key-value present states, like the GPT class provides, for BERT as well (I'm testing a PPLM-like architecture but with a masked LM), and here I've added the option `output_past` to BERT to enable returning those statistics. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3238/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3238/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3238",
"html_url": "https://github.com/huggingface/transformers/pull/3238",
"diff_url": "https://github.com/huggingface/transformers/pull/3238.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3238.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3237 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3237/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3237/comments | https://api.github.com/repos/huggingface/transformers/issues/3237/events | https://github.com/huggingface/transformers/issues/3237 | 579,646,810 | MDU6SXNzdWU1Nzk2NDY4MTA= | 3,237 | How to encode a batch of sequence? | {
"login": "PosoSAgapo",
"id": 33200481,
"node_id": "MDQ6VXNlcjMzMjAwNDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/33200481?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PosoSAgapo",
"html_url": "https://github.com/PosoSAgapo",
"followers_url": "https://api.github.com/users/PosoSAgapo/followers",
"following_url": "https://api.github.com/users/PosoSAgapo/following{/other_user}",
"gists_url": "https://api.github.com/users/PosoSAgapo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PosoSAgapo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PosoSAgapo/subscriptions",
"organizations_url": "https://api.github.com/users/PosoSAgapo/orgs",
"repos_url": "https://api.github.com/users/PosoSAgapo/repos",
"events_url": "https://api.github.com/users/PosoSAgapo/events{/privacy}",
"received_events_url": "https://api.github.com/users/PosoSAgapo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"`batch_encode_plus` is the correct method :-) \r\n\r\n```\r\nfrom transformers import BertTokenizer\r\nbatch_input_str = ((\"Mary spends $20 on pizza\"), (\"She likes eating it\"), (\"The pizza was great\"))\r\ntok = BertTokenizer.from_pretrained('bert-base-uncased')\r\nprint(tok.batch_encode_plus(batch_input_str, pad_to_max_length=True))\r\n```"
] | 1,583 | 1,584 | 1,584 | NONE | null | Hi, I am trying to learn the transformers package.
I prepared the data in the following format:
`(("Mary spends $20 on pizza"), ("She likes eating it"), ("The pizza was great"))`
I saw methods like `tokenizer.encode`, `tokenizer.encode_plus` and `tokenizer.batch_encode_plus`. However, `tokenizer.encode` seems to only encode a single sentence.
When I input the data below, the answer it gives is this:
```
>>> d[0][0]
'John was writing lyrics for his new album'
>>> d[0][1]
'Franny did not particularly like all of the immigration happening'
>>> input_ids = torch.tensor(tokenizer.encode([d[0][0],d[0][1]]))
>>> input_ids
tensor([101, 100, 100, 102])
```
Obviously, this is not the right answer for the encoding.
When I tried the method `tokenizer.encode_plus`, it didn't work properly either, even though the documentation says:
> "text (str or List[str]) – The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method)"
It doesn't work even when I only input a single sentence:
```
>>> input_ids = torch.tensor(tokenizer.encode_plus(d[0][0]))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: Could not infer dtype of dict
```
And the method `tokenizer.batch_encode_plus` gives the same error message.
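For completeness, here is a minimal sketch of what I would expect to work (assuming `batch_encode_plus` with padding is the intended API):
```
# A sketch, assuming batch_encode_plus with pad_to_max_length=True is the
# intended batching API; it pads every sequence to the longest in the batch.
import torch
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
batch = ["Mary spends $20 on pizza", "She likes eating it", "The pizza was great"]
encoded = tokenizer.batch_encode_plus(batch, pad_to_max_length=True)
input_ids = torch.tensor(encoded["input_ids"])            # shape: (3, max_len)
attention_mask = torch.tensor(encoded["attention_mask"])  # 1 = token, 0 = pad
```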
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3237/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3237/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3236 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3236/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3236/comments | https://api.github.com/repos/huggingface/transformers/issues/3236/events | https://github.com/huggingface/transformers/pull/3236 | 579,630,038 | MDExOlB1bGxSZXF1ZXN0Mzg2OTg5Nzg1 | 3,236 | [WIP] Add BART for summarization training with CNN/DM using pytorch-lightning | {
"login": "andr-ec",
"id": 16169185,
"node_id": "MDQ6VXNlcjE2MTY5MTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/16169185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andr-ec",
"html_url": "https://github.com/andr-ec",
"followers_url": "https://api.github.com/users/andr-ec/followers",
"following_url": "https://api.github.com/users/andr-ec/following{/other_user}",
"gists_url": "https://api.github.com/users/andr-ec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andr-ec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andr-ec/subscriptions",
"organizations_url": "https://api.github.com/users/andr-ec/orgs",
"repos_url": "https://api.github.com/users/andr-ec/repos",
"events_url": "https://api.github.com/users/andr-ec/events{/privacy}",
"received_events_url": "https://api.github.com/users/andr-ec/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3236?src=pr&el=h1) Report\n> Merging [#3236](https://codecov.io/gh/huggingface/transformers/pull/3236?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9d4a01905fad4f5eed2e6c1037dea9877711427a&el=desc) will **not change** coverage by `%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3236?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3236 +/- ##\n=======================================\n Coverage 77.56% 77.56% \n=======================================\n Files 100 100 \n Lines 16970 16970 \n=======================================\n Hits 13162 13162 \n Misses 3808 3808 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3236?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3236?src=pr&el=footer). Last update [9d4a019...9d4a019](https://codecov.io/gh/huggingface/transformers/pull/3236?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Nice! @yjernite might be interested!",
"I made those requested changes. And yes I'm planning to run finetuning this weekend and share results. I only have access to a k80 so it'll take a while 🤷🏽♂️",
"This looks awesome. Let's coordinate with https://github.com/huggingface/transformers/pull/3290 as well to share whatever code is possible. ",
"@nateraw can you do a review of this PR as well?",
"@acarrera94 I will try to get this working this week. If you are in the pytorch-lightning open slack we can also chat a bit more about the design. ",
"@nateraw I've made all of those changes and it looks like #3290 has been merged, anything else that needs to change? Thanks!",
"It's blocked on me, I should be able to get to it tonight. ",
"New code looks great. Excited to try it out!",
"Thanks for sticking with it @acarrera I'm really impressed how concise this became. Next we can get some numbers. ",
"@acarrera94 `run_train.sh` is using 19GB on my system.\r\nDoes your system use less?\r\nI am also seeing no memory savings from adding `--fp16`. \r\nThanks!\r\n",
"@sshleifer I usually ran it using --max_seq_lengt=756. And that used less than 16gb of memory with a batch size of 4, so we might want to change that default. And I haven’t tried it using --fp16. That comes from BaseTransformer right? "
] | 1,583 | 1,585 | 1,585 | CONTRIBUTOR | null | This pull request adds to the example for BART for summarization. I used the [example for NER](https://github.com/huggingface/transformers/tree/master/examples/ner) using pytorch-lightning as guidance. This example will train on CNN/DM and evaluate, and get decent results, though I haven't trained it on the full dataset just yet. I'm sure there are better defaults for the hyperparams but these seem to work.
I based this PR on the code I wrote in this [colab](https://colab.research.google.com/drive/1C4jEf0fnLiz6Xdx4TDz1OoO4BRCjCx1m).
This would hopefully close https://github.com/huggingface/transformers/issues/3004
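The core training step looks roughly like this (a simplified sketch with hypothetical names; the actual code lives in the PR diff):
```
# Sketch of the objective: teacher-force the decoder on the shifted summary
# and compute cross-entropy against the next-token targets.
import torch.nn.functional as F

def summarization_step(model, source_ids, source_mask, target_ids, pad_token_id):
    decoder_input_ids = target_ids[:, :-1].contiguous()  # shifted right
    labels = target_ids[:, 1:].clone()                   # next-token targets
    lm_logits = model(source_ids, attention_mask=source_mask,
                      decoder_input_ids=decoder_input_ids)[0]
    return F.cross_entropy(lm_logits.view(-1, lm_logits.size(-1)),
                           labels.view(-1), ignore_index=pad_token_id)
```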
## TODO
- [x] Be able to train the model on a GPU.
- [x] remove unused args
- [x] add test step and save results.
Happy to hear any feedback! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3236/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3236/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3236",
"html_url": "https://github.com/huggingface/transformers/pull/3236",
"diff_url": "https://github.com/huggingface/transformers/pull/3236.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3236.patch",
"merged_at": 1585098024000
} |
https://api.github.com/repos/huggingface/transformers/issues/3235 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3235/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3235/comments | https://api.github.com/repos/huggingface/transformers/issues/3235/events | https://github.com/huggingface/transformers/pull/3235 | 579,573,974 | MDExOlB1bGxSZXF1ZXN0Mzg2OTQ0MTcy | 3,235 | Directories not found when saving checkpoints | {
"login": "nguyenhoan1988",
"id": 2057220,
"node_id": "MDQ6VXNlcjIwNTcyMjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2057220?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nguyenhoan1988",
"html_url": "https://github.com/nguyenhoan1988",
"followers_url": "https://api.github.com/users/nguyenhoan1988/followers",
"following_url": "https://api.github.com/users/nguyenhoan1988/following{/other_user}",
"gists_url": "https://api.github.com/users/nguyenhoan1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nguyenhoan1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nguyenhoan1988/subscriptions",
"organizations_url": "https://api.github.com/users/nguyenhoan1988/orgs",
"repos_url": "https://api.github.com/users/nguyenhoan1988/repos",
"events_url": "https://api.github.com/users/nguyenhoan1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/nguyenhoan1988/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Do you mind making sure the code quality test runs before we merge? You can see how to do that in the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,594 | 1,594 | NONE | null | `_rotate_checkpoints` deletes some checkpoint directories, which causes a "directories not found" error | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3235/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3235/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3235",
"html_url": "https://github.com/huggingface/transformers/pull/3235",
"diff_url": "https://github.com/huggingface/transformers/pull/3235.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3235.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3234 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3234/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3234/comments | https://api.github.com/repos/huggingface/transformers/issues/3234/events | https://github.com/huggingface/transformers/pull/3234 | 579,568,049 | MDExOlB1bGxSZXF1ZXN0Mzg2OTM5MTIy | 3,234 | [model_cards] 🇹🇷 Add new (cased) DistilBERTurk model | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Also cc @VictorSanh for the model distillation script.\r\n\r\nThanks @stefan-it this is awesome",
"I'm supposed to use the same line-breaking options as GitHub for markdown formatting (using marked.js), however this still seems to not render like on GitHub: https://huggingface.co/dbmdz/distilbert-base-turkish-cased\r\n\r\nwill need to investigate."
] | 1,583 | 1,583 | 1,583 | COLLABORATOR | null | Hi,
this PR adds a new distilled BERT model for Turkish: DistilBERTurk 🤗
It was trained with the official Hugging Face [implementation](https://github.com/huggingface/transformers/tree/master/examples/distillation) for model distillation. It uses 7GB of the original BERTurk training data, with the cased BERTurk model as the teacher.
DistilBERTurk was trained for 5 days on 4 RTX 2080 Ti GPUs.
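Usage is straightforward (a quick sketch; the model id matches the hosted model page):
```
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/distilbert-base-turkish-cased")
model = AutoModel.from_pretrained("dbmdz/distilbert-base-turkish-cased")
```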
Performance is really promising: for PoS tagging the model outperforms the 24-layer XLM-RoBERTa and is only 0.69% behind the teacher model. For NER there's a performance diff of 0.44% compared to mBERT and 1.68% compared to the teacher model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3234/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3234/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3234",
"html_url": "https://github.com/huggingface/transformers/pull/3234",
"diff_url": "https://github.com/huggingface/transformers/pull/3234.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3234.patch",
"merged_at": 1583966439000
} |
https://api.github.com/repos/huggingface/transformers/issues/3233 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3233/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3233/comments | https://api.github.com/repos/huggingface/transformers/issues/3233/events | https://github.com/huggingface/transformers/pull/3233 | 579,559,306 | MDExOlB1bGxSZXF1ZXN0Mzg2OTMxODMy | 3,233 | Bart: update example for #3140 compatibility | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Had to temporarily pause the self-hosted CI runner while I debug while it's been failing, @sshleifer "
] | 1,583 | 1,584 | 1,584 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3233/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3233/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3233",
"html_url": "https://github.com/huggingface/transformers/pull/3233",
"diff_url": "https://github.com/huggingface/transformers/pull/3233.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3233.patch",
"merged_at": 1584023797000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3232 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3232/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3232/comments | https://api.github.com/repos/huggingface/transformers/issues/3232/events | https://github.com/huggingface/transformers/issues/3232 | 579,546,895 | MDU6SXNzdWU1Nzk1NDY4OTU= | 3,232 | [TorchHub]Repo's layout is not compatible with TorchHub anymore since 2.0 | {
"login": "chenliu0831",
"id": 1504463,
"node_id": "MDQ6VXNlcjE1MDQ0NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1504463?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chenliu0831",
"html_url": "https://github.com/chenliu0831",
"followers_url": "https://api.github.com/users/chenliu0831/followers",
"following_url": "https://api.github.com/users/chenliu0831/following{/other_user}",
"gists_url": "https://api.github.com/users/chenliu0831/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chenliu0831/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chenliu0831/subscriptions",
"organizations_url": "https://api.github.com/users/chenliu0831/orgs",
"repos_url": "https://api.github.com/users/chenliu0831/repos",
"events_url": "https://api.github.com/users/chenliu0831/events{/privacy}",
"received_events_url": "https://api.github.com/users/chenliu0831/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Any plan to fix this?",
"I missed that issue, but this was fixed a couple of weeks ago, and it's even covered by CI now: https://github.com/huggingface/transformers/blob/master/.github/workflows/github-torch-hub.yml"
] | 1,583 | 1,589 | 1,589 | NONE | null | # 🐛 Bug
When I try loading a model/tokenizer from the [pytorch hub](https://pytorch.org/hub/huggingface_pytorch-transformers/) page, the hub loading code is not working anymore. Before transformers 2.0.0, the same loading code worked.
On a quick look, I believe it's related to the repo's folder layout since 2.0, where the `transformers` module was moved inside `src`, but `hubconf.py` still assumes `transformers` exists at the same level, so we get a module-not-found error. For context, torch hub inserts the repo root directory into `sys.path` to enable the import.
## To reproduce
Run
```
import torch
tokenizer = torch.hub.load('huggingface/transformers:v2.5.0', 'tokenizer', 'bert-base-cased')
```
Stack trace:
```
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-5-155f4aa294f1> in <module>
1 import torch
----> 2 tokenizer = torch.hub.load('huggingface/transformers:v2.5.0', 'tokenizer', 'bert-base-cased')
3
4 text_1 = "Who was Jim Henson ?"
5 text_2 = "Jim Henson was a puppeteer"
~/miniconda3/envs/poc/lib/python3.7/site-packages/torch/hub.py in load(github, model, *args, **kwargs)
354 sys.path.insert(0, repo_dir)
355
--> 356 hub_module = import_module(MODULE_HUBCONF, repo_dir + '/' + MODULE_HUBCONF)
357
358 entry = _load_entry_from_hubconf(hub_module, model)
~/miniconda3/envs/poc/lib/python3.7/site-packages/torch/hub.py in import_module(name, path)
70 spec = importlib.util.spec_from_file_location(name, path)
71 module = importlib.util.module_from_spec(spec)
---> 72 spec.loader.exec_module(module)
73 return module
74 elif sys.version_info >= (3, 0):
~/miniconda3/envs/poc/lib/python3.7/importlib/_bootstrap_external.py in exec_module(self, module)
~/miniconda3/envs/poc/lib/python3.7/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)
~/.cache/torch/hub/huggingface_transformers_v2.5.0/hubconf.py in <module>
----> 1 from transformers import (
2 AutoConfig,
3 AutoModel,
4 AutoModelForQuestionAnswering,
5 AutoModelForSequenceClassification,
ModuleNotFoundError: No module named 'transformers'
```
## Expected behavior
This was working before 2.0.0.
```
import torch
tokenizer = torch.hub.load('huggingface/transformers:1.2.0', 'tokenizer', 'bert-base-cased')
```
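In the meantime, a possible workaround (a sketch that bypasses torch.hub and uses the pip-installed library, which exposes the same pretrained weights):
```
# Workaround sketch: pip install transformers, then load directly instead of
# going through torch.hub and the repo's hubconf.py.
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
```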
## Environment info
- `transformers` version: >2.0.0
- Platform: all
- Python version: 3.7
- PyTorch version (GPU?): 1.3.1
- Tensorflow version (GPU?): N/A
- Using GPU in script?: N/A
- Using distributed or parallel set-up in script?: N/A
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3232/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3232/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3231 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3231/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3231/comments | https://api.github.com/repos/huggingface/transformers/issues/3231/events | https://github.com/huggingface/transformers/issues/3231 | 579,455,241 | MDU6SXNzdWU1Nzk0NTUyNDE= | 3,231 | train dev test split with BERT | {
"login": "ksatvat",
"id": 43620635,
"node_id": "MDQ6VXNlcjQzNjIwNjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/43620635?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ksatvat",
"html_url": "https://github.com/ksatvat",
"followers_url": "https://api.github.com/users/ksatvat/followers",
"following_url": "https://api.github.com/users/ksatvat/following{/other_user}",
"gists_url": "https://api.github.com/users/ksatvat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ksatvat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ksatvat/subscriptions",
"organizations_url": "https://api.github.com/users/ksatvat/orgs",
"repos_url": "https://api.github.com/users/ksatvat/repos",
"events_url": "https://api.github.com/users/ksatvat/events{/privacy}",
"received_events_url": "https://api.github.com/users/ksatvat/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You can write or add your own version easily..",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,589 | 1,589 | NONE | null | Does `run_multiple_choice.py` work on train dev test splits?
I need to run BERT on 3 labeled datasets. Train it on my training set, validate it on my validation set (tune hyperparameters and calculate loss), and evaluate it on my test set (report performance). I finally want to do prediction on a fourth unlabeled dataset.
I am wondering which of the scripts in your repository supports these 3 modes. Thank you. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3231/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3231/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3230 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3230/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3230/comments | https://api.github.com/repos/huggingface/transformers/issues/3230/events | https://github.com/huggingface/transformers/pull/3230 | 579,375,007 | MDExOlB1bGxSZXF1ZXN0Mzg2NzgwMTE1 | 3,230 | Create README.md for bio+discharge summary BERT | {
"login": "EmilyAlsentzer",
"id": 7334040,
"node_id": "MDQ6VXNlcjczMzQwNDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7334040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EmilyAlsentzer",
"html_url": "https://github.com/EmilyAlsentzer",
"followers_url": "https://api.github.com/users/EmilyAlsentzer/followers",
"following_url": "https://api.github.com/users/EmilyAlsentzer/following{/other_user}",
"gists_url": "https://api.github.com/users/EmilyAlsentzer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EmilyAlsentzer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EmilyAlsentzer/subscriptions",
"organizations_url": "https://api.github.com/users/EmilyAlsentzer/orgs",
"repos_url": "https://api.github.com/users/EmilyAlsentzer/repos",
"events_url": "https://api.github.com/users/EmilyAlsentzer/events{/privacy}",
"received_events_url": "https://api.github.com/users/EmilyAlsentzer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3230?src=pr&el=h1) Report\n> Merging [#3230](https://codecov.io/gh/huggingface/transformers/pull/3230?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e43afb1bb87f01470d0bd16cac2d2aac50a76d7a?src=pr&el=desc) will **increase** coverage by `0.08%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3230?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3230 +/- ##\n==========================================\n+ Coverage 77.94% 78.02% +0.08% \n==========================================\n Files 98 98 \n Lines 16665 16665 \n==========================================\n+ Hits 12989 13003 +14 \n+ Misses 3676 3662 -14\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3230?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3230/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.84% <0%> (+0.27%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3230/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.37% <0%> (+2.14%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3230?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3230?src=pr&el=footer). Last update [e43afb1...2ed661c](https://codecov.io/gh/huggingface/transformers/pull/3230?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | Add Bio+ Discharge Summary BERT from Publicly Available Clinical BERT Embeddings | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3230/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3230/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3230",
"html_url": "https://github.com/huggingface/transformers/pull/3230",
"diff_url": "https://github.com/huggingface/transformers/pull/3230.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3230.patch",
"merged_at": 1583944620000
} |
https://api.github.com/repos/huggingface/transformers/issues/3229 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3229/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3229/comments | https://api.github.com/repos/huggingface/transformers/issues/3229/events | https://github.com/huggingface/transformers/pull/3229 | 579,370,967 | MDExOlB1bGxSZXF1ZXN0Mzg2Nzc2NzUy | 3,229 | Add Bio+ Clinical BERT model card | {
"login": "EmilyAlsentzer",
"id": 7334040,
"node_id": "MDQ6VXNlcjczMzQwNDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7334040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EmilyAlsentzer",
"html_url": "https://github.com/EmilyAlsentzer",
"followers_url": "https://api.github.com/users/EmilyAlsentzer/followers",
"following_url": "https://api.github.com/users/EmilyAlsentzer/following{/other_user}",
"gists_url": "https://api.github.com/users/EmilyAlsentzer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EmilyAlsentzer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EmilyAlsentzer/subscriptions",
"organizations_url": "https://api.github.com/users/EmilyAlsentzer/orgs",
"repos_url": "https://api.github.com/users/EmilyAlsentzer/repos",
"events_url": "https://api.github.com/users/EmilyAlsentzer/events{/privacy}",
"received_events_url": "https://api.github.com/users/EmilyAlsentzer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | Adding Bio+ Clinical BERT model from Publicly Available Clinical BERT Embeddings paper | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3229/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3229/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3229",
"html_url": "https://github.com/huggingface/transformers/pull/3229",
"diff_url": "https://github.com/huggingface/transformers/pull/3229.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3229.patch",
"merged_at": 1583944594000
} |
https://api.github.com/repos/huggingface/transformers/issues/3228 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3228/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3228/comments | https://api.github.com/repos/huggingface/transformers/issues/3228/events | https://github.com/huggingface/transformers/pull/3228 | 579,321,478 | MDExOlB1bGxSZXF1ZXN0Mzg2NzM2NDQ2 | 3,228 | Support T5 Generation | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3228?src=pr&el=h1) Report\n> Merging [#3228](https://codecov.io/gh/huggingface/transformers/pull/3228?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/68ef0a111f8740f06ca4e5a00374ec4e2adb0a6d&el=desc) will **increase** coverage by `0.12%`.\n> The diff coverage is `97.56%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3228?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3228 +/- ##\n==========================================\n+ Coverage 77.48% 77.60% +0.12% \n==========================================\n Files 99 99 \n Lines 16799 16828 +29 \n==========================================\n+ Hits 13017 13060 +43 \n+ Misses 3782 3768 -14 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3228?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/3228/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.91% <ø> (ø)` | |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3228/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.55% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3228/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `93.28% <ø> (+3.35%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3228/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `96.17% <94.20%> (-0.37%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/3228/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `75.47% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3228/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.26% <100.00%> (+<0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3228/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `81.20% <100.00%> (+0.10%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/3228/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `68.62% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3228/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.82% <100.00%> (+1.60%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3228/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.99% <100.00%> (+0.28%)` | :arrow_up: |\n| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/3228/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3228?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3228?src=pr&el=footer). 
Last update [68ef0a1...62cf76f](https://codecov.io/gh/huggingface/transformers/pull/3228?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"> I noticed that the T5 tokenizer does not have a BOS token and since we require at the moment to use a BOS token for encoder-decoder generation, I set the bos_token_id to the pad_token_id which is probably not the best way to do it.\r\n\r\nThis is actually the correct thing to do, see e.g. https://github.com/tensorflow/mesh/blob/master/mesh_tensorflow/transformer/transformer.py#L1744",
"What is currently happening in `model.generation()` for encoder-decoder models (Bart and T5) is the following: \r\n\r\nThe `input_ids` variable of the generate() is given to the variable `encoder_input_ids`, which is then **always** put into the forward() of `BartLMHeadModel` and `T5LMHeadModel`. The `input_ids` variable is then initialized with the `BOS` token and auto-regressively updated. \r\n\r\nAfter the first step the `encoder_output_ids` are calculated and handed to the `past` variable, which from then on is also **always** put into forward() of `BartLMHeadModel` and `T5LMHeadModel`. \r\n\r\nAt the moment, the `encoder_input_ids` are always put in the forward() of Bart and T5 and then ignored there. This is probably not the cleanest way to do it. Other possibilities might be:\r\n1. calculate the encoder_outputs one time before going into the auto-regressive loop and setting them to the `past` variable already on the first step. \r\n2. leave it as it is now, but set `encoder_input_ids` to None in `prepare_inputs_for_generation()`\r\n\r\nOr other ideas? \r\n\r\nI think option 1 is clean - I think Bart and T5 let's you calculate only the encoder_outputs @sshleifer , @craffel no?\r\n\r\n@craffel @thomwolf @sshleifer",
"> calculate the encoder_outputs one time before going into the auto-regressive loop and setting them to the past variable already on the first step.\r\n\r\nThis definitely seems best - the model should compute the encoder outputs and then treat them as fixed (to be attended to) as the decoder generates.",
"UPDATE 2: \r\n\r\nI'm pretty happy with the current version now. \r\n\r\nTo summarize:\r\nThis PR allows `generate()` for TF & PT T5Model.\r\n\r\nThree important changes to mention:\r\n\r\n1. remove the if `decoder_start_token_id` != `bos_token_id` statement in `generate()` to keep generate() generic (for more explanation, see comments above)\r\n2. remove `encoder()` abstraction method in `Bart` and `T5` and replace by `get_encoder()`. \r\n3. **IMPORTANT**: Move responsablitiy to transform `input_ids` to `inputs_embeds` from `T5ForConditionalGeneration.call()` to `encoder.call()` and `decoder.call()` -> Reasons:\r\n a) This way, the encoder is a complete model which can transform `input_ids` to `input_embeds`\r\n b) cleaner code in `T5ForConditionalGeneration.call()`\r\n c) Bart had this behavior already implemented - make API more similar\r\n NOTE: this led to some problems with TF scopes, but thanks to @mfuntowicz and @craffel is solved now by injecting the correct absolute scope to the call method and wrapping the Embedding layer (see comments above). This will issue will also be important when translating Bart to T5Bart @sshleifer \r\n\r\n4. `T5Models.call()` arguments are renamed to `BartModel` argument names.\r\n\r\nT5 produces same good translation results (same as results mentioned on the top) and Bart tests all pass. \r\n\r\n@craffel @thomwolf @sshleifer ",
"when i use decoder of bart to call generate(), there's mistake of 'has no attribute 'get_encoder',and the decoder is a tensorRT engine Inherited from GenerationMixin.\r\n\r\n\r\nIs any one knows how to fix it? very many thanks!\r\n@patrickvonplaten @craffel @codecov-io @jplu ",
"> when i use decoder of bart to call generate(), there's mistake of 'has no attribute 'get_encoder',and the decoder is a tensorRT engine Inherited from GenerationMixin. \r\n> \r\n> Is any one knows how to fix it? very many thanks! @patrickvonplaten @craffel @codecov-io @jplu\r\n\r\n@yuanhuachao - could you please open a new issue for this?"
] | 1,583 | 1,642 | 1,584 | MEMBER | null | In this PR some first commits are added to make T5 work for generation.
`T5WithLMHeadModel.forward()` has a special API due to its encoder-decoder nature.
This is why we need to add a `prepare_inputs_for_generation()` in `t5_modeling_utils.py` to
correctly prepare T5's inputs for generation.
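Conceptually, the hook just repackages the state that `generate()` tracks into the keyword arguments an encoder-decoder forward expects. A simplified sketch for illustration (an assumption, not the exact code added in this PR):
```
# Rough sketch only; names are simplified, see the actual diff for the real version.
def prepare_inputs_for_generation(self, input_ids, past=None, attention_mask=None, **kwargs):
    # after the first decoding step, `past` carries the cached encoder outputs
    return {
        "decoder_input_ids": input_ids,  # tokens generated so far
        "encoder_outputs": past,         # None on the very first step
        "attention_mask": attention_mask,
    }
```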
A simple translation example seems to give reasonable results (same results for TF):
```
from transformers import T5WithLMHeadModel, T5Tokenizer

model = T5WithLMHeadModel.from_pretrained('t5-base')
tok = T5Tokenizer.from_pretrained('t5-base')
text = "translate English to German: How old are you?"
input_ids = tok.encode(text, return_tensors='pt')
outputs = model.generate(input_ids, bos_token_id=tok.pad_token_id, max_length=22, num_beams=4, do_sample=False, early_stopping=True)
print(tok.decode(outputs[0], skip_special_tokens=True))
# prints:
# Wie alt bist du?st du?st du?st
```
UPDATE:
Updated generate() in both TF and PT to compute the `encoder_outputs` only once for `encoder-decoder` models as discussed below.
Tests `RUN_SLOW=1` for `test_modeling_bart.py`, `test_modeling_gpt2.py` and `test_modeling_tf_gpt2.py` all pass.
#### **FUTURE PR**:
- [ ] add a generation integration test for T5 in PT and TF (could be similar to what is done in e.g. …), OR better, compare numbers to the original T5 model numbers @craffel.
Good for me to merge!
@thomwolf @sshleifer @craffel | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3228/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3228/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3228",
"html_url": "https://github.com/huggingface/transformers/pull/3228",
"diff_url": "https://github.com/huggingface/transformers/pull/3228.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3228.patch",
"merged_at": 1584656304000
} |
https://api.github.com/repos/huggingface/transformers/issues/3227 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3227/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3227/comments | https://api.github.com/repos/huggingface/transformers/issues/3227/events | https://github.com/huggingface/transformers/issues/3227 | 579,302,501 | MDU6SXNzdWU1NzkzMDI1MDE= | 3,227 | An Error report about pipeline | {
"login": "SizhaoXu",
"id": 50722884,
"node_id": "MDQ6VXNlcjUwNzIyODg0",
"avatar_url": "https://avatars.githubusercontent.com/u/50722884?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SizhaoXu",
"html_url": "https://github.com/SizhaoXu",
"followers_url": "https://api.github.com/users/SizhaoXu/followers",
"following_url": "https://api.github.com/users/SizhaoXu/following{/other_user}",
"gists_url": "https://api.github.com/users/SizhaoXu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SizhaoXu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SizhaoXu/subscriptions",
"organizations_url": "https://api.github.com/users/SizhaoXu/orgs",
"repos_url": "https://api.github.com/users/SizhaoXu/repos",
"events_url": "https://api.github.com/users/SizhaoXu/events{/privacy}",
"received_events_url": "https://api.github.com/users/SizhaoXu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
},
{
"id": 1843377584,
"node_id": "MDU6TGFiZWwxODQzMzc3NTg0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Version%20mismatch",
"name": "Version mismatch",
"color": "ddea7c",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"I have this same issue, but have no problems running:\r\n\r\nnlp = pipeline(\"question-answering\")\r\n\r\n\r\nNote: To install the library, I had to install tokenizers version 0.6.0 separately, git clone the transformers repo and edit the setup.py file before installing as per @dafraile's answer for issue: https://github.com/huggingface/transformers/issues/2831\r\n\r\nUpdate: This error was fixed when I installed tokenizers==0.5.2",
"I sadly have this issue too with the newest transformers 2.6.0 version.\r\n\r\nTokenizers is at version 0.5.2. But newest version of tokenizers sadly also doesn't work.\r\n\r\nAnd solutions to fix this issue?",
"I have the same issue here. I first ran with my own tokenizer, but it failed, and then I tried to run the 03-pipelines.ipynb code with QnA example and I get the following error code.\r\n\r\nEnvironment:\r\ntensorflow==2.0.0\r\ntensorflow-estimator==2.0.1\r\ntensorflow-gpu==2.0.0\r\ntorch==1.4.0\r\ntransformers==2.5.1\r\ntokenizers==0.6.0\r\n\r\nCode that I ran:\r\nnlp_qa = pipeline('question-answering')\r\nnlp_qa(context='Hugging Face is a French company based in New-York.', question='Where is based Hugging Face ?')\r\n\r\nError code:\r\n\r\nHBox(children=(FloatProgress(value=0.0, description='Downloading', max=230.0, style=ProgressStyle(description_…\r\n\r\nconvert squad examples to features: 0%| | 0/1 [00:00<?, ?it/s]\r\n---------------------------------------------------------------------------\r\nRemoteTraceback Traceback (most recent call last)\r\nRemoteTraceback: \r\n\"\"\"\r\nTraceback (most recent call last):\r\n File \"/home/brandon/anaconda3/envs/transformers/lib/python3.7/multiprocessing/pool.py\", line 121, in worker\r\n result = (True, func(*args, **kwds))\r\n File \"/home/brandon/anaconda3/envs/transformers/lib/python3.7/multiprocessing/pool.py\", line 44, in mapstar\r\n return list(map(*args))\r\n File \"/home/brandon/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/data/processors/squad.py\", line 198, in squad_convert_example_to_features\r\n p_mask = np.array(span[\"token_type_ids\"])\r\nKeyError: 'token_type_ids'\r\n\"\"\"\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nKeyError Traceback (most recent call last)\r\n<ipython-input-6-95614263b54d> in <module>()\r\n 1 nlp_qa = pipeline('question-answering')\r\n----> 2 nlp_qa(context='Hugging Face is a French company based in New-York.', question='Where is based Hugging Face ?')\r\n\r\n~/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, *texts, **kwargs)\r\n 968 False,\r\n 969 )\r\n--> 970 for example in examples\r\n 971 ]\r\n 972 all_answers = []\r\n\r\n~/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/pipelines.py in <listcomp>(.0)\r\n 968 False,\r\n 969 )\r\n--> 970 for example in examples\r\n 971 ]\r\n 972 all_answers = []\r\n\r\n~/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/data/processors/squad.py in squad_convert_examples_to_features(examples, tokenizer, max_seq_length, doc_stride, max_query_length, is_training, return_dataset, threads)\r\n 314 p.imap(annotate_, examples, chunksize=32),\r\n 315 total=len(examples),\r\n--> 316 desc=\"convert squad examples to features\",\r\n 317 )\r\n 318 )\r\n\r\n~/anaconda3/envs/transformers/lib/python3.7/site-packages/tqdm/std.py in __iter__(self)\r\n 1106 fp_write=getattr(self.fp, 'write', sys.stderr.write))\r\n 1107 \r\n-> 1108 for obj in iterable:\r\n 1109 yield obj\r\n 1110 # Update and possibly print the progressbar.\r\n\r\n~/anaconda3/envs/transformers/lib/python3.7/multiprocessing/pool.py in <genexpr>(.0)\r\n 323 result._set_length\r\n 324 ))\r\n--> 325 return (item for chunk in result for item in chunk)\r\n 326 \r\n 327 def imap_unordered(self, func, iterable, chunksize=1):\r\n\r\n~/anaconda3/envs/transformers/lib/python3.7/multiprocessing/pool.py in next(self, timeout)\r\n 746 if success:\r\n 747 return value\r\n--> 748 raise value\r\n 749 \r\n 750 __next__ = next # XXX\r\n\r\nKeyError: 'token_type_ids'\r\n",
"Any help would be greatly appreciated!",
"use :\r\npip install transformers==2.5.1\r\ninstead of :\r\npip install transformers",
"Thank you @paras55. your solution worked for me!",
"Installing `v2.7.0` should work as well.",
"2.7.0 fails with the same error (at least with tokenizers==0.5.2)"
] | 1,583 | 1,585 | 1,585 | NONE | null | # 🐛 Bug
## Information
This may be an easy question, but it has been bothering me all day.
When I run the code:
```
from transformers import pipeline

nlp = pipeline("question-answering")
```
it always tells me:
```
Couldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-cased-distilled-squad-modelcard.json' to download model card file.
Creating an empty model card.
```
If I ignore the warning and continue to run the rest of the code:
```
nlp({
    'question': 'What is the name of the repository ?',
    'context': 'Pipeline have been included in the huggingface/transformers repository'
})
```
the error `KeyError: 'token_type_ids'` appears. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3227/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3227/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3226 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3226/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3226/comments | https://api.github.com/repos/huggingface/transformers/issues/3226/events | https://github.com/huggingface/transformers/issues/3226 | 579,272,442 | MDU6SXNzdWU1NzkyNzI0NDI= | 3,226 | Strange behaviour after using BertTokenizer.add_tokens() | {
"login": "Saltychtao",
"id": 9932507,
"node_id": "MDQ6VXNlcjk5MzI1MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9932507?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saltychtao",
"html_url": "https://github.com/Saltychtao",
"followers_url": "https://api.github.com/users/Saltychtao/followers",
"following_url": "https://api.github.com/users/Saltychtao/following{/other_user}",
"gists_url": "https://api.github.com/users/Saltychtao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Saltychtao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Saltychtao/subscriptions",
"organizations_url": "https://api.github.com/users/Saltychtao/orgs",
"repos_url": "https://api.github.com/users/Saltychtao/repos",
"events_url": "https://api.github.com/users/Saltychtao/events{/privacy}",
"received_events_url": "https://api.github.com/users/Saltychtao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"No, this is expected behavior. If it was returning `['inv', 'ol', 've']`, then the model would identify each token as being a beginning of word token, whereas the last two are actually part of words following a beginning of a word.",
"Thanks for your response. But since both 'involve' and `['inv', 'ol', 've']` are in the vocabulary, shouldn't 'involve' be kept unsplit instead of split into subword? I am expecting the output to be `'involve'` instead of `['inv','##ol','##ve']`."
] | 1,583 | 1,584 | 1,584 | NONE | null | When a word is in the original BERT vocabulary, it is not split. But after adding several of its subtokens, it gets split into pieces, which seems inconsistent with the longest-match-first algorithm.
To reproduce:
```
import transformers
tokenizer = transformers.BertTokenizer.from_pretrained('bert-base-uncased')
tokenizer.tokenize('involve')
tokenizer.add_tokens(['inv','ol','ve'])
tokenizer.tokenize('involve')
```
The first `tokenize` call returns `['involve']`, and the second (after `add_tokens`) returns `['inv','##ol','##ve']`.
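Continuing from the snippet above, a quick check (assuming `BertTokenizer`'s `vocab` dict and the `added_tokens_encoder` mapping) shows that both the whole word and the added pieces are known to the tokenizer:
```
# Assumes the `tokenizer` from the snippet above is still in scope.
print('involve' in tokenizer.vocab)             # True, in the original vocabulary
print('inv' in tokenizer.added_tokens_encoder)  # True, added via add_tokens
print(tokenizer.tokenize('involve'))            # ['inv', '##ol', '##ve'] after add_tokens
```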
Is this behaviour expected or is it a bug? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3226/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3226/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3225 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3225/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3225/comments | https://api.github.com/repos/huggingface/transformers/issues/3225/events | https://github.com/huggingface/transformers/pull/3225 | 579,271,255 | MDExOlB1bGxSZXF1ZXN0Mzg2Njk1MDA0 | 3,225 | Complete merge Seq-2-Seq generation into default generation | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3225?src=pr&el=h1) Report\n> Merging [#3225](https://codecov.io/gh/huggingface/transformers/pull/3225?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2e81b9d8d76a4d41a13f74eb5e0f4a65d8143cab?src=pr&el=desc) will **increase** coverage by `0.1%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3225?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3225 +/- ##\n=========================================\n+ Coverage 77.93% 78.03% +0.1% \n=========================================\n Files 98 98 \n Lines 16666 16668 +2 \n=========================================\n+ Hits 12988 13007 +19 \n+ Misses 3678 3661 -17\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3225?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3225/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.86% <100%> (+0.15%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3225/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.55% <0%> (+2.86%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3225?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3225?src=pr&el=footer). Last update [2e81b9d...6a82f77](https://codecov.io/gh/huggingface/transformers/pull/3225?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Another issue that came up today from @yjernite, Bart does not support `do_sample=True` before 3140 this was clear and now it is :(",
"`Bart.generate()` with `do_sample=True` does not throw any errors. If it should not be done this way then we can just write in the docs that `do_sample` should be set to `False` and show an example. I don't see a problem here. We could think about a `Bart.summarize()` that calls `Bart.generate()` with the correct parameters if it's just for the prettier API. ",
"I prefer option #3, it looks like very little code and the best metrics.\r\nCould you also fix the kwarg in `examples/summarization/bart/evaluate_cnn.py`?",
"> I prefer option #3, it looks like very little code and the best metrics.\r\n> Could you also fix the kwarg in `examples/summarization/bart/evaluate_cnn.py`?\r\n\r\ndone :-) ",
"Linked to PR: https://github.com/huggingface/transformers/pull/3264 \r\nMight need to fix eos_token_id conflicts when rebasing/merging.",
"This one is ok for me.\r\nIs it currently option 3 which is implemented?\r\n\r\nAlso, I guess we could default to `do_sample==False` in `generate()`. Seems like the default expectation from the user is simple greedy decoding to me.",
"Yeah currently option 3 is implemented. \r\n`do_sample` used to default to `False`, but was changed to `True` by @LysandreJik (can't find the issue/PR anymore :-/) Does not matter too much for me what is chosen, just would need to update some tests in `modeling_utils.py`",
"Yeah I'd love to merge this! Having trouble connecting to brutasse to run rouge, but afaict it will be the same as pre 3140 :) ",
"Good to merge for me! \r\nchanging `do_sample=False` can be done in another PR, I think. \r\n\r\nIMPORNTANT: \r\nthe `config.json` files of:\r\n\r\n- `bart-large-cnn`\r\n- `bart-large-mnli`\r\n- `bart-large `\r\n\r\nhave to be updated on AWS to pass all slow tests. All `special_tokens_id` parameters should be deleted there. For the moment, we will go with the solution: \r\n#3264 .",
"Ok merging. I let you continue in other PRs and ping me.",
"> Good to merge for me!\r\n> changing `do_sample=False` can be done in another PR, I think.\r\n> \r\n> IMPORNTANT:\r\n> the `config.json` files of:\r\n> \r\n> * `bart-large-cnn`\r\n> * `bart-large-mnli`\r\n> * `bart-large `\r\n> \r\n> have to be updated on AWS to pass all slow tests. All `special_tokens_id` parameters should be deleted there. For the moment, we will go with the solution:\r\n> #3264 .\r\n\r\nChanged the configs on AWS. All slow tests pass now."
] | 1,583 | 1,584 | 1,584 | MEMBER | null | This is a follow-up PR to finalize #3140 .
There was still no conclusion on how to handle the fairseq tricks for generation. To summarize:
I think we have three options:
1. Remove all fairseq tricks. Here the ROUGE score is: **20.285**
2. Implement the fairseq tricks EXCEPT leaving the starting decoding_inputs_tokens to be the BOS token instead of EOS. Here the ROUGE score is: **19.369**
3. Add all fairseq tricks and maybe add a new argument to `generate()` which is called `decoder_start_token_id=bos_token_id` , but can be overriden to be the `eos_token_id` in the case of Bart. Here the ROUGE score is: **21.072**
ROUGE scores from @sshleifer
For comparison:

UPDATE:
Given the above scores, option 1. was chosen for the moment to have the same scores as fairseq.
This means that we have to start the decoder ids with an EOS token (which might be weird and fairseq-specific). Therefore, a new argument `decoder_start_token_id` was added to the generate function that defaults to the `bos_token_id`. When using Bart's generate, this argument should be set to the `eos_token_id` to get good results. To see how `Bart.generate()` should be used, take a look at:
https://github.com/huggingface/transformers/blob/2e81b9d8d76a4d41a13f74eb5e0f4a65d8143cab/tests/test_modeling_bart.py#L470
At the moment option 2. is implemented, which seems to give the worst results and is also not the cleanest option.
This PR implements option 3.
For me either option 1. or option 3. is fine. Up for discussion @thomwolf, @julien-c, @LysandreJik, @sshleifer. A minimal sketch of option 3 in use follows.
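An assumption for the sketch: the class and checkpoint names follow the Bart code of this period, and the linked test above stays authoritative.
```
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("bart-large-cnn")

input_ids = tokenizer.encode("Some long article text ...", return_tensors="pt")
summary_ids = model.generate(
    input_ids,
    num_beams=4,
    max_length=140,
    do_sample=False,
    decoder_start_token_id=tokenizer.eos_token_id,  # the fairseq trick from option 3
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```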
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3225/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3225/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3225",
"html_url": "https://github.com/huggingface/transformers/pull/3225",
"diff_url": "https://github.com/huggingface/transformers/pull/3225.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3225.patch",
"merged_at": 1584194939000
} |
https://api.github.com/repos/huggingface/transformers/issues/3224 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3224/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3224/comments | https://api.github.com/repos/huggingface/transformers/issues/3224/events | https://github.com/huggingface/transformers/issues/3224 | 579,181,160 | MDU6SXNzdWU1NzkxODExNjA= | 3,224 | Problem with PreTrainedTokenizerFast and return_offsets_mapping | {
"login": "mary-design-testing",
"id": 54845028,
"node_id": "MDQ6VXNlcjU0ODQ1MDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/54845028?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mary-design-testing",
"html_url": "https://github.com/mary-design-testing",
"followers_url": "https://api.github.com/users/mary-design-testing/followers",
"following_url": "https://api.github.com/users/mary-design-testing/following{/other_user}",
"gists_url": "https://api.github.com/users/mary-design-testing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mary-design-testing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mary-design-testing/subscriptions",
"organizations_url": "https://api.github.com/users/mary-design-testing/orgs",
"repos_url": "https://api.github.com/users/mary-design-testing/repos",
"events_url": "https://api.github.com/users/mary-design-testing/events{/privacy}",
"received_events_url": "https://api.github.com/users/mary-design-testing/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
},
{
"id": 1920687293,
"node_id": "MDU6TGFiZWwxOTIwNjg3Mjkz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Fast%20Tokenizers",
"name": "Fast Tokenizers",
"color": "b60205",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"I am running into the same issue. Any progress on getting this into a release?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,592 | 1,592 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert, bert-large-uncased-whole-word-masking-finetuned-squad
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts:
* [X] my own modified scripts:
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: SQUaD
* [ ] my own task or dataset: (give details below)
## To reproduce
Script to reproduce the behavior:
```python
# transformers v2.5.1 (https://github.com/huggingface/transformers/releases)
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch
model_name = "bert-large-uncased-whole-word-masking-finetuned-squad"
tokeniser = AutoTokenizer.from_pretrained(model_name, use_fast=True)
inputs = tokeniser.encode_plus("Who is Bert?", "Bert is a puppet by Jim Henson", add_special_tokens=True, return_tensors="pt", return_offsets_mapping=True)
```
This script produces the following error:
```
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm 2018.3.5\helpers\pydev\pydevd.py", line 1741, in <module>
main()
File "C:\Program Files\JetBrains\PyCharm 2018.3.5\helpers\pydev\pydevd.py", line 1735, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "C:\Program Files\JetBrains\PyCharm 2018.3.5\helpers\pydev\pydevd.py", line 1135, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Program Files\JetBrains\PyCharm 2018.3.5\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "D:/OneDrive/RC/Segovia/DEMO-11/Adam11/_QUESTION_ANSWERER.py", line 8, in <module>
inputs = tokeniser.encode_plus("Who is Bert?", "Bert is a puppet by Jim Henson", add_special_tokens=True, return_tensors="pt", return_offsets_mapping=True)
File "C:\Users\mary\.conda\envs\adam11\lib\site-packages\transformers\tokenization_utils.py", line 1889, in encode_plus
**kwargs,
File "C:\Users\mary\.conda\envs\adam11\lib\site-packages\transformers\tokenization_utils.py", line 1843, in batch_encode_plus
stack = torch.stack(stack, dim=0)
TypeError: expected Tensor as element 0 in argument 0, but got list
```
## Expected behavior
According to the documentation that is available in file "tokenization_utils.py", the behaviour should be as follows:
>return_offsets_mapping:
>(optional) Set to True to return (char_start, char_end) for each token (default False).
>If using Python's tokenizer, this method will raise NotImplementedError. This one is only available on Rust-based tokenizers inheriting from PreTrainedTokenizerFast.
## Environment info
- `transformers` version: 2.5.1
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
## My Patch
I have isolated the problem in file `tokenization_utils.py`. Take a look at method `batch_encode_plus`. It has the following statements:
```python
# Sanitize the output to have dict[list] from list[dict]
sanitized = {}
for key in tokens[0].keys():
stack = [e for item in tokens for e in item[key]]
if return_tensors == "tf":
stack = tf.stack(stack, axis=0)
elif return_tensors == "pt":
stack = torch.stack(stack, dim=0)
elif not return_tensors and len(stack) == 1:
stack = stack[0]
sanitized[key] = stack
```
The problem is that `stack` may be a list of `torch.Tensor`s, but also a list of tuples with start/end offsets when `return_offsets_mapping=True`. In such cases, `tf.stack` or `torch.stack` will fail because they expect a list of tensors as argument, not a list of tuples.
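The failure is easy to reproduce in isolation. A minimal sketch (not from the original report) of why `torch.stack` rejects the offsets:
```python
import torch

tensors = [torch.tensor([101, 2040, 102])]
print(torch.stack(tensors, dim=0).shape)  # fine: torch.Size([1, 3])

offsets = [(0, 0), (0, 3), (4, 6)]  # (char_start, char_end) pairs, as in offset mappings
try:
    torch.stack(offsets, dim=0)     # elements are tuples, not tensors
except TypeError as err:
    print(err)  # "expected Tensor as element 0 in argument 0, but got tuple"
```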
I have patched my local transformers installation as follows, and it seems to work well:
```python
# only stack when the elements really are tensors; offset tuples are left as a plain list
if return_tensors and len(stack) == 1 and isinstance(stack[0], torch.Tensor):
if return_tensors == "tf":
stack = tf.stack(stack, axis=0)
elif return_tensors == "pt":
stack = torch.stack(stack, dim=0)
elif not return_tensors and len(stack) == 1:
stack = stack[0]
```
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3224/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3224/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3223 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3223/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3223/comments | https://api.github.com/repos/huggingface/transformers/issues/3223/events | https://github.com/huggingface/transformers/issues/3223 | 579,098,154 | MDU6SXNzdWU1NzkwOTgxNTQ= | 3,223 | torch.distributed.barrier() raises NCCL error | {
"login": "Limtle",
"id": 47511735,
"node_id": "MDQ6VXNlcjQ3NTExNzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/47511735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Limtle",
"html_url": "https://github.com/Limtle",
"followers_url": "https://api.github.com/users/Limtle/followers",
"following_url": "https://api.github.com/users/Limtle/following{/other_user}",
"gists_url": "https://api.github.com/users/Limtle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Limtle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Limtle/subscriptions",
"organizations_url": "https://api.github.com/users/Limtle/orgs",
"repos_url": "https://api.github.com/users/Limtle/repos",
"events_url": "https://api.github.com/users/Limtle/events{/privacy}",
"received_events_url": "https://api.github.com/users/Limtle/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,583 | 1,599 | 1,583 | NONE | null | ## Details
<!-- Description of your issue -->
**singularity :** ```singularity build pytorch20.02.simg docker://nvcr.io/nvidia/pytorch:20.02-py3```
I use Slurm and singularity to run run_glue.py but have NCCL error on torch.distributed.barrier()
### test.sh
```
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=2
#SBATCH --cpus-per-task=4
#SBATCH --gres=gpu:2
#SBATCH --mem=161920
module purge
module load compiler/gnu/7.3.0 openmpi3 singularity
singularity exec pytorch20.02.simg python -m torch.distributed.launch --nproc_per_node 2 run_glue.py --model_type bert --model_name_or_path bert-base-uncased --task_name MRPC --do_train --do_eval --do_lower_case --data_dir ./glue_data/MRPC --max_seq_length 128 --per_gpu_eval_batch_size=8 --per_gpu_train_batch_size=8 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/MRPC/
```
### Output
```
Traceback (most recent call last):
Traceback (most recent call last):
File "run_glue.py", line 701, in <module>
File "run_glue.py", line 701, in <module>
main()
File "run_glue.py", line 618, in main
main()
File "run_glue.py", line 641, in main
torch.distributed.barrier() # Make sure only the first process in distributed training will download model & vocab
File "/opt/conda/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 1489, in barrier
torch.distributed.barrier() # Make sure only the first process in distributed training will download model & vocab
File "/opt/conda/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 1489, in barrier
work = _default_pg.barrier()
RuntimeError: NCCL error in: ../torch/lib/c10d/ProcessGroupNCCL.cpp:450, unhandled system error, NCCL version 2.5.6
work = _default_pg.barrier()
RuntimeError: NCCL error in: ../torch/lib/c10d/ProcessGroupNCCL.cpp:450, unhandled system error, NCCL version 2.5.6
[E ProcessGroupNCCL.cpp:284] NCCL watchdog thread terminated
[E ProcessGroupNCCL.cpp:284] NCCL watchdog thread terminated
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/conda/lib/python3.6/site-packages/torch/distributed/launch.py", line 263, in <module>
main()
File "/opt/conda/lib/python3.6/site-packages/torch/distributed/launch.py", line 259, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/opt/conda/bin/python', '-u', 'run_glue.py', '--local_rank=1', '--model_type', 'bert', '--model_name_or_path', 'bert-base-uncased', '--task_name', 'MRPC', '--do_train', '--do_eval', '--do_lower_case', '--data_dir', '.glue_data/MRPC', '--max_seq_length', '128', '--per_gpu_eval_batch_size=8', '--per_gpu_train_batch_size=8', '--learning_rate', '2e-5', '--num_train_epochs', '3.0', '--output_dir', '/tmp/MRPC/']' returned non-zero exit status 1.
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
```
### Solved
Adding the `--nv` flag to `singularity exec`, which enables NVIDIA GPU support inside the container, fixed the error:
```
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=2
#SBATCH --cpus-per-task=4
#SBATCH --gres=gpu:2
#SBATCH --mem=161920
module purge
module load compiler/gnu/7.3.0 openmpi3 singularity
singularity exec --nv pytorch20.02.simg python -m torch.distributed.launch --nproc_per_node 2 run_glue.py --model_type bert --model_name_or_path bert-base-uncased --task_name MRPC --do_train --do_eval --do_lower_case --data_dir ./glue_data/MRPC --max_seq_length 128 --per_gpu_eval_batch_size=8 --per_gpu_train_batch_size=8 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/MRPC/
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3223/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3222 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3222/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3222/comments | https://api.github.com/repos/huggingface/transformers/issues/3222/events | https://github.com/huggingface/transformers/issues/3222 | 579,092,811 | MDU6SXNzdWU1NzkwOTI4MTE= | 3,222 | Why are the pre-trained models downloaded each time? | {
"login": "xf05888",
"id": 33285394,
"node_id": "MDQ6VXNlcjMzMjg1Mzk0",
"avatar_url": "https://avatars.githubusercontent.com/u/33285394?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xf05888",
"html_url": "https://github.com/xf05888",
"followers_url": "https://api.github.com/users/xf05888/followers",
"following_url": "https://api.github.com/users/xf05888/following{/other_user}",
"gists_url": "https://api.github.com/users/xf05888/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xf05888/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xf05888/subscriptions",
"organizations_url": "https://api.github.com/users/xf05888/orgs",
"repos_url": "https://api.github.com/users/xf05888/repos",
"events_url": "https://api.github.com/users/xf05888/events{/privacy}",
"received_events_url": "https://api.github.com/users/xf05888/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"They should definitely be cached locally.\r\n\r\nDo you have a sample code showing the behaviour of re-downloading every time?",
"Yes, here is a example (not happen every time but after system restart):\r\n```\r\nimport torch\r\nfrom transformers import AlbertTokenizer\r\nalbert_tokenizer = AlbertTokenizer.from_pretrained(\"albert-xxlarge-v2\")\r\n```\r\nIt will show this progress bar each time I run this after shutdown or reboot:\r\n**`Downloading: 100%|██████████████████████████████████████| 760k/760k [00:01<00:00, 556kB/s]`**\r\n\r\nAnd I search the whole file system, no file match `albert` (even if I didn't restart the system).\r\nSo which directory should the model be under normal circumstances?",
"The models are by default cached to a hidden directory.",
"@AdityaSoni19031997 \r\nYes, I searched all the hidden directory as well and there's no eligible model file.\r\n\r\nEven if there is, it will disappear after restart.\r\n\r\nAnd what is the name of the hidden directory?",
"@julien-c I found that it did cache in the local, but why it will disappear after **reboot**?\r\n\r\n",
"Is their now a solution for that? In my case it tries to download the LLAMA-2 Model with almost 10 GB each time. "
] | 1,583 | 1,689 | 1,585 | NONE | null | I often use pre-trained models, but every time I want to load one after a system restart, it gets downloaded again. Is there a function that automatically saves the downloaded model to a directory? (A minimal caching sketch is shown below.)
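One workaround is to pass a persistent cache location explicitly. A minimal sketch assuming the `cache_dir` argument of `from_pretrained` (the path below is hypothetical, and the `TRANSFORMERS_CACHE` environment variable changes the default location globally):
```
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained(
    "albert-xxlarge-v2",
    cache_dir="/data/hf_cache",  # hypothetical persistent directory outside /tmp
)
```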
For comparison, Gluon-NLP saves the models you use under `.mxnet` in your home directory. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3222/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3222/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3221 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3221/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3221/comments | https://api.github.com/repos/huggingface/transformers/issues/3221/events | https://github.com/huggingface/transformers/pull/3221 | 579,060,490 | MDExOlB1bGxSZXF1ZXN0Mzg2NTI1NDA2 | 3,221 | Model card for dkleczek/bert-base-polish-uncased-v1 | {
"login": "kldarek",
"id": 15803781,
"node_id": "MDQ6VXNlcjE1ODAzNzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/15803781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kldarek",
"html_url": "https://github.com/kldarek",
"followers_url": "https://api.github.com/users/kldarek/followers",
"following_url": "https://api.github.com/users/kldarek/following{/other_user}",
"gists_url": "https://api.github.com/users/kldarek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kldarek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kldarek/subscriptions",
"organizations_url": "https://api.github.com/users/kldarek/orgs",
"repos_url": "https://api.github.com/users/kldarek/repos",
"events_url": "https://api.github.com/users/kldarek/events{/privacy}",
"received_events_url": "https://api.github.com/users/kldarek/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3221?src=pr&el=h1) Report\n> Merging [#3221](https://codecov.io/gh/huggingface/transformers/pull/3221?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d6de6423baf02a971d38ee69824104a1f0f85ad2&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3221?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3221 +/- ##\n==========================================\n- Coverage 78.14% 78.14% -0.01% \n==========================================\n Files 98 98 \n Lines 16668 16668 \n==========================================\n- Hits 13026 13025 -1 \n- Misses 3642 3643 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3221?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3221/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.54% <0.00%> (-0.20%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3221?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3221?src=pr&el=footer). Last update [d6de642...aa7c949](https://codecov.io/gh/huggingface/transformers/pull/3221?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks for sharing => [model page](https://huggingface.co/dkleczek/bert-base-polish-uncased-v1)",
"will fix images + lowercase the language tag in the next commit"
] | 1,583 | 1,584 | 1,583 | CONTRIBUTOR | null | Model card for dkleczek/bert-base-polish-uncased-v1 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3221/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3221/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3221",
"html_url": "https://github.com/huggingface/transformers/pull/3221",
"diff_url": "https://github.com/huggingface/transformers/pull/3221.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3221.patch",
"merged_at": 1583932368000
} |
https://api.github.com/repos/huggingface/transformers/issues/3220 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3220/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3220/comments | https://api.github.com/repos/huggingface/transformers/issues/3220/events | https://github.com/huggingface/transformers/issues/3220 | 579,043,912 | MDU6SXNzdWU1NzkwNDM5MTI= | 3,220 | How to tokenize a word into characters | {
"login": "ynebula",
"id": 22788865,
"node_id": "MDQ6VXNlcjIyNzg4ODY1",
"avatar_url": "https://avatars.githubusercontent.com/u/22788865?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ynebula",
"html_url": "https://github.com/ynebula",
"followers_url": "https://api.github.com/users/ynebula/followers",
"following_url": "https://api.github.com/users/ynebula/following{/other_user}",
"gists_url": "https://api.github.com/users/ynebula/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ynebula/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ynebula/subscriptions",
"organizations_url": "https://api.github.com/users/ynebula/orgs",
"repos_url": "https://api.github.com/users/ynebula/repos",
"events_url": "https://api.github.com/users/ynebula/events{/privacy}",
"received_events_url": "https://api.github.com/users/ynebula/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,590 | 1,590 | NONE | null | I am studying machine reading comprehension with XLM-RoBERTa.
My data is KorQuAD.
I need to tokenize every word into characters.
e.g., in English:
This is a dog
-> _T h i s _i s _a _d o g
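One simple way to get exactly that shape, as a sketch (an assumption, not from the issue), marking word starts with the sentencepiece-style "▁" (written "_" above):
```
def char_tokenize(text):
    tokens = []
    for word in text.split():
        tokens.append("▁" + word[0])   # word-initial character gets the marker
        tokens.extend(list(word[1:]))  # remaining characters stand alone
    return tokens

print(char_tokenize("This is a dog"))
# ['▁T', 'h', 'i', 's', '▁i', 's', '▁a', '▁d', 'o', 'g']
```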
Please let me know. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3220/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3220/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3219 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3219/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3219/comments | https://api.github.com/repos/huggingface/transformers/issues/3219/events | https://github.com/huggingface/transformers/pull/3219 | 578,938,295 | MDExOlB1bGxSZXF1ZXN0Mzg2NDI3Mzk1 | 3,219 | Typo in warning message | {
"login": "elgeish",
"id": 6879673,
"node_id": "MDQ6VXNlcjY4Nzk2NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6879673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elgeish",
"html_url": "https://github.com/elgeish",
"followers_url": "https://api.github.com/users/elgeish/followers",
"following_url": "https://api.github.com/users/elgeish/following{/other_user}",
"gists_url": "https://api.github.com/users/elgeish/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elgeish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elgeish/subscriptions",
"organizations_url": "https://api.github.com/users/elgeish/orgs",
"repos_url": "https://api.github.com/users/elgeish/repos",
"events_url": "https://api.github.com/users/elgeish/events{/privacy}",
"received_events_url": "https://api.github.com/users/elgeish/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3219?src=pr&el=h1) Report\n> Merging [#3219](https://codecov.io/gh/huggingface/transformers/pull/3219?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d6de6423baf02a971d38ee69824104a1f0f85ad2&el=desc) will **not change** coverage by `%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3219?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3219 +/- ##\n=======================================\n Coverage 78.14% 78.14% \n=======================================\n Files 98 98 \n Lines 16668 16668 \n=======================================\n Hits 13026 13026 \n Misses 3642 3642 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3219?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.83% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.54% <0.00%> (-0.20%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.56% <0.00%> (+0.15%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3219?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3219?src=pr&el=footer). Last update [d6de642...0a77ca6](https://codecov.io/gh/huggingface/transformers/pull/3219?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,586 | 1,584 | CONTRIBUTOR | null | `T5Tokenizer` instead of `XLNetTokenizer` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3219/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3219/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3219",
"html_url": "https://github.com/huggingface/transformers/pull/3219",
"diff_url": "https://github.com/huggingface/transformers/pull/3219.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3219.patch",
"merged_at": 1584625766000
} |
https://api.github.com/repos/huggingface/transformers/issues/3218 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3218/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3218/comments | https://api.github.com/repos/huggingface/transformers/issues/3218/events | https://github.com/huggingface/transformers/pull/3218 | 578,929,015 | MDExOlB1bGxSZXF1ZXN0Mzg2NDE5OTky | 3,218 | Create README.md | {
"login": "dreasysnail",
"id": 2461039,
"node_id": "MDQ6VXNlcjI0NjEwMzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2461039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dreasysnail",
"html_url": "https://github.com/dreasysnail",
"followers_url": "https://api.github.com/users/dreasysnail/followers",
"following_url": "https://api.github.com/users/dreasysnail/following{/other_user}",
"gists_url": "https://api.github.com/users/dreasysnail/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dreasysnail/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dreasysnail/subscriptions",
"organizations_url": "https://api.github.com/users/dreasysnail/orgs",
"repos_url": "https://api.github.com/users/dreasysnail/repos",
"events_url": "https://api.github.com/users/dreasysnail/events{/privacy}",
"received_events_url": "https://api.github.com/users/dreasysnail/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3218?src=pr&el=h1) Report\n> Merging [#3218](https://codecov.io/gh/huggingface/transformers/pull/3218?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d6de6423baf02a971d38ee69824104a1f0f85ad2&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3218?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3218 +/- ##\n==========================================\n- Coverage 78.14% 78.14% -0.01% \n==========================================\n Files 98 98 \n Lines 16668 16668 \n==========================================\n- Hits 13026 13025 -1 \n- Misses 3642 3643 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3218?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3218/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.54% <0.00%> (-0.20%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3218?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3218?src=pr&el=footer). Last update [d6de642...a3ccef9](https://codecov.io/gh/huggingface/transformers/pull/3218?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3218/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3218/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3218",
"html_url": "https://github.com/huggingface/transformers/pull/3218",
"diff_url": "https://github.com/huggingface/transformers/pull/3218.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3218.patch",
"merged_at": 1583932225000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3217 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3217/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3217/comments | https://api.github.com/repos/huggingface/transformers/issues/3217/events | https://github.com/huggingface/transformers/pull/3217 | 578,921,761 | MDExOlB1bGxSZXF1ZXN0Mzg2NDE0MjAx | 3,217 | Create README.md | {
"login": "dreasysnail",
"id": 2461039,
"node_id": "MDQ6VXNlcjI0NjEwMzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2461039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dreasysnail",
"html_url": "https://github.com/dreasysnail",
"followers_url": "https://api.github.com/users/dreasysnail/followers",
"following_url": "https://api.github.com/users/dreasysnail/following{/other_user}",
"gists_url": "https://api.github.com/users/dreasysnail/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dreasysnail/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dreasysnail/subscriptions",
"organizations_url": "https://api.github.com/users/dreasysnail/orgs",
"repos_url": "https://api.github.com/users/dreasysnail/repos",
"events_url": "https://api.github.com/users/dreasysnail/events{/privacy}",
"received_events_url": "https://api.github.com/users/dreasysnail/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3217/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3217",
"html_url": "https://github.com/huggingface/transformers/pull/3217",
"diff_url": "https://github.com/huggingface/transformers/pull/3217.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3217.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3216 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3216/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3216/comments | https://api.github.com/repos/huggingface/transformers/issues/3216/events | https://github.com/huggingface/transformers/pull/3216 | 578,921,728 | MDExOlB1bGxSZXF1ZXN0Mzg2NDE0MTcw | 3,216 | Create README.md | {
"login": "dreasysnail",
"id": 2461039,
"node_id": "MDQ6VXNlcjI0NjEwMzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2461039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dreasysnail",
"html_url": "https://github.com/dreasysnail",
"followers_url": "https://api.github.com/users/dreasysnail/followers",
"following_url": "https://api.github.com/users/dreasysnail/following{/other_user}",
"gists_url": "https://api.github.com/users/dreasysnail/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dreasysnail/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dreasysnail/subscriptions",
"organizations_url": "https://api.github.com/users/dreasysnail/orgs",
"repos_url": "https://api.github.com/users/dreasysnail/repos",
"events_url": "https://api.github.com/users/dreasysnail/events{/privacy}",
"received_events_url": "https://api.github.com/users/dreasysnail/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3216/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3216/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3216",
"html_url": "https://github.com/huggingface/transformers/pull/3216",
"diff_url": "https://github.com/huggingface/transformers/pull/3216.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3216.patch",
"merged_at": 1583932258000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3215 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3215/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3215/comments | https://api.github.com/repos/huggingface/transformers/issues/3215/events | https://github.com/huggingface/transformers/pull/3215 | 578,921,352 | MDExOlB1bGxSZXF1ZXN0Mzg2NDEzODYy | 3,215 | Create README.md | {
"login": "dreasysnail",
"id": 2461039,
"node_id": "MDQ6VXNlcjI0NjEwMzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2461039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dreasysnail",
"html_url": "https://github.com/dreasysnail",
"followers_url": "https://api.github.com/users/dreasysnail/followers",
"following_url": "https://api.github.com/users/dreasysnail/following{/other_user}",
"gists_url": "https://api.github.com/users/dreasysnail/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dreasysnail/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dreasysnail/subscriptions",
"organizations_url": "https://api.github.com/users/dreasysnail/orgs",
"repos_url": "https://api.github.com/users/dreasysnail/repos",
"events_url": "https://api.github.com/users/dreasysnail/events{/privacy}",
"received_events_url": "https://api.github.com/users/dreasysnail/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3215/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3215/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3215",
"html_url": "https://github.com/huggingface/transformers/pull/3215",
"diff_url": "https://github.com/huggingface/transformers/pull/3215.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3215.patch",
"merged_at": 1583932214000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3214 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3214/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3214/comments | https://api.github.com/repos/huggingface/transformers/issues/3214/events | https://github.com/huggingface/transformers/pull/3214 | 578,920,229 | MDExOlB1bGxSZXF1ZXN0Mzg2NDEyOTM1 | 3,214 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3214/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3214",
"html_url": "https://github.com/huggingface/transformers/pull/3214",
"diff_url": "https://github.com/huggingface/transformers/pull/3214.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3214.patch",
"merged_at": 1583931812000
} |