url (stringlengths 62–66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76–80) | comments_url (stringlengths 71–75) | events_url (stringlengths 69–73) | html_url (stringlengths 50–56) | id (int64 377M–2.15B) | node_id (stringlengths 18–32) | number (int64 1–29.2k) | title (stringlengths 1–487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64 1.54k–1.71k) | updated_at (int64 1.54k–1.71k) | closed_at (int64 1.54k–1.71k ⌀) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0–234k ⌀) | reactions (dict) | timeline_url (stringlengths 71–75) | state_reason (stringclasses 3 values) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/5220 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5220/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5220/comments | https://api.github.com/repos/huggingface/transformers/issues/5220/events | https://github.com/huggingface/transformers/issues/5220 | 644,003,541 | MDU6SXNzdWU2NDQwMDM1NDE= | 5,220 | run_language_modeling.py does not output vocab/config/etc files until training completes | {
"login": "apteryxlabs",
"id": 65966807,
"node_id": "MDQ6VXNlcjY1OTY2ODA3",
"avatar_url": "https://avatars.githubusercontent.com/u/65966807?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apteryxlabs",
"html_url": "https://github.com/apteryxlabs",
"followers_url": "https://api.github.com/users/apteryxlabs/followers",
"following_url": "https://api.github.com/users/apteryxlabs/following{/other_user}",
"gists_url": "https://api.github.com/users/apteryxlabs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apteryxlabs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apteryxlabs/subscriptions",
"organizations_url": "https://api.github.com/users/apteryxlabs/orgs",
"repos_url": "https://api.github.com/users/apteryxlabs/repos",
"events_url": "https://api.github.com/users/apteryxlabs/events{/privacy}",
"received_events_url": "https://api.github.com/users/apteryxlabs/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"**NOTE:** Currently retraining. At the 500-it checkpoint save point, the following is outputted:\r\n`\"loss\": 3.162974836349487, \"learning_rate\": 4.207104345068189e-05, \"epoch\": 0.47573739295908657, \"step\": 500}\r\n06/23/2020 10:25:35 - INFO - transformers.trainer - Saving model checkpoint to ./output_100k_run2/checkpoint-500\r\n06/23/2020 10:25:35 - INFO - transformers.configuration_utils - Configuration saved in ./output_100k_run2/checkpoint-500/config.json\r\n06/23/2020 10:25:35 - INFO - transformers.modeling_utils - Model weights saved in ./output_100k_run2/checkpoint-500/pytorch_model.bin\r\n/home/b/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:200: UserWarning: Please also save or load the state of the optimzer when saving or loading the scheduler.\r\n`\r\n\r\nI'm particularly concerned about that last part:\r\n`/home/b/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:200: UserWarning: Please also save or load the state of the optimzer when saving or loading the scheduler.`\r\n\r\nIs that not implemented by default? Do I have to pass in a specific argument to fix it? Or is this something that should be ignored? \r\n\r\nNot sure if this is related to the above bug, but... maybe?\r\n\r\n**NOTE 2:** The contents of the output folder at this point (500 iterations of training):\r\n\r\n`checkpoint-500`\r\n\r\nAnd the contents of a similar model output dir that I trained fully:\r\n`checkpoint-1000 config.json special_tokens_map.json vocab.json\r\ncheckpoint-1500 merges.txt tokenizer_config.json\r\ncheckpoint-500 pytorch_model.bin training_args.bin\r\n`\r\nHopefully that highlights the issue - all those supplementary files don't seem to be being saved until the end of training.\r\n",
"Hi, I'm also having this issue where `run_language_modeling.py` is not creating the files needed to resume the training until it finishes and by then you wont need the files to resume it as it will have finished, also cant generate text until the vocab,tokens,etc are created as those files are needed for the model to work correctly, in my case im using the code from [this repository](https://github.com/itsuncheng/fine-tuning-GPT2) but technically it's the same code found on the [example/language-modeling](https://github.com/huggingface/transformers/tree/master/examples/language-modeling) folder with just some extra information on the readme file.\r\nHave you guys found any way to make it work ?",
"Looks like in #3921 a similar issue was fixed in https://github.com/huggingface/transformers/commit/c81152600452ad1bec4ab705356788d29a3573ee by adding `tokenizer.save_pretrained(training_args.output_dir)` to the end of the script. Could this be done for every checkpoint, instead of just at the final output step? If not, how are the checkpoints supposed to be used?\r\n\r\nThanks!",
"> I'm particularly concerned about that last part:\r\n> `/home/b/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:200: UserWarning: Please also save or load the state of the optimzer when saving or loading the scheduler.`\r\n\r\nI have the same warning. Sounds bad for me...\r\nCan I just ignore it? \r\n\r\n```bash\r\nUserWarning: Please also save or load the state of the optimzer when saving or loading the scheduler.\r\n```\r\n",
"This appears to still be an issue. Has anyone found a solution?",
"@apteryxlabs Here's a not-so-eloquent short term fix. I'm just forcing the optimizer to load around line 515 of trainer.py. optimizer = torch.load('path/to/optimizer.pt'). Again, not eloquent but it seems to work. Still trying to figure out why the optimizer isn't loading properly earlier on. \r\n",
"> @apteryxlabs Here's a not-so-eloquent short term fix. I'm just forcing the optimizer to load around line 515 of trainer.py. optimizer = torch.load('path/to/optimizer.pt'). Again, not eloquent but it seems to work. Still trying to figure out why the optimizer isn't loading properly earlier on.\r\n\r\nHi, thanks for the hint. Would you please elaborate which line around line 515? I saw an optimiser.pt appearing at line 623, another 463, not sure which one you are referring to. Also, is it just setting the path of this optimizer.pt brutally to the file under the /checkpoint-x directory?",
"I am having the same issues: not saving supplementary files at checkpoints and also the warning about loading the optimizer. Has anyone found a solution?",
"It seems that supplementary file saving at checkpoints has been fixed (I am now seeing checkpoint saving in finetune).\r\n\r\nBut I am still seeing \r\n\"Warning: Please also save or load the state of the optimzer when saving or loading the scheduler. warnings.warn(SAVE_STATE_WARNING, UserWarning)\"\r\n\r\n\r\nSteps to reproduce:\r\n1) clone transformers into new director\r\n2) cd transformers && pip install .e; cd examples && pip install -r requirements.txt\r\n3) cd seq2seq && ./finetune_t5_bart_tiny.sh\r\n\r\nObserve that the warning is printed:\r\n\r\n../python3.8/site-packages/pytorch_lightning/utilities/distributed.py:37: UserWarning: Could not log computational graph since the `model.example_input_array` attribute is not set or `input_array` was not given \r\nwarnings.warn(*args, **kwargs)\r\n.../python3.8/site-packages/torch/optim/lr_scheduler.py:200: UserWarning: Please also save or load the state of the optimzer when saving or loading the scheduler.\r\n warnings.warn(SAVE_STATE_WARNING, UserWarning)\r\n\r\n(There is both the optimizer warning and the computational graph logging warning)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,608 | 1,608 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
GPT2
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* the official example scripts:
I began training using the following command (from Jupyter Lab) - output_100k is the specified (new) output folder for my fine-tuned model (which is being trained on 100k US patents):
`!python run_language_modeling.py \
--output_dir=output_100k \
--model_type=gpt2 \
--model_name_or_path=gpt2 \
--block_size 100 \
--per_device_train_batch_size 3 \
--do_train \
--train_data_file=./train_100k.txt \
--do_eval \
--eval_data_file=./test_100k.txt`
This works fine; HOWEVER, no instructions are given in the example [README](https://github.com/huggingface/transformers/tree/master/examples/language-modeling) on how to resume training later on.
**Including this information would be extremely helpful**. After some personal research, I deduced (perhaps incorrectly?) that I should point 'model_name_or_path' at the output_100k checkpoint folder like so (note: the README also doesn't specify whether or not it's okay to keep the original output-dir name, so I made a new one):
`!python run_language_modeling.py \
--output_dir=output_100k_resumed \
--model_name_or_path=./output_100k/checkpoint-12500 \
--block_size 100 \
--per_device_train_batch_size 3 \
--do_train \
--train_data_file=./train_100k.txt \
--do_eval \
--eval_data_file=./test_100k.txt`
This raises the following error:
`OSError: Model name './output_100k/checkpoint-12500' was not found in tokenizers model name list (gpt2, gpt2-medium, gpt2-large, gpt2-xl, distilgpt2). We assumed './output_100k/checkpoint-12500' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.`
I checked the directory, and indeed, there are no vocab.json, merges.txt, etc files included in the specified output directory from the initial training.
It appears that **run_language_modeling.py** does not output these files until the **end** of training (an examination of other models trained with the script shows that these files are present in completed training sessions).
**What am I doing wrong?** Or is this a bug in the script itself? I'd imagine these vocab/merges files can be outputted relatively early in training. Note, when training on a new dataset starts, a file with the name of the form `cached_lm_GPT2Tokenizer_[SOME NUMBER]_train_clean.txt.lock` (and one without the .lock) is generated. Perhaps this contains some information that I'd need to resume training? Regardless, I see no documentation explaining this file's purpose linked in the README, which - again - could definitely provide more context.
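One workaround sketch, assuming the initial run used the stock `gpt2` tokenizer (my assumption here, not something the script guarantees): write the tokenizer files into the checkpoint directory by hand before resuming.

```python
from transformers import GPT2Tokenizer

# The checkpoint folder only holds config.json and pytorch_model.bin;
# the tokenizer files (vocab.json, merges.txt, ...) were never written there.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.save_pretrained("./output_100k/checkpoint-12500")
# --model_name_or_path=./output_100k/checkpoint-12500 should now find the
# vocabulary files it was complaining about.
```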
The task I am working on is:
* my own task or dataset: (give details below)
The idea is to build an effective context-aware text generator for legal purposes. The dataset simply consists of a bunch of patent document texts.
## To reproduce
Steps to reproduce the behavior:
**See description above**
## Expected behavior
Model resumes training at the specified checkpoint.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Linux-5.3.0-59-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No modifications to script, but yes, this computer uses GPU.
- Using distributed or parallel set-up in script?: No.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5220/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5220/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5219 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5219/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5219/comments | https://api.github.com/repos/huggingface/transformers/issues/5219/events | https://github.com/huggingface/transformers/pull/5219 | 643,994,324 | MDExOlB1bGxSZXF1ZXN0NDM4Njk1MzIw | 5,219 | [Longformer] Major Refactor | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5219?src=pr&el=h1) Report\n> Merging [#5219](https://codecov.io/gh/huggingface/transformers/pull/5219?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9a473f1e43221348334b9e7f95bb45770b7ef268&el=desc) will **decrease** coverage by `0.81%`.\n> The diff coverage is `92.60%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5219?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5219 +/- ##\n==========================================\n- Coverage 77.85% 77.04% -0.82% \n==========================================\n Files 138 138 \n Lines 24314 24409 +95 \n==========================================\n- Hits 18930 18806 -124 \n- Misses 5384 5603 +219 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5219?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `91.66% <92.60%> (-1.45%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.62% <0.00%> (-73.11%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `73.37% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.39% <0.00%> (-0.15%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5219?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5219?src=pr&el=footer). Last update [9a473f1...90d2aa6](https://codecov.io/gh/huggingface/transformers/pull/5219?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@sshleifer and @ibeltagy - thanks a lot for your comments -> cleaned up the comments and some function naming.\r\n\r\nAll slow and normal tests pass on GPU => good to merge."
] | 1,592 | 1,593 | 1,593 | MEMBER | null | ## Longformer Refactor
This PR does a major refactoring of Longformer. Mainly, the Roberta abstraction is removed and composition is used instead. This has the following advantages:
- It's easier now to implement a `cross_attention_layer`
- The code is more readable and the logic stays in this file only
- A bug was corrected regarding the attention mask. @ibeltagy - maybe you can check this as well. Previously, if **no** `attention_mask` was inserted, the padding function that came before `super.forward()` in `LongformerModel` was not used, **but** if instead an `attention_mask = torch.tensor([1, ..., 1])` (attend to all tokens) was passed, the padding function was applied and could lead to different outputs than when no `attention_mask` is passed. This should not be the case. `model(input_ids)` and `model(input_ids, attention_mask=torch.ones(input_ids.shape))` should always yield the same result (see the sketch after this list). Removing the `super.forward()` abstraction makes the code much cleaner here so that an `attention_mask = torch.ones(input_ids.shape)` can be calculated before calling the longformer encoder. **IMPORTANT** Since in almost all tasks longformer somehow passes either a `global_attention_mask` or `attention_mask` to `LongformerModel`, this bug did not really become visible before.
- We don't have to "inject" a `self-attention layer` into another model anymore, which I did not like very much.
- Unnecessary code can be removed (head_mask, prev cross-attention layer inputs that do not work yet), ...
**Additionally**:
- Variable names are made more explicit and dead code (If statements that would have never occurred) was removed and code is simplified.
- The forward function of the self-attention layer is broken up into multiple helper functions. The advantage here is that quite some memory should be saved because `attention_probs` goes out of scope as soon as it is no longer used, and thus the memory bottleneck should be reduced.
- All longformer models are added to the tests (@sgugger) and a couple more tests are added.
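To make the `attention_mask` bug concrete, here is a minimal equivalence check (illustrative only: the checkpoint name, `tokenizer.encode` usage, and tolerance are my own choices):

```python
import torch
from transformers import LongformerModel, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096").eval()

input_ids = tokenizer.encode("Hello world", return_tensors="pt")
with torch.no_grad():
    out_default = model(input_ids)[0]
    out_all_ones = model(input_ids, attention_mask=torch.ones_like(input_ids))[0]

# After this refactor the two calls should agree; previously the padding
# helper ran only on the masked path, so the outputs could differ.
print(torch.allclose(out_default, out_all_ones, atol=1e-5))
```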
Next step is to add cross attention layers to longformer.
**Review**
I made sure that besides the bug with `attention_mask = None` vs `attention_mask = torch.ones(...)` all outputs stay the same.
Would be great if @thomwolf @LysandreJik @sgugger @sshleifer @ibeltagy can do a quick review.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5219/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5219/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5219",
"html_url": "https://github.com/huggingface/transformers/pull/5219",
"diff_url": "https://github.com/huggingface/transformers/pull/5219.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5219.patch",
"merged_at": 1593618213000
} |
https://api.github.com/repos/huggingface/transformers/issues/5218 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5218/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5218/comments | https://api.github.com/repos/huggingface/transformers/issues/5218/events | https://github.com/huggingface/transformers/issues/5218 | 643,975,986 | MDU6SXNzdWU2NDM5NzU5ODY= | 5,218 | AttributeError: module 'tensorflow' has no attribute 'repeat' | {
"login": "whwhwwhh",
"id": 44458425,
"node_id": "MDQ6VXNlcjQ0NDU4NDI1",
"avatar_url": "https://avatars.githubusercontent.com/u/44458425?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/whwhwwhh",
"html_url": "https://github.com/whwhwwhh",
"followers_url": "https://api.github.com/users/whwhwwhh/followers",
"following_url": "https://api.github.com/users/whwhwwhh/following{/other_user}",
"gists_url": "https://api.github.com/users/whwhwwhh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/whwhwwhh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/whwhwwhh/subscriptions",
"organizations_url": "https://api.github.com/users/whwhwwhh/orgs",
"repos_url": "https://api.github.com/users/whwhwwhh/repos",
"events_url": "https://api.github.com/users/whwhwwhh/events{/privacy}",
"received_events_url": "https://api.github.com/users/whwhwwhh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
}
] | closed | false | null | [] | [
"@LysandreJik @jplu \r\n\r\nIt seems that Tensorflow does not include `repeat` in 2.0 and 2.0.1 (see https://github.com/tensorflow/tensorflow/issues/38839). Perhaps best to have 2.1 as a min requirement?",
"Indeed, TF < 2.1 doesn't have the `tf.repeat()` function. Put TF >= 2.1 as min requirement looks to be a good solution to me.\r\n\r\n@LysandreJik are you ok with this?",
"Yeah I checked tensorflow API and upgrade my tf yo 2.2 it works now\r\n\r\n\r\nOn 25 Jun 2020, at 4:50 pm, Julien Plu <[email protected]> wrote:\r\n\r\n\r\n\r\nIndeed, TF < 2.1 doesn't have the tf.repeat() function. Put TF >= 2.1 as min requirement looks to be a good solution to me.\r\n\r\n@LysandreJik<https://github.com/LysandreJik> are you ok with this?\r\n\r\n—\r\nYou are receiving this because you authored the thread.\r\nReply to this email directly, view it on GitHub<https://github.com/huggingface/transformers/issues/5218#issuecomment-649272046>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/AKTGDOKKTAPDE6SIYCKGTO3RYLXTNANCNFSM4OF2P2IQ>.\r\n",
"Sure, I'm okay with this!"
] | 1,592 | 1,593 | 1,593 | NONE | null | I tried to run the pipeline task 'summarization', but get an error with **"module 'tensorflow' has no attribute 'repeat'"**. Has anyone encountered the same problem? How do I fix it?
**my installed tensorflow == 2.0.0**
error messages:
/home/ww/anaconda3/envs/environment_name/lib/python3.6/site-packages/transformers/pipelines.py", line 1446, in __call__
inputs["input_ids"], attention_mask=inputs["attention_mask"], **generate_kwargs,
File "/home/ww/anaconda3/envs/environment_name/lib/python3.6/site-packages/transformers/modeling_tf_utils.py", line 747, in generate
tf.repeat(tf.expand_dims(tf.range(batch_size), -1), repeats=num_beams * effective_batch_mult, axis=1),
AttributeError: module 'tensorflow' has no attribute 'repeat'
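Upgrading TensorFlow is the real fix here, but purely as an illustration, the specific failing call can be emulated on TF < 2.1 with `tf.tile`, because the axis being repeated has size 1 (the variable values below are made up for the example):

```python
import tensorflow as tf

batch_size, num_beams, effective_batch_mult = 2, 3, 1  # illustrative values

# Failing call from the traceback (tf.repeat only exists in TF >= 2.1):
#   tf.repeat(tf.expand_dims(tf.range(batch_size), -1),
#             repeats=num_beams * effective_batch_mult, axis=1)

# Tiling a size-1 axis is equivalent to repeating it:
expanded = tf.expand_dims(tf.range(batch_size), -1)   # shape (batch_size, 1)
repeated = tf.tile(expanded, [1, num_beams * effective_batch_mult])
print(repeated.numpy())  # [[0 0 0] [1 1 1]]
```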
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5218/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5218/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5217 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5217/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5217/comments | https://api.github.com/repos/huggingface/transformers/issues/5217/events | https://github.com/huggingface/transformers/pull/5217 | 643,914,401 | MDExOlB1bGxSZXF1ZXN0NDM4NjI5MzIx | 5,217 | Create README.md | {
"login": "ahotrod",
"id": 44321615,
"node_id": "MDQ6VXNlcjQ0MzIxNjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/44321615?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahotrod",
"html_url": "https://github.com/ahotrod",
"followers_url": "https://api.github.com/users/ahotrod/followers",
"following_url": "https://api.github.com/users/ahotrod/following{/other_user}",
"gists_url": "https://api.github.com/users/ahotrod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahotrod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahotrod/subscriptions",
"organizations_url": "https://api.github.com/users/ahotrod/orgs",
"repos_url": "https://api.github.com/users/ahotrod/repos",
"events_url": "https://api.github.com/users/ahotrod/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahotrod/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5217?src=pr&el=h1) Report\n> Merging [#5217](https://codecov.io/gh/huggingface/transformers/pull/5217?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b28b53713161a6299c757c32f7179a2cb2d8cbd7&el=desc) will **increase** coverage by `0.02%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5217?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5217 +/- ##\n==========================================\n+ Coverage 77.96% 77.98% +0.02% \n==========================================\n Files 138 138 \n Lines 23838 23838 \n==========================================\n+ Hits 18585 18590 +5 \n+ Misses 5253 5248 -5 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5217?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5217/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.15% <0.00%> (+0.14%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5217/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5217?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5217?src=pr&el=footer). Last update [b28b537...6f847e8](https://codecov.io/gh/huggingface/transformers/pull/5217?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks! [model page](https://huggingface.co/ahotrod/electra_large_discriminator_squad2_512)"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | electra_large_discriminator_squad2_512 Question Answering LM | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5217/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5217",
"html_url": "https://github.com/huggingface/transformers/pull/5217",
"diff_url": "https://github.com/huggingface/transformers/pull/5217.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5217.patch",
"merged_at": 1592988050000
} |
https://api.github.com/repos/huggingface/transformers/issues/5216 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5216/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5216/comments | https://api.github.com/repos/huggingface/transformers/issues/5216/events | https://github.com/huggingface/transformers/pull/5216 | 643,856,312 | MDExOlB1bGxSZXF1ZXN0NDM4NTgzMzc3 | 5,216 | [WIP - Don't merge yet][Pipeline] Make "task" a static class variable | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Just noticed that this would break the Marian translation pipeline though. Maybe for the Translation pipeline we should keep the \"task\" as an `__init__` argument. ",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5216?src=pr&el=h1) Report\n> Merging [#5216](https://codecov.io/gh/huggingface/transformers/pull/5216?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1ae132a07d7f294cf58cd50f7db8723d00e282de&el=desc) will **decrease** coverage by `0.36%`.\n> The diff coverage is `94.11%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5216?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5216 +/- ##\n==========================================\n- Coverage 77.49% 77.12% -0.37% \n==========================================\n Files 138 138 \n Lines 23787 23815 +28 \n==========================================\n- Hits 18433 18368 -65 \n- Misses 5354 5447 +93 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5216?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5216/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `77.21% <94.11%> (+0.80%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5216/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5216/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.45% <0.00%> (-0.83%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5216/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `94.81% <0.00%> (-0.38%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5216?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5216?src=pr&el=footer). Last update [1ae132a...e487570](https://codecov.io/gh/huggingface/transformers/pull/5216?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"LGTM! Thanks @patrickvonplaten 🙏 ",
"@sshleifer - can you check if this is fine for Marian translation?",
"Close this for now -> Pipelines will be updates when working on Pipelines v2 with @mfuntowicz "
] | 1,592 | 1,596 | 1,596 | MEMBER | null | This PR removes "task" from the "init" of the class and adds it as a static variable.
In my opinion, it is cleaner to have "task" as a static variable instead of an object attribute. This would also solve #5210 .
The problem we would run into then is that for the "Translation" pipeline we would need a class for each translation. I think this is still the better option though because it is less prone to errors (see #5210) and we could add a "get translation factory design" or something.
After some discussion with @mfuntowicz, I think the best option is to add two additional
parameters `src_lang` and `tgt_lang` to the pipelines function and delete the task names
`translation_en_to_fr` in favor of just `translation`.
The new recommended way of instantiating a translation pipeline is
```python
translation_en_to_fr = pipeline("translation", src_lang="en", tgt_lang="fr")
```
The option:
```python
translation_en_to_fr = pipeline("translation_en_to_fr")
```
is still supported with a future warning.
@mfuntowicz @julien-c @LysandreJik @sshleifer
What do you think?
**Backward Compatibility**
It should be fully backward compatible, but I added some warning statements to let the user know of future deprecation.
### TODO:
If this PR is ok for you, I will update the docs and add tests. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5216/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5216/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5216",
"html_url": "https://github.com/huggingface/transformers/pull/5216",
"diff_url": "https://github.com/huggingface/transformers/pull/5216.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5216.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5215 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5215/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5215/comments | https://api.github.com/repos/huggingface/transformers/issues/5215/events | https://github.com/huggingface/transformers/issues/5215 | 643,772,630 | MDU6SXNzdWU2NDM3NzI2MzA= | 5,215 | TF2 support for Longformer | {
"login": "Pringled",
"id": 12988240,
"node_id": "MDQ6VXNlcjEyOTg4MjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/12988240?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pringled",
"html_url": "https://github.com/Pringled",
"followers_url": "https://api.github.com/users/Pringled/followers",
"following_url": "https://api.github.com/users/Pringled/following{/other_user}",
"gists_url": "https://api.github.com/users/Pringled/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pringled/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pringled/subscriptions",
"organizations_url": "https://api.github.com/users/Pringled/orgs",
"repos_url": "https://api.github.com/users/Pringled/repos",
"events_url": "https://api.github.com/users/Pringled/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pringled/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Yes, we are planning to add this in ~1 month",
"Great, looking forward to it, thanks!",
"+1 on this @patrickvonplaten any news? :)",
"+2 on this",
"Finished by end of the week :-) See https://github.com/huggingface/transformers/pull/5764. It's almost finished :-) ",
"@patrickvonplaten any reference on how to train unsupervised model for longformer (not fine-tuning)?"
] | 1,592 | 1,614 | 1,593 | NONE | null | # 🚀 Feature request
Hi,
I'm currently working on a project involving long documents (6000+ tokens). I normally work with TensorFlow, and I was wondering if there are any plans for adding a Longformer TF model in the near future? My PyTorch knowledge is fairly limited, but given the potential of the Longformer for my project, I would want to learn the basics if there are no plans for adding TF support.
Kind regards
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5215/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5215/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5214 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5214/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5214/comments | https://api.github.com/repos/huggingface/transformers/issues/5214/events | https://github.com/huggingface/transformers/issues/5214 | 643,772,088 | MDU6SXNzdWU2NDM3NzIwODg= | 5,214 | How to predict on a batch? | {
"login": "mariusjohan",
"id": 49961316,
"node_id": "MDQ6VXNlcjQ5OTYxMzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/49961316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariusjohan",
"html_url": "https://github.com/mariusjohan",
"followers_url": "https://api.github.com/users/mariusjohan/followers",
"following_url": "https://api.github.com/users/mariusjohan/following{/other_user}",
"gists_url": "https://api.github.com/users/mariusjohan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariusjohan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariusjohan/subscriptions",
"organizations_url": "https://api.github.com/users/mariusjohan/orgs",
"repos_url": "https://api.github.com/users/mariusjohan/repos",
"events_url": "https://api.github.com/users/mariusjohan/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariusjohan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,592 | 1,592 | 1,592 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
[Link](https://stackoverflow.com/questions/62533181/huggingface-transformers-library-predict-in-batches)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5214/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5213 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5213/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5213/comments | https://api.github.com/repos/huggingface/transformers/issues/5213/events | https://github.com/huggingface/transformers/issues/5213 | 643,700,047 | MDU6SXNzdWU2NDM3MDAwNDc= | 5,213 | Train EncoderDecoder Models for question generation | {
"login": "joachim-dublineau",
"id": 64142397,
"node_id": "MDQ6VXNlcjY0MTQyMzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/64142397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joachim-dublineau",
"html_url": "https://github.com/joachim-dublineau",
"followers_url": "https://api.github.com/users/joachim-dublineau/followers",
"following_url": "https://api.github.com/users/joachim-dublineau/following{/other_user}",
"gists_url": "https://api.github.com/users/joachim-dublineau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joachim-dublineau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joachim-dublineau/subscriptions",
"organizations_url": "https://api.github.com/users/joachim-dublineau/orgs",
"repos_url": "https://api.github.com/users/joachim-dublineau/repos",
"events_url": "https://api.github.com/users/joachim-dublineau/events{/privacy}",
"received_events_url": "https://api.github.com/users/joachim-dublineau/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hey @joachim-dublineau , not a direct answer to your question, but here's a relevant discussion thread #4399",
"And can you post the code where you prepare the `decoder_input_ids` and `labels`?",
"Hi @patil-suraj ,\r\n\r\nThanks for your quick reply.\r\n\r\nI have indeed seen this topic previously without finding answers to my points.\r\n\r\nFor the code, I use the datacollator (https://github.com/huggingface/transformers/blob/5f721ad6e48c9d846de25c3fefa0e50a306cbf10/src/transformers/data/data_collator.py) and its function mask_tokens(labels_ids)",
"You won't need `mask_tokens`. `mask_tokens` is used for masked language modelling, it masks some tokens in the input, so maybe this why you are seeing the weird output.\r\n\r\nFor bart \r\n```\r\nsource_ids, source_mask, y = batch[\"input_ids\"], batch[\"attention_mask\"], batch[\"decoder_input_ids\"]\r\ny_ids = y[:, :-1].contiguous()\r\nlm_labels = y[:, 1:].clone()\r\nlm_labels[y[:, 1:] == pad_token_id] = -100\r\n```\r\n\r\n`input_ids` will be your tokenized context and `decoder_input_ids` will be tokenized question.\r\n\r\nfor enc-dec, you can pass the encoded input to input_ids and encoded question to `decoder_input_ids` and `lm_labels`\r\n```\r\nsource_ids, source_mask, y = batch[\"input_ids\"], batch[\"attention_mask\"], batch[\"decoder_input_ids\"]\r\nmodel(input_ids=source_ids, decoder_input_ids=y, lm_labels=y)\r\n```\r\n\r\nHope this is clear",
"So I shouldn't use mask_tokens, ok thank you !\r\n\r\nWhat I don't get is that if I provide the question in decoder_input_ids, first the decoder will have the ground truth and then why should I also use the labels argument?\r\n\r\nAnd what is y in your first code ?",
">What I don't get is that if I provide the question in decoder_input_ids, first the decoder will have the ground truth and then why should I also use the labels argument?\r\n\r\nThe `EncoderDecoder` model expects the input in this way. Basically it shifts the `lm_labels` or `labels` to the right.\r\n@patrickvonplaten is this correct ?\r\n\r\n`y` is decoder input shifted to the right ",
"Thank you @patil-suraj ! I will implement this and keep this post updated. ",
"I tried the same, using EncoderDecoder model for QG that I initialize from bert-base-uncased.\r\n\r\nThe model outputs somewhat readable questions:\r\n```\r\nwhat team won the 2015 nfl championship?\r\nwhat team did the nfl win in the 2015 super bowl?\r\nwhere was the super bowl held?\r\nwhat team won the 2015 nfl championship?\r\nwhat was the name of the team that was the first to be featured on the nfl network?\r\nwhat was the name of the game that the nfl used to celebrate the 2015 super bowl?\r\nwhen was the super bowl played?\r\n```\r\n\r\nHowever, the BLEU1 score is pretty low around 0.35.\r\n\r\nI wonder if someone got better results with EncoderDecoder architecture. Otherwise BART will probably be better for the task.",
"Hi @volker42maru, \r\nWhat parameters do you use for generation (Repetition Penalty and length penalty)? And for how long did you train your model ?\r\n\r\nBART seems to be appropriate but I personally have some difficulties making it work.",
"For generation I am using:\r\n```\r\nmax_length=30, temperature=0.95, num_beams=1, length_penalty=0.25, no_repeat_ngram_size=3\r\n```\r\nYou will get slightly better results with a bigger beam size, but the generation method seems incredibly slow (I wonder why that is?).\r\n\r\nI trained for 2 epochs on the squad1 train set. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,599 | 1,599 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
### How to train models for text generation
Hi Everyone,
I am trying to finetune an Encoder Decoder model on question generation task on SQuAD.
Input data are a concatenation of answer span and context and outputs are the question.
`inputs = tokenizer.encode_plus(example.answer, example.context, add_special_tokens=True, max_length=max_length, truncation='only_second')`
`label = tokenizer.encode_plus(example.question, add_special_tokens=True, max_length=max_length_label, truncation=True)`
`decoder_input_ids, label_ids = data_collator.mask_tokens(torch.tensor(label_ids).unsqueeze(0))`
I add padding to all of these arguments if necessary and pass them to the model which can be:
- an encoder decoder model:
`model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased')`
`inputs = {'input_ids': batch[0], 'attention_mask': batch[1], 'token_type_ids' : batch[2], 'decoder_input_ids': batch[3], 'lm_labels' : batch[5]}`
`outputs = model(**inputs)`
- a BART model :
`model = BartForConditionalGeneration.from_pretrained(model_name)`
`inputs = {'input_ids': batch[0], 'attention_mask' : batch[1], 'decoder_input_ids': batch[2], 'labels' : batch[3] }`
I thought that everything was alright and I started training my two models. As the training progressed, the mlm_probability of the datacollator object increased from 0.20 to 0.40 and then to 1.
The learning rate and the optimizer are as follows: (lr around 3e-5)
`optimizer = AdamW(model.parameters(), lr=learning_rate, eps=adam_epsilon)`
`scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(optimizer, num_warmup_steps=warmup_steps, num_training_steps=t_total, num_cycles=num_cycles)`
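For comparison, a minimal label-preparation sketch without `mask_tokens`; the checkpoint name, separator, and max lengths are illustrative, and the automatic shifting of `labels` into `decoder_input_ids` assumes a recent transformers version:

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Encode answer + context as the encoder input and the question as the target.
inputs = tokenizer("the answer </s> the context paragraph",
                   return_tensors="pt", truncation=True, max_length=512)
labels = tokenizer("what is the question?",
                   return_tensors="pt", truncation=True, max_length=64).input_ids
labels[labels == tokenizer.pad_token_id] = -100  # mask padding out of the loss

outputs = model(input_ids=inputs.input_ids,
                attention_mask=inputs.attention_mask,
                labels=labels)
print(outputs.loss)  # decoder_input_ids are derived by shifting labels right
```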
The eval loss was decreasing all along the 100 epochs for the BERT2BERT, but it didn't look like the questions were improving:
epoch 50:
what country did the french support in libya????? - 2013, 2014??
what country did nasser end to the coup? in 1989, 2007 and 2008 - 2011's
what country did the us state have to use a particular prohibition of fuel in its oil? 2007
epoch 100:
where was the fisafat for? islamic party in libya and al - farabut movement
what did the unfyadi want to end in 1990? - 1991, 2003 and gulf
what country did the oil industry stop its fuel and coaling? in a world, which countries
The observation remains the same for the BART model:
100 steps:
? what was the name of normnormandy in frfrance.
? when did people in the first half of what began to give their
? who were the people that did not to swear fealty oath in
4 epochs:
normnormnaandyanye gave given offered name namesNames to forfor
normnormNormansons gave given granted their own original initial ancestral native
normnormdonaldansons descended originated originate originating from origins origin?ers
My questions are:
**Do you think that something is wrong with my training?
What do you think about the performance?
Do you have any suggestions for the question generation task?
How are the decoder_input_ids supposed to change for next-word-prediction loss?
Should I use next-word-prediction loss or masked-LM loss?
How do I use dropout with a pretrained model?**
Thank you in advance for your help, and I hope that my post will be useful to others; if need be I can share a bigger part of my code :)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5213/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5213/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5212 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5212/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5212/comments | https://api.github.com/repos/huggingface/transformers/issues/5212/events | https://github.com/huggingface/transformers/issues/5212 | 643,696,425 | MDU6SXNzdWU2NDM2OTY0MjU= | 5,212 | BartConfig wrong decoder_start_token_id? | {
"login": "Diego999",
"id": 1092464,
"node_id": "MDQ6VXNlcjEwOTI0NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1092464?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Diego999",
"html_url": "https://github.com/Diego999",
"followers_url": "https://api.github.com/users/Diego999/followers",
"following_url": "https://api.github.com/users/Diego999/following{/other_user}",
"gists_url": "https://api.github.com/users/Diego999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Diego999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Diego999/subscriptions",
"organizations_url": "https://api.github.com/users/Diego999/orgs",
"repos_url": "https://api.github.com/users/Diego999/repos",
"events_url": "https://api.github.com/users/Diego999/events{/privacy}",
"received_events_url": "https://api.github.com/users/Diego999/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks for this issue we should update the documentation here!",
"@patrickvonplaten Thanks for the answer! Therefore what is expected from the model? EOS I guess?",
"Bart normally has the decoder_input_token_id defined in its config so there shoud be no problem",
"Hi, I also wondered about this.\r\n`facebook/bart-base` and `facebook/bart-large-mnli` do not have `decoder_start_token_id` defined in their config file so it defaults to 0 (`bos_token_id`), while all the other BART models have it as 2 (`eos_token_id`). \r\nIs there any reason for it?\r\nIn fairseq's implementation looks like it is always `bos`:\r\nhttps://github.com/pytorch/fairseq/blob/5d7ed6ab4f92d20ad10f8f792b8703e260a938ac/fairseq/models/bart/hub_interface.py#L123",
"`prefix_tokens` in fairseq is not the same as `config.decoder_start_token_id` It is more like `config.force_bos_token_to_be_generated`, if I remember correctly.\r\n\r\nFor the finetuned summarization versions,\r\nI have checked very aggressively and they work much better when `decoder_start_token_id=2`.\r\n\r\n@FomalhautB Do you have any empirical evidence that bart-base/bart-large are different?",
"> `prefix_tokens` in fairseq is not the same as `config.decoder_start_token_id` It is more like `config.force_bos_token_to_be_generated`, if I remember correctly.\r\n> \r\n> For the finetuned summarization versions,\r\n> I have checked very aggressively and they work much better when `decoder_start_token_id=2`.\r\n> \r\n> @FomalhautB Do you have any empirical evidence that bart-base/bart-large are different?\r\n\r\nI was training an autoregressive model based on Bart. It works fine until the day the config changed. After training my model for a few iterations, it only generates `<s></s>` and never changed for the iterations after that. I fixed this by forcing `decoder_start_token_id` to be 0. I didn't write anything about the `decoder_start_token_id` before and I didn't change the way that Bart generates text. I am not sure if this is also the case for the original Bart model.",
"That's super interesting, thanks for reporting this!\r\n\r\nWould you mind seeing if leaving `decoder_start_token_id=2`, but adding `force_bos_token_to_be_generated=True` changes anything?\r\n\r\nI'd also be interested in seeing what a batch of your data looks like during training/what finetuning code you are using if you are willing to share.\r\n\r\n",
"Any updates on this issue? I'm also confused",
"Hey @sshleifer, could you take a second look at this issue?",
"@patrickvonplaten @sshleifer \r\nHello, any updates?\r\nif `labels`'s prefix of `bos` is added automatically by `BartTokenizer`, using `eos` as the first token to start generate seems unreasonable, right? \r\n\r\nBut it seems that it is deliberately designed rather than a bug, why is that?\r\n\r\n"
] | 1,592 | 1,627 | 1,618 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bart
Language I am using the model on (English, Chinese ...): English
## To reproduce
Steps to reproduce the behavior:
```
from transformers import BartConfig, BartTokenizer
config = BartConfig.from_pretrained('facebook/bart-large')
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
config.decoder_start_token_id  # 2
tokenizer.bos_token_id         # 0 != config.decoder_start_token_id
tokenizer.eos_token_id         # 2
```
The documentation of the `generate` function is misleading here:
*decoder_start_token_id=None – (optional) int If an encoder-decoder model starts decoding with a different token than BOS. Defaults to None and is changed to BOS later.*
## Expected behavior
I expect that `decoder_start_token_id = tokenizer.bos_token_id`, but maybe the model is designed to start decoding with the EOS token.
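As an illustration only (the prompt below is arbitrary, and I'm assuming `generate()`'s `decoder_start_token_id` argument overrides the config value), the two start tokens can be compared empirically:
```
from transformers import BartForConditionalGeneration, BartTokenizer

model = BartForConditionalGeneration.from_pretrained('facebook/bart-large')
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
input_ids = tokenizer.encode("UN Chief says there is no plan", return_tensors="pt")

# default: the decoder is seeded with config.decoder_start_token_id (2, i.e. EOS)
out_eos = model.generate(input_ids)

# explicit override: seed the decoder with BOS (0) instead
out_bos = model.generate(input_ids, decoder_start_token_id=tokenizer.bos_token_id)
```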
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5212/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5212/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5211 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5211/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5211/comments | https://api.github.com/repos/huggingface/transformers/issues/5211/events | https://github.com/huggingface/transformers/pull/5211 | 643,668,951 | MDExOlB1bGxSZXF1ZXN0NDM4NDMyMTU3 | 5,211 | Remove wandb warning as it is unnecessary | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5211?src=pr&el=h1) Report\n> Merging [#5211](https://codecov.io/gh/huggingface/transformers/pull/5211?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1ae132a07d7f294cf58cd50f7db8723d00e282de&el=desc) will **increase** coverage by `0.49%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5211?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5211 +/- ##\n==========================================\n+ Coverage 77.49% 77.98% +0.49% \n==========================================\n Files 138 138 \n Lines 23787 23786 -1 \n==========================================\n+ Hits 18433 18550 +117 \n+ Misses 5354 5236 -118 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5211?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5211/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `81.81% <ø> (+3.55%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5211/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5211/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.45% <0.00%> (-0.83%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5211/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5211/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `94.81% <0.00%> (-0.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5211/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.00% <0.00%> (+0.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5211/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.92% <0.00%> (+75.00%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5211?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5211?src=pr&el=footer). Last update [1ae132a...ea3c239](https://codecov.io/gh/huggingface/transformers/pull/5211?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"no strong opinion on that",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Any chance of merging this? :)",
"The issue is that people who installed it for automatic logging won't understand why it's not working",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hey @abhishekkrthakur, how to remove this wandb warning ?",
"@parthplc I guess by having the API key? 🤔 sorry, i dont use wandb. maybe someone else can help."
] | 1,592 | 1,604 | 1,603 | MEMBER | null | Remove wandb warning as it is unnecessary. If wandb is installed, this throws a warning which just makes noise.
Some images might have wandb installed even though the user doesn't want to use it. If the user knows what wandb is, they will have the API key set anyway. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5211/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5211",
"html_url": "https://github.com/huggingface/transformers/pull/5211",
"diff_url": "https://github.com/huggingface/transformers/pull/5211.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5211.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5210 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5210/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5210/comments | https://api.github.com/repos/huggingface/transformers/issues/5210/events | https://github.com/huggingface/transformers/pull/5210 | 643,668,852 | MDExOlB1bGxSZXF1ZXN0NDM4NDMyMDc2 | 5,210 | Increase the default max_length parameter when using TransfoXL & XLnet. | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I don't think it is necessarly needed actually because the `max_length` is overwritten by the model's specific configs in this case. See XLNet config for example: https://s3.amazonaws.com/models.huggingface.co/bert/xlnet-base-cased-config.json. \r\n\r\nI would prefer to not set the `max_length` here, but let the user insert it if needed. After discussion with @thomwolf we decided to not hardcode parameter values in the pipelines.py file, but only set them via the `task_specific_params` in the configs.\r\n\r\nI think the problem is that the task specific params are not overwritten correctly here because the task-name is not correct. \r\nIt is important the the task-name `text-generation` is given to the pipeline, so that this line works as expected:\r\nhttps://github.com/huggingface/transformers/blob/1ae132a07d7f294cf58cd50f7db8723d00e282de/src/transformers/pipelines.py#L400\r\n\r\n\r\n\r\n",
"I tried to explain that here as well:https://github.com/huggingface/transformers/pull/5086#issuecomment-645847015 and think we should actually not pass the task name to the `__init__` function of the pipelines, but change the variable `task` to a static class variable @julien-c @LysandreJik @mfuntowicz @thomwolf ",
"@patrickvonplaten Ok, thanks for the hints, I'll check the point you mentioned 👌 ",
"Just a note that in the inference-api we *do* pass the correct \"text-generation\" task name, so there might be something else going on here.",
"Ok I'll check as well",
"I think I know what's going on - will do a PR that should fix it",
"This line in the inference api should not be called `causal-lm`, but `text-generation` IMO: https://github.com/huggingface/api-inference/blob/9cab899965d164f85c4961f0deafbc5034523e45/shared.py#L47 \r\nThis way the correct parameters would be loaded from the config.\r\n\r\nBut I would prefer to actually make the \"task\" name a static variable as is shown here:\r\nhttps://github.com/huggingface/transformers/pull/5216\r\n\r\n@julien-c @mfuntowicz ",
"Oh yes my bad @patrickvonplaten, this was actually on a branch: https://github.com/huggingface/api-inference/pull/3/files"
] | 1,592 | 1,651 | 1,592 | MEMBER | null | This is needed because the prepended PADDING_TEXT constant is already bigger than the default max_length parameter on the generate method, thus leading to no token being generated.
Signed-off-by: Morgan Funtowicz <[email protected]> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5210/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5210/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5210",
"html_url": "https://github.com/huggingface/transformers/pull/5210",
"diff_url": "https://github.com/huggingface/transformers/pull/5210.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5210.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5209 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5209/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5209/comments | https://api.github.com/repos/huggingface/transformers/issues/5209/events | https://github.com/huggingface/transformers/pull/5209 | 643,647,663 | MDExOlB1bGxSZXF1ZXN0NDM4NDE0NDI0 | 5,209 | [Reformer] Axial Pos Emb Improve mem usage reformer | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5209?src=pr&el=h1) Report\n> Merging [#5209](https://codecov.io/gh/huggingface/transformers/pull/5209?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/355954ffca798bb81d9db8886e30ce10f11e8a40&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5209?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5209 +/- ##\n=======================================\n Coverage 77.28% 77.28% \n=======================================\n Files 133 133 \n Lines 22134 22135 +1 \n=======================================\n+ Hits 17107 17108 +1 \n Misses 5027 5027 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5209?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5209/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `88.21% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5209/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.00% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5209/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5209?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5209?src=pr&el=footer). Last update [355954f...cd443f7](https://codecov.io/gh/huggingface/transformers/pull/5209?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | MEMBER | null | This PR improves memory usage of Axial Position Encodings by cutting position encodings only to the required length before applying contiguous PyTorch operations. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5209/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5209/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5209",
"html_url": "https://github.com/huggingface/transformers/pull/5209",
"diff_url": "https://github.com/huggingface/transformers/pull/5209.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5209.patch",
"merged_at": 1592902159000
} |
https://api.github.com/repos/huggingface/transformers/issues/5208 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5208/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5208/comments | https://api.github.com/repos/huggingface/transformers/issues/5208/events | https://github.com/huggingface/transformers/issues/5208 | 643,604,453 | MDU6SXNzdWU2NDM2MDQ0NTM= | 5,208 | Train RobertaModel from scratch for my dataset | {
"login": "raj5287",
"id": 11444890,
"node_id": "MDQ6VXNlcjExNDQ0ODkw",
"avatar_url": "https://avatars.githubusercontent.com/u/11444890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raj5287",
"html_url": "https://github.com/raj5287",
"followers_url": "https://api.github.com/users/raj5287/followers",
"following_url": "https://api.github.com/users/raj5287/following{/other_user}",
"gists_url": "https://api.github.com/users/raj5287/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raj5287/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raj5287/subscriptions",
"organizations_url": "https://api.github.com/users/raj5287/orgs",
"repos_url": "https://api.github.com/users/raj5287/repos",
"events_url": "https://api.github.com/users/raj5287/events{/privacy}",
"received_events_url": "https://api.github.com/users/raj5287/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Are you using transformers from source or from pip install ?\r\n`labels` is introduced in a recent commit and available on master, if you are on version 2.11.0 from pip install then use `lm_labels`",
"> Are you using transformers from source or from pip install ?\r\n> `labels` is introduced in a recent commit and available on master, if you are on version 2.11.0 from pip install then use `lm_labels`\r\n\r\n@patil-suraj yes I am on version 2.11.0 so where should I use `lm_labels`",
"If you installed using pip then yes, use `lm_labels`\r\n\r\nYou'll need to change `DataCollatorForLanguageModeling`, also as you are using `Roberta` which means you training it for maksed language modelling, so you'll need set `mlm `to `True` ",
"but I am training a RobertaModel not a RobertaMaskedLM model.",
"What is your pre-training objective ? Roberta is pre-trained using masked language modelling objective ",
"i am training it from scratch for my own dataset. I want to use the vectors obtained from last layer for classification task",
"You can train `RobertaForMaskedLM` using the `mlm` objective and then load it in `RoberatForSequenceClassification` for classification .\r\n`RoberatForSequenceClassification` will take care of taking last layer vector and feeding it to a classification layer.",
"yes I have done that now I want to compare it with others RF or GDBTs and extract some features and that's why I want to train RobertModel.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,598 | 1,598 | NONE | null | I am trying to train RobertaModel from scratch. I am following [this](https://huggingface.co/blog/how-to-train) blog, but instead of `model = RobertaForMaskedLM(config=config)` I am starting with `configuration = RobertaConfig(); model = RobertaModel(configuration)` and then continuing with the other steps. But I am getting the error `TypeError: forward() got an unexpected keyword argument 'labels'`. The whole code piece:
```
from transformers import RobertaConfig, RobertaModel

# NOTE: `tokenizer` below is assumed to be the tokenizer trained earlier, as in the blog post
configuration = RobertaConfig()
model = RobertaModel(configuration)
from transformers import LineByLineTextDataset
dataset = LineByLineTextDataset(
tokenizer=tokenizer,
file_path="./train.txt",
block_size=128,
)
from transformers import DataCollatorForLanguageModeling
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=False, mlm_probability=0.15
)
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir="./Model1",
overwrite_output_dir=True,
num_train_epochs=1,
per_gpu_train_batch_size=64,
save_steps=10_000,
save_total_limit=2,
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=dataset,
prediction_loss_only=True,
)
trainer.train()
```
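For contrast, a sketch of the masked-LM setup the blog itself uses (same `tokenizer` assumption as above); `RobertaForMaskedLM` has an LM head whose forward accepts the label argument that the collator produces:
```
from transformers import RobertaConfig, RobertaForMaskedLM, DataCollatorForLanguageModeling

# the LM head adds a label argument to forward(), matching the collator's output
model = RobertaForMaskedLM(RobertaConfig())
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
```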
Is there some other way to do pre-training? Am I missing something here? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5208/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5208/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5207 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5207/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5207/comments | https://api.github.com/repos/huggingface/transformers/issues/5207/events | https://github.com/huggingface/transformers/issues/5207 | 643,587,522 | MDU6SXNzdWU2NDM1ODc1MjI= | 5,207 | How to build Bimodel to search code snippets? [CodeBERTa] | {
"login": "hmdgit",
"id": 59701320,
"node_id": "MDQ6VXNlcjU5NzAxMzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/59701320?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hmdgit",
"html_url": "https://github.com/hmdgit",
"followers_url": "https://api.github.com/users/hmdgit/followers",
"following_url": "https://api.github.com/users/hmdgit/following{/other_user}",
"gists_url": "https://api.github.com/users/hmdgit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hmdgit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hmdgit/subscriptions",
"organizations_url": "https://api.github.com/users/hmdgit/orgs",
"repos_url": "https://api.github.com/users/hmdgit/repos",
"events_url": "https://api.github.com/users/hmdgit/events{/privacy}",
"received_events_url": "https://api.github.com/users/hmdgit/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"CodeBERTa was indeed trained on just the code so you would need to tweak the approach.\r\n\r\nDid you read the paper for CodeSearchNet (https://arxiv.org/abs/1909.09436) by @hamelsmu?",
"Thanks Julien for your response.\r\n\r\nI have taken an overview of the paper and its [code](https://github.com/github/CodeSearchNet), and I will try it.\r\n\r\nBut, can it be possible to solve it by using BERT huggingface library? What kind of tweaks do I need to apply in CodeBERT fine tune [code](https://huggingface.co/huggingface/CodeBERTa-language-id)? \r\n\r\nCan it be solved by finetuning BertForQuestionAnswering [code](https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering)?",
"Maybe CodeBERT([https://arxiv.org/abs/2002.08155](https://arxiv.org/abs/2002.08155)) is suitable for you.",
"This paper is of my high interest....\r\nIs there fine tuning source code for that paper publicly available? or are there any short snippets available which can help in fine-tuning?\r\n",
"You can visit this link ([https://github.com/microsoft/CodeBERT](https://github.com/microsoft/CodeBERT)) . ",
"Thanks for sharing. I will check and let you know about related concerns on the shared github repository",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,599 | 1,599 | NONE | null | Hi,
I would like to build a code search engine model. The main purpose is that when I pass a docstring, it should give me the top-k associated code snippets as results.
I have data in the form of (docstring, code) pairs, which means each docstring is associated with the mentioned code snippet.
I have seen the CodeBERTa fine-tuning [code](https://huggingface.co/huggingface/CodeBERTa-language-id), but it does not use docstrings. Is it possible to use this model?
Can you please give me some entry points to solve this problem using the Hugging Face library?
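One possible entry point, just as a sketch (the model id, mean pooling, and cosine ranking are all my own assumptions, not an official recipe): encode docstrings and code separately with the same pretrained encoder and rank by similarity.
```
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("huggingface/CodeBERTa-small-v1")
model = AutoModel.from_pretrained("huggingface/CodeBERTa-small-v1")

def embed(text):
    # mean-pool the last hidden states as a crude sentence embedding
    inputs = tokenizer.encode_plus(text, return_tensors="pt", max_length=512)
    hidden = model(**inputs)[0]           # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)  # (dim,)

query = embed("open a file and read its lines")
snippets = [
    "def read_lines(p): return open(p).read().splitlines()",
    "def add(a, b): return a + b",
]
scores = torch.stack([torch.cosine_similarity(query, embed(s), dim=0) for s in snippets])
print(snippets[int(scores.argmax())])  # ideally the read_lines snippet
```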
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5207/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5207/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5206 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5206/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5206/comments | https://api.github.com/repos/huggingface/transformers/issues/5206/events | https://github.com/huggingface/transformers/pull/5206 | 643,505,005 | MDExOlB1bGxSZXF1ZXN0NDM4Mjk1ODA0 | 5,206 | [fix] remove unused import | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5206/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5206/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5206",
"html_url": "https://github.com/huggingface/transformers/pull/5206",
"diff_url": "https://github.com/huggingface/transformers/pull/5206.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5206.patch",
"merged_at": 1592883545000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5205 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5205/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5205/comments | https://api.github.com/repos/huggingface/transformers/issues/5205/events | https://github.com/huggingface/transformers/pull/5205 | 643,502,075 | MDExOlB1bGxSZXF1ZXN0NDM4MjkzNDQ3 | 5,205 | [fix] mobilebert had wrong path, causing slow test failure | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | Also deleted redundant slow test. `test_inference_no_head` covers this completely. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5205/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5205/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5205",
"html_url": "https://github.com/huggingface/transformers/pull/5205",
"diff_url": "https://github.com/huggingface/transformers/pull/5205.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5205.patch",
"merged_at": 1592883097000
} |
https://api.github.com/repos/huggingface/transformers/issues/5204 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5204/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5204/comments | https://api.github.com/repos/huggingface/transformers/issues/5204/events | https://github.com/huggingface/transformers/issues/5204 | 643,482,324 | MDU6SXNzdWU2NDM0ODIzMjQ= | 5,204 | T5 Model : What is maximum sequence length that can be used with pretrained T5 (3b model) checkpoint? | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Yes you can, but you should be aware that memory requirements quadruple when doubling the input sequence length for \"normal\" self-attention (as in T5).\r\n\r\nSo you will quickly run out of memory.\r\n\r\nHere a snippet that shows that you can run input ids longer than `config.max_postion_embeddings`.\r\n\r\n```python \r\nimport torch\r\nfrom transformers import T5ForConditionalGeneration\r\n\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"t5-base\")\r\nmodel.config.max_position_embeddings # 512\r\ninput_ids = torch.tensor([600 * [0]]) # shape (1, 600)\r\nmodel(input_ids, decoder_input_ids=input_ids) # => no error\r\n```\r\n\r\nFor more memory efficient models, you should take a look at `Reformer` and `Longformer`",
"I hope we will soon have these models ready for summarization",
"Thanks for the quick help. \r\n\r\nSo basically, the T5 model in hugging face can handled arbitrary sequence length outputs right? \r\nSo the second line (**model.config.max_position_embeddings**) basically shows the default max input seq length right ?\r\n\r\nWhat do you think of the following code (Here I simply modify the tokenizer max_length):\r\n\r\n```\r\nmodel = T5ForConditionalGeneration.from_pretrained('t5-small')\r\n tokenizer = T5Tokenizer.from_pretrained('t5-small')\r\n t5_prepared_Text = \"summarize: \"+some_preprocess_text \r\n tokenized_text = tokenizer.encode(t5_prepared_Text, max_length=1024,return_tensors=\"pt\")\r\n\r\n summary_ids = model.generate(tokenized_text,\r\n num_beams=4,\r\n no_repeat_ngram_size=2,\r\n min_length=30,\r\n max_length=100,\r\n early_stopping=True)\r\n\r\n\r\n```\r\n",
"Hi, I checked two summary outputs of T5, after using 1024 and 512 sequence lengths. I do not see any difference in generated summaries. Any idea for this behavior?",
"> Hi, I checked two summary outputs of T5, after using 1024 and 512 sequence lengths. I do not see any difference in generated summaries. Any idea for this behavior?\r\n\r\nHi I have the same question. Do you happen to figure out why?",
"Hi,\n\nThose days I haven't had much of idea on huggiface models. Since we can add\nany length as the input.. the main parameter should be minimum generation\nlength.\n\nTry to change it.\n",
"> Hi, Those days I haven't had much of idea on huggiface models. Since we can add any length as the input.. the main parameter should be minimum generation length. Try to change it.\r\n\r\nI am still very new to huggiface. I have a pretty long text about 1500 words. The issue I was having is when I set max_length=512 or 1024, they kinda return the same summary. Do you know why?",
"I think it is because minimum length is unchanged. Regardless of the\ninput.. algorthm tries to generate a text until it gets the EOS (end of\nsentence) token. So it is common to get same length summary even if u add\nfew more sentence to the original input.\n\nOn Mon, Feb 15, 2021, 16:40 mars997 <[email protected]> wrote:\n\n> Hi, Those days I haven't had much of idea on huggiface models. Since we\n> can add any length as the input.. the main parameter should be minimum\n> generation length. Try to change it.\n>\n> I am still very new to huggiface. I have a pretty long text about 1500\n> words. The issue I was having is when I set max_length=512 or 1024, they\n> kinda return the same summary. Do you know why?\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/5204#issuecomment-778917211>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGXCWKQKTGQML5LWTPLS7CJSLANCNFSM4OFG7QHA>\n> .\n>\n",
"Hi, do we have to fine-tune the model when changing the ``model.config.max_position_embeddings``?",
"No really, cz T5 uses relative positional embeddings.",
"> I think it is because minimum length is unchanged. Regardless of the input.. algorthm tries to generate a text until it gets the EOS (end of sentence) token. So it is common to get same length summary even if u add few more sentence to the original input.\r\n> […](#)\r\n> On Mon, Feb 15, 2021, 16:40 mars997 ***@***.***> wrote: Hi, Those days I haven't had much of idea on huggiface models. Since we can add any length as the input.. the main parameter should be minimum generation length. Try to change it. I am still very new to huggiface. I have a pretty long text about 1500 words. The issue I was having is when I set max_length=512 or 1024, they kinda return the same summary. Do you know why? — You are receiving this because you authored the thread. Reply to this email directly, view it on GitHub <[#5204 (comment)](https://github.com/huggingface/transformers/issues/5204#issuecomment-778917211)>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/AEA4FGXCWKQKTGQML5LWTPLS7CJSLANCNFSM4OFG7QHA> .\r\n\r\nPersonally, I think there is another reason: \r\n\r\nFirst, if you use the off-the-shelf T5-base model to summarize directly (i.e., no fine-tuning), a longer input would result in the same output as the original input. Because the T5-base model was pre-trained with `max_source_length==512`, those tokens exceeding `512 `may not be attended by the T5Attention layer. \r\n\r\nBut after fine-tuning the T5-base model with a longer `max_source_length`, an input with a longer `max_source_length` perhaps gives you a different output than `512`.",
"What is the maximum sequence length for the T5-large?"
] | 1,592 | 1,693 | 1,592 | CONTRIBUTOR | null | As the paper describes, T5 uses a relative attention mechanism, and the answer to this [issue](https://github.com/google-research/text-to-text-transfer-transformer/issues/273) says that T5 can use any sequence length, where the only constraint is memory.
According to this, can I use T5 to summarize inputs that have more than 512 tokens in a sequence? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5204/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5204/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5203 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5203/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5203/comments | https://api.github.com/repos/huggingface/transformers/issues/5203/events | https://github.com/huggingface/transformers/issues/5203 | 643,471,687 | MDU6SXNzdWU2NDM0NzE2ODc= | 5,203 | Can you release the code for Write For Transformer? | {
"login": "BigSalmon2",
"id": 61605789,
"node_id": "MDQ6VXNlcjYxNjA1Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/61605789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BigSalmon2",
"html_url": "https://github.com/BigSalmon2",
"followers_url": "https://api.github.com/users/BigSalmon2/followers",
"following_url": "https://api.github.com/users/BigSalmon2/following{/other_user}",
"gists_url": "https://api.github.com/users/BigSalmon2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BigSalmon2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BigSalmon2/subscriptions",
"organizations_url": "https://api.github.com/users/BigSalmon2/orgs",
"repos_url": "https://api.github.com/users/BigSalmon2/repos",
"events_url": "https://api.github.com/users/BigSalmon2/events{/privacy}",
"received_events_url": "https://api.github.com/users/BigSalmon2/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,598 | 1,598 | NONE | null | I'd like to use my own model, so is this possible? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5203/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5203/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5202 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5202/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5202/comments | https://api.github.com/repos/huggingface/transformers/issues/5202/events | https://github.com/huggingface/transformers/pull/5202 | 643,450,897 | MDExOlB1bGxSZXF1ZXN0NDM4MjUyMTI1 | 5,202 | examples/seq2seq supports translation | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1841528858,
"node_id": "MDU6TGFiZWwxODQxNTI4ODU4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Summarization",
"name": "Summarization",
"color": "b6f97f",
"default": false,
"description": ""
},
{
"id": 1845609017,
"node_id": "MDU6TGFiZWwxODQ1NjA5MDE3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq",
"name": "seq2seq",
"color": "fef2c0",
"default": false,
"description": ""
},
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
},
{
"id": 2009457320,
"node_id": "MDU6TGFiZWwyMDA5NDU3MzIw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/translation",
"name": "translation",
"color": "b2d2f4",
"default": false,
"description": "machine translation utilities and models"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5202?src=pr&el=h1) Report\n> Merging [#5202](https://codecov.io/gh/huggingface/transformers/pull/5202?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/76e5af4cfd821c0c610b9927a2d2cd58a02f43e4&el=desc) will **increase** coverage by `2.46%`.\n> The diff coverage is `50.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5202?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5202 +/- ##\n==========================================\n+ Coverage 75.49% 77.96% +2.46% \n==========================================\n Files 138 138 \n Lines 23839 23846 +7 \n==========================================\n+ Hits 17998 18592 +594 \n+ Misses 5841 5254 -587 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5202?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `87.50% <50.00%> (-7.63%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.86% <0.00%> (-0.15%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.14% <0.00%> (+0.36%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.57% <0.00%> (+1.42%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.37% <0.00%> (+1.44%)` | :arrow_up: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.70% <0.00%> (+1.72%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `99.14% <0.00%> (+2.57%)` | :arrow_up: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/5202/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5202?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5202?src=pr&el=footer). Last update [76e5af4...c546c3e](https://codecov.io/gh/huggingface/transformers/pull/5202?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Gunna merge this to avoid tweeting broken links. \r\nComments welcome, I will probably need to do more cleanup."
] | 1,592 | 1,593 | 1,593 | CONTRIBUTOR | null | - renames `examples/summarization` -> `examples/seq2seq`
- finetune.py and run_eval.py support mbart, marian and t5.
- task_specific_params are used
- if you specify task='translation', then your metric becomes BLEU instead of ROUGE.
- improved `README.md`
- lots of test coverage
- scripts to reproduce distilbart results
TODO:
- [x] verified distilbart commands replicate posted results.
- [x] new xsum shared task URL.
- [x] mini models for marian
- [x] mbart finetuning unittests.
Postponed and made issues for:
- [ ] check bleu scores for translation models with run_eval.py
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5202/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5202",
"html_url": "https://github.com/huggingface/transformers/pull/5202",
"diff_url": "https://github.com/huggingface/transformers/pull/5202.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5202.patch",
"merged_at": 1593057492000
} |
https://api.github.com/repos/huggingface/transformers/issues/5201 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5201/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5201/comments | https://api.github.com/repos/huggingface/transformers/issues/5201/events | https://github.com/huggingface/transformers/issues/5201 | 643,424,957 | MDU6SXNzdWU2NDM0MjQ5NTc= | 5,201 | Linformer | {
"login": "Laksh1997",
"id": 59830552,
"node_id": "MDQ6VXNlcjU5ODMwNTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/59830552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Laksh1997",
"html_url": "https://github.com/Laksh1997",
"followers_url": "https://api.github.com/users/Laksh1997/followers",
"following_url": "https://api.github.com/users/Laksh1997/following{/other_user}",
"gists_url": "https://api.github.com/users/Laksh1997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Laksh1997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Laksh1997/subscriptions",
"organizations_url": "https://api.github.com/users/Laksh1997/orgs",
"repos_url": "https://api.github.com/users/Laksh1997/repos",
"events_url": "https://api.github.com/users/Laksh1997/events{/privacy}",
"received_events_url": "https://api.github.com/users/Laksh1997/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Duplicate #4967"
] | 1,592 | 1,592 | 1,592 | NONE | null | # 🌟 New model addition
## Model description
https://arxiv.org/pdf/2006.04768.pdf
This model is very simple: it projects the key tensor into a lower-dimensional space (e.g. k=128) along the sequence axis, then computes attention (seq_len x k), applies softmax, and performs a matmul with the value tensor (note: the value tensor must also be projected to dimension k along the length dimension).
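For illustration, a single-head sketch of the trick just described (shapes, scaling, and the projection dimension are my assumptions; the paper additionally shares the projections across heads/layers):
```
import torch
import torch.nn.functional as F

def linformer_attention(q, k, v, E, Fv):
    # q, k, v: (batch, seq_len, dim); E, Fv: (k_proj, seq_len) learned projections
    k = E @ k                                             # (batch, k_proj, dim)
    v = Fv @ v                                            # (batch, k_proj, dim)
    scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5  # (batch, seq_len, k_proj)
    return F.softmax(scores, dim=-1) @ v                  # (batch, seq_len, dim)

b, n, d, kp = 2, 512, 64, 128
q, k, v = (torch.randn(b, n, d) for _ in range(3))
E, Fv = torch.randn(kp, n), torch.randn(kp, n)
print(linformer_attention(q, k, v, E, Fv).shape)  # torch.Size([2, 512, 64])
```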
Could this be added?
## Open source status
PyTorch sketch implementation: https://github.com/tatp22/linformer-pytorch
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5201/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5200 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5200/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5200/comments | https://api.github.com/repos/huggingface/transformers/issues/5200/events | https://github.com/huggingface/transformers/issues/5200 | 643,422,770 | MDU6SXNzdWU2NDM0MjI3NzA= | 5,200 | tokenizer.convert_ids_to_tokens(tokenizer.convert_tokens_to_ids(x)) returning a different result | {
"login": "LeonieWeissweiler",
"id": 30300891,
"node_id": "MDQ6VXNlcjMwMzAwODkx",
"avatar_url": "https://avatars.githubusercontent.com/u/30300891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LeonieWeissweiler",
"html_url": "https://github.com/LeonieWeissweiler",
"followers_url": "https://api.github.com/users/LeonieWeissweiler/followers",
"following_url": "https://api.github.com/users/LeonieWeissweiler/following{/other_user}",
"gists_url": "https://api.github.com/users/LeonieWeissweiler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LeonieWeissweiler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeonieWeissweiler/subscriptions",
"organizations_url": "https://api.github.com/users/LeonieWeissweiler/orgs",
"repos_url": "https://api.github.com/users/LeonieWeissweiler/repos",
"events_url": "https://api.github.com/users/LeonieWeissweiler/events{/privacy}",
"received_events_url": "https://api.github.com/users/LeonieWeissweiler/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,592 | 1,592 | 1,592 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
I have a trained Esperanto tokenizer; the training code was taken from the how_to_train example. I'm using RobertaTokenizer, as RobertaTokenizerFast didn't work with trainer.py, at least as of the last time I checked.
```python
from transformers import RobertaTokenizer
alt_tokenizer = RobertaTokenizer.from_pretrained("./drive/My Drive/EsperBERTo", max_len=512)
tokens = alt_tokenizer.tokenize('Neniu atendas la Hispanan Inkvizicion.')
ids = alt_tokenizer.convert_tokens_to_ids(tokens)
alt_tokenizer.convert_ids_to_tokens(ids)
```
The output is: ['Neniu', 'Ġatendas', 'Ġla', 'ĠHispan', 'an', 'ĠIn', 'kvizi', 'cion', '.']
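For context, the Ġ prefix is byte-level BPE's marker for a preceding space, so these are the raw subword tokens rather than corrupted text; a quick sanity check one could run (a sketch, assuming the same tokenizer object as above):
```python
tokens = alt_tokenizer.tokenize('Neniu atendas la Hispanan Inkvizicion.')
ids = alt_tokenizer.convert_tokens_to_ids(tokens)
# convert_ids_to_tokens returns raw BPE pieces (with Ġ space markers);
# convert_tokens_to_string re-applies the byte-level decoder
print(alt_tokenizer.convert_tokens_to_string(alt_tokenizer.convert_ids_to_tokens(ids)))
```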
## Expected behavior
The same output as the input.
## Environment info
tokenizers 0.8.0rc1
transformers 2.11.0
- `transformers` version: 2.11.0
- tokenizers version: 0.8.0rc1
- Platform: colab
- Python version: 3.6
- Using GPU in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5200/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5199 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5199/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5199/comments | https://api.github.com/repos/huggingface/transformers/issues/5199/events | https://github.com/huggingface/transformers/pull/5199 | 643,397,840 | MDExOlB1bGxSZXF1ZXN0NDM4MjA4MTM3 | 5,199 | Update README.md | {
"login": "aodiniz",
"id": 6626805,
"node_id": "MDQ6VXNlcjY2MjY4MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6626805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aodiniz",
"html_url": "https://github.com/aodiniz",
"followers_url": "https://api.github.com/users/aodiniz/followers",
"following_url": "https://api.github.com/users/aodiniz/following{/other_user}",
"gists_url": "https://api.github.com/users/aodiniz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aodiniz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aodiniz/subscriptions",
"organizations_url": "https://api.github.com/users/aodiniz/orgs",
"repos_url": "https://api.github.com/users/aodiniz/repos",
"events_url": "https://api.github.com/users/aodiniz/events{/privacy}",
"received_events_url": "https://api.github.com/users/aodiniz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5199?src=pr&el=h1) Report\n> Merging [#5199](https://codecov.io/gh/huggingface/transformers/pull/5199?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34fb91d541bdb235a6c9fa96ecf11d51426ac84&el=desc) will **decrease** coverage by `0.89%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5199?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5199 +/- ##\n==========================================\n- Coverage 77.99% 77.10% -0.90% \n==========================================\n Files 138 138 \n Lines 23786 23786 \n==========================================\n- Hits 18553 18340 -213 \n- Misses 5233 5446 +213 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5199?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5199/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `19.92% <0.00%> (-75.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5199/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5199/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.71% <0.00%> (-0.30%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5199?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5199?src=pr&el=footer). Last update [a34fb91...e374d82](https://codecov.io/gh/huggingface/transformers/pull/5199?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | Fix/add information in README.md | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5199/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5199",
"html_url": "https://github.com/huggingface/transformers/pull/5199",
"diff_url": "https://github.com/huggingface/transformers/pull/5199.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5199.patch",
"merged_at": 1592988167000
} |
https://api.github.com/repos/huggingface/transformers/issues/5198 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5198/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5198/comments | https://api.github.com/repos/huggingface/transformers/issues/5198/events | https://github.com/huggingface/transformers/pull/5198 | 643,384,527 | MDExOlB1bGxSZXF1ZXN0NDM4MTk3MDg0 | 5,198 | [Reformer classification head] Implement the reformer model classification head for text classification | {
"login": "as-stevens",
"id": 61624036,
"node_id": "MDQ6VXNlcjYxNjI0MDM2",
"avatar_url": "https://avatars.githubusercontent.com/u/61624036?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/as-stevens",
"html_url": "https://github.com/as-stevens",
"followers_url": "https://api.github.com/users/as-stevens/followers",
"following_url": "https://api.github.com/users/as-stevens/following{/other_user}",
"gists_url": "https://api.github.com/users/as-stevens/gists{/gist_id}",
"starred_url": "https://api.github.com/users/as-stevens/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/as-stevens/subscriptions",
"organizations_url": "https://api.github.com/users/as-stevens/orgs",
"repos_url": "https://api.github.com/users/as-stevens/repos",
"events_url": "https://api.github.com/users/as-stevens/events{/privacy}",
"received_events_url": "https://api.github.com/users/as-stevens/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Looks cool :-) Will take a look soon!",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5198?src=pr&el=h1) Report\n> Merging [#5198](https://codecov.io/gh/huggingface/transformers/pull/5198?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0267668c3d648c6e41afda97f5df8671ee880ac3&el=desc) will **increase** coverage by `0.35%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5198?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5198 +/- ##\n==========================================\n+ Coverage 77.01% 77.36% +0.35% \n==========================================\n Files 128 146 +18 \n Lines 21615 25991 +4376 \n==========================================\n+ Hits 16646 20109 +3463 \n- Misses 4969 5882 +913 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5198?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <ø> (+0.11%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/5198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `74.01% <ø> (+5.16%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3MucHk=) | `86.04% <ø> (+0.68%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark\\_args\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdGYucHk=) | `87.50% <ø> (ø)` | |\n| [src/transformers/benchmark/benchmark\\_args\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdXRpbHMucHk=) | `89.13% <ø> (-7.75%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3RmLnB5) | `61.53% <ø> (ø)` | |\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.50% <ø> (-3.60%)` | :arrow_down: |\n| [src/transformers/configuration\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.18% <ø> (+0.32%)` | :arrow_up: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.75% <ø> (+0.41%)` | :arrow_up: |\n| ... and [113 more](https://codecov.io/gh/huggingface/transformers/pull/5198/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5198?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5198?src=pr&el=footer). Last update [223084e...d2f3839](https://codecov.io/gh/huggingface/transformers/pull/5198?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I left a couple of comments that colud help you fix the PR :-) Also it would be great to add tests here",
"> I left a couple of comments that colud help you fix the PR :-) Also it would be great to add tests here\r\n\r\nCould you please point to some reference code base, that I can look at. This will give me a good idea of the testing standard in the application.",
"Sure, here we go: https://github.com/huggingface/transformers/blob/1ae132a07d7f294cf58cd50f7db8723d00e282de/tests/test_modeling_bert.py#L361\r\n\r\nand \r\n\r\nhttps://github.com/huggingface/transformers/blob/1ae132a07d7f294cf58cd50f7db8723d00e282de/tests/test_modeling_bert.py#L520\r\n\r\nAll this class should be added to `all_model_classes` in the Reformer test.",
"@patrickvonplaten I tried to fix the test case for the reformer classification head. But it is failing in one of the common test cases class, \r\ntest_modelling_common.py-> test_hidden_states_output\r\n\r\nwhere it tries to run the hidden states test for the Reformer Models, it fails only for classification head.\r\nSince this is a common workflow, I am wondering where am I going wrong. Also, the newly added QA head on the reformer is not failing which is quite surprising. It looks like it bypasses the QA head and skips the test.\r\n\r\nPlease let me know your thoughts/suggestions.\r\nThanks\r\nAmit",
"I think we are very close to merging this PR :-) After the minor changes as described above, I think we are good to merge",
"@patrickvonplaten Implemented the changes as suggested :)",
"Waiting for another approval - LGTM",
"Awesome PR @as-stevens, do you have a new example notebook? If not I take it that it's possible to just import the class you created in your notebook and have added to the codebase.",
"@jstremme the link to the notebook, where I tried to fine-tune the model for IMDB text classification. Since there are not many pre-trained models for Reformer, I used the Crime and Punishment for classification. \r\n[https://colab.research.google.com/drive/1l61NccWTGMfNFPj1T8kvMjnik2iEQWfe](url)\r\n\r\nThe model did not perform well, as it is not a bi-directional trained model. The link [https://github.com/huggingface/transformers/issues/5023](url) contains more information.\r\n\r\nThanks\r\nAmit",
"Thanks very much @as-stevens. Great work, and I'm hopeful I'll be able to pretrain and use my own bi-directional Reformer for text classification by following a similar approach. Happy to share high-level details with the transformers community as I go.\r\n\r\nUnfortunately, the links you shared don't work for me. Can you try updating?"
] | 1,592 | 1,597 | 1,594 | CONTRIBUTOR | null | This PR is an effort to implement the classification head on top of the plain vanilla reformer model.
References have been taken from the RoBERTa and XLNet classification heads to implement the Reformer classification head; a rough sketch of the adapted head is shown below.
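For context, a minimal sketch of what such a head could look like (illustrative only — the `2 * config.hidden_size` input dimension assumes the Reformer's concatenated reversible-layer output, and the attribute names are assumptions, not the final code in this PR):
```python
import torch
from torch import nn

class ReformerClassificationHead(nn.Module):
    """Pools the first token's hidden state and projects it to the label space."""

    def __init__(self, config):
        super().__init__()
        # Reformer outputs 2 * hidden_size per position (reversible residual streams)
        self.dense = nn.Linear(2 * config.hidden_size, config.hidden_size)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.out_proj = nn.Linear(config.hidden_size, config.num_labels)

    def forward(self, hidden_states):  # (batch, seq_len, 2 * hidden_size)
        x = hidden_states[:, 0, :]     # take the first-token representation
        x = self.dropout(x)
        x = torch.tanh(self.dense(x))
        x = self.dropout(x)
        return self.out_proj(x)
```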
Tried testing the implementation changes, but it is failing with an error; link to the Google Colab: https://colab.research.google.com/drive/1KFsQxLqsMB6vBF4_bRmTFGhdGwkgx0zI?usp=sharing
Test scenario details:
- IMDB movie review dataset.
- The cell3 has the reformer model classification head code.
- Since the length of the reviews is not more than 512 words (assuming there would be a few outliers, but those could be skipped)
- Initializing the model with config.axial_pos_shape set to (16, 32), but the setting does not take effect and throws a runtime error.
Please help.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5198/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5198",
"html_url": "https://github.com/huggingface/transformers/pull/5198",
"diff_url": "https://github.com/huggingface/transformers/pull/5198.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5198.patch",
"merged_at": 1594710983000
} |
https://api.github.com/repos/huggingface/transformers/issues/5197 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5197/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5197/comments | https://api.github.com/repos/huggingface/transformers/issues/5197/events | https://github.com/huggingface/transformers/issues/5197 | 643,383,196 | MDU6SXNzdWU2NDMzODMxOTY= | 5,197 | batch_encode_plus() causes OOM, while encode_plus does not | {
"login": "yuhongqian",
"id": 26653166,
"node_id": "MDQ6VXNlcjI2NjUzMTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/26653166?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuhongqian",
"html_url": "https://github.com/yuhongqian",
"followers_url": "https://api.github.com/users/yuhongqian/followers",
"following_url": "https://api.github.com/users/yuhongqian/following{/other_user}",
"gists_url": "https://api.github.com/users/yuhongqian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuhongqian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuhongqian/subscriptions",
"organizations_url": "https://api.github.com/users/yuhongqian/orgs",
"repos_url": "https://api.github.com/users/yuhongqian/repos",
"events_url": "https://api.github.com/users/yuhongqian/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuhongqian/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I'm also running out of memory using `BertTokenizerFast.batch_encode_plus()`. I'm using the `BertTokenizer.batch_encode_plus()` now and it seems to be working but its very slow and single-threaded! I have 220 GB RAM and the dataset is under 2 GB 😞 .",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Any new developments?",
"any solutions to this? facing the same issue"
] | 1,592 | 1,671 | 1,604 | NONE | null | # ❓ Questions & Help
## Details
I am running a sequence classification task using `DistilBertForSequenceClassification`. I follow `examples/text_classification/run_glue.py` and `src/transformers/data/processors/glue.py` to implement my data loading process. My dataset is rather large (~2.5 GB with 7M+ examples) compared to those of the GLUE tasks.
In the current `glue.py`, `_glue_convert_examples_to_features()` reads all the examples into a list and then calls `batch_encode_plus()` on that list. On my large dataset, this implementation caused an out-of-memory (OOM) error. Therefore, I switched to `encode_plus()` and called it on each example individually while looping through the dataset. `encode_plus()` did not cause OOM.
I wonder if there is something wrong with `batch_encode_plus()` so that it cannot handle all the examples in a dataset at once? If that is the case, it might be a good idea to add a corresponding note to the documentation.
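For reference, a minimal sketch of the per-example fallback described above (the model name, toy inputs, and the 128 max length are assumptions, not the exact script):
```python
from transformers import DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
pairs = [("a question", "a context"), ("another question", "another context")]  # toy stand-in

features = []
for text_a, text_b in pairs:  # iterate instead of materializing one giant batch
    features.append(
        tokenizer.encode_plus(
            text_a,
            text_b,
            max_length=128,
            pad_to_max_length=True,  # padding/truncation API as of transformers 2.x
        )
    )
```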
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5197/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5197/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5196 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5196/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5196/comments | https://api.github.com/repos/huggingface/transformers/issues/5196/events | https://github.com/huggingface/transformers/pull/5196 | 643,374,148 | MDExOlB1bGxSZXF1ZXN0NDM4MTg4NTE4 | 5,196 | [HANS] Fix label_list for RoBERTa/BART (class flipping) | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5196?src=pr&el=h1) Report\n> Merging [#5196](https://codecov.io/gh/huggingface/transformers/pull/5196?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c439752482759c94784e11a87dcbf08ce69dccf3&el=desc) will **decrease** coverage by `0.08%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5196?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5196 +/- ##\n==========================================\n- Coverage 78.07% 77.99% -0.09% \n==========================================\n Files 138 138 \n Lines 23786 23786 \n==========================================\n- Hits 18572 18552 -20 \n- Misses 5214 5234 +20 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5196?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-5.42%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `78.61% <0.00%> (-0.20%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.86% <0.00%> (-0.15%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5196?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5196?src=pr&el=footer). Last update [c439752...9f1bf7e](https://codecov.io/gh/huggingface/transformers/pull/5196?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"LGTM, though if we could do it again we would just flip the final layer weights when converting models... (discussed recently with @LysandreJik and @thomwolf)"
] | 1,592 | 1,593 | 1,593 | MEMBER | null | The MNLI checkpoints ported from fairseq (more specifically RoBERTa and BART) have flipped classes. @sshleifer fixed it for `run_glue.py` last week (see https://github.com/huggingface/transformers/pull/5141); this PR fixes the same problem for `run_hans.py`.
Now the HANS evaluation for [bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) gives:
```
Heuristic entailed results:
lexical_overlap: 0.9932
subsequence: 0.9996
constituent: 0.9982
Heuristic non-entailed results:
lexical_overlap: 0.8002
subsequence: 0.263
constituent: 0.2074
```
For [roberta-large-mnli](https://huggingface.co/roberta-large-mnli):
```
Heuristic entailed results:
lexical_overlap: 1.0
subsequence: 1.0
constituent: 1.0
Heuristic non-entailed results:
lexical_overlap: 0.8746
subsequence: 0.2804
constituent: 0.1056
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5196/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5196",
"html_url": "https://github.com/huggingface/transformers/pull/5196",
"diff_url": "https://github.com/huggingface/transformers/pull/5196.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5196.patch",
"merged_at": 1593023895000
} |
https://api.github.com/repos/huggingface/transformers/issues/5195 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5195/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5195/comments | https://api.github.com/repos/huggingface/transformers/issues/5195/events | https://github.com/huggingface/transformers/pull/5195 | 643,372,615 | MDExOlB1bGxSZXF1ZXN0NDM4MTg3MjA0 | 5,195 | Add link to new comunity notebook (optimization) | {
"login": "pommedeterresautee",
"id": 1029874,
"node_id": "MDQ6VXNlcjEwMjk4NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1029874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pommedeterresautee",
"html_url": "https://github.com/pommedeterresautee",
"followers_url": "https://api.github.com/users/pommedeterresautee/followers",
"following_url": "https://api.github.com/users/pommedeterresautee/following{/other_user}",
"gists_url": "https://api.github.com/users/pommedeterresautee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pommedeterresautee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pommedeterresautee/subscriptions",
"organizations_url": "https://api.github.com/users/pommedeterresautee/orgs",
"repos_url": "https://api.github.com/users/pommedeterresautee/repos",
"events_url": "https://api.github.com/users/pommedeterresautee/events{/privacy}",
"received_events_url": "https://api.github.com/users/pommedeterresautee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5195?src=pr&el=h1) Report\n> Merging [#5195](https://codecov.io/gh/huggingface/transformers/pull/5195?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1c5cd8e5f59154905c5ae0f47a8c8905618a12ff&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5195?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5195 +/- ##\n==========================================\n+ Coverage 77.98% 78.00% +0.01% \n==========================================\n Files 138 138 \n Lines 23786 23786 \n==========================================\n+ Hits 18550 18554 +4 \n+ Misses 5236 5232 -4 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5195?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5195/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.18% <0.00%> (-0.13%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5195/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.00% <0.00%> (ø)` | |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5195/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.57% <0.00%> (+1.18%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5195?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5195?src=pr&el=footer). Last update [1c5cd8e...438d62f](https://codecov.io/gh/huggingface/transformers/pull/5195?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"That's really a great notebook and fits very well with what we are currently working on I think @mfuntowicz "
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | related to https://github.com/huggingface/transformers/issues/4842#event-3469184635
This notebook is about benchmarking model training with/without dynamic padding optimization.
https://github.com/ELS-RD/transformers-notebook
Using dynamic padding on MNLI provides a **4.7 times training time reduction**, with the max pad length set to 512. The effect is strong because few examples in this dataset are >> 400 tokens. In real life it will depend on the dataset, but it always brings an improvement and, after more than 20 experiments listed in this [article](https://towardsdatascience.com/divide-hugging-face-transformers-training-time-by-2-or-more-21bf7129db9q-21bf7129db9e?source=friends_link&sk=10a45a0ace94b3255643d81b6475f409), it seems to not hurt performance. A minimal sketch of the idea follows below.
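For readers unfamiliar with the trick, a minimal sketch of a dynamic-padding collate function (illustrative only — the default pad id of 0 is an assumption; the linked notebook contains the real benchmark code):
```python
import torch

def collate_dynamic_padding(batch, pad_token_id=0):
    # pad each batch only to its own longest sequence, not to a global max length
    max_len = max(len(ex["input_ids"]) for ex in batch)
    input_ids, attention_mask = [], []
    for ex in batch:
        n_pad = max_len - len(ex["input_ids"])
        input_ids.append(ex["input_ids"] + [pad_token_id] * n_pad)
        attention_mask.append([1] * len(ex["input_ids"]) + [0] * n_pad)
    return {
        "input_ids": torch.tensor(input_ids),
        "attention_mask": torch.tensor(attention_mask),
    }

# e.g. DataLoader(dataset, batch_size=32, collate_fn=collate_dynamic_padding)
```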
Following advice from @patrickvonplaten I do the PR myself :-) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5195/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5195",
"html_url": "https://github.com/huggingface/transformers/pull/5195",
"diff_url": "https://github.com/huggingface/transformers/pull/5195.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5195.patch",
"merged_at": 1592862454000
} |
https://api.github.com/repos/huggingface/transformers/issues/5194 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5194/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5194/comments | https://api.github.com/repos/huggingface/transformers/issues/5194/events | https://github.com/huggingface/transformers/pull/5194 | 643,348,055 | MDExOlB1bGxSZXF1ZXN0NDM4MTY2OTY0 | 5,194 | [Use cache] Align logic of `use_cache` with output_attentions and output_hidden_states | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5194?src=pr&el=h1) Report\n> Merging [#5194](https://codecov.io/gh/huggingface/transformers/pull/5194?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e9ef21175eed7121a3f785708f2264c61215cdc3&el=desc) will **increase** coverage by `0.05%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5194?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5194 +/- ##\n==========================================\n+ Coverage 77.95% 78.01% +0.05% \n==========================================\n Files 138 138 \n Lines 23772 23798 +26 \n==========================================\n+ Hits 18531 18565 +34 \n+ Misses 5241 5233 -8 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5194?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.29% <100.00%> (+0.05%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `99.14% <100.00%> (+<0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `85.80% <100.00%> (+0.36%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `84.32% <100.00%> (+0.15%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `97.66% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.09% <100.00%> (+0.36%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `91.41% <100.00%> (+0.10%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: |\n| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/5194/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5194?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5194?src=pr&el=footer). Last update [e9ef211...d02506e](https://codecov.io/gh/huggingface/transformers/pull/5194?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I'm confused what you mean by backwards compatibility. As long as the tests pass this looks fine to me.\r\n\r\nThe reason we need to be able to set `use_cache` outside of config is that in the validation_step of the `SummarizationModule` we want to call generate, and in the training_step we want to call `forward(use_cache=False)`.\r\n\r\n\r\n",
"Please check bart slow tests :)",
"> Please check bart slow tests :)\r\n\r\nOn CPU all slow tests pass. On GPU via `USE_CUDA=1` the test: `tests/test_modeling_bart.py::MBartIntegrationTests::test_enro_forward` fails, but it also uses half-precision on GPU and expects the same numbers as when not using half-precision. Do you think this is due to this PR @sshleifer ?",
"Sounds completely unrelated, unless any of them are super slow. I think the only risk is that you accidentally disable the cache.",
"> Great! Very cool to have added some tests as well. No tests were added for CTRL?\r\n\r\nI think I was to lazy back when I added the tests for GPT2 and T5 -> will add tests in a separate PR for CTRL"
] | 1,592 | 1,593 | 1,593 | MEMBER | null | This PR cleans up the usage of `use_cache` the same way it is done for `output_hidden_states` and `output_attentions`.
The logic, as for all non-tensor function arguments, is the following (a sketch is shown right after this list):
1) if the function argument `use_cache` is specified, use it;
2) if not, fall back to `config.use_cache`.
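A minimal sketch of that resolution order (the same pattern already used for `output_attentions`; illustrative, not the exact diff):
```python
# illustrative helper mirroring the two rules above
def resolve_use_cache(use_cache, config):
    # 1) an explicit function argument wins; 2) otherwise fall back to the config
    return use_cache if use_cache is not None else config.use_cache
```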
**Break in Backward Compatibility**
There is a small break in backward compatibility, in that Bart now uses "use_cache=True" as a default value *if* both `input_ids` and `decoder_input_ids` are passed as an argument. Bart always used `use_cache=True` as a default for generation, so this should not concern many people IMO. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5194/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5194",
"html_url": "https://github.com/huggingface/transformers/pull/5194",
"diff_url": "https://github.com/huggingface/transformers/pull/5194.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5194.patch",
"merged_at": 1593007758000
} |
https://api.github.com/repos/huggingface/transformers/issues/5193 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5193/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5193/comments | https://api.github.com/repos/huggingface/transformers/issues/5193/events | https://github.com/huggingface/transformers/pull/5193 | 643,341,896 | MDExOlB1bGxSZXF1ZXN0NDM4MTYxNzIz | 5,193 | Switch master/stable doc and add older releases | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,592 | 1,592 | 1,592 | COLLABORATOR | null | Switch the doc base url to the latest stable release and add master docs as well as some intermediate docs that were missing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5193/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5193",
"html_url": "https://github.com/huggingface/transformers/pull/5193",
"diff_url": "https://github.com/huggingface/transformers/pull/5193.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5193.patch",
"merged_at": 1592858333000
} |
https://api.github.com/repos/huggingface/transformers/issues/5192 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5192/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5192/comments | https://api.github.com/repos/huggingface/transformers/issues/5192/events | https://github.com/huggingface/transformers/issues/5192 | 643,328,928 | MDU6SXNzdWU2NDMzMjg5Mjg= | 5,192 | Using segments ids in encoder-decoder model in generate function | {
"login": "mmsamiei",
"id": 12582703,
"node_id": "MDQ6VXNlcjEyNTgyNzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/12582703?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmsamiei",
"html_url": "https://github.com/mmsamiei",
"followers_url": "https://api.github.com/users/mmsamiei/followers",
"following_url": "https://api.github.com/users/mmsamiei/following{/other_user}",
"gists_url": "https://api.github.com/users/mmsamiei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmsamiei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmsamiei/subscriptions",
"organizations_url": "https://api.github.com/users/mmsamiei/orgs",
"repos_url": "https://api.github.com/users/mmsamiei/repos",
"events_url": "https://api.github.com/users/mmsamiei/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmsamiei/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"`segments_tensors` are currently not implemented for `generate`. Why do you want to pass them to `generate`?",
"I'm working on a conditional dialogue task (knowledge grounded dialogue) in which I give the conversation history and fact sentence as inputs to the model. I want to show the model that fact sentence is different from conversation history, so I decided to use segment embedding to do this and learn different segment embeddings for history and fact sentence. in training I have no problem and pass token_type_ids as kwargs, but in inferencing phase, I have this mentioned problem. ",
"To use `token_type_ids`, you would actually need to change the line in the generate function where the encoder is called to something like this:\r\n`encoder_outputs: tuple = encoder(input_ids, attention_mask=attention_mask, token_type_ids=model_specific_kwargs['token_type_ids'])`\r\n\r\nor maybe better:\r\n`encoder_outputs: tuple = encoder(input_ids, attention_mask=attention_mask, token_type_ids=model_specific_kwargs.pop('token_type_ids', None))`\r\nIn case no `token_type_ids` are passed.\r\n\r\n\r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L385\r\n\r\n",
"> To use `token_type_ids`, you would actually need to change the line in the generate function where the encoder is called to something like this:\r\n> `encoder_outputs: tuple = encoder(input_ids, attention_mask=attention_mask, token_type_ids=model_specific_kwargs['token_type_ids'])`\r\n> \r\n> or maybe better:\r\n> `encoder_outputs: tuple = encoder(input_ids, attention_mask=attention_mask, token_type_ids=model_specific_kwargs.pop('token_type_ids', None))`\r\n> In case no `token_type_ids` are passed.\r\n> \r\n> https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L385\r\n\r\nThank you very much for your guidance! 🙏"
] | 1,592 | 1,594 | 1,594 | NONE | null | # 🐛 Bug
## Information
I have implemented an encoder-decoder model where both the encoder and the decoder are BERT. In the encoder module I'm using segment ids (token_type_ids), and passing them to the model this way works correctly:
```
kwargs = {'token_type_ids':segments_tensors}
outputs = model(input_ids=encoder_input, decoder_input_ids=decoder_input, **kwargs)[0]
```
but in the generate function I use the code below to pass the segment ids:
```
kwargs = {'token_type_ids': segments_tensors}
generated = model.generate(encoder_input, decoder_start_token_id=101,
                           num_return_sequences=16, num_beams=32, **kwargs)
```
I thought this should work, but commenting or uncommenting kwargs shows no change in the output. Is there a correct approach to pass encoder-module arguments such as token_type_ids through the generate function, or not? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5192/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5191 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5191/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5191/comments | https://api.github.com/repos/huggingface/transformers/issues/5191/events | https://github.com/huggingface/transformers/issues/5191 | 643,317,053 | MDU6SXNzdWU2NDMzMTcwNTM= | 5,191 | [Benchmark] Jetson Nano DistillBERT SQuAD benchmark | {
"login": "arijitx",
"id": 6756124,
"node_id": "MDQ6VXNlcjY3NTYxMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6756124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arijitx",
"html_url": "https://github.com/arijitx",
"followers_url": "https://api.github.com/users/arijitx/followers",
"following_url": "https://api.github.com/users/arijitx/following{/other_user}",
"gists_url": "https://api.github.com/users/arijitx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arijitx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arijitx/subscriptions",
"organizations_url": "https://api.github.com/users/arijitx/orgs",
"repos_url": "https://api.github.com/users/arijitx/repos",
"events_url": "https://api.github.com/users/arijitx/events{/privacy}",
"received_events_url": "https://api.github.com/users/arijitx/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"For tensorrt you need to manage dynamic axis with a script in the nuphar folder of onnx runtime repo (`symbolic_shape_infer.py`).\r\nIt gives best result for small batch (like single example).\r\nIt requires some customization in parameters because default one are not that good (check the Nvidia doc for the workspace size, etc.)\r\nOf course, it requires a specific compilation but I suppose you did it.\r\nFor CPU you may want to install the onnx runtime without gpu support so you get openmp.",
"Ya I did `symbolic_shape_infer.py` but still for some reason, the model is not running with tensorrtProvider, it starts consuming memory and then crashes after the memory is loaded, I did try with workspace_size , iteration as mentioned in the [Onnx TensorRT Provider doc](https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/TensorRT-ExecutionProvider.md) anything `ORT_TENSORRT_MAX_PARTITION_ITERATIONS > 1 ` crashes. \r\n\r\nStill Jetpack 4.4 is not officially supported by onnx, and I couldn't find a way to export pytorch models with LongTensors directly to TensorRT engines :( \r\n\r\nAm able to run successfully in the CUDAProvider, CPU can be optimized further with a openmp specific build.",
"My own XP with onnx has been painful too :-) \r\nMay be you will be interested in this article https://medium.com/@fanzongshaoxing/accelerate-pytorch-model-with-tensorrt-via-onnx-d5b5164b369 ?\r\nIt s about tensorrt on Python without onnx runtime (didn't try yet but was curious)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,598 | 1,598 | CONTRIBUTOR | null | # 🖥 Benchmarking `transformers`
## Benchmark
DistilBERT SQuAD
## Set-up
- Jetson Nano
- Jetpack 4.4
- onnxruntime 1.13.1
Benchmark script https://gist.github.com/arijitx/1400d3d4e07fc517d6c5bfea506c2353
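For context, execution providers in ONNX Runtime can be switched per session; a minimal smoke-test sketch (the model path, input names, and shapes are assumptions — see the gist above for the actual benchmark script):
```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("distilbert-squad.onnx")  # hypothetical exported model
session.set_providers(["CUDAExecutionProvider"])  # or CPUExecutionProvider / TensorrtExecutionProvider

dummy = np.ones((1, 384), dtype=np.int64)  # toy batch just to exercise the session
outputs = session.run(None, {"input_ids": dummy, "attention_mask": dummy})
```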
## Results
Throughput (tokens/sec):
| Model | PyTorch GPU | PyTorch CPU (4 cores) | ONNX CPU (4 cores) | ONNX CUDA | ONNX TensorRT |
|--------------------|-------------|-----------------------|--------------------|-----------|---------------|
| DistilBERT SQuAD | 570 | 61 | 107 | 605 | fail |
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5191/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5190 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5190/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5190/comments | https://api.github.com/repos/huggingface/transformers/issues/5190/events | https://github.com/huggingface/transformers/pull/5190 | 643,289,330 | MDExOlB1bGxSZXF1ZXN0NDM4MTE4MzA2 | 5,190 | [bart] add config.extra_pos_embeddings to facilitate reuse | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5190?src=pr&el=h1) Report\n> Merging [#5190](https://codecov.io/gh/huggingface/transformers/pull/5190?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b28b53713161a6299c757c32f7179a2cb2d8cbd7&el=desc) will **decrease** coverage by `0.87%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5190?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5190 +/- ##\n==========================================\n- Coverage 77.96% 77.09% -0.88% \n==========================================\n Files 138 138 \n Lines 23838 23839 +1 \n==========================================\n- Hits 18585 18378 -207 \n- Misses 5253 5461 +208 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5190?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.14% <ø> (-0.05%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.75% <100.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.24% <100.00%> (+<0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.85% <100.00%> (+0.07%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `19.92% <0.00%> (-75.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.71% <0.00%> (-0.30%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.80% <0.00%> (+0.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5190?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5190?src=pr&el=footer). Last update [b28b537...00d97e5](https://codecov.io/gh/huggingface/transformers/pull/5190?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | Previously bart's LearnedPositionalEmbedding basically assumed pad_token_id>=1, and added 2 extra spaces of empty embeddings at the beginning. These positions were never used.
Since the blenderbot state dict has only 128 entries (exactly as many as it needs, whereas bart has 1026, 2 more than it needs), it cannot use `LearnedPositionalEmbedding` unless we either allow this offset to be set to 0, or add empty indices.
I prefer the former since the offset's motivation is not super clear to me.
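For illustration, the embedding with a configurable offset might look roughly like this (a sketch, not the exact code in the diff):
```python
import torch
from torch import nn

class LearnedPositionalEmbedding(nn.Embedding):
    """Learned positions with `offset` unused rows up front (2 for bart, 0 for blenderbot)."""

    def __init__(self, num_embeddings, embedding_dim, offset):
        super().__init__(num_embeddings + offset, embedding_dim)
        self.offset = offset

    def forward(self, input_ids):
        # positions are simply 0..seq_len-1, shifted past the unused rows
        positions = torch.arange(input_ids.shape[1], dtype=torch.long, device=input_ids.device)
        return super().forward(positions + self.offset)
```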
I also move `create_position_ids_from_input_ids` into modeling_roberta.py, the only place it is used.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5190/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5190",
"html_url": "https://github.com/huggingface/transformers/pull/5190",
"diff_url": "https://github.com/huggingface/transformers/pull/5190.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5190.patch",
"merged_at": 1592926543000
} |
https://api.github.com/repos/huggingface/transformers/issues/5189 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5189/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5189/comments | https://api.github.com/repos/huggingface/transformers/issues/5189/events | https://github.com/huggingface/transformers/pull/5189 | 643,285,721 | MDExOlB1bGxSZXF1ZXN0NDM4MTE1NDMy | 5,189 | Have documentation fail on warning | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5189?src=pr&el=h1) Report\n> Merging [#5189](https://codecov.io/gh/huggingface/transformers/pull/5189?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1262495a912b9cd97e2ae174fd627a9d8a502341&el=desc) will **increase** coverage by `36.63%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5189?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5189 +/- ##\n===========================================\n+ Coverage 41.35% 77.99% +36.63% \n===========================================\n Files 138 138 \n Lines 23772 23772 \n===========================================\n+ Hits 9831 18540 +8709 \n+ Misses 13941 5232 -8709 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5189?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (+0.63%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `91.07% <0.00%> (+0.71%)` | :arrow_up: |\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/5189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <0.00%> (+1.31%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.37% <0.00%> (+1.44%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (+2.66%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `94.52% <0.00%> (+2.73%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.81% <0.00%> (+3.54%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.77% <0.00%> (+5.54%)` | :arrow_up: |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <0.00%> (+10.71%)` | :arrow_up: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `76.84% <0.00%> (+12.63%)` | :arrow_up: |\n| ... 
and [47 more](https://codecov.io/gh/huggingface/transformers/pull/5189/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5189?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5189?src=pr&el=footer). Last update [1262495...47a9e15](https://codecov.io/gh/huggingface/transformers/pull/5189?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"The `build_doc` test now fails when there's a warning, e.g. on the second commit:\r\n\r\n```\r\nreading sources... [ 85%] multilingual\r\nreading sources... [ 87%] notebooks\r\nreading sources... [ 89%] pretrained_models\r\nreading sources... [ 91%] quickstart\r\nreading sources... [ 93%] serialization\r\nreading sources... [ 95%] summary\r\nreading sources... [ 97%] torchscript\r\nreading sources... [100%] usage\r\n\r\n\r\nWarning, treated as error:\r\n/home/circleci/transformers/src/transformers/modeling_albert.py:docstring of transformers.AlbertModel.forward:47:Unexpected indentation.\r\nmake: *** [Makefile:19: html] Error 2\r\n\r\nExited with code exit status 2\r\nCircleCI received exit code 2\r\n```"
] | 1,592 | 1,592 | 1,592 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5189/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5189",
"html_url": "https://github.com/huggingface/transformers/pull/5189",
"diff_url": "https://github.com/huggingface/transformers/pull/5189.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5189.patch",
"merged_at": 1592855391000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5188 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5188/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5188/comments | https://api.github.com/repos/huggingface/transformers/issues/5188/events | https://github.com/huggingface/transformers/issues/5188 | 643,280,585 | MDU6SXNzdWU2NDMyODA1ODU= | 5,188 | Simplify LearnedPositionalEmbedding | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2139563322,
"node_id": "MDU6TGFiZWwyMTM5NTYzMzIy",
"url": "https://api.github.com/repos/huggingface/transformers/labels/cleanup",
"name": "cleanup",
"color": "e7fc49",
"default": false,
"description": ""
},
{
"id": 2154394845,
"node_id": "MDU6TGFiZWwyMTU0Mzk0ODQ1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/work%20in%20progress",
"name": "work in progress",
"color": "2337ce",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,592 | 1,593 | 1,593 | CONTRIBUTOR | null | `create_position_ids_from_input_ids` and `LearnedPositionalEmbedding`
were both copied from fairseq and need either much more documentation or much simpler logic. They were originally copied for Roberta and are now also used for bart.
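For context, the fairseq-style helper does roughly the following (a sketch written for documentation purposes, based on the RoBERTa version of the logic):

```python
import torch


def create_position_ids_from_input_ids(input_ids: torch.Tensor, padding_idx: int) -> torch.Tensor:
    # 1 at non-pad positions, 0 at pad positions
    mask = input_ids.ne(padding_idx).int()
    # cumsum numbers the non-pad tokens 1, 2, 3, ...; multiplying by the mask
    # zeroes the pad positions out again
    incremental_indices = torch.cumsum(mask, dim=1) * mask
    # shift by padding_idx so real positions start at padding_idx + 1, which is
    # why the position embedding table needs extra leading rows
    return incremental_indices.long() + padding_idx
```
| {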
"url": "https://api.github.com/repos/huggingface/transformers/issues/5188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5188/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5187 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5187/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5187/comments | https://api.github.com/repos/huggingface/transformers/issues/5187/events | https://github.com/huggingface/transformers/pull/5187 | 643,263,844 | MDExOlB1bGxSZXF1ZXN0NDM4MDk3NDc1 | 5,187 | Add TF auto model to the docs + fix sphinx warnings (again) | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5187?src=pr&el=h1) Report\n> Merging [#5187](https://codecov.io/gh/huggingface/transformers/pull/5187?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ebc36108dc1c20985905c79f7d6a00f57f3cd3ae&el=desc) will **decrease** coverage by `0.89%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5187?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5187 +/- ##\n==========================================\n- Coverage 77.99% 77.10% -0.90% \n==========================================\n Files 138 138 \n Lines 23772 23772 \n==========================================\n- Hits 18541 18329 -212 \n- Misses 5231 5443 +212 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5187?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5187/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `80.43% <ø> (ø)` | |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5187/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.23% <ø> (ø)` | |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5187/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `87.50% <ø> (ø)` | |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5187/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `99.14% <ø> (ø)` | |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5187/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.70% <ø> (ø)` | |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5187/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `80.19% <ø> (ø)` | |\n| [src/transformers/modeling\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/5187/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `84.12% <ø> (ø)` | |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5187/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `85.43% <ø> (ø)` | |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5187/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.04% <ø> (ø)` | |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5187/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `88.78% <ø> (ø)` | |\n| ... and [24 more](https://codecov.io/gh/huggingface/transformers/pull/5187/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5187?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5187?src=pr&el=footer). Last update [ebc3610...cca9865](https://codecov.io/gh/huggingface/transformers/pull/5187?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | COLLABORATOR | null | The goal of this PR was just to add the automodel to the docs (as pointed out in #5145), but then I had 100 sphinx warnings that made me grumpy, so I had to fix them (some of them from the auto model docs, but most of them from #4978).
As usual, a few of them were harmless; others had a real impact on the docs, so I think we should make sure in the CI that there are no sphinx warnings, to avoid making the docs bad by mistake. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5187/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5187",
"html_url": "https://github.com/huggingface/transformers/pull/5187",
"diff_url": "https://github.com/huggingface/transformers/pull/5187.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5187.patch",
"merged_at": 1592851433000
} |
https://api.github.com/repos/huggingface/transformers/issues/5186 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5186/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5186/comments | https://api.github.com/repos/huggingface/transformers/issues/5186/events | https://github.com/huggingface/transformers/issues/5186 | 643,245,061 | MDU6SXNzdWU2NDMyNDUwNjE= | 5,186 | Trouble with PL Checkpoint loading after finetuning bart-large | {
"login": "PingYu-iris",
"id": 23408859,
"node_id": "MDQ6VXNlcjIzNDA4ODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/23408859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PingYu-iris",
"html_url": "https://github.com/PingYu-iris",
"followers_url": "https://api.github.com/users/PingYu-iris/followers",
"following_url": "https://api.github.com/users/PingYu-iris/following{/other_user}",
"gists_url": "https://api.github.com/users/PingYu-iris/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PingYu-iris/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PingYu-iris/subscriptions",
"organizations_url": "https://api.github.com/users/PingYu-iris/orgs",
"repos_url": "https://api.github.com/users/PingYu-iris/repos",
"events_url": "https://api.github.com/users/PingYu-iris/events{/privacy}",
"received_events_url": "https://api.github.com/users/PingYu-iris/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1841528858,
"node_id": "MDU6TGFiZWwxODQxNTI4ODU4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Summarization",
"name": "Summarization",
"color": "b6f97f",
"default": false,
"description": ""
},
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"How long did you train for? \r\n\r\nOur `finetune.py` is not identical to the authors' finetuning script and might produce different results, but rouge2=15.3 suggests that either you didn't train for very long, or there is a bug in your/our code.\r\n\r\nWhat is the `model.text_predictions` method?\r\n\r\n\r\n\r\n",
"I trained about 3 days on single 12G Titan V GPU. \r\n\r\n```\r\n def text_predictions(self, input_ids):\r\n generated_ids = self.model.generate(\r\n input_ids=input_ids,\r\n num_beams=1,\r\n max_length=80,\r\n repetition_penalty=2.5,\r\n length_penalty=1.0,\r\n early_stopping=True,\r\n )\r\n preds = [\r\n self.tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True)\r\n for g in generated_ids\r\n ]\r\n return preds\r\n```\r\n\r\nThis is my text_predictions function.",
"try \r\n```python\r\nself.model.generate(input_ids=input_ids, attention_mask=attention_mask)\r\n```\r\nThis will use the beam search kwargs from model.config, which are much better than those.\r\n\r\n",
"I changed my code:\r\n\r\n```\r\n if args.do_predict:\r\n\r\n\r\n examples = [\" \" + x.rstrip() if \"t5\" in args.model_name_or_path else x.rstrip() for x in\r\n open(\"cnn_dm/test.source\").readlines()]\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path)\r\n\r\n model = model.load_from_checkpoint('bart_sum/checkpointepoch=2.ckpt').to(device)\r\n\r\n\r\n torch.save(model.state_dict(), args.output_dir + '/pytorch_model.bin')\r\n model.config.to_json_file(args.output_dir + '/config.json')\r\n\r\n model = BartForConditionalGeneration.from_pretrained('bart_sum').to(device)\r\n\r\n task_specific_params = model.config.task_specific_params\r\n if task_specific_params is not None:\r\n model.config.update(task_specific_params.get(\"summarization\", {}))\r\n\r\n model.eval()\r\n\r\n fout = Path(\"test_generation.txt\").open(\"w\", encoding=\"utf-8\")\r\n\r\n for batch in tqdm(list(chunks(examples, 1))):\r\n dct = tokenizer.batch_encode_plus(batch, max_length=1024, return_tensors=\"pt\", pad_to_max_length=True).to(device)\r\n summaries = model.generate(**dct)\r\n\r\n dec = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summaries]\r\n\r\n for hypothesis in dec:\r\n fout.write(hypothesis + \"\\n\")\r\n fout.flush()\r\n```\r\n\r\nAfter that, I found generated sentences in dec is a random sentence. For example:\r\n Everybody Everybody Everybody SUM SUM SUM sergeant sergeant sergeant noisy noisy noisy Myster Myster MysterJewishJewishJewish sergeant sergeant talks talks talks noisy noisy imitate imitate imitate sergeant sergeantestonesestonesestones noisy noisy overhe overhe overhe Palmer Palmer Palmer noisy noisyJewishJewish spawned spawned spawned sergeant sergeant Height Height Heightnornornor For For For manif manif manif onboard onboard onboardsharingsharingestonesestones selects selects selects electors electors electors noisy noisy sergeant sergeantWAY Height Height selects selects sergeant sergeantJewishJewish Appearance Appearance Appearance Myster Myster selects selects fearing fearing fearing framed framed framed summ summ summ sergeant sergeant Pistol Pistol Pistol sergeant sergeant Appearance Appearance sergeant sergeant bees bees beesestonesestones Islamists Islamists Islamists sergeant sergeantselectselectselect sergeant sergeant spawned spawned manif manifestonesestonesirdirdird\r\n\r\nI think my finetuned model is totally fail but I don't know what is wrong with my finetune model. I simply ran` finetune_bart.sh ` . The only change is setting training batch size to be 1.\r\n\r\n",
"Interesting. A Rouge2 score of ~15, which you had before, is a lot better than random, was that the same checkpoint?\r\n\r\nAlso, could you tell me the output for the following commands?\r\n\r\n```bash\r\ntransformers-cli env\r\npip freeze | grep torch\r\n```\r\n\r\n",
"This is a same checkpoint. I think my previous code has some problems. I wrote a function \"text_predictions\" in class SummarizationTrainer. This text_predictions function called `self.model.generate`. I think this self.model is from class BaseTransformer with `self.model = MODEL_MODES[mode].from_pretrained(\"facebook/bart-large)` instead of my saved checkpoint. Am I right?\r\n\r\nwith running: `transformers-cli env`\r\n\r\n```\r\n- `transformers` version: 2.11.0\r\n- Platform: Linux-4.15.0-106-generic-x86_64-with-debian-stretch-sid\r\n- Python version: 3.6.10\r\n- PyTorch version (GPU?): 1.5.0 (True)\r\n- Tensorflow version (GPU?): 2.2.0 (True)\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n```\r\nwith running `pip freeze | grep torch`:\r\n\r\n```\r\npytorch-lightning==0.7.6\r\ntorch==1.5.0\r\ntorchvision==0.6.0a0+82fd1c8\r\n```\r\n\r\n",
"Yes that is correct. To reload a checkpoint, I have been using `resume_from_checkpoint` with good results. Also this should be fixed in the latest code, for future readers.\r\n\r\n@williamFalcon in pl=0.7.6 is it possible to load a checkpoint without instantiating a trainer?\r\ndoes `model = model.load_from_checkpoint('bart_sum/checkpointepoch=2.ckpt').to(device)` not work?",
"Then how could I generate samples from checkpoints? \r\n\r\n```\r\ntrainer.resume_from_checkpoint = checkpoints[-1]\r\ntrainer.logger.log_hyperparams(model.hparams)\r\ntrainer.test(model)\r\n```\r\nThis only provides me with a loss value without generating samples from a checkpoint.",
"You can use `model.model.generate(input_ids)` if `model` is a `pl.Module`.\r\n\r\nYou can also save the transformers model, and then use `run_eval.py`\r\n\r\nSomething like:\r\n`$ mkdir output_dir`\r\n\r\n```python\r\nfrom pathlib import Path\r\nsave_dir = Path('output_dir')\r\nsave_dir.mkdir(exist_ok=True)\r\nmodel.model.save_pretrained(save_dir)\r\n```\r\n\r\nThen follow instructions [here](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/README.md#evaluation-commands) for how to invoke `run_eval.py`\r\n\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,599 | 1,599 | NONE | null | Hi there,
I recently ran "finetune_bart.sh" with data from cnn_dm/train.source and saved checkpoints. I only have a single GPU with 12G memory, so I trained this model with batch_size 1 and without distributed training.
I predicted test summaries results on cnn_dm/test.source with:
```python
tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path)
checkpoints = list(sorted(glob.glob(os.path.join(args.output_dir, "checkpointepoch=*.ckpt"), recursive=True)))
model = model.load_from_checkpoint(checkpoints[-1])
model.eval()
model.freeze()

fout = Path("test_generation.txt").open("w", encoding="utf-8")
for batch in tqdm(list(chunks(examples, 1))):
    inputs = tokenizer.batch_encode_plus([batch[0]], max_length=1024, return_tensors='pt')['input_ids']
    outputs = model.text_predictions(inputs)
    fout.write(outputs[0] + "\n")
    fout.flush()

output_lns = [x.rstrip() for x in open("test_generation.txt").readlines()]
reference_lns = [x.rstrip() for x in open("cnn_dm/test.target").readlines()]
calculate_rouge(output_lns, reference_lns, "rouge_scores.txt")
```
ROUGE_1:
AggregateScore(low=Score(precision=0.428813534233686, recall=0.38720984475275855, fmeasure=0.3951340300593005), mid=Score(precision=0.4311467253592325, recall=0.38942524434531234, fmeasure=0.3970368964972586), high=Score(precision=0.4336709710273236, recall=0.3917389347240298, fmeasure=0.39884900115657695))
ROUGE_2:
AggregateScore(low=Score(precision=0.166422532946031, recall=0.14864999757241987, fmeasure=0.15229217101061118), mid=Score(precision=0.16833011753361882, recall=0.1504195716038362, fmeasure=0.15396139813339343), high=Score(precision=0.17058789969308358, recall=0.15232853455768658, fmeasure=0.15590467051587376))
ROUGE_L:
AggregateScore(low=Score(precision=0.2791458979499142, recall=0.2529897770384392, fmeasure=0.25757140506162507), mid=Score(precision=0.28120564591223207, recall=0.25488090117768447, fmeasure=0.2593799995678622), high=Score(precision=0.2834430254354239, recall=0.25683467237590635, fmeasure=0.2611850779055858))
This result is worse than the reported cnn_dm results in paper. I haven't changed any parameters yet. What is wrong with my training?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5186/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5185 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5185/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5185/comments | https://api.github.com/repos/huggingface/transformers/issues/5185/events | https://github.com/huggingface/transformers/pull/5185 | 643,221,238 | MDExOlB1bGxSZXF1ZXN0NDM4MDYyNjMy | 5,185 | [T5] add missing docstring for some configurations | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,592 | 1,592 | 1,592 | MEMBER | null | Add missing docs for T5 config. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5185/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5185/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5185",
"html_url": "https://github.com/huggingface/transformers/pull/5185",
"diff_url": "https://github.com/huggingface/transformers/pull/5185.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5185.patch",
"merged_at": 1592845212000
} |
https://api.github.com/repos/huggingface/transformers/issues/5184 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5184/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5184/comments | https://api.github.com/repos/huggingface/transformers/issues/5184/events | https://github.com/huggingface/transformers/pull/5184 | 643,161,366 | MDExOlB1bGxSZXF1ZXN0NDM4MDE1NTA5 | 5,184 | More clear error message in the use-case of #5169 | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5184?src=pr&el=h1) Report\n> Merging [#5184](https://codecov.io/gh/huggingface/transformers/pull/5184?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d2a7c86dc33d6def6dba44f6ed2b71e8a1644130&el=desc) will **decrease** coverage by `0.03%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5184?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5184 +/- ##\n==========================================\n- Coverage 78.04% 78.00% -0.04% \n==========================================\n Files 138 138 \n Lines 23766 23767 +1 \n==========================================\n- Hits 18548 18540 -8 \n- Misses 5218 5227 +9 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5184?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `94.83% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-5.42%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.96% <0.00%> (ø)` | |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.57% <0.00%> (+1.18%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5184?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5184?src=pr&el=footer). Last update [d2a7c86...3c842db](https://codecov.io/gh/huggingface/transformers/pull/5184?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | MEMBER | null | Supplying `is_pretokenized=True` to an encoding method means that the sequences are given as a list of words (words being strings).
Make the error message more clear in this case.
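For illustration, this is the kind of call the new check guards (a minimal example of mine, assuming a standard pretrained tokenizer):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# With is_pretokenized=True each sequence must be a list of words (strings),
# not a plain string; passing a plain string should now raise a clear error.
encoded = tokenizer.encode_plus(["Hello", "world", "!"], is_pretokenized=True)
print(encoded["input_ids"])
```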
Fix #5169 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5184/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5184",
"html_url": "https://github.com/huggingface/transformers/pull/5184",
"diff_url": "https://github.com/huggingface/transformers/pull/5184.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5184.patch",
"merged_at": 1592912250000
} |
https://api.github.com/repos/huggingface/transformers/issues/5183 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5183/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5183/comments | https://api.github.com/repos/huggingface/transformers/issues/5183/events | https://github.com/huggingface/transformers/issues/5183 | 643,153,807 | MDU6SXNzdWU2NDMxNTM4MDc= | 5,183 | Unable to load the reformer pre-trained model, connection broken after X% | {
"login": "as-stevens",
"id": 61624036,
"node_id": "MDQ6VXNlcjYxNjI0MDM2",
"avatar_url": "https://avatars.githubusercontent.com/u/61624036?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/as-stevens",
"html_url": "https://github.com/as-stevens",
"followers_url": "https://api.github.com/users/as-stevens/followers",
"following_url": "https://api.github.com/users/as-stevens/following{/other_user}",
"gists_url": "https://api.github.com/users/as-stevens/gists{/gist_id}",
"starred_url": "https://api.github.com/users/as-stevens/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/as-stevens/subscriptions",
"organizations_url": "https://api.github.com/users/as-stevens/orgs",
"repos_url": "https://api.github.com/users/as-stevens/repos",
"events_url": "https://api.github.com/users/as-stevens/events{/privacy}",
"received_events_url": "https://api.github.com/users/as-stevens/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Not sure how to help you here. Can you load other models from pretrained? Like\r\n\r\n```python \r\nmodel = BertModel.from_pretrained(\"bert-base-uncased\")\r\n```\r\n?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,600 | 1,600 | CONTRIBUTOR | null | # ❓ Questions & Help
I am trying to load the Reformer pre-trained model, but I am not able to do so. The load fails after some random x% of the download. The error stack trace is:
`ChunkedEncodingError: ('Connection broken: OSError("(10054, \'WSAECONNRESET\')",)', OSError("(10054, 'WSAECONNRESET')",))`
The code to load the model is a simple cell with the imports and just the line below:
`modelWiki = ReformerModelWithLMHead.from_pretrained('google/reformer-enwik8')`
The notebook is running behind a proxy, could that be an issue?
The closest SO thread I found is https://stackoverflow.com/questions/27333671/how-to-solve-the-10054-error
but it talks about a connection refused by the server.
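If the proxy is the culprit, one thing worth trying (an assumption on my part, since `from_pretrained` accepts a `proxies` argument for the download) is:

```python
from transformers import ReformerModelWithLMHead

# Hypothetical proxy addresses; replace with the real ones for your network
proxies = {"http": "http://10.10.1.10:3128", "https": "http://10.10.1.10:1080"}
model = ReformerModelWithLMHead.from_pretrained("google/reformer-enwik8", proxies=proxies)
```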
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5183/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5183/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5182 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5182/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5182/comments | https://api.github.com/repos/huggingface/transformers/issues/5182/events | https://github.com/huggingface/transformers/pull/5182 | 643,134,481 | MDExOlB1bGxSZXF1ZXN0NDM3OTk0NjU0 | 5,182 | MarianTokenizer.prepare_translation_batch uses new tokenizer API | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5182?src=pr&el=h1) Report\n> Merging [#5182](https://codecov.io/gh/huggingface/transformers/pull/5182?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c4d4e8bdbd25d9463d41de6398940329c89b7fb6&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `66.66%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5182?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5182 +/- ##\n==========================================\n- Coverage 77.90% 77.88% -0.02% \n==========================================\n Files 140 140 \n Lines 24334 24335 +1 \n==========================================\n- Hits 18957 18953 -4 \n- Misses 5377 5382 +5 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5182?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `93.80% <66.66%> (+0.05%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.21% <0.00%> (+0.49%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.37% <0.00%> (+25.00%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5182?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5182?src=pr&el=footer). Last update [c4d4e8b...5195adc](https://codecov.io/gh/huggingface/transformers/pull/5182?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,593 | 1,593 | CONTRIBUTOR | null | new params:
```python
truncation_strategy="only_first",
padding="longest",
```
The old behavior was to pad to model_max_len then call `trim_batch`.
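As a usage sketch (the checkpoint name is just an example, not part of this PR), the new call now pads only to the longest source in the batch:

```python
from transformers import MarianTokenizer

# Any Marian checkpoint works here; en-de is just an example
tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
batch = tok.prepare_translation_batch(["I am a small frog.", "Hello"])
# With padding="longest" the tensors are padded to the longest example,
# so no separate trim_batch call is needed.
print(batch["input_ids"].shape)
```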
I also added two tests that ensure it works. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5182/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5182",
"html_url": "https://github.com/huggingface/transformers/pull/5182",
"diff_url": "https://github.com/huggingface/transformers/pull/5182.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5182.patch",
"merged_at": 1593613970000
} |
https://api.github.com/repos/huggingface/transformers/issues/5181 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5181/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5181/comments | https://api.github.com/repos/huggingface/transformers/issues/5181/events | https://github.com/huggingface/transformers/issues/5181 | 643,131,440 | MDU6SXNzdWU2NDMxMzE0NDA= | 5,181 | Is it possible to mimic trim_batch using new tokenizer strategies? | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"Hi @sshleifer you should read the detailed description on the tokenizers refactoring PR https://github.com/huggingface/transformers/pull/4510#issue-421650163\r\n\r\nUntil it's added in the doc (will be soon), it's required reading for all core contributors of `transformers`.",
"Thanks. I read that, and am still somewhat confused about why I pass `truncation=True` and get entries that are longer than `tokenizer.max_model_length`. The PR description says:\r\n\r\n\r\n\r\nHere is a simplified example:\r\n\r\n```python\r\nfrom transformers import BartTokenizer\r\ntokenizer = BartTokenizer.from_pretrained('facebook/bart-large')\r\nassert tokenizer.model_max_length == 1024\r\n\r\n# tokenizer.batch_encode_plus returns ids shaped (2, 1024)\r\nbatch_sentences = ['tiny sentence 1'*1000, 'tiny_sentence2']\r\nids = tokenizer.batch_encode_plus(batch_sentences, pad_to_max_length=True, max_length=tokenizer.model_max_length,\r\n truncation=True, return_tensors='pt').input_ids\r\nassert ids.shape[1] <= tokenizer.model_max_length, ids.shape[1]\r\n\r\n# tokenizer.__call__ returns ids shaped (2, 3002)\r\nids = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors='pt',\r\n max_length=tokenizer.model_max_length, ).input_ids\r\nassert ids.shape[1] <= tokenizer.model_max_length, ids.shape[1]\r\n```",
"I'll take a look"
] | 1,592 | 1,593 | 1,593 | CONTRIBUTOR | null | I am trying to replace the old workflow of
calling batch_encode_plus to make tensors of shape
`(n_examples, model_max_length)` and then calling `trim_batch` to reduce padding computation, with the new tokenizers kwargs.
Is this possible?
The following code does not seem to truncate inputs longer than 512 (the second assert breaks).
Attempt:
```python
from transformers import BartTokenizer
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
kw = dict(max_length=512, pad_to_max_length=True, padding=True,
          return_tensors='pt', truncation='only_first')
batch = tokenizer(['tiny sentence 1', 'tiny_sentence2'],**kw)
assert batch.input_ids.shape[1] == 7, batch.input_ids.shape[1]
input_ids, mask = trim_batch(**batch, pad_token_id=tokenizer.pad_token_id)
assert input_ids.shape[1] == 7, batch.input_ids.shape[1]
batch_overflow = tokenizer(['tiny sentence 1'*1000, 'tiny_sentence2'], **kw)
assert batch_overflow.input_ids.shape[1] == 512, batch_overflow.input_ids.shape[1]
```
Traceback:
```python
assert batch_overflow.input_ids.shape[1] == 512, batch_overflow.input_ids.shape[1]
AssertionError: 3002
```
Help much appreciated, @mfuntowicz @thomwolf | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5181/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5180 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5180/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5180/comments | https://api.github.com/repos/huggingface/transformers/issues/5180/events | https://github.com/huggingface/transformers/issues/5180 | 643,099,741 | MDU6SXNzdWU2NDMwOTk3NDE= | 5,180 | Which Marian version was used to train the Helsinki-NLP/* checkpoints? | {
"login": "vmedappa",
"id": 40217037,
"node_id": "MDQ6VXNlcjQwMjE3MDM3",
"avatar_url": "https://avatars.githubusercontent.com/u/40217037?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vmedappa",
"html_url": "https://github.com/vmedappa",
"followers_url": "https://api.github.com/users/vmedappa/followers",
"following_url": "https://api.github.com/users/vmedappa/following{/other_user}",
"gists_url": "https://api.github.com/users/vmedappa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vmedappa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vmedappa/subscriptions",
"organizations_url": "https://api.github.com/users/vmedappa/orgs",
"repos_url": "https://api.github.com/users/vmedappa/repos",
"events_url": "https://api.github.com/users/vmedappa/events{/privacy}",
"received_events_url": "https://api.github.com/users/vmedappa/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 2039044877,
"node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/marian",
"name": "marian",
"color": "30cc95",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Those models were trained by @jorgtied as part of the OPUS project. He might know the answer.",
"@sshleifer Do you know how can I fine tune the models with my specific data?",
"I just started a PR to support that, but it's still a week away. At the moment, your best bet is to modify `summarization/finetune.py`.\r\n",
"> Trying to find the **Marian version used to train Marian MT transformers**. This would help me understand the **benchmarks for translation times**. I see that multiple papers mention various benchmarks for CPU translation :\r\n> \r\n> * https://www.aclweb.org/anthology/D19-5632.pdf - has the benchmarks for the latest update Marian v1.9\r\n> \r\n> * https://www.aclweb.org/anthology/P18-4020.pdf - does not mention the version but I presume it uses versions between Marian v1.7 and Marian v1.5\r\n> \r\n> \r\n> **Marian v1.9 suggests considerably faster translation times than Marian v1.7 and Marian v1.5.**\r\n> \r\n> Finding the version used for the huggingface transformers both in the huggingface and Helsinki NLP documentation hasn't been fruitful. Would be really helpful to have this answered.\r\n\r\nThe version should be marked in the original model files distributed at https://github.com/Helsinki-NLP/Opus-MT. Most of them will be v1.7. I just started recently to use v1.9 for upcoming models.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,599 | 1,599 | NONE | null | Trying to find the **Marian version used to train Marian MT transformers**. This would help me understand the **benchmarks for translation times**. I see that multiple papers mention various benchmarks for CPU translation :
- https://www.aclweb.org/anthology/D19-5632.pdf - has the benchmarks for the latest update Marian v1.9
- https://www.aclweb.org/anthology/P18-4020.pdf - does not mention the version but I presume it uses versions between Marian v1.7 and Marian v1.5
**Marian v1.9 suggests considerably faster translation times than Marian v1.7 and Marian v1.5.**
Searching for the version used by the Hugging Face transformers in both the Hugging Face and Helsinki-NLP documentation hasn't been fruitful. It would be really helpful to have this answered. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5180/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5180/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5179 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5179/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5179/comments | https://api.github.com/repos/huggingface/transformers/issues/5179/events | https://github.com/huggingface/transformers/pull/5179 | 643,044,274 | MDExOlB1bGxSZXF1ZXN0NDM3OTIwMzA4 | 5,179 | Model card for t5-base-finetuned-emotion (recognition) | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5179?src=pr&el=h1) Report\n> Merging [#5179](https://codecov.io/gh/huggingface/transformers/pull/5179?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eb0ca71ef6772a1dd16ac152e1f7a07a9e1e6fda&el=desc) will **increase** coverage by `0.08%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5179?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5179 +/- ##\n==========================================\n+ Coverage 77.98% 78.06% +0.08% \n==========================================\n Files 138 138 \n Lines 23710 23710 \n==========================================\n+ Hits 18490 18510 +20 \n+ Misses 5220 5200 -20 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5179?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5179/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `35.03% <0.00%> (+6.36%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5179?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5179?src=pr&el=footer). Last update [eb0ca71...a3d99bd](https://codecov.io/gh/huggingface/transformers/pull/5179?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5179/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5179",
"html_url": "https://github.com/huggingface/transformers/pull/5179",
"diff_url": "https://github.com/huggingface/transformers/pull/5179.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5179.patch",
"merged_at": 1592847946000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5178 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5178/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5178/comments | https://api.github.com/repos/huggingface/transformers/issues/5178/events | https://github.com/huggingface/transformers/pull/5178 | 643,040,214 | MDExOlB1bGxSZXF1ZXN0NDM3OTE2ODc5 | 5,178 | Add model cards for Microsoft's MiniLM | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5178?src=pr&el=h1) Report\n> Merging [#5178](https://codecov.io/gh/huggingface/transformers/pull/5178?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fa0be6d76187e0639851f6d762b9ffae7fbd9202&el=desc) will **increase** coverage by `0.61%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5178?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5178 +/- ##\n==========================================\n+ Coverage 77.75% 78.36% +0.61% \n==========================================\n Files 138 138 \n Lines 23710 23710 \n==========================================\n+ Hits 18435 18581 +146 \n+ Misses 5275 5129 -146 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5178?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5178/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.81% <0.00%> (-0.15%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5178/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.28% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5178/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `95.18% <0.00%> (+0.37%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5178/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.28% <0.00%> (+0.82%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5178/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5178/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/5178/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `96.00% <0.00%> (+68.00%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5178?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5178?src=pr&el=footer). Last update [fa0be6d...e38cb39](https://codecov.io/gh/huggingface/transformers/pull/5178?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5178/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5178/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5178",
"html_url": "https://github.com/huggingface/transformers/pull/5178",
"diff_url": "https://github.com/huggingface/transformers/pull/5178.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5178.patch",
"merged_at": 1592833695000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5177 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5177/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5177/comments | https://api.github.com/repos/huggingface/transformers/issues/5177/events | https://github.com/huggingface/transformers/issues/5177 | 642,966,277 | MDU6SXNzdWU2NDI5NjYyNzc= | 5,177 | When is 2.12 coming out? | {
"login": "Laksh1997",
"id": 59830552,
"node_id": "MDQ6VXNlcjU5ODMwNTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/59830552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Laksh1997",
"html_url": "https://github.com/Laksh1997",
"followers_url": "https://api.github.com/users/Laksh1997/followers",
"following_url": "https://api.github.com/users/Laksh1997/following{/other_user}",
"gists_url": "https://api.github.com/users/Laksh1997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Laksh1997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Laksh1997/subscriptions",
"organizations_url": "https://api.github.com/users/Laksh1997/orgs",
"repos_url": "https://api.github.com/users/Laksh1997/repos",
"events_url": "https://api.github.com/users/Laksh1997/events{/privacy}",
"received_events_url": "https://api.github.com/users/Laksh1997/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"For now you can install from source to use `AutoModelForCausalLM`",
"Yes but you can't push out libraries with packages installed from master",
"Probably this week or the next!",
"+1",
"Version v3.0.0 was released this morning!"
] | 1,592 | 1,593 | 1,593 | NONE | null | Hi, when is 2.12 coming out? Cheers.
(Interested in AutoModelForCausalLM) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5177/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5176 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5176/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5176/comments | https://api.github.com/repos/huggingface/transformers/issues/5176/events | https://github.com/huggingface/transformers/issues/5176 | 642,953,157 | MDU6SXNzdWU2NDI5NTMxNTc= | 5,176 | objects returned by the RobertaTokenizerFast() class are not serializable | {
"login": "ad6398",
"id": 38162294,
"node_id": "MDQ6VXNlcjM4MTYyMjk0",
"avatar_url": "https://avatars.githubusercontent.com/u/38162294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ad6398",
"html_url": "https://github.com/ad6398",
"followers_url": "https://api.github.com/users/ad6398/followers",
"following_url": "https://api.github.com/users/ad6398/following{/other_user}",
"gists_url": "https://api.github.com/users/ad6398/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ad6398/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ad6398/subscriptions",
"organizations_url": "https://api.github.com/users/ad6398/orgs",
"repos_url": "https://api.github.com/users/ad6398/repos",
"events_url": "https://api.github.com/users/ad6398/events{/privacy}",
"received_events_url": "https://api.github.com/users/ad6398/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
},
{
"id": 1920687293,
"node_id": "MDU6TGFiZWwxOTIwNjg3Mjkz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Fast%20Tokenizers",
"name": "Fast Tokenizers",
"color": "b60205",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"cc @mfuntowicz ",
"This is fixed on master and should be in the next release"
] | 1,592 | 1,593 | 1,593 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): RobertaTokenizerFast()
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
yes
## To reproduce
[Steps to reproduce the behavior: https://github.com/huggingface/tokenizers/issues/313#issue-642829111 ](https://github.com/huggingface/tokenizers/issues/313#issue-642829111)
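For reference, a minimal sketch of the failure mode the title describes. This is an editor's illustration based on the linked tokenizers issue (which concerns pickling), not code from the original report; `roberta-base` is just an example checkpoint:
```python
import pickle

from transformers import RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")

# On affected versions this raised an exception, because the underlying
# Rust tokenizer object could not be pickled; per the comments below,
# this was fixed on master.
pickle.dumps(tokenizer)
```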
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5176/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5176/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5175 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5175/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5175/comments | https://api.github.com/repos/huggingface/transformers/issues/5175/events | https://github.com/huggingface/transformers/pull/5175 | 642,886,418 | MDExOlB1bGxSZXF1ZXN0NDM3Nzg5MjA4 | 5,175 | Update model card for COVID-QA model | {
"login": "bogdankostic",
"id": 48713846,
"node_id": "MDQ6VXNlcjQ4NzEzODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/48713846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bogdankostic",
"html_url": "https://github.com/bogdankostic",
"followers_url": "https://api.github.com/users/bogdankostic/followers",
"following_url": "https://api.github.com/users/bogdankostic/following{/other_user}",
"gists_url": "https://api.github.com/users/bogdankostic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bogdankostic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bogdankostic/subscriptions",
"organizations_url": "https://api.github.com/users/bogdankostic/orgs",
"repos_url": "https://api.github.com/users/bogdankostic/repos",
"events_url": "https://api.github.com/users/bogdankostic/events{/privacy}",
"received_events_url": "https://api.github.com/users/bogdankostic/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5175?src=pr&el=h1) Report\n> Merging [#5175](https://codecov.io/gh/huggingface/transformers/pull/5175?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bc3a0c06075050d3de586c543e4ad6a7efc9260e&el=desc) will **decrease** coverage by `0.37%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5175?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5175 +/- ##\n==========================================\n- Coverage 78.31% 77.93% -0.38% \n==========================================\n Files 137 137 \n Lines 23475 23475 \n==========================================\n- Hits 18384 18295 -89 \n- Misses 5091 5180 +89 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5175?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.45% <0.00%> (-0.83%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `94.81% <0.00%> (-0.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.16% <0.00%> (-0.13%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.11% <0.00%> (+0.29%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5175?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5175?src=pr&el=footer). Last update [bc3a0c0...2d35d71](https://codecov.io/gh/huggingface/transformers/pull/5175?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | Specify the exact dataset that was used for cross-validation. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5175/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5175",
"html_url": "https://github.com/huggingface/transformers/pull/5175",
"diff_url": "https://github.com/huggingface/transformers/pull/5175.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5175.patch",
"merged_at": 1592864773000
} |
https://api.github.com/repos/huggingface/transformers/issues/5174 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5174/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5174/comments | https://api.github.com/repos/huggingface/transformers/issues/5174/events | https://github.com/huggingface/transformers/pull/5174 | 642,798,230 | MDExOlB1bGxSZXF1ZXN0NDM3NzE3NjAx | 5,174 | Add README.md (nyu-mll) | {
"login": "lhaausing",
"id": 55363337,
"node_id": "MDQ6VXNlcjU1MzYzMzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/55363337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhaausing",
"html_url": "https://github.com/lhaausing",
"followers_url": "https://api.github.com/users/lhaausing/followers",
"following_url": "https://api.github.com/users/lhaausing/following{/other_user}",
"gists_url": "https://api.github.com/users/lhaausing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhaausing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhaausing/subscriptions",
"organizations_url": "https://api.github.com/users/lhaausing/orgs",
"repos_url": "https://api.github.com/users/lhaausing/repos",
"events_url": "https://api.github.com/users/lhaausing/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhaausing/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5174?src=pr&el=h1) Report\n> Merging [#5174](https://codecov.io/gh/huggingface/transformers/pull/5174?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bc3a0c06075050d3de586c543e4ad6a7efc9260e&el=desc) will **decrease** coverage by `0.37%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5174?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5174 +/- ##\n==========================================\n- Coverage 78.31% 77.93% -0.38% \n==========================================\n Files 137 137 \n Lines 23475 23475 \n==========================================\n- Hits 18384 18295 -89 \n- Misses 5091 5180 +89 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5174?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.45% <0.00%> (-0.83%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `94.81% <0.00%> (-0.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.16% <0.00%> (-0.13%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.96% <0.00%> (+0.14%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.80% <0.00%> (+0.80%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5174?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5174?src=pr&el=footer). Last update [bc3a0c0...7af96db](https://codecov.io/gh/huggingface/transformers/pull/5174?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks! model cards: https://huggingface.co/nyu-mll"
] | 1,592 | 1,593 | 1,592 | CONTRIBUTOR | null | Add a README.md for the nyu-mll organization. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5174/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5174/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5174",
"html_url": "https://github.com/huggingface/transformers/pull/5174",
"diff_url": "https://github.com/huggingface/transformers/pull/5174.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5174.patch",
"merged_at": 1592861068000
} |
https://api.github.com/repos/huggingface/transformers/issues/5173 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5173/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5173/comments | https://api.github.com/repos/huggingface/transformers/issues/5173/events | https://github.com/huggingface/transformers/issues/5173 | 642,652,943 | MDU6SXNzdWU2NDI2NTI5NDM= | 5,173 | Trying to make a keras model with transformer layers defined in hf-transformers, keep running into `AttributeError: Tensor.op is meaningless when eager execution is enabled` when trying to make a keras model | {
"login": "Santosh-Gupta",
"id": 5524261,
"node_id": "MDQ6VXNlcjU1MjQyNjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5524261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Santosh-Gupta",
"html_url": "https://github.com/Santosh-Gupta",
"followers_url": "https://api.github.com/users/Santosh-Gupta/followers",
"following_url": "https://api.github.com/users/Santosh-Gupta/following{/other_user}",
"gists_url": "https://api.github.com/users/Santosh-Gupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Santosh-Gupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Santosh-Gupta/subscriptions",
"organizations_url": "https://api.github.com/users/Santosh-Gupta/orgs",
"repos_url": "https://api.github.com/users/Santosh-Gupta/repos",
"events_url": "https://api.github.com/users/Santosh-Gupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/Santosh-Gupta/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,598 | 1,598 | CONTRIBUTOR | null | # ❓ Questions & Help
## Details
I am trying to build models using huggingface transformer layers. However, I keep running into `AttributeError: Tensor.op is meaningless when eager execution is enabled.`
I don't want to disable eager execution, as I've heard it interferes with some of Keras's other functionality (feel free to correct me if this is wrong).
Here's the abridged code for one attempt (full code here: https://colab.research.google.com/drive/1pnFDEQB4EuxNM1pSgbWJNKD2208dIIN0?usp=sharing)
```
import tensorflow as tf

from transformers.modeling_tf_bert import TFBertLayer
from transformers.modeling_tf_utils import cast_bool_to_primitive
class TFBertEncoderAlter(tf.keras.layers.Layer):
def __init__(self, config, **kwargs):
super().__init__(**kwargs)
self.output_hidden_states = config.output_hidden_states
self.layer = [TFBertLayer(config, name="layer_._{}".format(i)) for i in range(config.num_hidden_layers)]
def call(self, inputs, training=False):
hidden_states, attention_mask, output_attentions = inputs
all_hidden_states = ()
all_attentions = ()
for i, layer_module in enumerate(self.layer):
if self.output_hidden_states:
all_hidden_states = all_hidden_states + (hidden_states,)
layer_outputs = layer_module(
[hidden_states, attention_mask, output_attentions], training=training
)
hidden_states = layer_outputs[0]
if cast_bool_to_primitive(output_attentions) is True:
all_attentions = all_attentions + (layer_outputs[1],)
# Add last layer
if self.output_hidden_states:
all_hidden_states = all_hidden_states + (hidden_states,)
outputs = (hidden_states,)
if self.output_hidden_states:
outputs = outputs + (all_hidden_states,)
if cast_bool_to_primitive(output_attentions) is True:
outputs = outputs + (all_attentions,)
return outputs # outputs, (hidden states), (attentions)
P_trans11 = TFBertEncoderAlter(config, name='Encoder')
inputHiddenVals = tf.keras.Input(shape=[None, None], dtype=tf.float32, name='input_Q',
batch_size=None)
P_outputs = P_trans11((outt, None, None))
modelNew = tf.keras.Model(inputHiddenVals,P_outputs)
```
Here is the output
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-35-8cd9393cb573> in <module>()
5
6 P_outputs = P_trans11((outt, None, None))
----> 7 modelNew = tf.keras.Model(inputHiddenVals,P_outputs)
6 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in __init__(self, *args, **kwargs)
165
166 def __init__(self, *args, **kwargs):
--> 167 super(Model, self).__init__(*args, **kwargs)
168 _keras_api_gauge.get_cell('model').set(True)
169 # Model must be created under scope of DistStrat it will be trained with.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/network.py in __init__(self, *args, **kwargs)
171 'inputs' in kwargs and 'outputs' in kwargs):
172 # Graph network
--> 173 self._init_graph_network(*args, **kwargs)
174 else:
175 # Subclassed network
/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
454 self._self_setattr_tracking = False # pylint: disable=protected-access
455 try:
--> 456 result = method(self, *args, **kwargs)
457 finally:
458 self._self_setattr_tracking = previous_value # pylint: disable=protected-access
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/network.py in _init_graph_network(self, inputs, outputs, name, **kwargs)
252
253 if any(not hasattr(tensor, '_keras_history') for tensor in self.outputs):
--> 254 base_layer_utils.create_keras_history(self._nested_outputs)
255
256 self._base_init(name=name, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py in create_keras_history(tensors)
184 keras_tensors: The Tensors found that came from a Keras Layer.
185 """
--> 186 _, created_layers = _create_keras_history_helper(tensors, set(), [])
187 return created_layers
188
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py in _create_keras_history_helper(tensors, processed_ops, created_layers)
210 if getattr(tensor, '_keras_history', None) is not None:
211 continue
--> 212 op = tensor.op # The Op that created this Tensor.
213 if op not in processed_ops:
214 if op.type.startswith('Sparse'):
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in op(self)
1111 def op(self):
1112 raise AttributeError(
-> 1113 "Tensor.op is meaningless when eager execution is enabled.")
1114
1115 @property
AttributeError: Tensor.op is meaningless when eager execution is enabled.
```
Here is a different approach (full code here: https://colab.research.google.com/drive/1bieigPh98l9POzT3Tdz8DWDN_nh18YG1?usp=sharing)
```
import tensorflow as tf

from transformers.modeling_tf_bert import TFBertEncoder, TFBertMainLayer, TFBertLayer
from transformers.modeling_tf_utils import (
get_initializer,
keras_serializable,
shape_list,
)
from transformers.configuration_bert import BertConfig
class TFBertEncoder0(tf.keras.layers.Layer):
def __init__(self, config, **kwargs):
super().__init__(**kwargs)
self.output_attentions = config.output_attentions
self.output_hidden_states = config.output_hidden_states
self.layer = [TFBertLayer(config, name="layer_._{}".format(i)) for i in range(config.num_hidden_layers)]
def call(self, inputs, training=False):
hidden_states, attention_mask, head_mask = inputs
all_hidden_states = ()
all_attentions = ()
for i, layer_module in enumerate(self.layer):
if self.output_hidden_states:
all_hidden_states = all_hidden_states + (hidden_states,)
layer_outputs = layer_module([hidden_states, attention_mask, head_mask], training=training)
hidden_states = layer_outputs[0]
if self.output_attentions:
all_attentions = all_attentions + (layer_outputs[1],)
# Add last layer
if self.output_hidden_states:
all_hidden_states = all_hidden_states + (hidden_states,)
outputs = (hidden_states,)
if self.output_hidden_states:
outputs = outputs + (all_hidden_states,)
if self.output_attentions:
outputs = outputs + (all_attentions,)
return outputs # outputs, (hidden states), (attentions)
@keras_serializable
class TFBertMainLayerAlter4(tf.keras.layers.Layer):
config_class = BertConfig
def __init__(self, config, **kwargs):
super().__init__(**kwargs)
self.num_hidden_layers = config.num_hidden_layers
self.initializer_range = config.initializer_range
self.output_attentions = config.output_attentions
self.encoder = TFBertEncoder0(config, name="encoder")
def _prune_heads(self, heads_to_prune):
""" Prunes heads of the model.
heads_to_prune: dict of {layer_num: list of heads to prune in this layer}
See base class PreTrainedModel
"""
raise NotImplementedError
def call(
self,
inputs,
training=False,
):
encoder_outputs = self.encoder(
[inputs, None, None], training=training
)
sequence_output = encoder_outputs[0]
outputs = (sequence_output,) + encoder_outputs[
1:
] # add hidden_states and attentions if they are here
return outputs # sequence_output, pooled_output, (hidden_states), (attentions)
P_trans11 = TFBertMainLayerAlter4(config3, name="roberta")
inputHiddenVals = tf.keras.Input(shape=[None, None], dtype=tf.float32, name='input_Q',
batch_size=None)
P_outputs = P_trans11(outt)
modelNew = tf.keras.Model(inputHiddenVals,P_outputs)
```
Again, same result
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-43-79b3b73e5f5f> in <module>()
5
6 P_outputs = P_trans11(outt)
----> 7 modelNew = tf.keras.Model(inputHiddenVals,P_outputs)
6 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in __init__(self, *args, **kwargs)
165
166 def __init__(self, *args, **kwargs):
--> 167 super(Model, self).__init__(*args, **kwargs)
168 _keras_api_gauge.get_cell('model').set(True)
169 # Model must be created under scope of DistStrat it will be trained with.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/network.py in __init__(self, *args, **kwargs)
171 'inputs' in kwargs and 'outputs' in kwargs):
172 # Graph network
--> 173 self._init_graph_network(*args, **kwargs)
174 else:
175 # Subclassed network
/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
454 self._self_setattr_tracking = False # pylint: disable=protected-access
455 try:
--> 456 result = method(self, *args, **kwargs)
457 finally:
458 self._self_setattr_tracking = previous_value # pylint: disable=protected-access
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/network.py in _init_graph_network(self, inputs, outputs, name, **kwargs)
252
253 if any(not hasattr(tensor, '_keras_history') for tensor in self.outputs):
--> 254 base_layer_utils.create_keras_history(self._nested_outputs)
255
256 self._base_init(name=name, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py in create_keras_history(tensors)
184 keras_tensors: The Tensors found that came from a Keras Layer.
185 """
--> 186 _, created_layers = _create_keras_history_helper(tensors, set(), [])
187 return created_layers
188
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py in _create_keras_history_helper(tensors, processed_ops, created_layers)
210 if getattr(tensor, '_keras_history', None) is not None:
211 continue
--> 212 op = tensor.op # The Op that created this Tensor.
213 if op not in processed_ops:
214 if op.type.startswith('Sparse'):
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in op(self)
1111 def op(self):
1112 raise AttributeError(
-> 1113 "Tensor.op is meaningless when eager execution is enabled.")
1114
1115 @property
AttributeError: Tensor.op is meaningless when eager execution is enabled.
```
Here's another attempt (full code here: https://colab.research.google.com/drive/1UVJ7XSx0vXpgApe6E7ECVb9LNSJN_D9-?usp=sharing)
```
import tensorflow as tf

from transformers.modeling_tf_bert import TFBertLayer
l1 = TFBertLayer(config)
l2 = TFBertLayer(config)
inputHiddenVals = tf.keras.Input(shape=[None, None], dtype=tf.float32, name='input_Q',
batch_size=None)
P_outputs = l1((outt, None, None))[0]
P_outputs2 = l2((outt, None, None))[0]
modelNew = tf.keras.Model(inputHiddenVals,P_outputs2)
```
Again, same results.
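For contrast, a minimal sketch of the standard functional-API pattern, in which the layer is called on the symbolic `tf.keras.Input` rather than on an eager tensor such as `outt` (reusing `l1` from the last attempt). This is an editorial illustration, not code from the original post, and the hidden size of 768 is an assumption:
```python
inputHiddenVals = tf.keras.Input(shape=[None, 768], dtype=tf.float32,
                                 name='input_Q', batch_size=None)
# Calling the layer on the symbolic input lets Keras trace the graph,
# so no eager tensor ends up in the model's outputs.
P_outputs = l1((inputHiddenVals, None, None))[0]
modelNew = tf.keras.Model(inputHiddenVals, P_outputs)
```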
**A link to original question on Stack Overflow**:
https://stackoverflow.com/questions/62504903/attributeerror-tensor-op-is-meaningless-when-eager-execution-is-enabled-when-t | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5173/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5173/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5172 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5172/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5172/comments | https://api.github.com/repos/huggingface/transformers/issues/5172/events | https://github.com/huggingface/transformers/issues/5172 | 642,643,307 | MDU6SXNzdWU2NDI2NDMzMDc= | 5,172 | Load a T5ForConditionalGeneration's encoder into a T5Model | {
"login": "Palipoor",
"id": 16380397,
"node_id": "MDQ6VXNlcjE2MzgwMzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/16380397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Palipoor",
"html_url": "https://github.com/Palipoor",
"followers_url": "https://api.github.com/users/Palipoor/followers",
"following_url": "https://api.github.com/users/Palipoor/following{/other_user}",
"gists_url": "https://api.github.com/users/Palipoor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Palipoor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Palipoor/subscriptions",
"organizations_url": "https://api.github.com/users/Palipoor/orgs",
"repos_url": "https://api.github.com/users/Palipoor/repos",
"events_url": "https://api.github.com/users/Palipoor/events{/privacy}",
"received_events_url": "https://api.github.com/users/Palipoor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @Palipoor, This might do the trick,\r\n\r\n```python\r\nfirst_model = T5ForConditionalGeneration.from_pretrained('your_finetuned_model')\r\nsecond_model = T5Model.from_pretrained('t5-small') # assuming 'your_finetuned_model' is t5-small\r\n\r\n# get first model's encoder weights\r\nfirst_model_encoder_state_dict = first_model.encoder.state_dict()\r\n\r\n# load first model's encoder weights into second_model's encoder\r\nsecond_model.encoder.load_state_dict(first_model_encoder_state_dict)\r\n```\r\n\r\n@patrickvonplaten can you take a look ?",
"@patil-suraj has the correct idea I think. You can even make it easier by just doing\r\n\r\n```python\r\nt5_model_no_lm_head = T5.from_pretrained(\"<path_to_t5_for_cond_generation>\") # this will load all weighs that are present in both models. So It will just skip the lm head weighs\r\n```\r\n\r\nYou can verify that this works by doing the following:\r\n\r\n```python \r\nt5_model_with_lm_head = T5ForConditionalGeneration.from_pretrained('t5-small')\r\nt5_model_with_lm_head.save_pretrained(\"./\")\r\nt5_model_no_lm_head = T5Model.from_pretrained(\"./\")\r\n```",
"@patrickvonplaten @patil-suraj Thank you both for your resposnes! ",
"Hi All,\r\nSo I have been trying to load a pre-tuned T5 model and then predict a sentence. I have been following this tutorial :\r\nhttps://www.geeksforgeeks.org/text-to-text-transfer-transformer-in-data-augmentation/\r\n\r\nAnd I have been able to pretune and save the model in a folder. Now I cant seem to know how to load it back and predict a given sentence. I have done the steps suggested by @patil-suraj but then how to use this configuration to do a prediction.\r\n\r\nGrateful if you could help. Thanks :)\r\n\r\n"
] | 1,592 | 1,619 | 1,592 | NONE | null | Hi,
I know that T5ForConditionalGeneration is a T5Model with decoding. I've got a T5ForConditionalGeneration that I've fine-tuned on a seq2seq task, and now I want to use its T5 encoder to initialize the parameters of a T5Model (to further train it on some other task). I read the code and I didn't understand what I should do. Can you please help me? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5172/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5172/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5171 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5171/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5171/comments | https://api.github.com/repos/huggingface/transformers/issues/5171/events | https://github.com/huggingface/transformers/pull/5171 | 642,642,567 | MDExOlB1bGxSZXF1ZXN0NDM3NjAxOTk2 | 5,171 | Fixing docs for Encoder Decoder Config | {
"login": "mikaelsouza",
"id": 9092284,
"node_id": "MDQ6VXNlcjkwOTIyODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9092284?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mikaelsouza",
"html_url": "https://github.com/mikaelsouza",
"followers_url": "https://api.github.com/users/mikaelsouza/followers",
"following_url": "https://api.github.com/users/mikaelsouza/following{/other_user}",
"gists_url": "https://api.github.com/users/mikaelsouza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mikaelsouza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mikaelsouza/subscriptions",
"organizations_url": "https://api.github.com/users/mikaelsouza/orgs",
"repos_url": "https://api.github.com/users/mikaelsouza/repos",
"events_url": "https://api.github.com/users/mikaelsouza/events{/privacy}",
"received_events_url": "https://api.github.com/users/mikaelsouza/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5171?src=pr&el=h1) Report\n> Merging [#5171](https://codecov.io/gh/huggingface/transformers/pull/5171?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bc3a0c06075050d3de586c543e4ad6a7efc9260e&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5171?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5171 +/- ##\n=======================================\n Coverage 78.31% 78.31% \n=======================================\n Files 137 137 \n Lines 23475 23475 \n=======================================\n+ Hits 18384 18385 +1 \n+ Misses 5091 5090 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5171?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/5171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.16% <0.00%> (-0.13%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.11% <0.00%> (+0.29%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5171?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5171?src=pr&el=footer). Last update [bc3a0c0...06ccbb9](https://codecov.io/gh/huggingface/transformers/pull/5171?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Oh, you are 100% correct here @mikaelsouza . This must be pretty confusing in the docs. Thanks for the fix!"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | This is a small contribution to the EncoderDecoderConfig documentation.
While trying to use this class, I noticed the documentation said that we could pass 2 "encoder" parameters to the class constructor.
I am almost 99.99% certain it meant we could pass an encoder and a decoder parameter to the class constructor. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5171/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5171",
"html_url": "https://github.com/huggingface/transformers/pull/5171",
"diff_url": "https://github.com/huggingface/transformers/pull/5171.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5171.patch",
"merged_at": 1592815877000
} |
https://api.github.com/repos/huggingface/transformers/issues/5170 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5170/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5170/comments | https://api.github.com/repos/huggingface/transformers/issues/5170/events | https://github.com/huggingface/transformers/issues/5170 | 642,619,809 | MDU6SXNzdWU2NDI2MTk4MDk= | 5,170 | Add support for `encoder_hidden_states` and `encoder_attention_mask` in modeling_longformer | {
"login": "HHousen",
"id": 11785397,
"node_id": "MDQ6VXNlcjExNzg1Mzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/11785397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HHousen",
"html_url": "https://github.com/HHousen",
"followers_url": "https://api.github.com/users/HHousen/followers",
"following_url": "https://api.github.com/users/HHousen/following{/other_user}",
"gists_url": "https://api.github.com/users/HHousen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HHousen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HHousen/subscriptions",
"organizations_url": "https://api.github.com/users/HHousen/orgs",
"repos_url": "https://api.github.com/users/HHousen/repos",
"events_url": "https://api.github.com/users/HHousen/events{/privacy}",
"received_events_url": "https://api.github.com/users/HHousen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @HHousen,\r\n\r\nIn order for Longformer to work some changes need to be done to the `longformer_modeling.py` file (move the level of abstraction much lower) and also we need to think about how to deal with cross - attention layers for Longformer's chunked self attention. I will do some refactoring for Longformer soon. \r\n\r\nAs it is now, I will not work properly within the encoder-decoder architecture",
"Linking this to https://github.com/huggingface/transformers/issues/4225#issuecomment-659467998.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,600 | 1,600 | CONTRIBUTOR | null | # 🚀 Feature request
In order to use the Longformer with the `EncoderDecoderModel` structure, it needs to accept `head_mask`, `encoder_hidden_states`, and `encoder_attention_mask` in its `forward()` function. Currently, `LongformerSelfAttention` asserts that `encoder_hidden_states` and `encoder_attention_mask` are `None`. I attempted to implement this functionality by comparing the BERT implementation, which accepts these values, to the Longformer implementation.
I can create an `EncoderDecoderModel` like so: `model = EncoderDecoderModel.from_encoder_decoder_pretrained("allenai/longformer-base-4096", "allenai/longformer-base-4096")` and the model does train using my changes. Even though the model trains, I am not confident that the changes I made are correct. I think the cross attention mechanism already exists within the `LongformerModel` since it is a subclass of `RobertaModel` which is a subclass of `BertModel`. `BertModel` uses `BertLayer`s which use cross attention if `self.is_decoder` is True.
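For reference, the construction quoted above as a runnable sketch, together with a quick check of the inheritance chain it relies on (class names as exposed by transformers at the time of this issue):
```python
from transformers import (BertModel, EncoderDecoderModel, LongformerModel,
                          RobertaModel)

# The construction from the report: Longformer as both encoder and decoder.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "allenai/longformer-base-4096", "allenai/longformer-base-4096"
)

# The inheritance chain described above (both were True at the time).
print(issubclass(LongformerModel, RobertaModel))
print(issubclass(RobertaModel, BertModel))
```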
Furthermore, I get the below error messages, which have something to do with the inheritance hierarchy. It fails to initialize the cross attention layers, which makes sense, but I am not sure about the other ones.
```
2020-06-21 18:00:53,533|transformers.modeling_encoder_decoder|INFO> Initializing allenai/longformer-base-4096 as a decoder model. Cross attention layers are added to allenai/longformer-base-4096 and randomly initialized if allenai/longformer-base-4096's architecture allows for cross attention layers.
2020-06-21 18:00:53,609|transformers.modeling_utils|INFO> loading weights file https://cdn.huggingface.co/allenai/longformer-base-4096/pytorch_model.bin from cache at /root/.cache/torch/transformers/dfc92dbbf5c555abf807425ebdb22b55de7a17e21fe1c48cbaa5764982c1d9c0.cd65234711d2e83d420aa696eb9186cdec6ab79ef8bf090b442cf249443dfa92
2020-06-21 18:00:58,792|transformers.modeling_utils|INFO> Weights of BertLMHeadModel not initialized from pretrained model: ['embeddings.word_embeddings.weight', 'embeddings.position_embeddings.weight', 'embeddings.token_type_embeddings.weight', 'embeddings.LayerNorm.weight', 'embeddings.LayerNorm.bias', 'encoder.layer.0.attention.self.query.weight', 'encoder.layer.0.attention.self.query.bias', 'encoder.layer.0.attention.self.key.weight', 'encoder.layer.0.attention.self.key.bias', 'encoder.layer.0.attention.self.value.weight', 'encoder.layer.0.attention.self.value.bias', 'encoder.layer.0.attention.output.dense.weight', 'encoder.layer.0.attention.output.dense.bias', 'encoder.layer.0.attention.output.LayerNorm.weight', 'encoder.layer.0.attention.output.LayerNorm.bias', 'encoder.layer.0.crossattention.self.query.weight', 'encoder.layer.0.crossattention.self.query.bias', 'encoder.layer.0.crossattention.self.key.weight', 'encoder.layer.0.crossattention.self.key.bias', 'encoder.layer.0.crossattention.self.value.weight', 'encoder.layer.0.crossattention.self.value.bias', 'encoder.layer.0.crossattention.output.dense.weight', 'encoder.layer.0.crossattention.output.dense.bias', 'encoder.layer.0.crossattention.output.LayerNorm.weight', 'encoder.layer.0.crossattention.output.LayerNorm.bias', 'encoder.layer.0.intermediate.dense.weight', 'encoder.layer.0.intermediate.dense.bias', 'encoder.layer.0.output.dense.weight', 'encoder.layer.0.output.dense.bias', 'encoder.layer.0.output.LayerNorm.weight', 'encoder.layer.0.output.LayerNorm.bias', 'encoder.layer.1.attention.self.query.weight', 'encoder.layer.1.attention.self.query.bias', 'encoder.layer.1.attention.self.key.weight', 'encoder.layer.1.attention.self.key.bias', 'encoder.layer.1.attention.self.value.weight', 'encoder.layer.1.attention.self.value.bias', 'encoder.layer.1.attention.output.dense.weight', 'encoder.layer.1.attention.output.dense.bias', 'encoder.layer.1.attention.output.LayerNorm.weight', 'encoder.layer.1.attention.output.LayerNorm.bias', 'encoder.layer.1.crossattention.self.query.weight', 'encoder.layer.1.crossattention.self.query.bias', 'encoder.layer.1.crossattention.self.key.weight', 'encoder.layer.1.crossattention.self.key.bias', 'encoder.layer.1.crossattention.self.value.weight', 'encoder.layer.1.crossattention.self.value.bias', 'encoder.layer.1.crossattention.output.dense.weight', 'encoder.layer.1.crossattention.output.dense.bias', 'encoder.layer.1.crossattention.output.LayerNorm.weight', 'encoder.layer.1.crossattention.output.LayerNorm.bias', 'encoder.layer.1.intermediate.dense.weight', 'encoder.layer.1.intermediate.dense.bias', 'encoder.layer.1.output.dense.weight', 'encoder.layer.1.output.dense.bias', 'encoder.layer.1.output.LayerNorm.weight', 'encoder.layer.1.output.LayerNorm.bias', 'encoder.layer.2.attention.self.query.weight', 'encoder.layer.2.attention.self.query.bias', 'encoder.layer.2.attention.self.key.weight', 'encoder.layer.2.attention.self.key.bias', 'encoder.layer.2.attention.self.value.weight', 'encoder.layer.2.attention.self.value.bias', 'encoder.layer.2.attention.output.dense.weight', 'encoder.layer.2.attention.output.dense.bias', 'encoder.layer.2.attention.output.LayerNorm.weight', 'encoder.layer.2.attention.output.LayerNorm.bias', 'encoder.layer.2.crossattention.self.query.weight', 'encoder.layer.2.crossattention.self.query.bias', 'encoder.layer.2.crossattention.self.key.weight', 'encoder.layer.2.crossattention.self.key.bias', 'encoder.layer.2.crossattention.self.value.weight', 
'encoder.layer.2.crossattention.self.value.bias', 'encoder.layer.2.crossattention.output.dense.weight', 'encoder.layer.2.crossattention.output.dense.bias', 'encoder.layer.2.crossattention.output.LayerNorm.weight', 'encoder.layer.2.crossattention.output.LayerNorm.bias', 'encoder.layer.2.intermediate.dense.weight', 'encoder.layer.2.intermediate.dense.bias', 'encoder.layer.2.output.dense.weight', 'encoder.layer.2.output.dense.bias', 'encoder.layer.2.output.LayerNorm.weight', 'encoder.layer.2.output.LayerNorm.bias', 'encoder.layer.3.attention.self.query.weight', 'encoder.layer.3.attention.self.query.bias', 'encoder.layer.3.attention.self.key.weight', 'encoder.layer.3.attention.self.key.bias', 'encoder.layer.3.attention.self.value.weight', 'encoder.layer.3.attention.self.value.bias', 'encoder.layer.3.attention.output.dense.weight', 'encoder.layer.3.attention.output.dense.bias', 'encoder.layer.3.attention.output.LayerNorm.weight', 'encoder.layer.3.attention.output.LayerNorm.bias', 'encoder.layer.3.crossattention.self.query.weight', 'encoder.layer.3.crossattention.self.query.bias', 'encoder.layer.3.crossattention.self.key.weight', 'encoder.layer.3.crossattention.self.key.bias', 'encoder.layer.3.crossattention.self.value.weight', 'encoder.layer.3.crossattention.self.value.bias', 'encoder.layer.3.crossattention.output.dense.weight', 'encoder.layer.3.crossattention.output.dense.bias', 'encoder.layer.3.crossattention.output.LayerNorm.weight', 'encoder.layer.3.crossattention.output.LayerNorm.bias', 'encoder.layer.3.intermediate.dense.weight', 'encoder.layer.3.intermediate.dense.bias', 'encoder.layer.3.output.dense.weight', 'encoder.layer.3.output.dense.bias', 'encoder.layer.3.output.LayerNorm.weight', 'encoder.layer.3.output.LayerNorm.bias', 'encoder.layer.4.attention.self.query.weight', 'encoder.layer.4.attention.self.query.bias', 'encoder.layer.4.attention.self.key.weight', 'encoder.layer.4.attention.self.key.bias', 'encoder.layer.4.attention.self.value.weight', 'encoder.layer.4.attention.self.value.bias', 'encoder.layer.4.attention.output.dense.weight', 'encoder.layer.4.attention.output.dense.bias', 'encoder.layer.4.attention.output.LayerNorm.weight', 'encoder.layer.4.attention.output.LayerNorm.bias', 'encoder.layer.4.crossattention.self.query.weight', 'encoder.layer.4.crossattention.self.query.bias', 'encoder.layer.4.crossattention.self.key.weight', 'encoder.layer.4.crossattention.self.key.bias', 'encoder.layer.4.crossattention.self.value.weight', 'encoder.layer.4.crossattention.self.value.bias', 'encoder.layer.4.crossattention.output.dense.weight', 'encoder.layer.4.crossattention.output.dense.bias', 'encoder.layer.4.crossattention.output.LayerNorm.weight', 'encoder.layer.4.crossattention.output.LayerNorm.bias', 'encoder.layer.4.intermediate.dense.weight', 'encoder.layer.4.intermediate.dense.bias', 'encoder.layer.4.output.dense.weight', 'encoder.layer.4.output.dense.bias', 'encoder.layer.4.output.LayerNorm.weight', 'encoder.layer.4.output.LayerNorm.bias', 'encoder.layer.5.attention.self.query.weight', 'encoder.layer.5.attention.self.query.bias', 'encoder.layer.5.attention.self.key.weight', 'encoder.layer.5.attention.self.key.bias', 'encoder.layer.5.attention.self.value.weight', 'encoder.layer.5.attention.self.value.bias', 'encoder.layer.5.attention.output.dense.weight', 'encoder.layer.5.attention.output.dense.bias', 'encoder.layer.5.attention.output.LayerNorm.weight', 'encoder.layer.5.attention.output.LayerNorm.bias', 'encoder.layer.5.crossattention.self.query.weight', 
'encoder.layer.5.crossattention.self.query.bias', 'encoder.layer.5.crossattention.self.key.weight', 'encoder.layer.5.crossattention.self.key.bias', 'encoder.layer.5.crossattention.self.value.weight', 'encoder.layer.5.crossattention.self.value.bias', 'encoder.layer.5.crossattention.output.dense.weight', 'encoder.layer.5.crossattention.output.dense.bias', 'encoder.layer.5.crossattention.output.LayerNorm.weight', 'encoder.layer.5.crossattention.output.LayerNorm.bias', 'encoder.layer.5.intermediate.dense.weight', 'encoder.layer.5.intermediate.dense.bias', 'encoder.layer.5.output.dense.weight', 'encoder.layer.5.output.dense.bias', 'encoder.layer.5.output.LayerNorm.weight', 'encoder.layer.5.output.LayerNorm.bias', 'encoder.layer.6.attention.self.query.weight', 'encoder.layer.6.attention.self.query.bias', 'encoder.layer.6.attention.self.key.weight', 'encoder.layer.6.attention.self.key.bias', 'encoder.layer.6.attention.self.value.weight', 'encoder.layer.6.attention.self.value.bias', 'encoder.layer.6.attention.output.dense.weight', 'encoder.layer.6.attention.output.dense.bias', 'encoder.layer.6.attention.output.LayerNorm.weight', 'encoder.layer.6.attention.output.LayerNorm.bias', 'encoder.layer.6.crossattention.self.query.weight', 'encoder.layer.6.crossattention.self.query.bias', 'encoder.layer.6.crossattention.self.key.weight', 'encoder.layer.6.crossattention.self.key.bias', 'encoder.layer.6.crossattention.self.value.weight', 'encoder.layer.6.crossattention.self.value.bias', 'encoder.layer.6.crossattention.output.dense.weight', 'encoder.layer.6.crossattention.output.dense.bias', 'encoder.layer.6.crossattention.output.LayerNorm.weight', 'encoder.layer.6.crossattention.output.LayerNorm.bias', 'encoder.layer.6.intermediate.dense.weight', 'encoder.layer.6.intermediate.dense.bias', 'encoder.layer.6.output.dense.weight', 'encoder.layer.6.output.dense.bias', 'encoder.layer.6.output.LayerNorm.weight', 'encoder.layer.6.output.LayerNorm.bias', 'encoder.layer.7.attention.self.query.weight', 'encoder.layer.7.attention.self.query.bias', 'encoder.layer.7.attention.self.key.weight', 'encoder.layer.7.attention.self.key.bias', 'encoder.layer.7.attention.self.value.weight', 'encoder.layer.7.attention.self.value.bias', 'encoder.layer.7.attention.output.dense.weight', 'encoder.layer.7.attention.output.dense.bias', 'encoder.layer.7.attention.output.LayerNorm.weight', 'encoder.layer.7.attention.output.LayerNorm.bias', 'encoder.layer.7.crossattention.self.query.weight', 'encoder.layer.7.crossattention.self.query.bias', 'encoder.layer.7.crossattention.self.key.weight', 'encoder.layer.7.crossattention.self.key.bias', 'encoder.layer.7.crossattention.self.value.weight', 'encoder.layer.7.crossattention.self.value.bias', 'encoder.layer.7.crossattention.output.dense.weight', 'encoder.layer.7.crossattention.output.dense.bias', 'encoder.layer.7.crossattention.output.LayerNorm.weight', 'encoder.layer.7.crossattention.output.LayerNorm.bias', 'encoder.layer.7.intermediate.dense.weight', 'encoder.layer.7.intermediate.dense.bias', 'encoder.layer.7.output.dense.weight', 'encoder.layer.7.output.dense.bias', 'encoder.layer.7.output.LayerNorm.weight', 'encoder.layer.7.output.LayerNorm.bias', 'encoder.layer.8.attention.self.query.weight', 'encoder.layer.8.attention.self.query.bias', 'encoder.layer.8.attention.self.key.weight', 'encoder.layer.8.attention.self.key.bias', 'encoder.layer.8.attention.self.value.weight', 'encoder.layer.8.attention.self.value.bias', 'encoder.layer.8.attention.output.dense.weight', 
'encoder.layer.8.attention.output.dense.bias', 'encoder.layer.8.attention.output.LayerNorm.weight', 'encoder.layer.8.attention.output.LayerNorm.bias', 'encoder.layer.8.crossattention.self.query.weight', 'encoder.layer.8.crossattention.self.query.bias', 'encoder.layer.8.crossattention.self.key.weight', 'encoder.layer.8.crossattention.self.key.bias', 'encoder.layer.8.crossattention.self.value.weight', 'encoder.layer.8.crossattention.self.value.bias', 'encoder.layer.8.crossattention.output.dense.weight', 'encoder.layer.8.crossattention.output.dense.bias', 'encoder.layer.8.crossattention.output.LayerNorm.weight', 'encoder.layer.8.crossattention.output.LayerNorm.bias', 'encoder.layer.8.intermediate.dense.weight', 'encoder.layer.8.intermediate.dense.bias', 'encoder.layer.8.output.dense.weight', 'encoder.layer.8.output.dense.bias', 'encoder.layer.8.output.LayerNorm.weight', 'encoder.layer.8.output.LayerNorm.bias', 'encoder.layer.9.attention.self.query.weight', 'encoder.layer.9.attention.self.query.bias', 'encoder.layer.9.attention.self.key.weight', 'encoder.layer.9.attention.self.key.bias', 'encoder.layer.9.attention.self.value.weight', 'encoder.layer.9.attention.self.value.bias', 'encoder.layer.9.attention.output.dense.weight', 'encoder.layer.9.attention.output.dense.bias', 'encoder.layer.9.attention.output.LayerNorm.weight', 'encoder.layer.9.attention.output.LayerNorm.bias', 'encoder.layer.9.crossattention.self.query.weight', 'encoder.layer.9.crossattention.self.query.bias', 'encoder.layer.9.crossattention.self.key.weight', 'encoder.layer.9.crossattention.self.key.bias', 'encoder.layer.9.crossattention.self.value.weight', 'encoder.layer.9.crossattention.self.value.bias', 'encoder.layer.9.crossattention.output.dense.weight', 'encoder.layer.9.crossattention.output.dense.bias', 'encoder.layer.9.crossattention.output.LayerNorm.weight', 'encoder.layer.9.crossattention.output.LayerNorm.bias', 'encoder.layer.9.intermediate.dense.weight', 'encoder.layer.9.intermediate.dense.bias', 'encoder.layer.9.output.dense.weight', 'encoder.layer.9.output.dense.bias', 'encoder.layer.9.output.LayerNorm.weight', 'encoder.layer.9.output.LayerNorm.bias', 'encoder.layer.10.attention.self.query.weight', 'encoder.layer.10.attention.self.query.bias', 'encoder.layer.10.attention.self.key.weight', 'encoder.layer.10.attention.self.key.bias', 'encoder.layer.10.attention.self.value.weight', 'encoder.layer.10.attention.self.value.bias', 'encoder.layer.10.attention.output.dense.weight', 'encoder.layer.10.attention.output.dense.bias', 'encoder.layer.10.attention.output.LayerNorm.weight', 'encoder.layer.10.attention.output.LayerNorm.bias', 'encoder.layer.10.crossattention.self.query.weight', 'encoder.layer.10.crossattention.self.query.bias', 'encoder.layer.10.crossattention.self.key.weight', 'encoder.layer.10.crossattention.self.key.bias', 'encoder.layer.10.crossattention.self.value.weight', 'encoder.layer.10.crossattention.self.value.bias', 'encoder.layer.10.crossattention.output.dense.weight', 'encoder.layer.10.crossattention.output.dense.bias', 'encoder.layer.10.crossattention.output.LayerNorm.weight', 'encoder.layer.10.crossattention.output.LayerNorm.bias', 'encoder.layer.10.intermediate.dense.weight', 'encoder.layer.10.intermediate.dense.bias', 'encoder.layer.10.output.dense.weight', 'encoder.layer.10.output.dense.bias', 'encoder.layer.10.output.LayerNorm.weight', 'encoder.layer.10.output.LayerNorm.bias', 'encoder.layer.11.attention.self.query.weight', 'encoder.layer.11.attention.self.query.bias', 
'encoder.layer.11.attention.self.key.weight', 'encoder.layer.11.attention.self.key.bias', 'encoder.layer.11.attention.self.value.weight', 'encoder.layer.11.attention.self.value.bias', 'encoder.layer.11.attention.output.dense.weight', 'encoder.layer.11.attention.output.dense.bias', 'encoder.layer.11.attention.output.LayerNorm.weight', 'encoder.layer.11.attention.output.LayerNorm.bias', 'encoder.layer.11.crossattention.self.query.weight', 'encoder.layer.11.crossattention.self.query.bias', 'encoder.layer.11.crossattention.self.key.weight', 'encoder.layer.11.crossattention.self.key.bias', 'encoder.layer.11.crossattention.self.value.weight', 'encoder.layer.11.crossattention.self.value.bias', 'encoder.layer.11.crossattention.output.dense.weight', 'encoder.layer.11.crossattention.output.dense.bias', 'encoder.layer.11.crossattention.output.LayerNorm.weight', 'encoder.layer.11.crossattention.output.LayerNorm.bias', 'encoder.layer.11.intermediate.dense.weight', 'encoder.layer.11.intermediate.dense.bias', 'encoder.layer.11.output.dense.weight', 'encoder.layer.11.output.dense.bias', 'encoder.layer.11.output.LayerNorm.weight', 'encoder.layer.11.output.LayerNorm.bias', 'pooler.dense.weight', 'pooler.dense.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.dense.weight', 'cls.predictions.decoder.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.bias']
2020-06-21 18:00:58,816|transformers.modeling_utils|INFO> Weights from pretrained model not used in BertLMHeadModel: ['longformer.embeddings.word_embeddings.weight', 'longformer.embeddings.position_embeddings.weight', 'longformer.embeddings.token_type_embeddings.weight', 'longformer.embeddings.LayerNorm.weight', 'longformer.embeddings.LayerNorm.bias', 'longformer.encoder.layer.0.attention.self.query.weight', 'longformer.encoder.layer.0.attention.self.query.bias', 'longformer.encoder.layer.0.attention.self.key.weight', 'longformer.encoder.layer.0.attention.self.key.bias', 'longformer.encoder.layer.0.attention.self.value.weight', 'longformer.encoder.layer.0.attention.self.value.bias', 'longformer.encoder.layer.0.attention.self.query_global.weight', 'longformer.encoder.layer.0.attention.self.query_global.bias', 'longformer.encoder.layer.0.attention.self.key_global.weight', 'longformer.encoder.layer.0.attention.self.key_global.bias', 'longformer.encoder.layer.0.attention.self.value_global.weight', 'longformer.encoder.layer.0.attention.self.value_global.bias', 'longformer.encoder.layer.0.attention.output.dense.weight', 'longformer.encoder.layer.0.attention.output.dense.bias', 'longformer.encoder.layer.0.attention.output.LayerNorm.weight', 'longformer.encoder.layer.0.attention.output.LayerNorm.bias', 'longformer.encoder.layer.0.intermediate.dense.weight', 'longformer.encoder.layer.0.intermediate.dense.bias', 'longformer.encoder.layer.0.output.dense.weight', 'longformer.encoder.layer.0.output.dense.bias', 'longformer.encoder.layer.0.output.LayerNorm.weight', 'longformer.encoder.layer.0.output.LayerNorm.bias', 'longformer.encoder.layer.1.attention.self.query.weight', 'longformer.encoder.layer.1.attention.self.query.bias', 'longformer.encoder.layer.1.attention.self.key.weight', 'longformer.encoder.layer.1.attention.self.key.bias', 'longformer.encoder.layer.1.attention.self.value.weight', 'longformer.encoder.layer.1.attention.self.value.bias', 'longformer.encoder.layer.1.attention.self.query_global.weight', 'longformer.encoder.layer.1.attention.self.query_global.bias', 'longformer.encoder.layer.1.attention.self.key_global.weight', 'longformer.encoder.layer.1.attention.self.key_global.bias', 'longformer.encoder.layer.1.attention.self.value_global.weight', 'longformer.encoder.layer.1.attention.self.value_global.bias', 'longformer.encoder.layer.1.attention.output.dense.weight', 'longformer.encoder.layer.1.attention.output.dense.bias', 'longformer.encoder.layer.1.attention.output.LayerNorm.weight', 'longformer.encoder.layer.1.attention.output.LayerNorm.bias', 'longformer.encoder.layer.1.intermediate.dense.weight', 'longformer.encoder.layer.1.intermediate.dense.bias', 'longformer.encoder.layer.1.output.dense.weight', 'longformer.encoder.layer.1.output.dense.bias', 'longformer.encoder.layer.1.output.LayerNorm.weight', 'longformer.encoder.layer.1.output.LayerNorm.bias', 'longformer.encoder.layer.2.attention.self.query.weight', 'longformer.encoder.layer.2.attention.self.query.bias', 'longformer.encoder.layer.2.attention.self.key.weight', 'longformer.encoder.layer.2.attention.self.key.bias', 'longformer.encoder.layer.2.attention.self.value.weight', 'longformer.encoder.layer.2.attention.self.value.bias', 'longformer.encoder.layer.2.attention.self.query_global.weight', 'longformer.encoder.layer.2.attention.self.query_global.bias', 'longformer.encoder.layer.2.attention.self.key_global.weight', 'longformer.encoder.layer.2.attention.self.key_global.bias', 
'longformer.encoder.layer.2.attention.self.value_global.weight', 'longformer.encoder.layer.2.attention.self.value_global.bias', 'longformer.encoder.layer.2.attention.output.dense.weight', 'longformer.encoder.layer.2.attention.output.dense.bias', 'longformer.encoder.layer.2.attention.output.LayerNorm.weight', 'longformer.encoder.layer.2.attention.output.LayerNorm.bias', 'longformer.encoder.layer.2.intermediate.dense.weight', 'longformer.encoder.layer.2.intermediate.dense.bias', 'longformer.encoder.layer.2.output.dense.weight', 'longformer.encoder.layer.2.output.dense.bias', 'longformer.encoder.layer.2.output.LayerNorm.weight', 'longformer.encoder.layer.2.output.LayerNorm.bias', 'longformer.encoder.layer.3.attention.self.query.weight', 'longformer.encoder.layer.3.attention.self.query.bias', 'longformer.encoder.layer.3.attention.self.key.weight', 'longformer.encoder.layer.3.attention.self.key.bias', 'longformer.encoder.layer.3.attention.self.value.weight', 'longformer.encoder.layer.3.attention.self.value.bias', 'longformer.encoder.layer.3.attention.self.query_global.weight', 'longformer.encoder.layer.3.attention.self.query_global.bias', 'longformer.encoder.layer.3.attention.self.key_global.weight', 'longformer.encoder.layer.3.attention.self.key_global.bias', 'longformer.encoder.layer.3.attention.self.value_global.weight', 'longformer.encoder.layer.3.attention.self.value_global.bias', 'longformer.encoder.layer.3.attention.output.dense.weight', 'longformer.encoder.layer.3.attention.output.dense.bias', 'longformer.encoder.layer.3.attention.output.LayerNorm.weight', 'longformer.encoder.layer.3.attention.output.LayerNorm.bias', 'longformer.encoder.layer.3.intermediate.dense.weight', 'longformer.encoder.layer.3.intermediate.dense.bias', 'longformer.encoder.layer.3.output.dense.weight', 'longformer.encoder.layer.3.output.dense.bias', 'longformer.encoder.layer.3.output.LayerNorm.weight', 'longformer.encoder.layer.3.output.LayerNorm.bias', 'longformer.encoder.layer.4.attention.self.query.weight', 'longformer.encoder.layer.4.attention.self.query.bias', 'longformer.encoder.layer.4.attention.self.key.weight', 'longformer.encoder.layer.4.attention.self.key.bias', 'longformer.encoder.layer.4.attention.self.value.weight', 'longformer.encoder.layer.4.attention.self.value.bias', 'longformer.encoder.layer.4.attention.self.query_global.weight', 'longformer.encoder.layer.4.attention.self.query_global.bias', 'longformer.encoder.layer.4.attention.self.key_global.weight', 'longformer.encoder.layer.4.attention.self.key_global.bias', 'longformer.encoder.layer.4.attention.self.value_global.weight', 'longformer.encoder.layer.4.attention.self.value_global.bias', 'longformer.encoder.layer.4.attention.output.dense.weight', 'longformer.encoder.layer.4.attention.output.dense.bias', 'longformer.encoder.layer.4.attention.output.LayerNorm.weight', 'longformer.encoder.layer.4.attention.output.LayerNorm.bias', 'longformer.encoder.layer.4.intermediate.dense.weight', 'longformer.encoder.layer.4.intermediate.dense.bias', 'longformer.encoder.layer.4.output.dense.weight', 'longformer.encoder.layer.4.output.dense.bias', 'longformer.encoder.layer.4.output.LayerNorm.weight', 'longformer.encoder.layer.4.output.LayerNorm.bias', 'longformer.encoder.layer.5.attention.self.query.weight', 'longformer.encoder.layer.5.attention.self.query.bias', 'longformer.encoder.layer.5.attention.self.key.weight', 'longformer.encoder.layer.5.attention.self.key.bias', 'longformer.encoder.layer.5.attention.self.value.weight', 
'longformer.encoder.layer.5.attention.self.value.bias', 'longformer.encoder.layer.5.attention.self.query_global.weight', 'longformer.encoder.layer.5.attention.self.query_global.bias', 'longformer.encoder.layer.5.attention.self.key_global.weight', 'longformer.encoder.layer.5.attention.self.key_global.bias', 'longformer.encoder.layer.5.attention.self.value_global.weight', 'longformer.encoder.layer.5.attention.self.value_global.bias', 'longformer.encoder.layer.5.attention.output.dense.weight', 'longformer.encoder.layer.5.attention.output.dense.bias', 'longformer.encoder.layer.5.attention.output.LayerNorm.weight', 'longformer.encoder.layer.5.attention.output.LayerNorm.bias', 'longformer.encoder.layer.5.intermediate.dense.weight', 'longformer.encoder.layer.5.intermediate.dense.bias', 'longformer.encoder.layer.5.output.dense.weight', 'longformer.encoder.layer.5.output.dense.bias', 'longformer.encoder.layer.5.output.LayerNorm.weight', 'longformer.encoder.layer.5.output.LayerNorm.bias', 'longformer.encoder.layer.6.attention.self.query.weight', 'longformer.encoder.layer.6.attention.self.query.bias', 'longformer.encoder.layer.6.attention.self.key.weight', 'longformer.encoder.layer.6.attention.self.key.bias', 'longformer.encoder.layer.6.attention.self.value.weight', 'longformer.encoder.layer.6.attention.self.value.bias', 'longformer.encoder.layer.6.attention.self.query_global.weight', 'longformer.encoder.layer.6.attention.self.query_global.bias', 'longformer.encoder.layer.6.attention.self.key_global.weight', 'longformer.encoder.layer.6.attention.self.key_global.bias', 'longformer.encoder.layer.6.attention.self.value_global.weight', 'longformer.encoder.layer.6.attention.self.value_global.bias', 'longformer.encoder.layer.6.attention.output.dense.weight', 'longformer.encoder.layer.6.attention.output.dense.bias', 'longformer.encoder.layer.6.attention.output.LayerNorm.weight', 'longformer.encoder.layer.6.attention.output.LayerNorm.bias', 'longformer.encoder.layer.6.intermediate.dense.weight', 'longformer.encoder.layer.6.intermediate.dense.bias', 'longformer.encoder.layer.6.output.dense.weight', 'longformer.encoder.layer.6.output.dense.bias', 'longformer.encoder.layer.6.output.LayerNorm.weight', 'longformer.encoder.layer.6.output.LayerNorm.bias', 'longformer.encoder.layer.7.attention.self.query.weight', 'longformer.encoder.layer.7.attention.self.query.bias', 'longformer.encoder.layer.7.attention.self.key.weight', 'longformer.encoder.layer.7.attention.self.key.bias', 'longformer.encoder.layer.7.attention.self.value.weight', 'longformer.encoder.layer.7.attention.self.value.bias', 'longformer.encoder.layer.7.attention.self.query_global.weight', 'longformer.encoder.layer.7.attention.self.query_global.bias', 'longformer.encoder.layer.7.attention.self.key_global.weight', 'longformer.encoder.layer.7.attention.self.key_global.bias', 'longformer.encoder.layer.7.attention.self.value_global.weight', 'longformer.encoder.layer.7.attention.self.value_global.bias', 'longformer.encoder.layer.7.attention.output.dense.weight', 'longformer.encoder.layer.7.attention.output.dense.bias', 'longformer.encoder.layer.7.attention.output.LayerNorm.weight', 'longformer.encoder.layer.7.attention.output.LayerNorm.bias', 'longformer.encoder.layer.7.intermediate.dense.weight', 'longformer.encoder.layer.7.intermediate.dense.bias', 'longformer.encoder.layer.7.output.dense.weight', 'longformer.encoder.layer.7.output.dense.bias', 'longformer.encoder.layer.7.output.LayerNorm.weight', 'longformer.encoder.layer.7.output.LayerNorm.bias', 
'longformer.encoder.layer.8.attention.self.query.weight', 'longformer.encoder.layer.8.attention.self.query.bias', 'longformer.encoder.layer.8.attention.self.key.weight', 'longformer.encoder.layer.8.attention.self.key.bias', 'longformer.encoder.layer.8.attention.self.value.weight', 'longformer.encoder.layer.8.attention.self.value.bias', 'longformer.encoder.layer.8.attention.self.query_global.weight', 'longformer.encoder.layer.8.attention.self.query_global.bias', 'longformer.encoder.layer.8.attention.self.key_global.weight', 'longformer.encoder.layer.8.attention.self.key_global.bias', 'longformer.encoder.layer.8.attention.self.value_global.weight', 'longformer.encoder.layer.8.attention.self.value_global.bias', 'longformer.encoder.layer.8.attention.output.dense.weight', 'longformer.encoder.layer.8.attention.output.dense.bias', 'longformer.encoder.layer.8.attention.output.LayerNorm.weight', 'longformer.encoder.layer.8.attention.output.LayerNorm.bias', 'longformer.encoder.layer.8.intermediate.dense.weight', 'longformer.encoder.layer.8.intermediate.dense.bias', 'longformer.encoder.layer.8.output.dense.weight', 'longformer.encoder.layer.8.output.dense.bias', 'longformer.encoder.layer.8.output.LayerNorm.weight', 'longformer.encoder.layer.8.output.LayerNorm.bias', 'longformer.encoder.layer.9.attention.self.query.weight', 'longformer.encoder.layer.9.attention.self.query.bias', 'longformer.encoder.layer.9.attention.self.key.weight', 'longformer.encoder.layer.9.attention.self.key.bias', 'longformer.encoder.layer.9.attention.self.value.weight', 'longformer.encoder.layer.9.attention.self.value.bias', 'longformer.encoder.layer.9.attention.self.query_global.weight', 'longformer.encoder.layer.9.attention.self.query_global.bias', 'longformer.encoder.layer.9.attention.self.key_global.weight', 'longformer.encoder.layer.9.attention.self.key_global.bias', 'longformer.encoder.layer.9.attention.self.value_global.weight', 'longformer.encoder.layer.9.attention.self.value_global.bias', 'longformer.encoder.layer.9.attention.output.dense.weight', 'longformer.encoder.layer.9.attention.output.dense.bias', 'longformer.encoder.layer.9.attention.output.LayerNorm.weight', 'longformer.encoder.layer.9.attention.output.LayerNorm.bias', 'longformer.encoder.layer.9.intermediate.dense.weight', 'longformer.encoder.layer.9.intermediate.dense.bias', 'longformer.encoder.layer.9.output.dense.weight', 'longformer.encoder.layer.9.output.dense.bias', 'longformer.encoder.layer.9.output.LayerNorm.weight', 'longformer.encoder.layer.9.output.LayerNorm.bias', 'longformer.encoder.layer.10.attention.self.query.weight', 'longformer.encoder.layer.10.attention.self.query.bias', 'longformer.encoder.layer.10.attention.self.key.weight', 'longformer.encoder.layer.10.attention.self.key.bias', 'longformer.encoder.layer.10.attention.self.value.weight', 'longformer.encoder.layer.10.attention.self.value.bias', 'longformer.encoder.layer.10.attention.self.query_global.weight', 'longformer.encoder.layer.10.attention.self.query_global.bias', 'longformer.encoder.layer.10.attention.self.key_global.weight', 'longformer.encoder.layer.10.attention.self.key_global.bias', 'longformer.encoder.layer.10.attention.self.value_global.weight', 'longformer.encoder.layer.10.attention.self.value_global.bias', 'longformer.encoder.layer.10.attention.output.dense.weight', 'longformer.encoder.layer.10.attention.output.dense.bias', 'longformer.encoder.layer.10.attention.output.LayerNorm.weight', 'longformer.encoder.layer.10.attention.output.LayerNorm.bias', 
'longformer.encoder.layer.10.intermediate.dense.weight', 'longformer.encoder.layer.10.intermediate.dense.bias', 'longformer.encoder.layer.10.output.dense.weight', 'longformer.encoder.layer.10.output.dense.bias', 'longformer.encoder.layer.10.output.LayerNorm.weight', 'longformer.encoder.layer.10.output.LayerNorm.bias', 'longformer.encoder.layer.11.attention.self.query.weight', 'longformer.encoder.layer.11.attention.self.query.bias', 'longformer.encoder.layer.11.attention.self.key.weight', 'longformer.encoder.layer.11.attention.self.key.bias', 'longformer.encoder.layer.11.attention.self.value.weight', 'longformer.encoder.layer.11.attention.self.value.bias', 'longformer.encoder.layer.11.attention.self.query_global.weight', 'longformer.encoder.layer.11.attention.self.query_global.bias', 'longformer.encoder.layer.11.attention.self.key_global.weight', 'longformer.encoder.layer.11.attention.self.key_global.bias', 'longformer.encoder.layer.11.attention.self.value_global.weight', 'longformer.encoder.layer.11.attention.self.value_global.bias', 'longformer.encoder.layer.11.attention.output.dense.weight', 'longformer.encoder.layer.11.attention.output.dense.bias', 'longformer.encoder.layer.11.attention.output.LayerNorm.weight', 'longformer.encoder.layer.11.attention.output.LayerNorm.bias', 'longformer.encoder.layer.11.intermediate.dense.weight', 'longformer.encoder.layer.11.intermediate.dense.bias', 'longformer.encoder.layer.11.output.dense.weight', 'longformer.encoder.layer.11.output.dense.bias', 'longformer.encoder.layer.11.output.LayerNorm.weight', 'longformer.encoder.layer.11.output.LayerNorm.bias', 'longformer.pooler.dense.weight', 'longformer.pooler.dense.bias', 'lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight']
2020-06-21 18:00:58,818|transformers.configuration_encoder_decoder|INFO> Set `config.is_decoder=True` for decoder_config
```
## Motivation
I'd like to use the Longformer as an `EncoderDecoderModel`, and these changes are required for the `EncoderDecoderModel` to work properly. My end goal is to use an encoder-decoder architecture for long documents. The better solution appears to be #4406; however, I still wonder if my changes are correct.
## Your contribution
My changes are in the `longformer-encoder_hidden_states` branch of [HHousen/transformers](https://github.com/HHousen/transformers/tree/longformer-encoder_hidden_states). Link to modified file: [modeling_longformer.py](https://github.com/HHousen/transformers/blob/longformer-encoder_hidden_states/src/transformers/modeling_longformer.py).
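For context, a rough sketch of the end usage I'm aiming for (untested; it assumes Longformer can be used on both sides once cross-attention support lands, and relies on the existing `from_encoder_decoder_pretrained` entry point):

```python
from transformers import EncoderDecoderModel

# Hypothetical goal: a long-document seq2seq model built from two Longformers.
# Without the changes above, the decoder side gets re-initialized from scratch,
# as the warnings in the log output above show.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "allenai/longformer-base-4096", "allenai/longformer-base-4096"
)
```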
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5170/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5169 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5169/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5169/comments | https://api.github.com/repos/huggingface/transformers/issues/5169/events | https://github.com/huggingface/transformers/issues/5169 | 642,613,226 | MDU6SXNzdWU2NDI2MTMyMjY= | 5,169 | [Tokenizer] batch_encode_plus method cannot encode List[Tuple[str]] with is_pretokenized=True | {
"login": "vjeronymo2",
"id": 37119493,
"node_id": "MDQ6VXNlcjM3MTE5NDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/37119493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vjeronymo2",
"html_url": "https://github.com/vjeronymo2",
"followers_url": "https://api.github.com/users/vjeronymo2/followers",
"following_url": "https://api.github.com/users/vjeronymo2/following{/other_user}",
"gists_url": "https://api.github.com/users/vjeronymo2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vjeronymo2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vjeronymo2/subscriptions",
"organizations_url": "https://api.github.com/users/vjeronymo2/orgs",
"repos_url": "https://api.github.com/users/vjeronymo2/repos",
"events_url": "https://api.github.com/users/vjeronymo2/events{/privacy}",
"received_events_url": "https://api.github.com/users/vjeronymo2/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"Hi @vjeronymo2, `encode_plus` method expects list of `str` or `tokens` if you are using `is_pretokenized=True`.\r\n`tokens` here are not ints, they are wordpiece tokens, if you want to convert a string to tokens then you can use `.tokenize` method",
"> Hi @vjeronymo2, `encode_plus` method expects list of `str` or `tokens` if you are using `is_pretokenized=True`.\r\n> `tokens` here are not ints, they are wordpiece tokens, if you want to convert a string to tokens then you can use `.tokenize` method\r\n\r\nThanks for clarifying that. I guess my biggest confusion was with the correct meaning of is_pretokenized flag.\r\nHave a nice day, good sir."
] | 1,592 | 1,592 | 1,592 | NONE | null | # 🐛 Bug
## Information
Model I am using: BERT
Language I am using the model on: English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Load a tokenizer
2. Create a generic List[Tuple[List[int], List[int]]], for example [([2023, 2573], [2023, 2573, 2205])]
3. Encode the list using the method batch_encode_plus with is_pretokenized=True to encode the pairs together
4. Error
```python
from transformers import BertTokenizer
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
text = [('This works','this works too')]
print(tokenizer.batch_encode_plus(text, add_special_tokens=False)['input_ids']) # This works
input_ids = [([2023, 2573], [2023, 2573, 2205])]
tokenizer.batch_encode_plus(input_ids, is_pretokenized=True) # This raises error
/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py in get_input_ids(text)
1715 else:
1716 raise ValueError(
-> 1717 "Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers."
1718 )
1719
ValueError: Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.
```
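For reference, a sketch of the pattern that should work instead (per the maintainer reply above: `is_pretokenized=True` expects wordpiece token strings, not integer ids; the printed ids are illustrative):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# is_pretokenized expects token strings (wordpieces), not integer ids
tokens_a = tokenizer.tokenize('This works')      # ['this', 'works']
tokens_b = tokenizer.tokenize('this works too')  # ['this', 'works', 'too']
encoded = tokenizer.encode_plus(tokens_a, tokens_b, is_pretokenized=True)
print(encoded['input_ids'])  # expected: [101, 2023, 2573, 102, 2023, 2573, 2205, 102]
```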
## Expected behavior
batch_encode_plus would be able to encode List[Tuple[List[int], List[int]]], as described in the function's docstring; hence, the example's input_ids would be:
`[[101, 2023, 2573, 102, 2023, 2573, 2205, 102]]`
## Environment info
- `transformers` version: 2.11.0
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0+cu101 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: probably not, but irrelevant
- Using distributed or parallel set-up in script?: probably not | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5169/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5168 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5168/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5168/comments | https://api.github.com/repos/huggingface/transformers/issues/5168/events | https://github.com/huggingface/transformers/issues/5168 | 642,597,963 | MDU6SXNzdWU2NDI1OTc5NjM= | 5,168 | Summarization Examples: Support label_smoothed_cross_entropy | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1841528858,
"node_id": "MDU6TGFiZWwxODQxNTI4ODU4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Summarization",
"name": "Summarization",
"color": "b6f97f",
"default": false,
"description": ""
},
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
}
] | closed | false | null | [] | [
"Yes, it's annoying that torch doesn't have an implementation of smoothed cross entropy.\r\n\r\nAlthough note that using smoothed cross entropy will hurt the probability calibration of a seq2seq model (https://pubs.acs.org/doi/10.1021/acscentsci.9b00576)",
"@sshleifer This is good point. I would consider to work on this issue\r\nAs i see we need 2 things for this:\r\n1) add an implementation of label_smoothed_cross_entropy\r\n2) support choice of loss function in `finetune.py`\r\n\r\nam i correct?\r\n",
"Yes you are!",
"This issue is resolved as label smoothing has already been integrated in master with https://github.com/huggingface/transformers/pull/5919"
] | 1,592 | 1,598 | 1,598 | CONTRIBUTOR | null | This seems to be used by nearly all papers, but the HF implementation in `examples/summarization/finetune.py` uses the standard cross entropy loss.
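For reference, a minimal sketch of what such a loss could look like (hand-rolled in PyTorch, since torch ships no built-in label-smoothed cross entropy; the function name and signature below are illustrative, not part of the library):

```python
import torch

def label_smoothed_nll_loss(lprobs, target, epsilon, ignore_index=-100):
    # lprobs: (N, vocab_size) log-probabilities; target: (N,) gold token ids
    target = target.unsqueeze(-1)
    pad_mask = target.eq(ignore_index)
    safe_target = target.clamp(min=0)  # avoid invalid gather indices on padding
    nll_loss = -lprobs.gather(dim=-1, index=safe_target)  # standard NLL term
    smooth_loss = -lprobs.sum(dim=-1, keepdim=True)       # uniform-over-vocab term
    nll_loss = nll_loss.masked_fill(pad_mask, 0.0).sum()
    smooth_loss = smooth_loss.masked_fill(pad_mask, 0.0).sum()
    eps_i = epsilon / lprobs.size(-1)
    return (1.0 - epsilon) * nll_loss + eps_i * smooth_loss
```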
Ideally, we could expose a command line arg to allow this choice to be toggled, like:
```python
parser.add_argument('--loss', type=str, choices=['cross_entropy', 'label_smoothed_cross_entropy'], default='cross_entropy')
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5168/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5168/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5167 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5167/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5167/comments | https://api.github.com/repos/huggingface/transformers/issues/5167/events | https://github.com/huggingface/transformers/issues/5167 | 642,583,788 | MDU6SXNzdWU2NDI1ODM3ODg= | 5,167 | results for wikitext-2 clm using GPT-2 differ between paper and example code | {
"login": "rajarsheem",
"id": 6441313,
"node_id": "MDQ6VXNlcjY0NDEzMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6441313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajarsheem",
"html_url": "https://github.com/rajarsheem",
"followers_url": "https://api.github.com/users/rajarsheem/followers",
"following_url": "https://api.github.com/users/rajarsheem/following{/other_user}",
"gists_url": "https://api.github.com/users/rajarsheem/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajarsheem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajarsheem/subscriptions",
"organizations_url": "https://api.github.com/users/rajarsheem/orgs",
"repos_url": "https://api.github.com/users/rajarsheem/repos",
"events_url": "https://api.github.com/users/rajarsheem/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajarsheem/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think the results in this table of the GPT2 paper are zero-shot results (without fine-tuning). "
] | 1,592 | 1,593 | 1,593 | NONE | null | Hi,
The example code [here](https://github.com/huggingface/transformers/tree/master/examples/language-modeling) to train a causal LM on wikitext-2 uses the GPT-2 base version (117M parameters). This code claims, and indeed yields, a perplexity of ~20 on the test data.
As you can see, the example script uses `gpt2` as the `model_name_or_path` param, which is actually the 117M model.
```python
python run_language_modeling.py \
--output_dir=output \
--model_type=gpt2 \
--model_name_or_path=gpt2 \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE
```
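For context on where the ~20 comes from: the script evaluates the mean cross-entropy loss on the test file and reports perplexity as its exponential, roughly:

```python
import math

eval_loss = 3.0                   # illustrative mean eval loss on the test set
perplexity = math.exp(eval_loss)  # ~20.1, the order of the reported number
```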

However, the GPT-2 [paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) [page 5, top table] shows that they got a perplexity of 29.41 with that same model.

Did I miss something? Can you please explain the mismatch?
Thanks in advance.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5167/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5166 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5166/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5166/comments | https://api.github.com/repos/huggingface/transformers/issues/5166/events | https://github.com/huggingface/transformers/issues/5166 | 642,568,367 | MDU6SXNzdWU2NDI1NjgzNjc= | 5,166 | Cannot import AutoModelForSeq2SeqLM | {
"login": "shubhamagarwal92",
"id": 7984532,
"node_id": "MDQ6VXNlcjc5ODQ1MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7984532?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shubhamagarwal92",
"html_url": "https://github.com/shubhamagarwal92",
"followers_url": "https://api.github.com/users/shubhamagarwal92/followers",
"following_url": "https://api.github.com/users/shubhamagarwal92/following{/other_user}",
"gists_url": "https://api.github.com/users/shubhamagarwal92/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shubhamagarwal92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shubhamagarwal92/subscriptions",
"organizations_url": "https://api.github.com/users/shubhamagarwal92/orgs",
"repos_url": "https://api.github.com/users/shubhamagarwal92/repos",
"events_url": "https://api.github.com/users/shubhamagarwal92/events{/privacy}",
"received_events_url": "https://api.github.com/users/shubhamagarwal92/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think this is because you don't have installed the library from source (see the [note here](https://github.com/huggingface/transformers/blob/master/examples/README.md#important-note)).\r\n\r\nTo install from source, follow the steps [here](https://huggingface.co/transformers/installation.html#installing-from-source)",
"@sgugger Thanks, it works when installing with the source (through small changes) :) \r\n\r\nAlso, as I said, the 2.11.0 version (installed by pip) still has this bug \r\n\r\n```\r\nfrom transformers import AutoModelForSeq2SeqLM\r\n```\r\n\r\nWhat do you suggest for a stable version of transformers? ",
"I'm not sure I follow. 2.11.0 is the last stable version, but the examples script on the main branch only works with an installation from source. \r\n\r\nIf you want a version of the examples compatible with 2.11.0, you should use their version in the [2.11.0 tagged repo](https://github.com/huggingface/transformers/tree/v2.11.0).",
"Same Issue v2.11.0",
"Again this is not an issue. The examples in the master repo are on par with the version of transformers on master, so you need an installation from source to run them, which is clearly indicated in the README.\r\n\r\nIf you want to execute the examples script as they were for v2.11.0, you should use 2.11.0 tagged repo](https://github.com/huggingface/transformers/tree/v2.11.0).\r\n\r\nClosing this issue, please reopen if anything I said was unclear."
] | 1,592 | 1,593 | 1,593 | CONTRIBUTOR | null | # 🐛 Bug
Has the `AutoModelForSeq2SeqLM` class changed?
I am trying to run the transformers examples, specifically [token-classification](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_pl.sh) with pytorch-lightning, which calls [AutoModelForSeq2SeqLM](https://github.com/huggingface/transformers/blob/master/examples/lightning_base.py#L18). However, I am getting an import error; see below.
## Information
Model I am using (Bert, XLNet ...):
`bert-base-multilingual-cased`
Language I am using the model on (English, Chinese ...):
`English`
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [X] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
cd transformers/examples/token-classification
./run_pl.sh
```
```
Traceback (most recent call last):
File "run_pl_ner.py", line 12, in <module>
from lightning_base import BaseTransformer, add_generic_args, generic_train
File "/transformers/examples/lightning_base.py", line 12, in <module>
from transformers import (
ImportError: cannot import name 'AutoModelForSeq2SeqLM' from 'transformers' (/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/__init__.py)
```
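A likely fix, per the installation docs (the examples on master track the source version of the library, while pip's 2.11.0 predates `AutoModelForSeq2SeqLM`):

```bash
# Install transformers from source so the master examples can import it
git clone https://github.com/huggingface/transformers.git
cd transformers
pip install -e .
```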
## Expected behavior
Reproduce the example.
## Environment info
- `transformers` version: 2.11.0
- Platform: Linux
- Python version: Python 3.8.3
- PyTorch version (GPU?): 1.5.1
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5166/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5166/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5165 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5165/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5165/comments | https://api.github.com/repos/huggingface/transformers/issues/5165/events | https://github.com/huggingface/transformers/pull/5165 | 642,555,160 | MDExOlB1bGxSZXF1ZXN0NDM3NTM5MjE3 | 5,165 | Create README.md | {
"login": "aodiniz",
"id": 6626805,
"node_id": "MDQ6VXNlcjY2MjY4MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6626805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aodiniz",
"html_url": "https://github.com/aodiniz",
"followers_url": "https://api.github.com/users/aodiniz/followers",
"following_url": "https://api.github.com/users/aodiniz/following{/other_user}",
"gists_url": "https://api.github.com/users/aodiniz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aodiniz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aodiniz/subscriptions",
"organizations_url": "https://api.github.com/users/aodiniz/orgs",
"repos_url": "https://api.github.com/users/aodiniz/repos",
"events_url": "https://api.github.com/users/aodiniz/events{/privacy}",
"received_events_url": "https://api.github.com/users/aodiniz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | Creating a README.md file for the model, as a community contribution. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5165/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5165",
"html_url": "https://github.com/huggingface/transformers/pull/5165",
"diff_url": "https://github.com/huggingface/transformers/pull/5165.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5165.patch",
"merged_at": 1592848154000
} |
https://api.github.com/repos/huggingface/transformers/issues/5164 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5164/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5164/comments | https://api.github.com/repos/huggingface/transformers/issues/5164/events | https://github.com/huggingface/transformers/issues/5164 | 642,555,147 | MDU6SXNzdWU2NDI1NTUxNDc= | 5,164 | output generate scores per hypothesis/token | {
"login": "guyeyal",
"id": 3502557,
"node_id": "MDQ6VXNlcjM1MDI1NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3502557?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guyeyal",
"html_url": "https://github.com/guyeyal",
"followers_url": "https://api.github.com/users/guyeyal/followers",
"following_url": "https://api.github.com/users/guyeyal/following{/other_user}",
"gists_url": "https://api.github.com/users/guyeyal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guyeyal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guyeyal/subscriptions",
"organizations_url": "https://api.github.com/users/guyeyal/orgs",
"repos_url": "https://api.github.com/users/guyeyal/repos",
"events_url": "https://api.github.com/users/guyeyal/events{/privacy}",
"received_events_url": "https://api.github.com/users/guyeyal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"I guess similar to `output_attentions` and `output_hidden_states`, we could output the `scores / probabilities` for generation, but I'm really not sure if it is required that often. What do you think @sshleifer @yjernite ? ",
"I would suggest trying it on a branch and seeing if it produces better generations. I have been inspecting the scores this week (just by saving hypotheses to disk) and have not gotten much utility. If it helps produce better generations, however, we should obviously add this!\r\n",
"Dear @patrickvonplaten and @sshleifer, Thanks for the quick reply.\r\nI'm interested in the perplexity of my generated text as function of different generated methods. This can done using the probabilities of the output tokens. \r\nAnother interesting case that jumps to mind is the case of auto complete, where you wanna present the user a generated text only if it passes some threshold of confidence. \r\n",
"Those are actually very useful applications! We will soon have a bigger refactoring of the generate method I think and will hopefully include this. \r\n\r\nAs @sshleifer said, for now, it would be great if you can show how you would integrate it on a branch including some interesting results. ",
"Fantastic. Will do. ",
"Thanks for raising the issue @guyeyal. IT would definitely be helpful to have a running example.\r\n\r\nMore generally @patrickvonplaten I think this is functionality will be helpful for the line of research concerned with analyzing the role of preplexity as a training objective as well as work on re-ranking generations or using stuff like noisy channel modeling, so definitely think it should be in the next big refactor.\r\n\r\n[https://arxiv.org/abs/1904.09751](https://arxiv.org/abs/1904.09751)\r\n[https://arxiv.org/abs/1908.05731](https://arxiv.org/abs/1908.05731)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I'll add a vote here that I'm interested in this too. I wrote some code locally very similar to guyeyal's.",
"Thanks for the great work!\r\n\r\nI would also be interested in this functionality. I am using an autoregressive transformer model as part of a reinforcement learning problem. To alleviate the sample inefficiency of RL, it is very attractive to generate data using beam search, in order to add `num_beams > 1` of data to a buffer per time step. I would then like to bias the sampling of data from this buffer according to the probability of the generated sequence, defined like the diagram in this example:\r\n\r\nhttps://huggingface.co/blog/how-to-generate#beam-search\r\n\r\n@patrickvonplaten is this something that is likely to be covered in the PR here: https://github.com/huggingface/transformers/pull/6949\r\nor is it better to open a new issue? Thanks!",
"There seems to be a lot of interest in this functionality! If someone feels like opening a PR that would be great!",
"> There seems to be a lot of interest in this functionality! If someone feels like opening a PR that would be great!\r\n\r\nI saw a PR here, but not committed. #6289",
"> > There seems to be a lot of interest in this functionality! If someone feels like opening a PR that would be great!\r\n> \r\n> I saw a PR here, but not committed. #6289\r\n\r\nAny idea why this wasn't commited?",
"> > > There seems to be a lot of interest in this functionality! If someone feels like opening a PR that would be great!\r\n> > \r\n> > \r\n> > I saw a PR here, but not committed. #6289\r\n> \r\n> Any idea why this wasn't commited?\r\n\r\nWasn't quite working for me properly when I tried it. I did a fix locally based on 3.4.0, but the big refactor of generation_utils in 3.5.x broke it entirely again. Would be better to start afresh at this point, I think with all the changes to that file.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Is this feature available? Or still in the works?",
"See: https://discuss.huggingface.co/t/announcement-generationoutputs-scores-attentions-and-hidden-states-now-available-as-outputs-to-generate/3094",
"I am using the tensor flow implementation of T5. When I use `model.generate(input_ids=input_ids, num_beams=5, num_return_sequences=5, return_dict_in_generate=True)`, it returns a tensor of shape `(1,5,vocab_size)`. Essentially, it is giving me the probability of a **single** word for each beam. In the description for `beam_search` in the link above, it says that score refers to \"all processed lm head logits + the current beam_scores for **each output token**\". In this link, https://huggingface.co/transformers/internal/generation_utils.html, it says that scores is \"the prediction scores of the language modelling head, for **each generation step**\". Ideally, we want a score for each token at every step of the generation for each beam search. So, wouldn't the shape of the output be (`batch_size`,`number_of_beams`,`sequence_length`,`vocab_size`)? That way, we can follow the path that each beam search went through to get the max probability sequence. In other words, for each beam, we have the probability of each token in our vocabulary for each generation step (until the max length). \r\n\r\nI want to use these token probabilities to calculate the sequence probabilities. In the blog for `generate()` (https://huggingface.co/blog/how-to-generate#beam-search), it shows that beam search looks for the highest product of probabilities between all sequences. \"_At time step 2, beam search finds that the word sequence (\"The\",\"dog\",\"has\"), has with 0.36 a higher probability than (\"The\",\"nice\",\"woman\"), which has 0.2_\". How can we get access to this sequence level probability, as show in this blog? ",
"@patrickvonplaten, in a follow up to the post above, does the **tensorflow** implementation of `model.generate()` produce either the `sequence_scores` that is also available in the pytorch implementation? Or, somehow the `scores` returns a tensor that is in the shape `(batch_size,number_of_beams,sequence_length,vocab_size)`, where we can calculate the product of the token probabilities at each step in the beam_search for each sequence? Thanks for your help! ",
"how can I get the gold sequence generate score?",
"I don't think we have the TF implementation of this function yet. Also cc @gante here",
"@xxllp for PT, we have a function to compute the scores with beam search. It is not documented yet, but you can check the function [here](https://github.com/huggingface/transformers/blob/main/src/transformers/generation_utils.py#L804).\r\n\r\nFor TF, that function is yet to be implemented (but it is in our TODO list :) )"
] | 1,592 | 1,661 | 1,619 | NONE | null | # 🚀 Feature request
Thanks for doing such awesome work.
I'm interested in the hypothesis score when running `generate`.
This could be done per hypothesis, or preferably per token in the hypothesis.
## Motivation
The motivation is to gain confidence in my generated text.
I suggest:
1. adding a `return_scores` flag to `generate` in modeling_utils.py
2. having `_generate_beam_search` and `_generate_no_beam_search` also return the scores when `return_scores` is set

For `_generate_beam_search`:
```python
best_scores = []

# retrieve best hypotheses
for i, hypotheses in enumerate(generated_hyps):
    sorted_hyps = sorted(hypotheses.beams, key=lambda x: x[0])
    for j in range(output_num_return_sequences_per_batch):
        effective_batch_idx = output_num_return_sequences_per_batch * i + j
        hyp_score, best_hyp = sorted_hyps.pop()
        sent_lengths[effective_batch_idx] = len(best_hyp)
        best.append(best_hyp)
        best_scores.append(hyp_score)

# shorter batches are filled with pad_token
if sent_lengths.min().item() != sent_lengths.max().item():
    assert pad_token_id is not None, "`Pad_token_id` has to be defined"
    sent_max_len = min(sent_lengths.max().item() + 1, max_length)
    decoded = input_ids.new(output_batch_size, sent_max_len).fill_(pad_token_id)

    # fill with hypothesis and eos_token_id if necessary
    for i, hypo in enumerate(best):
        decoded[i, : sent_lengths[i]] = hypo
        if sent_lengths[i] < max_length:
            decoded[i, sent_lengths[i]] = eos_token_id
else:
    # none of the hypotheses have an eos_token
    assert all(len(hypo) == max_length for hypo in best)
    decoded = torch.stack(best).type(torch.long).to(next(self.parameters()).device)

if return_scores:
    return decoded, best_scores
```
For `_generate_no_beam_search`:
```python
output_score = 0

while cur_len < max_length:
    model_inputs = self.prepare_inputs_for_generation(
        input_ids, past=past, attention_mask=attention_mask, use_cache=use_cache, **model_specific_kwargs
    )

    outputs = self(**model_inputs)
    next_token_logits = outputs[0][:, -1, :]

    # if model has past, then set the past variable to speed up decoding
    if self._use_cache(outputs, use_cache):
        past = outputs[1]

    # repetition penalty from CTRL paper (https://arxiv.org/abs/1909.05858)
    if repetition_penalty != 1.0:
        self.enforce_repetition_penalty_(next_token_logits, batch_size, 1, input_ids, repetition_penalty)

    if no_repeat_ngram_size > 0:
        # calculate a list of banned tokens to prevent repetitively generating the same ngrams
        # from fairseq: https://github.com/pytorch/fairseq/blob/a07cb6f40480928c9e0548b737aadd36ee66ac76/fairseq/sequence_generator.py#L345
        banned_tokens = calc_banned_ngram_tokens(input_ids, batch_size, no_repeat_ngram_size, cur_len)
        for batch_idx in range(batch_size):
            next_token_logits[batch_idx, banned_tokens[batch_idx]] = -float("inf")

    if bad_words_ids is not None:
        # calculate a list of banned tokens according to bad words
        banned_tokens = calc_banned_bad_words_ids(input_ids, bad_words_ids)
        for batch_idx in range(batch_size):
            next_token_logits[batch_idx, banned_tokens[batch_idx]] = -float("inf")

    # set eos token prob to zero if min_length is not reached
    if eos_token_id is not None and cur_len < min_length:
        next_token_logits[:, eos_token_id] = -float("inf")

    if do_sample:
        # Temperature (higher temperature => more likely to sample low probability tokens)
        if temperature != 1.0:
            next_token_logits = next_token_logits / temperature
        # Top-p/top-k filtering
        next_token_logits = top_k_top_p_filtering(next_token_logits, top_k=top_k, top_p=top_p)
        # Sample
        probs = F.softmax(next_token_logits, dim=-1)
        next_token = torch.multinomial(probs, num_samples=1).squeeze(1)
    else:
        # Greedy decoding
        next_token = torch.argmax(next_token_logits, dim=-1)

    # logit of the chosen token for each sequence, shape (batch_size, 1)
    next_score = torch.gather(next_token_logits, -1, next_token.unsqueeze(-1))

    # update generations and finished sentences
    if eos_token_id is not None:
        # pad finished sentences if eos_token_id exist
        tokens_to_add = next_token * unfinished_sents + (pad_token_id) * (1 - unfinished_sents)
    else:
        tokens_to_add = next_token

    # add token and increase length by one
    input_ids = torch.cat([input_ids, tokens_to_add.unsqueeze(-1)], dim=-1)
    # accumulate the per-step scores
    output_score += next_score
    cur_len = cur_len + 1

    if eos_token_id is not None:
        eos_in_sents = tokens_to_add == eos_token_id
        # if sentence is unfinished and the token to add is eos, sent_lengths is filled with current length
        is_sents_unfinished_and_token_to_add_is_eos = unfinished_sents.mul(eos_in_sents.long()).bool()
        sent_lengths.masked_fill_(is_sents_unfinished_and_token_to_add_is_eos, cur_len)
        # unfinished_sents is set to zero if eos in sentence
        unfinished_sents.mul_((~eos_in_sents).long())

    # stop when there is a </s> in each sentence, or if we exceed the maximum length
    if unfinished_sents.max() == 0:
        break

    # extend attention_mask for new generated input if only decoder
    if self.config.is_encoder_decoder is False:
        attention_mask = torch.cat(
            [attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1
        )

# if there are different sentences lengths in the batch, some batches have to be padded
if sent_lengths.min().item() != sent_lengths.max().item():
    assert pad_token_id is not None, "`Pad_token_id` has to be defined if batches have different lengths"
    # finished sents are filled with pad_token
    decoded = input_ids.new(batch_size, sent_lengths.max().item()).fill_(pad_token_id)
else:
    decoded = input_ids

for hypo_idx, hypo in enumerate(input_ids):
    decoded[hypo_idx, : sent_lengths[hypo_idx]] = hypo[: sent_lengths[hypo_idx]]

if return_scores:
    return decoded, output_score
return decoded
```
In the next step we could save the score per token, to allow the user to decide where to truncate the generated text as a function of confidence.
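As a minimal sketch (not an existing API; `return_scores` is the flag proposed above), the feature could then be used like this:

```python
# Hypothetical usage of the proposed `return_scores` flag (sketch only).
decoded, scores = model.generate(
    input_ids,
    num_beams=4,
    max_length=50,
    return_scores=True,  # proposed in this issue, not yet implemented
)

# With beam search, `scores` would hold one score per returned hypothesis;
# with greedy/sampling, the accumulated per-token scores. Either way the
# caller can filter or truncate generations below a confidence threshold.
for seq, score in zip(decoded, scores):
    if score > confidence_threshold:  # `confidence_threshold` chosen by the user
        print(tokenizer.decode(seq))
```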
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5164/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5164/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5163 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5163/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5163/comments | https://api.github.com/repos/huggingface/transformers/issues/5163/events | https://github.com/huggingface/transformers/issues/5163 | 642,499,993 | MDU6SXNzdWU2NDI0OTk5OTM= | 5,163 | Why isn't stride in squad_convert_example_to_features's encode_plus set to doc_stride? | {
"login": "ZihaoZheng98",
"id": 22414831,
"node_id": "MDQ6VXNlcjIyNDE0ODMx",
"avatar_url": "https://avatars.githubusercontent.com/u/22414831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZihaoZheng98",
"html_url": "https://github.com/ZihaoZheng98",
"followers_url": "https://api.github.com/users/ZihaoZheng98/followers",
"following_url": "https://api.github.com/users/ZihaoZheng98/following{/other_user}",
"gists_url": "https://api.github.com/users/ZihaoZheng98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZihaoZheng98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZihaoZheng98/subscriptions",
"organizations_url": "https://api.github.com/users/ZihaoZheng98/orgs",
"repos_url": "https://api.github.com/users/ZihaoZheng98/repos",
"events_url": "https://api.github.com/users/ZihaoZheng98/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZihaoZheng98/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"Can you give more details as asked in the issue template?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,598 | 1,598 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5163/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5162 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5162/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5162/comments | https://api.github.com/repos/huggingface/transformers/issues/5162/events | https://github.com/huggingface/transformers/issues/5162 | 642,485,462 | MDU6SXNzdWU2NDI0ODU0NjI= | 5,162 | How to save the created embedding of the text corpus. | {
"login": "PaulJohny",
"id": 16956495,
"node_id": "MDQ6VXNlcjE2OTU2NDk1",
"avatar_url": "https://avatars.githubusercontent.com/u/16956495?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PaulJohny",
"html_url": "https://github.com/PaulJohny",
"followers_url": "https://api.github.com/users/PaulJohny/followers",
"following_url": "https://api.github.com/users/PaulJohny/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulJohny/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PaulJohny/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulJohny/subscriptions",
"organizations_url": "https://api.github.com/users/PaulJohny/orgs",
"repos_url": "https://api.github.com/users/PaulJohny/repos",
"events_url": "https://api.github.com/users/PaulJohny/events{/privacy}",
"received_events_url": "https://api.github.com/users/PaulJohny/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,598 | 1,598 | NONE | null | After converting the text corpus into word embeddings, how can we save the embeddings to a file? It is not practical or feasible to create the word embeddings every time. Is there any workaround to save the embeddings? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5162/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5161 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5161/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5161/comments | https://api.github.com/repos/huggingface/transformers/issues/5161/events | https://github.com/huggingface/transformers/issues/5161 | 642,452,126 | MDU6SXNzdWU2NDI0NTIxMjY= | 5,161 | Keras model created from individual Bert Layers has weights not shown in trainable_weights nor non_trainable_weights. model.summary() / utils.plot_model shows those weights as part of graph though | {
"login": "Santosh-Gupta",
"id": 5524261,
"node_id": "MDQ6VXNlcjU1MjQyNjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5524261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Santosh-Gupta",
"html_url": "https://github.com/Santosh-Gupta",
"followers_url": "https://api.github.com/users/Santosh-Gupta/followers",
"following_url": "https://api.github.com/users/Santosh-Gupta/following{/other_user}",
"gists_url": "https://api.github.com/users/Santosh-Gupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Santosh-Gupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Santosh-Gupta/subscriptions",
"organizations_url": "https://api.github.com/users/Santosh-Gupta/orgs",
"repos_url": "https://api.github.com/users/Santosh-Gupta/repos",
"events_url": "https://api.github.com/users/Santosh-Gupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/Santosh-Gupta/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,598 | 1,598 | CONTRIBUTOR | null | # 🐛 Bug
## Information
I created a model which takes 2 layers from a BERT model and creates a mini-BERT model.
However, not all the weights/layers from the component layers are in this model, even though these weights are shown as connected in `model.summary()` and in
```python
tf.keras.utils.plot_model(
    model, to_file='model.png', show_shapes=False, show_layer_names=True,
    rankdir='TB', expand_nested=False, dpi=96
)
```
Model I am using (Bert, XLNet ...): bert-base-uncased
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
Here is a Colab notebook with a minimal example that reproduces the issue.
https://colab.research.google.com/drive/1n3_XNhdgH6Qo7GT-M570lIKWAoU3TML5?usp=sharing
And here is the code:
```python
!pip install transformers --q
%tensorflow_version 2.x

from transformers import TFBertModel, AutoModel, TFRobertaModel, AutoTokenizer
import tensorflow as tf
import tensorflow_addons as tfa

tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)

from tensorflow import keras
from tensorflow.keras import layers
from copy import deepcopy

logger = tf.get_logger()
logger.info(tf.__version__)

def get_mini_models():
    tempModel = TFRobertaModel.from_pretrained('bert-base-uncased', from_pt=True)

    layer9 = deepcopy(tempModel.layers[0].encoder.layer[8])
    layer10 = deepcopy(tempModel.layers[0].encoder.layer[9])

    inputHiddenVals = tf.keras.Input(shape=[None, None], dtype=tf.float32, name='input_Q',
                                     batch_size=None)

    hidden1 = layer9((inputHiddenVals, None, None))
    hidden2 = layer10((hidden1[0], None, None))

    modelNew = tf.keras.Model(inputs=inputHiddenVals, outputs=hidden2)

    del tempModel

    return modelNew

@tf.function
def loss_fn(_, probs):
    bs = tf.shape(probs)[0]
    labels = tf.eye(bs, bs)
    return tf.losses.categorical_crossentropy(labels,
                                              probs,
                                              from_logits=True)

model = get_mini_models()
model.compile(loss=loss_fn,
              optimizer=tfa.optimizers.AdamW(weight_decay=1e-4, learning_rate=1e-5,
                                             epsilon=1e-06))

# Get model and layers directly to compare
tempModel = TFRobertaModel.from_pretrained('bert-base-uncased', from_pt=True)
layer9 = deepcopy(tempModel.layers[0].encoder.layer[8])
layer10 = deepcopy(tempModel.layers[0].encoder.layer[9])

# Only one layer, and that layer also has missing weights.
for i, var in enumerate(model.weights):
    print(model.weights[i].name)

# Full weights for one layer
for i, var in enumerate(layer9.weights):
    print(layer9.weights[i].name)

# Test what correct output should be
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
inputt = tokenizer.encode('This is a sentence', return_tensors='tf')
outt = tempModel(inputt)[0]

# Test model output. Not the same.
model(outt)

# Model summary somehow lists the weights
model.summary()

# Model diagram shows the correct connections between all the layers.
tf.keras.utils.plot_model(
    model, to_file='model.png', show_shapes=False, show_layer_names=True,
    rankdir='TB', expand_nested=False, dpi=96
)
```
Edit: I also tried making the layers from scratch and setting the weights directly; same result. Here's a Colab notebook that does this. https://colab.research.google.com/drive/1EC_fObSp9lUsj_PFaYgFtRI93ErPYmU9?usp=sharing
## Expected behavior
The model should contain all the weights from the component layers, and the output of the model should be the same as executing the layers in the same way outside the model.
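For what it's worth, a possible workaround (an untested sketch, not from the original report): subclassing `tf.keras.Model` and assigning the layers as attributes lets Keras track their variables via attribute assignment, which the functional wrapper above apparently does not do for these layers:

```python
# Untested sketch of a workaround: Keras tracks the variables of sublayers
# that are assigned as attributes of a subclassed Model.
class MiniBert(tf.keras.Model):
    def __init__(self, layer9, layer10):
        super().__init__()
        self.layer9 = layer9    # attribute assignment registers the sublayer
        self.layer10 = layer10

    def call(self, hidden_states):
        hidden1 = self.layer9((hidden_states, None, None))
        hidden2 = self.layer10((hidden1[0], None, None))
        return hidden2

mini = MiniBert(layer9, layer10)
_ = mini(outt)                      # build the model once
print(len(mini.trainable_weights))  # should now list the layers' variables
```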
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0+cu101 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?:no
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5161/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5160 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5160/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5160/comments | https://api.github.com/repos/huggingface/transformers/issues/5160/events | https://github.com/huggingface/transformers/pull/5160 | 642,423,432 | MDExOlB1bGxSZXF1ZXN0NDM3NDQ0ODUy | 5,160 | Create README.md | {
"login": "aodiniz",
"id": 6626805,
"node_id": "MDQ6VXNlcjY2MjY4MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6626805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aodiniz",
"html_url": "https://github.com/aodiniz",
"followers_url": "https://api.github.com/users/aodiniz/followers",
"following_url": "https://api.github.com/users/aodiniz/following{/other_user}",
"gists_url": "https://api.github.com/users/aodiniz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aodiniz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aodiniz/subscriptions",
"organizations_url": "https://api.github.com/users/aodiniz/orgs",
"repos_url": "https://api.github.com/users/aodiniz/repos",
"events_url": "https://api.github.com/users/aodiniz/events{/privacy}",
"received_events_url": "https://api.github.com/users/aodiniz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5160?src=pr&el=h1) Report\n> Merging [#5160](https://codecov.io/gh/huggingface/transformers/pull/5160?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/68e19f1c228c92d5d800533f558faff24b57127a&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5160?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5160 +/- ##\n==========================================\n- Coverage 77.93% 77.92% -0.01% \n==========================================\n Files 137 137 \n Lines 23475 23475 \n==========================================\n- Hits 18295 18294 -1 \n- Misses 5180 5181 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5160?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5160/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.81% <0.00%> (-0.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5160/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.28% <0.00%> (+0.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5160?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5160?src=pr&el=footer). Last update [68e19f1...2bf6959](https://codecov.io/gh/huggingface/transformers/pull/5160?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | Creating a README.md file for the model as a community contribution. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5160/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5160",
"html_url": "https://github.com/huggingface/transformers/pull/5160",
"diff_url": "https://github.com/huggingface/transformers/pull/5160.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5160.patch",
"merged_at": 1592863195000
} |
https://api.github.com/repos/huggingface/transformers/issues/5159 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5159/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5159/comments | https://api.github.com/repos/huggingface/transformers/issues/5159/events | https://github.com/huggingface/transformers/issues/5159 | 642,403,874 | MDU6SXNzdWU2NDI0MDM4NzQ= | 5,159 | Transformer pipeline loading model and tokenizer on every prediction request | {
"login": "Sharathmk99",
"id": 3970340,
"node_id": "MDQ6VXNlcjM5NzAzNDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3970340?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sharathmk99",
"html_url": "https://github.com/Sharathmk99",
"followers_url": "https://api.github.com/users/Sharathmk99/followers",
"following_url": "https://api.github.com/users/Sharathmk99/following{/other_user}",
"gists_url": "https://api.github.com/users/Sharathmk99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sharathmk99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sharathmk99/subscriptions",
"organizations_url": "https://api.github.com/users/Sharathmk99/orgs",
"repos_url": "https://api.github.com/users/Sharathmk99/repos",
"events_url": "https://api.github.com/users/Sharathmk99/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sharathmk99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @Sharathmk99, can post the snippet of your server code where you are initialising `QNAModel()` ",
"I am also facing the same issue? please help",
"Any fix for this?",
"This windows issue has not fixed even in the latest versions of transformers. @patil-suraj Please help!\r\n```\r\nfrom transformers import pipeline\r\n\r\nmodel_name = \"deepset/roberta-base-squad2\"\r\nnlp = pipeline(\"question-answering\", model=model_name, tokenizer=model_name, framework=\"pt\")\r\nnlp(question=question, context=text)\r\n```\r\nThis only happens on Windows",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,605 | 1,605 | NONE | null | # 🐛 Bug
## Information
I'm trying to build an API that accepts a question and a context; the response should be the answer. I'm using the Transformers question-answering pipeline with a BERT model.
I'm initializing the model and pipeline objects in the `__init__` method and using the pipeline object in another method, so that the model is loaded into memory at server startup and prediction is fast.
But on every prediction request, `__init__` is called again. When I debugged the flow, I think it's because the feature calculation happens in a multiprocessing `Pool`.
https://github.com/huggingface/transformers/blob/68e19f1c228c92d5d800533f558faff24b57127a/src/transformers/data/processors/squad.py#L318
Can you please help resolve the issue?
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] my own modified scripts: Own script
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SQUaD
## To reproduce
Steps to reproduce the behavior:
Source Code:
```python
from transformers import BertTokenizer, BertForQuestionAnswering
import torch
import time
from transformers import pipeline

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using {device} for QNA Model prediction")

class QNAModel:
    def __init__(self):
        start_time = time.time()
        self.tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
        self.model = BertForQuestionAnswering.from_pretrained(
            'bert-large-uncased-whole-word-masking-finetuned-squad')
        self.pipeline = pipeline("question-answering", model=self.model, tokenizer=self.tokenizer)
        print("--- %s seconds model load time ---" % (time.time() - start_time))

    def get_answers(self, passages, question):
        start_time = time.time()
        answers = []
        for passage in passages:
            context = passage["passage"]
            print(self.pipeline(question=question, context=context))
        print("--- %s seconds Total QNA model prediction---" % (time.time() - start_time))
        return answers
```
Testing
```
qna_model = QNAModel()
qna_model.get_answers(passages1, question1)
qna_model.get_answers(passages1, question2)
```
Logs,
```
Using cpu for QNA Model prediction
--- 16.12714695930481 seconds model load time ---
convert squad examples to features: 100%|██████████| 1/1 [00:00<00:00, 58.58it/s]
add example index and unique id: 100%|██████████| 1/1 [00:00<?, ?it/s]
{'score': 0.39260570705747, 'start': 364, 'end': 380, 'answer': '<answer1>'}
Using cpu for QNA Model prediction
--- 14.559933185577393 seconds model load time ---
convert squad examples to features: 100%|██████████| 1/1 [00:00<00:00, 187.58it/s]
add example index and unique id: 100%|██████████| 1/1 [00:00<?, ?it/s]
{'score': 0.2136769815853059, 'start': 0, 'end': 84, 'answer': '<answer2>'}
--- 51.28193521499634 seconds Total QNA model prediction---
```
## Expected behavior
The model should load once so that predictions are fast.
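If it helps, here is an untested sketch of a possible workaround. On Windows, `multiprocessing` uses the "spawn" start method, which re-imports the main module in every worker process spawned for feature conversion, so any module-level model construction runs again. Guarding the entry point avoids that (`passages1`/`question1` are the names from the testing snippet above):

```python
# Untested sketch: guard module-level work so the multiprocessing workers
# spawned during feature conversion do not re-run it on import.
if __name__ == "__main__":
    qna_model = QNAModel()
    qna_model.get_answers(passages1, question1)
    qna_model.get_answers(passages1, question2)
```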
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.7.0
- Platform: Windows
- Python version: 3.7
- PyTorch version (GPU?): 1.5.0+cpu
- Tensorflow version (GPU?): NA
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5159/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5159/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5158 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5158/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5158/comments | https://api.github.com/repos/huggingface/transformers/issues/5158/events | https://github.com/huggingface/transformers/pull/5158 | 642,365,732 | MDExOlB1bGxSZXF1ZXN0NDM3NDA1Mjc3 | 5,158 | Fix PABEE's result table | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5158?src=pr&el=h1) Report\n> Merging [#5158](https://codecov.io/gh/huggingface/transformers/pull/5158?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/aa6a29bc25b663e1311c5c4fb96b004cf8a6d2b6&el=desc) will **increase** coverage by `0.02%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5158?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5158 +/- ##\n==========================================\n+ Coverage 77.92% 77.94% +0.02% \n==========================================\n Files 137 137 \n Lines 23475 23475 \n==========================================\n+ Hits 18292 18298 +6 \n+ Misses 5183 5177 -6 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5158?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5158/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.96% <0.00%> (+0.14%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5158/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5158/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5158?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5158?src=pr&el=footer). Last update [aa6a29b...23f6a30](https://codecov.io/gh/huggingface/transformers/pull/5158?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5158/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5158",
"html_url": "https://github.com/huggingface/transformers/pull/5158",
"diff_url": "https://github.com/huggingface/transformers/pull/5158.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5158.patch",
"merged_at": 1592665000000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5157 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5157/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5157/comments | https://api.github.com/repos/huggingface/transformers/issues/5157/events | https://github.com/huggingface/transformers/pull/5157 | 642,357,219 | MDExOlB1bGxSZXF1ZXN0NDM3Mzk5MzI4 | 5,157 | [examples] fixes arguments for summarization finetune scripts | {
"login": "ieBoytsov",
"id": 61888740,
"node_id": "MDQ6VXNlcjYxODg4NzQw",
"avatar_url": "https://avatars.githubusercontent.com/u/61888740?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ieBoytsov",
"html_url": "https://github.com/ieBoytsov",
"followers_url": "https://api.github.com/users/ieBoytsov/followers",
"following_url": "https://api.github.com/users/ieBoytsov/following{/other_user}",
"gists_url": "https://api.github.com/users/ieBoytsov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ieBoytsov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ieBoytsov/subscriptions",
"organizations_url": "https://api.github.com/users/ieBoytsov/orgs",
"repos_url": "https://api.github.com/users/ieBoytsov/repos",
"events_url": "https://api.github.com/users/ieBoytsov/events{/privacy}",
"received_events_url": "https://api.github.com/users/ieBoytsov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5157?src=pr&el=h1) Report\n> Merging [#5157](https://codecov.io/gh/huggingface/transformers/pull/5157?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/68e19f1c228c92d5d800533f558faff24b57127a&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5157?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5157 +/- ##\n=======================================\n Coverage 77.93% 77.94% \n=======================================\n Files 137 137 \n Lines 23475 23475 \n=======================================\n+ Hits 18295 18297 +2 \n+ Misses 5180 5178 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5157?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.00% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.96% <0.00%> (-0.15%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5157?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5157?src=pr&el=footer). Last update [68e19f1...49b1e42](https://codecov.io/gh/huggingface/transformers/pull/5157?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | This PR fixes arguments in bash scripts for finetuning bart/bart_tiny/t5 models in summarization examples.
After the changes that were merged in #4951, I noticed some small problems. These include:
* missing default arguments for `finetune.py` (`--data_dir`, `--output_dir`)
* an invalid argument name for the number of GPUs to use (`n_gpu` instead of `gpus`)
* an argument `model_type` that is not present in the list of arguments and crashes the script | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5157/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5157",
"html_url": "https://github.com/huggingface/transformers/pull/5157",
"diff_url": "https://github.com/huggingface/transformers/pull/5157.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5157.patch",
"merged_at": 1592754682000
} |
https://api.github.com/repos/huggingface/transformers/issues/5156 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5156/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5156/comments | https://api.github.com/repos/huggingface/transformers/issues/5156/events | https://github.com/huggingface/transformers/issues/5156 | 642,338,118 | MDU6SXNzdWU2NDIzMzgxMTg= | 5,156 | BertForMaskedLM "labels" is an unexpected keyword | {
"login": "guoxuxu",
"id": 29363464,
"node_id": "MDQ6VXNlcjI5MzYzNDY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29363464?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guoxuxu",
"html_url": "https://github.com/guoxuxu",
"followers_url": "https://api.github.com/users/guoxuxu/followers",
"following_url": "https://api.github.com/users/guoxuxu/following{/other_user}",
"gists_url": "https://api.github.com/users/guoxuxu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guoxuxu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guoxuxu/subscriptions",
"organizations_url": "https://api.github.com/users/guoxuxu/orgs",
"repos_url": "https://api.github.com/users/guoxuxu/repos",
"events_url": "https://api.github.com/users/guoxuxu/events{/privacy}",
"received_events_url": "https://api.github.com/users/guoxuxu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, @guoxuxu , `lm_labels` is changed to `labels` in a recent commit on master. If you are using master then use `labels` otherwise use `lm_labels` \r\n\r\n@sgugger This change is causing a lot of confusion. Would it be a good idea to keep master and release docs separate ? ",
"This has just been done. The [documentation](https://huggingface.co/transformers/) now shows the latest stable release (v2.11.0) and you have to opt-in to see the [master documentation](https://huggingface.co/transformers/master/).\r\n\r\nI'll work on a version selector next.",
"Thanks @sgugger !",
"I think we can close the issue as a result. Please reopen if any problem persists."
] | 1,592 | 1,592 | 1,592 | NONE | null | # 🐛 Bug
## Information
The official BertForMaskedLM example (https://huggingface.co/transformers/model_doc/bert.html#bertformaskedlm)
has a bug. When running `outputs = model(input_ids, labels=input_ids)`, it raises: `TypeError: forward() got an unexpected keyword argument 'labels'`
Model I am using (Bert, XLNet ...):
BERT
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [ ] the official example scripts: (give details below)
from transformers import BertTokenizer, BertForMaskedLM
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
outputs = model(input_ids, labels=input_ids)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. https://huggingface.co/transformers/model_doc/bert.html#bertformaskedlm
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
It should work with `labels`.
<!-- A clear and concise description of what you would expect to happen. -->
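For reference, a hedged sketch based on the maintainer comments in this thread (the keyword depends on the installed version; I have not verified every release):

```python
# Assumption from the discussion above: on the v2.11.0 release the argument
# is `lm_labels`, while `labels` only exists on master at this point.
outputs = model(input_ids, lm_labels=input_ids)  # v2.11.0 release
loss, prediction_scores = outputs[:2]
```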
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Linux
- Python version: 3.7
- PyTorch version (GPU?): 1.5.1
- Tensorflow version (GPU?): 2.2.0
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5156/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5155 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5155/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5155/comments | https://api.github.com/repos/huggingface/transformers/issues/5155/events | https://github.com/huggingface/transformers/issues/5155 | 642,338,021 | MDU6SXNzdWU2NDIzMzgwMjE= | 5,155 | new tokenizer backend breaks old code | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"Can you share the exact command line you ran and the last 20-30 lines of the error message?",
"The command given in the readme \n\n\nTruncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'only_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you may want to check this is the right behavior.\n\nThis is getting printed some hundreth of times\n",
"Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'only_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you may want to check this is the right behavior.\r\n\r\n\"\"\"\r\nTraceback (most recent call last):\r\n File \"/home/a-ware/anaconda3/envs/transformers/lib/python3.8/multiprocessing/pool.py\", line 125, in worker\r\n result = (True, func(*args, **kwds))\r\n File \"/home/a-ware/anaconda3/envs/transformers/lib/python3.8/multiprocessing/pool.py\", line 48, in mapstar\r\n return list(map(*args))\r\n File \"/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/data/processors/squad.py\", line 134, in squad_convert_example_to_features\r\n encoded_dict = tokenizer.encode_plus(\r\n File \"/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/tokenization_utils_base.py\", line 1504, in encode_plus\r\n return self._encode_plus(\r\n File \"/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/tokenization_utils.py\", line 358, in _encode_plus\r\n return self._prepare_for_model(\r\n File \"/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/tokenization_utils.py\", line 573, in _prepare_for_model\r\n ids, pair_ids, overflowing_tokens = self.truncate_sequences(\r\n File \"/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/tokenization_utils.py\", line 675, in truncate_sequences\r\n assert len(ids) > num_tokens_to_remove\r\nAssertionError\r\n\"\"\"\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"runsquad.py\", line 827, in <module>\r\n main()\r\n File \"runsquad.py\", line 765, in main\r\n train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False, output_examples=False)\r\n File \"runsquad.py\", line 451, in load_and_cache_examples\r\n features, dataset = squad_convert_examples_to_features(\r\n File \"/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/data/processors/squad.py\", line 326, in squad_convert_examples_to_features\r\n features = list(\r\n File \"/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/tqdm/std.py\", line 1129, in __iter__\r\n for obj in iterable:\r\n File \"/home/a-ware/anaconda3/envs/transformers/lib/python3.8/multiprocessing/pool.py\", line 420, in <genexpr>\r\n return (item for chunk in result for item in chunk)\r\n File \"/home/a-ware/anaconda3/envs/transformers/lib/python3.8/multiprocessing/pool.py\", line 868, in next\r\n raise value\r\nAssertionError\r\n\r\n\r\nEdit: http://prntscr.com/t3ak3t\r\nThe error occures while processing the datas",
"Same behaviour, any idea to debug it? ",
"As temporary fix I took an older fork without the new tokenizers logic and copied the new models to it",
"> Can you share the exact command line you ran and the last 20-30 lines of the error message?\r\n\r\nIt seems it it related with the issue I created here: https://github.com/huggingface/tokenizers/issues/307#issuecomment-647603101\r\ncc: @sshleifer ",
"Yes, it the same error\r\n\r\n> > Can you share the exact command line you ran and the last 20-30 lines of the error message?\r\n> \r\n> It seems it it related with the issue I created here: [huggingface/tokenizers#307 (comment)](https://github.com/huggingface/tokenizers/issues/307#issuecomment-647603101)\r\n> cc: @sshleifer\r\n\r\n",
"Still facing the same issue, did the fix work?",
"Was this issue solved? I am facing the same problem, it was working just fine this evening, and I re-installed transformers module on a new notebook, thats when it fetched the newer version and is causing this error, I've tested it with Bert and gpt2, and it still presists. \r\nI read the code for the tokenization_utils_base, here is where it's originating, based on the \"if conditional block\", i coded padding to be True, but still it gives same error.\r\n\r\n```\r\ndef _get_padding_truncation_strategies(\r\n self, padding=False, truncation=False, max_length=None, pad_to_multiple_of=None, verbose=True, **kwargs\r\n ):\r\n \"\"\" Find the correct padding/truncation strategy with backward compatibility\r\n for old arguments (truncation_strategy and pad_to_max_length) and behaviors.\r\n \"\"\"\r\n old_truncation_strategy = kwargs.pop(\"truncation_strategy\", \"do_not_truncate\")\r\n old_pad_to_max_length = kwargs.pop(\"pad_to_max_length\", False)\r\n\r\n # Backward compatibility for previous behavior, maybe we should deprecate it:\r\n # If you only set max_length, it activates truncation for max_length\r\n if max_length is not None and padding is False and truncation is False:\r\n if verbose:\r\n logger.warning(\r\n \"Truncation was not explicitely activated but `max_length` is provided a specific value, \"\r\n \"please use `truncation=True` to explicitely truncate examples to max length. \"\r\n \"Defaulting to 'only_first' truncation strategy. \"\r\n \"If you encode pairs of sequences (GLUE-style) with the tokenizer you may want to check this is the right behavior.\"\r\n )\r\n truncation = \"only_first\"\r\n```",
"Are you installing it from source?",
"@mrm8488 No, currently not, its from pip, could it be that the pip isn't getting the updated code, the version shows transformers-3.0.0, and I remember yesterday it wasn't 3.0.0",
"Closing as adressing @llStringll's issue in #5377."
] | 1,592 | 1,593 | 1,593 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): any model
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below) QA script from official repo
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name) Squad 1 and 2
## To reproduce
Steps to reproduce the behavior:
Just try to run the example script for the QA task.
The error output is too large to copy in here.
The new tokenizer backend is not backward compatible: truncation is set to False by default.
With the new tokenizer backend, truncation needs to be initialized with a True value, but in the examples and older code there is often no way to set it at call time.
I think the feature conversion code has not been updated to the new backend, right?
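As a hedged sketch of the behavior change (based on the warning and traceback discussed in this thread, not on a reading of the full diff), old call sites that passed only `max_length` now need an explicit truncation argument; `question` and `context` are placeholders here:

```python
# Sketch: the old call relied on implicit truncation; with the new backend
# truncation has to be requested explicitly.
encoded = tokenizer.encode_plus(
    question,
    context,
    max_length=384,   # placeholder value
    truncation=True,  # required with the new tokenizer backend
)
```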
## Expected behavior
It should train the model correctly
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: any
- Python version: 3.7-3.8
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5155/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5155/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5154 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5154/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5154/comments | https://api.github.com/repos/huggingface/transformers/issues/5154/events | https://github.com/huggingface/transformers/issues/5154 | 642,321,170 | MDU6SXNzdWU2NDIzMjExNzA= | 5,154 | Lite transformer | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Very cool!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,598 | 1,598 | CONTRIBUTOR | null | # 🌟 New model addition
## Model description
Transformer has become ubiquitous in natural language processing (e.g., machine translation, question answering); however, it requires enormous amount of computations to achieve high performance, which makes it not suitable for mobile applications that are tightly constrained by the hardware resources and battery. In this paper, we present an efficient mobile NLP architecture, Lite Transformer to facilitate deploying mobile NLP applications on edge devices. The key primitive is the Long-Short Range Attention (LSRA), where one group of heads specializes in the local context modeling (by convolution) while another group specializes in the long-distance relationship modeling (by attention). Such specialization brings consistent improvement over the vanilla transformer on three well-established language tasks: machine translation, abstractive summarization, and language modeling. Under constrained resources (500M/100M MACs), Lite Transformer outperforms transformer on WMT'14 English-French by 1.2/1.7 BLEU, respectively. Lite Transformer reduces the computation of transformer base model by 2.5x with 0.3 BLEU score degradation. Combining with pruning and quantization, we further compressed the model size of Lite Transformer by 18.2x. For language modeling, Lite Transformer achieves 1.8 lower perplexity than the transformer at around 500M MACs. Notably, Lite Transformer outperforms the AutoML-based Evolved Transformer by 0.5 higher BLEU for the mobile NLP setting without the costly architecture search that requires more than 250 GPU years. Code has been made available at https://github.com/mit-han-lab/lite-transformer
<!-- Important information -->
## Open source status
* [x] the model implementation is available: (give details) https://github.com/mit-han-lab/lite-transformer
* [x] the model weights are available: (give details) linked in readme of GitHub repo
* [x] who are the authors: (mention them, if possible by @gh-username) Zhanghao Wu • Zhijian Liu • Ji Lin • Yujun Lin • Song Han
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5154/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5154/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5153 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5153/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5153/comments | https://api.github.com/repos/huggingface/transformers/issues/5153/events | https://github.com/huggingface/transformers/pull/5153 | 642,312,985 | MDExOlB1bGxSZXF1ZXN0NDM3MzY4MjE0 | 5,153 | Create README.md | {
"login": "ahotrod",
"id": 44321615,
"node_id": "MDQ6VXNlcjQ0MzIxNjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/44321615?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahotrod",
"html_url": "https://github.com/ahotrod",
"followers_url": "https://api.github.com/users/ahotrod/followers",
"following_url": "https://api.github.com/users/ahotrod/following{/other_user}",
"gists_url": "https://api.github.com/users/ahotrod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahotrod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahotrod/subscriptions",
"organizations_url": "https://api.github.com/users/ahotrod/orgs",
"repos_url": "https://api.github.com/users/ahotrod/repos",
"events_url": "https://api.github.com/users/ahotrod/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahotrod/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
}
] | [
"File extension is missing, can you add it?",
"(should be `model_cards/ahotrod/electra_large_discriminator_squad2_512/README.md`)\r\n\r\nThanks!"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | electra_large_discriminator_squad2_512 LM for Question Answering | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5153/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5153",
"html_url": "https://github.com/huggingface/transformers/pull/5153",
"diff_url": "https://github.com/huggingface/transformers/pull/5153.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5153.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5152 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5152/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5152/comments | https://api.github.com/repos/huggingface/transformers/issues/5152/events | https://github.com/huggingface/transformers/pull/5152 | 642,299,021 | MDExOlB1bGxSZXF1ZXN0NDM3MzU4NDkz | 5,152 | Create README.md | {
"login": "aodiniz",
"id": 6626805,
"node_id": "MDQ6VXNlcjY2MjY4MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6626805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aodiniz",
"html_url": "https://github.com/aodiniz",
"followers_url": "https://api.github.com/users/aodiniz/followers",
"following_url": "https://api.github.com/users/aodiniz/following{/other_user}",
"gists_url": "https://api.github.com/users/aodiniz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aodiniz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aodiniz/subscriptions",
"organizations_url": "https://api.github.com/users/aodiniz/orgs",
"repos_url": "https://api.github.com/users/aodiniz/repos",
"events_url": "https://api.github.com/users/aodiniz/events{/privacy}",
"received_events_url": "https://api.github.com/users/aodiniz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5152?src=pr&el=h1) Report\n> Merging [#5152](https://codecov.io/gh/huggingface/transformers/pull/5152?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5ed94b231269972c59d53bf4134a842c2273e814&el=desc) will **decrease** coverage by `0.38%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5152?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5152 +/- ##\n==========================================\n- Coverage 78.32% 77.93% -0.39% \n==========================================\n Files 137 137 \n Lines 23472 23472 \n==========================================\n- Hits 18385 18294 -91 \n- Misses 5087 5178 +91 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5152?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5152/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5152/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.45% <0.00%> (-0.83%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5152/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `94.81% <0.00%> (-0.38%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5152?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5152?src=pr&el=footer). Last update [5ed94b2...9874049](https://codecov.io/gh/huggingface/transformers/pull/5152?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | Creating README.md file for model on Community contribution. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5152/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5152",
"html_url": "https://github.com/huggingface/transformers/pull/5152",
"diff_url": "https://github.com/huggingface/transformers/pull/5152.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5152.patch",
"merged_at": 1592863174000
} |
https://api.github.com/repos/huggingface/transformers/issues/5151 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5151/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5151/comments | https://api.github.com/repos/huggingface/transformers/issues/5151/events | https://github.com/huggingface/transformers/issues/5151 | 642,298,605 | MDU6SXNzdWU2NDIyOTg2MDU= | 5,151 | TFBertForSequenceClassification: TypeError: call() got an unexpected keyword argument 'labels' | {
"login": "afogarty85",
"id": 49048309,
"node_id": "MDQ6VXNlcjQ5MDQ4MzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/49048309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/afogarty85",
"html_url": "https://github.com/afogarty85",
"followers_url": "https://api.github.com/users/afogarty85/followers",
"following_url": "https://api.github.com/users/afogarty85/following{/other_user}",
"gists_url": "https://api.github.com/users/afogarty85/gists{/gist_id}",
"starred_url": "https://api.github.com/users/afogarty85/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/afogarty85/subscriptions",
"organizations_url": "https://api.github.com/users/afogarty85/orgs",
"repos_url": "https://api.github.com/users/afogarty85/repos",
"events_url": "https://api.github.com/users/afogarty85/events{/privacy}",
"received_events_url": "https://api.github.com/users/afogarty85/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You should try this\r\n`pip install git+https://github.com/huggingface/transformers`\r\nnot\r\n`pip install transformers`\r\nbecause the latest version isn't available in any release",
"It is available in the version v3.0.0 which was released this morning :)",
"I'm having the same issue with v3.0.2, following is the error msg:\r\n\r\n> `TypeError: tf__call() got an unexpected keyword argument 'labels'`",
"> I'm having the same issue with v3.0.2, following is the error msg:\r\n> \r\n> > `TypeError: tf__call() got an unexpected keyword argument 'labels'`\r\n\r\nI would like to elaborate more upon this issue. I carefully checked the source code and the error is that the `TFDistilBertModel` get the `label` keyword argument and throws this error. I have ensured that the data is fitted as the desired form in the type of `tf.data.Dataset`. The code I wrote is effectively identical to the [`run_tf_glue.py`](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_tf_glue.py) and the example code is running correctly.\r\n\r\n> Note: I pull down the github repo and install the dependency as described [here](https://huggingface.co/transformers/examples.html#important-note). Is this possibly related to the issue?\r\n\r\nI also tried the following code to reinstall transformers but it still doesn't work\r\n\r\n```\r\npip uninstall transformers\r\npip install git+https://github.com/huggingface/transformers\r\n```\r\n\r\nThis has been to a point where I'm extremely frustrated, it would be really appreciated if someone can point me to a right direction.",
"> > I'm having the same issue with v3.0.2, following is the error msg:\r\n> > > `TypeError: tf__call() got an unexpected keyword argument 'labels'`\r\n> \r\n> I would like to elaborate more upon this issue. I carefully checked the source code and the error is that the `TFDistilBertModel` get the `label` keyword argument and throws this error. I have ensured that the data is fitted as the desired form in the type of `tf.data.Dataset`. The code I wrote is effectively identical to the [`run_tf_glue.py`](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_tf_glue.py) and the example code is running correctly.\r\n> \r\n> > Note: I pull down the github repo and install the dependency as described [here](https://huggingface.co/transformers/examples.html#important-note). Is this possibly related to the issue?\r\n> \r\n> I also tried the following code to reinstall transformers but it still doesn't work\r\n> \r\n> ```\r\n> pip uninstall transformers\r\n> pip install git+https://github.com/huggingface/transformers\r\n> ```\r\n> \r\n> This has been to a point where I'm extremely frustrated, it would be really appreciated if someone can point me to a right direction.\r\n\r\nDid you wanna do classification task?\r\nIf so, you may need to use `TFDistilBertForSequenceClassification` or `TFDistilBertForTokenClassification`。\r\nThe `TFDistilBertModel ` class doesn't contain a classification_layer, so there isn't a 'labels' argument exists。",
"Post your code, as installing the latest transformers fixed the issue for me as recommended here.",
"@afogarty85 Hi, please see the code below.\r\nRegarding to the training data, the size of data is around 5000 text-label pair, I created this `tf.data.Dataset` inspired by the [`glue_convert_examples_to_features.py`](https://github.com/huggingface/transformers/blob/eae6d8d14f1d25d62c3fe9e7e410607bbaf69787/src/transformers/data/processors/glue.py#L35)\r\n\r\n```python\r\nimport os\r\nimport json\r\nimport re\r\nfrom pprint import pprint\r\nfrom dataclasses import dataclass, field\r\nfrom dotenv import load_dotenv\r\n\r\nimport numpy as np\r\nimport pandas as pd\r\n\r\nimport tensorflow as tf\r\nfrom transformers import (\r\n AutoConfig,\r\n AutoTokenizer,\r\n TFAutoModel,\r\n TFTrainer,\r\n TFTrainingArguments,\r\n)\r\nfrom sklearn.metrics import precision_recall_fscore_support\r\n\r\nfrom tc_data import TopCoder\r\n\r\nload_dotenv()\r\n\r\ndef build_dataset(tokenizer):\r\n \"\"\" Build td.data.Dataset out of text and prize range.\"\"\"\r\n # Load TopCoder data\r\n tc = TopCoder()\r\n tc_req = tc.get_filtered_requirements()\r\n tc_meta = tc.get_filtered_challenge_info()\r\n\r\n # Convert float prize into categorical prize range\r\n interval = np.linspace(0, 3000, 31)[:-1]\r\n tc_prz_range = tc_meta['total_prize'].apply(lambda prz: np.searchsorted(interval, prz, side='right') - 1)\r\n tc_prz_range.name = 'prize_cat'\r\n\r\n req_prz_df = pd.concat([tc_req['requirement'], tc_prz_range], axis=1) # user this df to ensure the index of text and label is aligned\r\n dataset_size = len(req_prz_df)\r\n\r\n # batched encode the str to `input_ids` and `attention_mask`\r\n batched_encoded = tokenizer(req_prz_df['requirement'].to_list(), padding='max_length', truncation=True)\r\n\r\n # Features are tuple of {'input_ids': [...], 'attention_mask': [...]} and prize range label\r\n features = [({k: batched_encoded[k][i] for k in batched_encoded}, req_prz_df['prize_cat'].iloc[i]) for i in range(len(req_prz_df))]\r\n\r\n input_names = tuple(batched_encoded.keys())\r\n def gen():\r\n \"\"\" generator used in `tf.data.Dataset.from_generator`.\"\"\"\r\n for encoded_str, label in features:\r\n yield encoded_str, label\r\n\r\n return (\r\n tf.data.Dataset.from_generator(\r\n gen,\r\n ({k: tf.int32 for k in batched_encoded}, tf.int32),\r\n ({k: tf.TensorShape([512]) for k in batched_encoded}, tf.TensorShape([]))\r\n ),\r\n dataset_size\r\n )\r\n\r\ndef compute_metrics(pred):\r\n \"\"\" Compute eval metrics\r\n reference: https://huggingface.co/transformers/training.html#tensorflow\r\n \"\"\"\r\n labels = pred.label_ids\r\n preds = pred.predictions.argmax(-1)\r\n precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='binary')\r\n acc = (preds == labels).mean()\r\n return {\r\n 'accuracy': acc,\r\n 'f1': f1,\r\n 'precision': precision,\r\n 'recall': recall\r\n }\r\n\r\ndef finetune_with_tftrainer():\r\n \"\"\" Fine tune with TFTrainer\"\"\"\r\n config = AutoConfig.from_pretrained(os.getenv('MODEL_NAME'), cache_dir=os.getenv('OUTPUT_DIR'), num_labels=30)\r\n tokenizer = AutoTokenizer.from_pretrained(os.getenv('MODEL_NAME'), cache_dir=os.getenv('OUTPUT_DIR'))\r\n\r\n training_args = TFTrainingArguments(\r\n output_dir=os.getenv('OUTPUT_DIR'),\r\n logging_dir=os.getenv('OUTPUT_DIR'),\r\n overwrite_output_dir=True,\r\n do_train=True,\r\n do_eval=True,\r\n learning_rate=2e-5,\r\n )\r\n\r\n with training_args.strategy.scope():\r\n model = TFAutoModel.from_pretrained(os.getenv('MODEL_NAME'), config=config, cache_dir=os.getenv('OUTPUT_DIR'))\r\n\r\n # Get data for fine-tuning\r\n dataset, 
dataset_size = build_dataset(tokenizer)\r\n\r\n # shuffle and split train/test tasks manuanly\r\n dataset = dataset.shuffle(dataset_size)\r\n train_size, test_size = int(dataset_size * (4 / 5)), dataset_size - int(dataset_size * (4 / 5)) # 8-2 split\r\n train_data, test_data = dataset.take(train_size), dataset.skip(train_size)\r\n\r\n trainer = TFTrainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=train_data,\r\n eval_dataset=test_data,\r\n compute_metrics=compute_metrics\r\n )\r\n\r\n # Train the model\r\n trainer.train()\r\n trainer.save_model()\r\n tokenizer.save_pretrained(os.getenv('OUTPUT_DIR'))\r\n\r\n # Evaluate the model\r\n result = trainer.evaluate()\r\n pprint(result)\r\n with open(os.path.join(os.getenv('OUTPUT_DIR'), 'eval_results.json'), 'w') as fwrite:\r\n json.dump(result, fwrite, indent=4)\r\n\r\nif __name__ == \"__main__\":\r\n finetune_with_tftrainer()\r\n\r\n```",
"@BenjiTheC \r\ntry replace `TFAutoModel ` with `TFAutoModelForSequenceClassification`",
"@QixinLi My purpose for fine-tuning the model is to use the last hidden layer state of BERT combining with some other features to continue forward in a bigger NN. Will using `TFAutoModelForSequenceClassification` compromise this goal?\r\n\r\nThanks!",
"> @QixinLi My purpose for fine-tuning the model is to use the last hidden layer state of BERT combining with some other features to continue forward in a bigger NN. Will using `TFAutoModelForSequenceClassification` compromise this goal?\r\n> \r\n> Thanks!\r\n\r\n@QixinLi It's working after the replacement, but still wondering if this will impact my goal for finetuning, thanks a lot!",
"`TFAutoModelForSequenceClassification ` 是一个封装好的用于文本分类的bert模型。以你貌似用到的`TFDistilBertForSequenceClassification`为例:\r\n```python\r\nclass TFDistilBertForSequenceClassification(TFDistilBertPreTrainedModel, TFSequenceClassificationLoss):\r\n def __init__(self, config, *inputs, **kwargs):\r\n super().__init__(config, *inputs, **kwargs)\r\n self.num_labels = config.num_labels\r\n\r\n self.distilbert = TFDistilBertMainLayer(config, name=\"distilbert\")\r\n self.pre_classifier = tf.keras.layers.Dense(\r\n config.dim,\r\n kernel_initializer=get_initializer(config.initializer_range),\r\n activation=\"relu\",\r\n name=\"pre_classifier\",\r\n )\r\n self.classifier = tf.keras.layers.Dense(\r\n config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name=\"classifier\"\r\n )\r\n self.dropout = tf.keras.layers.Dropout(config.seq_classif_dropout)\r\n```\r\n它有两层线性层和一层dropout。\r\n不过它在库里是封装好的。如果你想在bert最后一层输出之后再去加一些自定义的网络结构,可能需要自定义一个model类,并且继承`TFDistilBertPreTrainedModel`。\r\n\r\n如果你只是想做文本分类,那么`TFAutoModelForSequenceClassification`应该能满足你的要求。",
"> > @QixinLi My purpose for fine-tuning the model is to use the last hidden layer state of BERT combining with some other features to continue forward in a bigger NN. Will using `TFAutoModelForSequenceClassification` compromise this goal?\r\n> > Thanks!\r\n> \r\n> @QixinLi It's working after the replacement, but still wondering if this will impact my goal for finetuning, thanks a lot!\r\n\r\nI do not think so. People have tend to have found that extracting the CLS token from the hidden layer is what you want, instead of the embeddings for all your tokens. Some discussion on that is here: https://github.com/huggingface/transformers/issues/1950",
"> > > @QixinLi My purpose for fine-tuning the model is to use the last hidden layer state of BERT combining with some other features to continue forward in a bigger NN. Will using `TFAutoModelForSequenceClassification` compromise this goal?\r\n> > > Thanks!\r\n> > \r\n> > \r\n> > @QixinLi It's working after the replacement, but still wondering if this will impact my goal for finetuning, thanks a lot!\r\n> \r\n> I do not think so. People have tend to have found that extracting the CLS token from the hidden layer is what you want, instead of the embeddings for all your tokens. Some discussion on that is here: #1950\r\n\r\nCan you refer me to a specific comment? My purpose is exactly described in [this comment](https://github.com/huggingface/transformers/issues/1950#issuecomment-558679189), but it seems like a `...ForSequenceClassification` adds a specific mission type. Thanks!",
"> `TFAutoModelForSequenceClassification ` 是一个封装好的用于文本分类的bert模型。以你貌似用到的`TFDistilBertForSequenceClassification`为例:\r\n> \r\n> ```python\r\n> class TFDistilBertForSequenceClassification(TFDistilBertPreTrainedModel, TFSequenceClassificationLoss):\r\n> def __init__(self, config, *inputs, **kwargs):\r\n> super().__init__(config, *inputs, **kwargs)\r\n> self.num_labels = config.num_labels\r\n> \r\n> self.distilbert = TFDistilBertMainLayer(config, name=\"distilbert\")\r\n> self.pre_classifier = tf.keras.layers.Dense(\r\n> config.dim,\r\n> kernel_initializer=get_initializer(config.initializer_range),\r\n> activation=\"relu\",\r\n> name=\"pre_classifier\",\r\n> )\r\n> self.classifier = tf.keras.layers.Dense(\r\n> config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name=\"classifier\"\r\n> )\r\n> self.dropout = tf.keras.layers.Dropout(config.seq_classif_dropout)\r\n> ```\r\n> \r\n> 它有两层线性层和一层dropout。\r\n> 不过它在库里是封装好的。如果你想在bert最后一层输出之后再去加一些自定义的网络结构,可能需要自定义一个model类,并且继承`TFDistilBertPreTrainedModel`。\r\n> \r\n> 如果你只是想做文本分类,那么`TFAutoModelForSequenceClassification`应该能满足你的要求。\r\n\r\n@QixinLi 您好!我的目的就是把文本喂到fine-tuned BERT里之后获取最后一层输出,缀上一些其他的features之后继续通过神经网络去做分类。 目前使用distillbert在本地机器上调试,跑通了之后会放到云上跑bert-large这样。如果是这样的话,我应该用哪一个类呢?或者我可以通过`output[1]`来从`...SequenceClassification`获得last hidden state吗?谢谢!",
"> > > > @QixinLi My purpose for fine-tuning the model is to use the last hidden layer state of BERT combining with some other features to continue forward in a bigger NN. Will using `TFAutoModelForSequenceClassification` compromise this goal?\r\n> > > > Thanks!\r\n> > > \r\n> > > \r\n> > > @QixinLi It's working after the replacement, but still wondering if this will impact my goal for finetuning, thanks a lot!\r\n> > \r\n> > \r\n> > I do not think so. People have tend to have found that extracting the CLS token from the hidden layer is what you want, instead of the embeddings for all your tokens. Some discussion on that is here: #1950\r\n> \r\n> Can you refer me to a specific comment? My purpose is exactly described in [this comment](https://github.com/huggingface/transformers/issues/1950#issuecomment-558679189), but it seems like a `...ForSequenceClassification` adds a specific mission type. Thanks!\r\n\r\nAre you classifying something? If so, use `...ForSequenceClassification`. This seems to be what you want to do given your text-label data. Extracting the embedding is slightly different when using `...ForSequenceClassification` rather than the plain `TFDistilBertModel`.\r\n\r\nBut for your purpose, to classify something and to then get those embeddings, look toward this comment, as it illustrates the difference.\r\n\r\nhttps://github.com/huggingface/transformers/issues/1950#issuecomment-558683444",
"如果你想要last_hidden_states,可以这么做。\r\n```python\r\nclass MyOwnModel(TFDistilBertPreTrainedModel):\r\n def __init__(self, config):\r\n super().__init__(config)\r\n self.distilbert = TFDistilBertMainLayer(config, name=\"distilbert\")\r\n self.classifier = tf.keras.layers.Dense(config.num_labels)\r\n\r\n def call(self, inputs=None, mask=None, token_type_ids=None, labels=None):\r\n outputs = self.distilbert(inputs, attention_mask=mask,token_type_ids=token_type_ids) \r\n last_hidden_states = outputs[0] \r\n # do whatever you want\r\n processed_hidden_states = .........\r\n logits = self.classifier(processed_hidden_states)\r\n outputs = logits\r\n if labels is not None:\r\n loss = self.compute_loss(labels, logits)\r\n outputs = (loss,) + outputs\r\n return outputs\r\n```\r\n```python\r\nmodel = MyOwnModel.from_pretrained(os.getenv('MODEL_NAME'), config=config, cache_dir=os.getenv('OUTPUT_DIR'))\r\ninput_ids = tf.constant(tokenizer.encode(\"Hello, my dog is cute\"))[None, :] # Batch size 1\r\noutputs = model(input_ids)\r\n```\r\n\r\np.s.刚刚看了一下`...SequenceClassification`的call()函数,发现他的返回值output[1]是过了分类层后的结果,并没有你想要的last_hidden_states。所以应该不能满足你的需求。\r\n\r\n",
"> 如果你想要last_hidden_states,可以这么做。\r\n> \r\n> ```python\r\n> class MyOwnModel(TFDistilBertPreTrainedModel):\r\n> def __init__(self, config):\r\n> super().__init__(config)\r\n> self.distilbert = TFDistilBertMainLayer(config, name=\"distilbert\")\r\n> self.classifier = tf.keras.layers.Dense(config.num_labels)\r\n> \r\n> def call(self, inputs=None, mask=None, token_type_ids=None, labels=None):\r\n> outputs = self.distilbert(inputs, attention_mask=mask,token_type_ids=token_type_ids) \r\n> last_hidden_states = outputs[0] \r\n> # do whatever you want\r\n> processed_hidden_states = .........\r\n> logits = self.classifier(processed_hidden_states)\r\n> outputs = logits\r\n> if labels is not None:\r\n> loss = self.compute_loss(labels, logits)\r\n> outputs = (loss,) + outputs\r\n> return outputs\r\n> ```\r\n> \r\n> ```python\r\n> model = MyOwnModel.from_pretrained(os.getenv('MODEL_NAME'), config=config, cache_dir=os.getenv('OUTPUT_DIR'))\r\n> input_ids = tf.constant(tokenizer.encode(\"Hello, my dog is cute\"))[None, :] # Batch size 1\r\n> outputs = model(input_ids)\r\n> ```\r\n> \r\n> p.s.刚刚看了一下`...SequenceClassification`的call()函数,发现他的返回值output[1]是过了分类层后的结果,并没有你想要的last_hidden_states。所以应该不能满足你的需求。\r\n\r\n@QixinLi 您好,你的回复解答了我的问题,十分感谢!另外有两个follow up: \r\n1. distilbert 和 bert 要通过两个不同的类来读模型,但是在我的代码实现上继承`TFDistilBertModel`和`TFBertModel`都是没有区别的是吗\r\n2. `...PreTrainedModel`貌似是一个abstract class,是不是应该继承具体的BERT/DISTILLBERT类呢?\r\n\r\n谢谢!",
"> > > > > @QixinLi My purpose for fine-tuning the model is to use the last hidden layer state of BERT combining with some other features to continue forward in a bigger NN. Will using `TFAutoModelForSequenceClassification` compromise this goal?\r\n> > > > > Thanks!\r\n> > > > \r\n> > > > \r\n> > > > @QixinLi It's working after the replacement, but still wondering if this will impact my goal for finetuning, thanks a lot!\r\n> > > \r\n> > > \r\n> > > I do not think so. People have tend to have found that extracting the CLS token from the hidden layer is what you want, instead of the embeddings for all your tokens. Some discussion on that is here: #1950\r\n> > \r\n> > \r\n> > Can you refer me to a specific comment? My purpose is exactly described in [this comment](https://github.com/huggingface/transformers/issues/1950#issuecomment-558679189), but it seems like a `...ForSequenceClassification` adds a specific mission type. Thanks!\r\n> \r\n> Are you classifying something? If so, use `...ForSequenceClassification`. This seems to be what you want to do given your text-label data. Extracting the embedding is slightly different when using `...ForSequenceClassification` rather than the plain `TFDistilBertModel`.\r\n> \r\n> But for your purpose, to classify something and to then get those embeddings, look toward this comment, as it illustrates the difference.\r\n> \r\n> [#1950 (comment)](https://github.com/huggingface/transformers/issues/1950#issuecomment-558683444)\r\n\r\nThanks for the answer! I've found the inspiration I need from your referring issue. Much appreciated!",
"> > 如果你想要last_hidden_states,可以这么做。\r\n> > ```python\r\n> > class MyOwnModel(TFDistilBertPreTrainedModel):\r\n> > def __init__(self, config):\r\n> > super().__init__(config)\r\n> > self.distilbert = TFDistilBertMainLayer(config, name=\"distilbert\")\r\n> > self.classifier = tf.keras.layers.Dense(config.num_labels)\r\n> > \r\n> > def call(self, inputs=None, mask=None, token_type_ids=None, labels=None):\r\n> > outputs = self.distilbert(inputs, attention_mask=mask,token_type_ids=token_type_ids) \r\n> > last_hidden_states = outputs[0] \r\n> > # do whatever you want\r\n> > processed_hidden_states = .........\r\n> > logits = self.classifier(processed_hidden_states)\r\n> > outputs = logits\r\n> > if labels is not None:\r\n> > loss = self.compute_loss(labels, logits)\r\n> > outputs = (loss,) + outputs\r\n> > return outputs\r\n> > ```\r\n> > \r\n> > \r\n> > ```python\r\n> > model = MyOwnModel.from_pretrained(os.getenv('MODEL_NAME'), config=config, cache_dir=os.getenv('OUTPUT_DIR'))\r\n> > input_ids = tf.constant(tokenizer.encode(\"Hello, my dog is cute\"))[None, :] # Batch size 1\r\n> > outputs = model(input_ids)\r\n> > ```\r\n> > \r\n> > \r\n> > p.s.刚刚看了一下`...SequenceClassification`的call()函数,发现他的返回值output[1]是过了分类层后的结果,并没有你想要的last_hidden_states。所以应该不能满足你的需求。\r\n> \r\n> @QixinLi 您好,你的回复解答了我的问题,十分感谢!另外有两个follow up:\r\n> \r\n> 1. distilbert 和 bert 要通过两个不同的类来读模型,但是在我的代码实现上继承`TFDistilBertModel`和`TFBertModel`都是没有区别的是吗\r\n> 2. `...PreTrainedModel`貌似是一个abstract class,是不是应该继承具体的BERT/DISTILLBERT类呢?\r\n> \r\n> 谢谢!\r\n\r\n1. 是的。之后送到机器上跑,如果要更换模型的话,需要将`TFDistilBertPreTrainedModel`换成`TFBertPreTrainedModel`。`TFDistilBertMainLayer`也要换成`TFBertMainLayer`。\r\n可以去[官方文档](https://huggingface.co/transformers/model_doc/bert.html)查看,或直接浏览transformers关于这些类的源码。\r\n2.`...PreTrainedModel`也是继承自`tf.keras.Model`,所以直接继承没有问题。(transformers里头就是这么做的。我上面的代码仅供参考,具体实现可以学习huggingface大佬们的模型代码)"
] | 1,592 | 1,595 | 1,593 | NONE | null | # 🐛 Bug
## Information
Model I am using: TFBertForSequenceClassification
Language I am using the model on: English
The problem arises when using:
* [X] the official example scripts: (give details below)
The tasks I am working on is:
* [X] my own task or dataset: (give details below)
To classify text and learn the model and package.
## To reproduce
Steps to reproduce the behavior:
1. pip install transformers (currently 2.11.0)
2. run default code on website: https://huggingface.co/transformers/model_doc/bert.html#tfbertforsequenceclassification
3. I tried to follow this: https://github.com/huggingface/transformers/issues/4848; and the issue remains the same.
```
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased')
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] # Batch size 1
labels = tf.reshape(tf.constant(1), (-1, 1)) # Batch size 1
outputs = model(input_ids, labels=labels)
loss, logits = outputs[:2]
```
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.7.7
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5151/timeline | completed | null | null |
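For reference on the resolution above: the snippet from this issue's body runs once a v3 release is installed, because the TF classification heads accept `labels` from 3.0.0 onward. A minimal sketch follows; the checkpoint and sentence mirror the issue's own example, and nothing else is taken from the thread.

```python
# Minimal sketch, assuming transformers >= 3.0.0, where TF heads accept `labels`.
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased")

input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :]  # batch size 1
labels = tf.reshape(tf.constant(1), (-1, 1))  # batch size 1

# On v3 the head computes the loss internally from `labels`; on v2.11 this
# exact call raised the TypeError reported in the issue.
outputs = model(input_ids, labels=labels)
loss, logits = outputs[:2]
```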
https://api.github.com/repos/huggingface/transformers/issues/5150 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5150/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5150/comments | https://api.github.com/repos/huggingface/transformers/issues/5150/events | https://github.com/huggingface/transformers/pull/5150 | 642,297,347 | MDExOlB1bGxSZXF1ZXN0NDM3MzU3MzAy | 5,150 | [MobileBert] fix dropout | {
"login": "ZhuBaohe",
"id": 35796307,
"node_id": "MDQ6VXNlcjM1Nzk2MzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/35796307?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhuBaohe",
"html_url": "https://github.com/ZhuBaohe",
"followers_url": "https://api.github.com/users/ZhuBaohe/followers",
"following_url": "https://api.github.com/users/ZhuBaohe/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhuBaohe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhuBaohe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhuBaohe/subscriptions",
"organizations_url": "https://api.github.com/users/ZhuBaohe/orgs",
"repos_url": "https://api.github.com/users/ZhuBaohe/repos",
"events_url": "https://api.github.com/users/ZhuBaohe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhuBaohe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5150?src=pr&el=h1) Report\n> Merging [#5150](https://codecov.io/gh/huggingface/transformers/pull/5150?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d97b4176e5e9acdab930d73a7cb308b12bd4ad9e&el=desc) will **decrease** coverage by `0.37%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5150?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5150 +/- ##\n==========================================\n- Coverage 78.31% 77.93% -0.38% \n==========================================\n Files 137 137 \n Lines 23472 23472 \n==========================================\n- Hits 18381 18292 -89 \n- Misses 5091 5180 +89 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5150?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `93.32% <0.00%> (ø)` | |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.45% <0.00%> (-0.83%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `94.81% <0.00%> (-0.38%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.38% <0.00%> (-0.24%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.26% <0.00%> (+0.29%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5150?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5150?src=pr&el=footer). Last update [d97b417...66d7d53](https://codecov.io/gh/huggingface/transformers/pull/5150?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks!\r\n\r\nI think it's indeed a bug! In Tensorflow 2.0, one should always pass the training parameter to `Dropout` explicitly."
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | This PR fixes dropout in class TFMobileBertOutput. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5150/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5150",
"html_url": "https://github.com/huggingface/transformers/pull/5150",
"diff_url": "https://github.com/huggingface/transformers/pull/5150.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5150.patch",
"merged_at": 1592630480000
} |
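The one-line change merged in the dropout fix above reflects a general TF2 rule quoted in the thread: always pass the `training` parameter to `Dropout` explicitly. Below is an illustrative sketch of that pattern; `OutputBlock` is a made-up layer for this example, not the actual `TFMobileBertOutput` implementation.

```python
# Illustrative sketch of the pattern behind the fix; OutputBlock is invented
# for this example and is not the real TFMobileBertOutput code.
import tensorflow as tf

class OutputBlock(tf.keras.layers.Layer):
    def __init__(self, rate=0.1):
        super().__init__()
        self.dense = tf.keras.layers.Dense(8)
        self.dropout = tf.keras.layers.Dropout(rate)

    def call(self, hidden_states, training=False):
        hidden_states = self.dense(hidden_states)
        # Forwarding `training` is the essential part: without it, the layer
        # falls back to Keras's implicit learning phase instead of honoring
        # the caller's intent, so dropout may silently never be applied.
        return self.dropout(hidden_states, training=training)

block = OutputBlock()
x = tf.ones((2, 8))
y_train = block(x, training=True)   # dropout mask applied
y_infer = block(x, training=False)  # deterministic, dropout disabled
```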
https://api.github.com/repos/huggingface/transformers/issues/5149 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5149/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5149/comments | https://api.github.com/repos/huggingface/transformers/issues/5149/events | https://github.com/huggingface/transformers/pull/5149 | 642,297,169 | MDExOlB1bGxSZXF1ZXN0NDM3MzU3MTY5 | 5,149 | Create README.md | {
"login": "aodiniz",
"id": 6626805,
"node_id": "MDQ6VXNlcjY2MjY4MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6626805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aodiniz",
"html_url": "https://github.com/aodiniz",
"followers_url": "https://api.github.com/users/aodiniz/followers",
"following_url": "https://api.github.com/users/aodiniz/following{/other_user}",
"gists_url": "https://api.github.com/users/aodiniz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aodiniz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aodiniz/subscriptions",
"organizations_url": "https://api.github.com/users/aodiniz/orgs",
"repos_url": "https://api.github.com/users/aodiniz/repos",
"events_url": "https://api.github.com/users/aodiniz/events{/privacy}",
"received_events_url": "https://api.github.com/users/aodiniz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | Creating README.md file for model on Community contribution. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5149/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5149",
"html_url": "https://github.com/huggingface/transformers/pull/5149",
"diff_url": "https://github.com/huggingface/transformers/pull/5149.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5149.patch",
"merged_at": 1592863254000
} |
https://api.github.com/repos/huggingface/transformers/issues/5148 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5148/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5148/comments | https://api.github.com/repos/huggingface/transformers/issues/5148/events | https://github.com/huggingface/transformers/pull/5148 | 642,228,311 | MDExOlB1bGxSZXF1ZXN0NDM3MzA0MDEz | 5,148 | Update glossary | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5148?src=pr&el=h1) Report\n> Merging [#5148](https://codecov.io/gh/huggingface/transformers/pull/5148?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f45e873910e60d89511ae0193711e71c5c710468&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5148?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5148 +/- ##\n=======================================\n Coverage 77.19% 77.20% \n=======================================\n Files 133 133 \n Lines 22233 22233 \n=======================================\n+ Hits 17163 17164 +1 \n+ Misses 5070 5069 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5148?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5148/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VsZWN0cmEucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5148/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.28% <0.00%> (+0.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5148?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5148?src=pr&el=footer). Last update [f45e873...fde64f8](https://codecov.io/gh/huggingface/transformers/pull/5148?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"That's great! I left some nits.",
"Addressed (something wrong with the time in my WSL so the commit appears before the comments....)"
] | 1,592 | 1,592 | 1,592 | COLLABORATOR | null | Update the glossary to the new tokenizer API. I also added a first section about general terms that might confuse a beginner (which anyone is welcome to expand when they see a term that's not necessarily obvious and that we use a lot) and proper references to the subsections about the model inputs to make sure none of the links break.
There was a bad indent introduced since my cleanup of sphinx warnings, so I fixed that in passing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5148/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5148",
"html_url": "https://github.com/huggingface/transformers/pull/5148",
"diff_url": "https://github.com/huggingface/transformers/pull/5148.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5148.patch",
"merged_at": 1592829050000
} |
https://api.github.com/repos/huggingface/transformers/issues/5147 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5147/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5147/comments | https://api.github.com/repos/huggingface/transformers/issues/5147/events | https://github.com/huggingface/transformers/pull/5147 | 642,201,637 | MDExOlB1bGxSZXF1ZXN0NDM3MjgyMTQx | 5,147 | Typo in Reformer model card | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5147?src=pr&el=h1) Report\n> Merging [#5147](https://codecov.io/gh/huggingface/transformers/pull/5147?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f45e873910e60d89511ae0193711e71c5c710468&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5147?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5147 +/- ##\n==========================================\n- Coverage 77.19% 77.19% -0.01% \n==========================================\n Files 133 133 \n Lines 22233 22233 \n==========================================\n- Hits 17163 17162 -1 \n- Misses 5070 5071 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5147?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.00% <0.00%> (-0.41%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5147?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5147?src=pr&el=footer). Last update [f45e873...e655c44](https://codecov.io/gh/huggingface/transformers/pull/5147?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks a lot @flozi00 !"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null |  | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5147/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5147",
"html_url": "https://github.com/huggingface/transformers/pull/5147",
"diff_url": "https://github.com/huggingface/transformers/pull/5147.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5147.patch",
"merged_at": 1592815763000
} |
https://api.github.com/repos/huggingface/transformers/issues/5146 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5146/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5146/comments | https://api.github.com/repos/huggingface/transformers/issues/5146/events | https://github.com/huggingface/transformers/pull/5146 | 642,184,593 | MDExOlB1bGxSZXF1ZXN0NDM3MjY4Mzg1 | 5,146 | Upgrade examples to pl=0.8.1 | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5146?src=pr&el=h1) Report\n> Merging [#5146](https://codecov.io/gh/huggingface/transformers/pull/5146?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1262495a912b9cd97e2ae174fd627a9d8a502341&el=desc) will **increase** coverage by `1.66%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5146?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5146 +/- ##\n==========================================\n+ Coverage 76.37% 78.04% +1.66% \n==========================================\n Files 138 138 \n Lines 23772 23772 \n==========================================\n+ Hits 18157 18553 +396 \n+ Misses 5615 5219 -396 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5146?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5146/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5146/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.16% <0.00%> (-0.13%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5146/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.57% <0.00%> (+1.42%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5146/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5146/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `99.14% <0.00%> (+2.57%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5146/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.11% <0.00%> (+3.84%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5146/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `33.43% <0.00%> (+4.77%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5146/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `91.31% <0.00%> (+42.22%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5146/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5146?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5146?src=pr&el=footer). Last update [1262495...c0bd407](https://codecov.io/gh/huggingface/transformers/pull/5146?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Flaxy failure, merging."
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | This upgrades to the latest PL and follows the PL docs to avoid deprecation warnings.
### Known Issue
`trainer.test` fails in multi-GPU; this is not a new issue, but here we add a test that will pass once the bug is fixed upstream.
The failure discussed [here](https://github.com/PyTorchLightning/pytorch-lightning/issues/2267) can be reproduced on a multi-GPU machine with
```bash
pytest examples/summarization
```
This failure is not new; it also existed in the previous code.
Traceback:
```bash
examples/summarization/finetune.py:322: in main
    trainer.test(model)  # this breaks in DDP, known lightning issue. See evaluate_checkpoint to recover metrics.
../.conda/envs/nb/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py:1155: in test
    self.barrier('test_setup')
../.conda/envs/nb/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py:1260: in barrier
    torch_distrib.barrier()
../.conda/envs/nb/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py:1484: in barrier
    _check_default_pg()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    def _check_default_pg():
        """
        Helper that checks if the default ProcessGroup has been initialized, with
        assertion
        """
        assert _default_pg is not None, \
>           "Default process group is not initialized"
E           AssertionError: Default process group is not initialized
../.conda/envs/nb/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py:187: AssertionError
```
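For context on the assertion itself: `torch.distributed.barrier()` requires that the default process group has been initialized in the calling process. A minimal guard along these lines (a hypothetical sketch, not the fix applied in this PR) avoids the crash:
```python
# Hypothetical sketch: only hit the barrier when a process group exists.
import torch.distributed as dist

def safe_barrier():
    # `dist.barrier()` asserts the default process group is initialized,
    # which is exactly the assertion failing in the traceback above.
    if dist.is_available() and dist.is_initialized():
        dist.barrier()
```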
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5146/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5146",
"html_url": "https://github.com/huggingface/transformers/pull/5146",
"diff_url": "https://github.com/huggingface/transformers/pull/5146.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5146.patch",
"merged_at": 1592872811000
} |
https://api.github.com/repos/huggingface/transformers/issues/5145 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5145/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5145/comments | https://api.github.com/repos/huggingface/transformers/issues/5145/events | https://github.com/huggingface/transformers/pull/5145 | 642,179,435 | MDExOlB1bGxSZXF1ZXN0NDM3MjY0MjAw | 5,145 | Quick tour | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5145?src=pr&el=h1) Report\n> Merging [#5145](https://codecov.io/gh/huggingface/transformers/pull/5145?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f45e873910e60d89511ae0193711e71c5c710468&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5145?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5145 +/- ##\n=======================================\n Coverage 77.19% 77.19% \n=======================================\n Files 133 133 \n Lines 22233 22233 \n=======================================\n Hits 17163 17163 \n Misses 5070 5070 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5145?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5145/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.00% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5145/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.96% <0.00%> (+0.14%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5145?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5145?src=pr&el=footer). Last update [f45e873...ae14c1a](https://codecov.io/gh/huggingface/transformers/pull/5145?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"That's awesome, I think this PR is really cool! I wonder whether A short section showing a super simple training example should already be in the quicktour",
"For the quick training example, I have it in mind but for when the integration with nlp is smooth: gathering the data takes a bit too long in code right now, and we have to talk about some obscure function that prepares it, which doesn't really fit the rest. "
] | 1,592 | 1,592 | 1,592 | COLLABORATOR | null | This PR replaces the old quickstart guide by splitting it in two:
- a quick tour that is before the installation page,
- the philosophy page, after the installation.
It also renames Usage to Task summary since it's really what it is, and renames the files usage to task_summary, summary to model_summary.
The quick tour tries to go over all the APIs introduced in Transformers at a high level, pointing out tutorials (existing or upcoming) for each of them. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5145/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5145/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5145",
"html_url": "https://github.com/huggingface/transformers/pull/5145",
"diff_url": "https://github.com/huggingface/transformers/pull/5145.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5145.patch",
"merged_at": 1592856489000
} |
https://api.github.com/repos/huggingface/transformers/issues/5144 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5144/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5144/comments | https://api.github.com/repos/huggingface/transformers/issues/5144/events | https://github.com/huggingface/transformers/issues/5144 | 642,166,022 | MDU6SXNzdWU2NDIxNjYwMjI= | 5,144 | Scoring each word from the sentence using Pretrained LM | {
"login": "thak123",
"id": 3891859,
"node_id": "MDQ6VXNlcjM4OTE4NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3891859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thak123",
"html_url": "https://github.com/thak123",
"followers_url": "https://api.github.com/users/thak123/followers",
"following_url": "https://api.github.com/users/thak123/following{/other_user}",
"gists_url": "https://api.github.com/users/thak123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thak123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thak123/subscriptions",
"organizations_url": "https://api.github.com/users/thak123/orgs",
"repos_url": "https://api.github.com/users/thak123/repos",
"events_url": "https://api.github.com/users/thak123/events{/privacy}",
"received_events_url": "https://api.github.com/users/thak123/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Something like [this](https://github.com/huggingface/transformers/issues/5000#issuecomment-647560847)?",
"You mean to say that use soft-max on the the hidden states of every word to get the score ?\r\n\r\nI think that should work.",
"Great!",
"Since the code referenced tries to get the next prediction and I want to score existing sentence I modified to code to take the soft-max on the 2nd axis which is the word. Here is output. But I am unable to score the sentence or find the perplexity.\r\n\r\n```\r\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer\r\nimport torch\r\n\r\nmodel = GPT2LMHeadModel.from_pretrained(\"gpt2\")\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\n\r\ninputs = tokenizer.encode(\"This is just now\", return_tensors=\"pt\")\r\noutput = model(inputs)\r\n\r\nmodel_output = output[0]\r\nlast_token_softmax = torch.softmax(model_output, dim=-1).squeeze()\r\nlast_token_softmax.shape\r\ntop_n_values = last_token_softmax.topk(1)\r\nfor index, value in zip(top_n_values.indices, top_n_values.values):\r\n print(\"Score: \", value.tolist())\r\n print(\"This is just\" + tokenizer.decode(index.tolist()))\r\n```\r\n\r\n```\r\nScore: [0.04588045924901962]\r\nThis is just is\r\nScore: [0.1578938215970993]\r\nThis is just a\r\nScore: [0.17527997493743896]\r\nThis is just a\r\nScore: [0.13700434565544128]\r\nThis is just,\r\n\r\n```",
"Hi, welcome back 🤗!\r\n\r\nWe now have a document for perplexity especially: https://huggingface.co/transformers/perplexity.html\r\n\r\nCould you take a look and let me know if it's helpful for your use-case?"
] | 1,592 | 1,612 | 1,593 | NONE | null | # ❓ Questions & Help
## Details
I am interested in scoring each word of a sentence using a pretrained LM, i.e., detecting how likely each word is in its given context.
Is there a way to do this?
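One way to approach this, shown as a minimal sketch (GPT-2 and the example sentence are placeholders, not part of the original question): take the log-softmax over the LM logits and read off the probability of each actual token under the distribution predicted from its left context.
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer.encode("This is just now", return_tensors="pt")
with torch.no_grad():
    logits = model(input_ids)[0]  # shape: (1, seq_len, vocab_size)

log_probs = torch.log_softmax(logits, dim=-1)
# Token i is scored by the distribution predicted at position i - 1,
# so the first token gets no score (it has no left context).
for i in range(1, input_ids.shape[1]):
    token_id = int(input_ids[0, i])
    print(tokenizer.decode([token_id]), log_probs[0, i - 1, token_id].item())
```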
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5144/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5143 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5143/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5143/comments | https://api.github.com/repos/huggingface/transformers/issues/5143/events | https://github.com/huggingface/transformers/issues/5143 | 642,157,446 | MDU6SXNzdWU2NDIxNTc0NDY= | 5,143 | Passing in own embeddings for image input | {
"login": "andrewlee98",
"id": 14003549,
"node_id": "MDQ6VXNlcjE0MDAzNTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/14003549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andrewlee98",
"html_url": "https://github.com/andrewlee98",
"followers_url": "https://api.github.com/users/andrewlee98/followers",
"following_url": "https://api.github.com/users/andrewlee98/following{/other_user}",
"gists_url": "https://api.github.com/users/andrewlee98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andrewlee98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andrewlee98/subscriptions",
"organizations_url": "https://api.github.com/users/andrewlee98/orgs",
"repos_url": "https://api.github.com/users/andrewlee98/repos",
"events_url": "https://api.github.com/users/andrewlee98/events{/privacy}",
"received_events_url": "https://api.github.com/users/andrewlee98/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Check out the `inputs_embeds` parameter."
] | 1,592 | 1,592 | 1,592 | NONE | null | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
I am trying to use your pretrained BERT/RoBERTa models to write multimodal classifiers such as the bitransformer (see diagram at top of page 2):
https://arxiv.org/pdf/1909.02950.pdf
and Visual BERT:
https://arxiv.org/pdf/1908.03557.pdf
It would be nice to be able to change the input layer of the transformer models to handle custom input embeddings rather than just input_ids. If there is already a way to do this, it would be great to get some instructions on how. I have tried printing the layers of the pretrained RoBERTa model, but it only gives:
`<transformers.modeling_tf_roberta.TFRobertaMainLayer object at 0x7f79fc04dcf8>`
Edit: I actually see that there is an MMBT model mentioned in the documentation, but I cannot find the pretrained model or any examples with it. Please help. Thank you.
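For what it's worth, the PyTorch model classes accept precomputed embeddings through an `inputs_embeds` argument, which may already cover this; the TF classes expose a similar argument, though I have not verified the exact calling convention there. A minimal sketch (the random tensor stands in for projected image features):
```python
import torch
from transformers import RobertaModel

model = RobertaModel.from_pretrained("roberta-base")

# Placeholder for projected image features, shape (batch, seq_len, hidden_size);
# hidden_size is 768 for roberta-base.
image_embeds = torch.randn(1, 10, 768)

outputs = model(inputs_embeds=image_embeds)
sequence_output = outputs[0]  # shape: (1, 10, 768)
```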
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
I would like to be able to use your pretrained models to handle image embeddings as well. I am working on a project with this multimodal dataset:
https://gombru.github.io/2019/10/09/MMHS/
Any help would be greatly appreciated. Thanks!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5143/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5142 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5142/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5142/comments | https://api.github.com/repos/huggingface/transformers/issues/5142/events | https://github.com/huggingface/transformers/issues/5142 | 642,116,017 | MDU6SXNzdWU2NDIxMTYwMTc= | 5,142 | T5 special tokens not mapped to unique indices in vocabulary | {
"login": "sarahwie",
"id": 8027676,
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sarahwie",
"html_url": "https://github.com/sarahwie",
"followers_url": "https://api.github.com/users/sarahwie/followers",
"following_url": "https://api.github.com/users/sarahwie/following{/other_user}",
"gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions",
"organizations_url": "https://api.github.com/users/sarahwie/orgs",
"repos_url": "https://api.github.com/users/sarahwie/repos",
"events_url": "https://api.github.com/users/sarahwie/events{/privacy}",
"received_events_url": "https://api.github.com/users/sarahwie/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @sarahwie, \r\n\r\nThanks for your issue. I can reproduce the problem and see the reason for it. Currently, we rely on Google's sentencepiece tokenizer: https://github.com/google/sentencepiece for encoding and decoding in T5. What happens is that the `tokenizer.decode(tokens)` depends on the function \r\n\r\n`sp_model.decode_pieces(tokens)` with `sp_model` being an instance of `sentencepiece.SentencePieceProcessor()`. To correctly convert a string of tokens: `[\"<unk>\", \"</s>\"]` to **one** string we thus rely on `sp_model.decode_pieces`, so it is a bit out of our control to do the correct decoding here. \r\n\r\nTo quickly see the problem @thomwolf @mfuntowicz @n1t0 one can run the following code\r\n\r\n```python \r\nfrom transformers import T5Tokenizer\r\ntokenizer = T5Tokenizer.from_pretrained('t5-base')\r\ntokenizer.convert_tokens_to_string([\"<unk>\", \"</s>\"]) # gives ' ⁇ '\r\n```\r\n\r\nWhat do you think how we should handle this problem at the moment @thomwolf @n1t0 @mfuntowicz ?",
"For anyone looking for a quick, temporary fix to the unending-generation problem: override the EOS token with a custom one (note this fix does not work for `unk_token` or `pad_token`; for some reason they can't be re-mapped)\r\n\r\n```\r\ntokenizer = T5Tokenizer.from_pretrained('t5-base')\r\ntokenizer.add_special_tokens({'eos_token':'[EOS]'})\r\n\r\nmodel.resize_token_embeddings(len(tokenizer))\r\n\r\n>>> tokenizer.eos_token_id\r\n32100\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Is there any update on this? Does the bug still exist in version 3.4?",
"Hey guys, I would recommend using our new `T5TokenizerFast` which solves this problem as can be seen below:\r\n\r\n```python\r\n>>> from transformers import T5TokenizerFast\r\n>>> tokenizer = T5TokenizerFast.from_pretrained('t5-base')\r\n>>> tokenizer.pad_token\r\n'<pad>'\r\n>>> tokenizer.pad_token_id\r\n0\r\n>>> tokenizer.eos_token\r\n'</s>'\r\n>>> tokenizer.eos_token_id\r\n1\r\n>>> tokenizer.unk_token\r\n'<unk>'\r\n>>> tokenizer.unk_token_id\r\n2\r\n>>> tokenizer.decode([0])\r\n'<pad>'\r\n>>> tokenizer.decode([1])\r\n'</s>'\r\n>>> tokenizer.decode([2])\r\n'<unk>'\r\n```\r\n",
"I also made a PR to fix the slow T5Tokenizer. It probably won't make it into v3.5, but into the next version.",
"@patrickvonplaten \r\n\r\nTwo quick questions:\r\n- Is there any downside to using fasttokenizer?\r\n- What's the best way to patch this fix to slowtokenizer into an existing transformers install?\r\n\r\nBigger question:\r\nI ran into this no-EOS generation problem when using finetune.py, but when I set up my own T5 trainer, I somehow managed to sidestep the issue. Here are the details. Any idea why I wasn't affected once I set it up on my own?\r\n\r\nEach item of my data set (source and target) is configured as\r\n``` \r\n# max_src_len is length of longest sentence in input set\r\ntokenized_inputs = self.tokenizer.batch_encode_plus(\r\n [src], max_length=max_src_len, padding=\"max_length\", return_tensors=\"pt\")\r\n```\r\nwhere each `src` is a string of words, with no EOS token appended (since batch_encode will append it).\r\n\r\nI then train with this forward function: \r\n```\r\ndef forward(model, device, batch):\r\n src_ids = batch[\"source_ids\"].to(device, dtype=torch.long)\r\n src_mask = batch[\"source_mask\"].to(device, dtype=torch.long)\r\n tgt_ids = batch[\"target_ids\"].to(device, dtype=torch.long)\r\n\r\n # padded ids (pad=0) are set to -100, which means ignore for loss calculation\r\n tgt_ids[tgt_ids[: ,:] == 0 ] = -100\r\n label_ids = tgt_ids.to(device)\r\n out_dict = model(src_ids, attention_mask=src_mask, labels=label_ids, return_dict=True)\r\n loss, logits = out_dict['loss'], out_dict['logits']\r\n return loss, logits\r\n\r\n# then do appropriate zero_grad(), loss.backward, etc\r\n```\r\n\r\nModels I train in this way *do* learn to generate a final token with ID=1. In particular I wrote the following verification function: \r\n```\r\ndef masked_token_match(tgt_ids: torch.tensor, outputs: torch.tensor,\r\n return_indices=False) -> Union[Tuple[int,int], Tuple[int, int, torch.tensor]]:\r\n # left-shift\r\n output_shifted = outputs[:,1:]\r\n\r\n # create output_padded, which truncates output at tgt_ids size, filling with pad tokens\r\n if output_shifted.shape <= tgt_ids.shape:\r\n output_padded = torch.zeros_like(tgt_ids)\r\n output_padded[:output_shifted.shape[0], :output_shifted.shape[1]] = output_shifted\r\n else: # output_shifted is bigger\r\n # so copy only up to the target IDs length\r\n output_padded = output_shifted[:,:tgt_ids.shape[1]] # copy all rows (bs) and up to tgt_ids length\r\n \r\n # compare where tokens are > 1 (i.e. not pad or EOS)\r\n match_indices = output_padded == tgt_ids # either they match\r\n matches_no_eos = torch.logical_or(match_indices, tgt_ids < 2) # or we ignore them (pad and eos)\r\n matches_with_eos = torch.logical_or(match_indices, tgt_ids < 1) # or we ignore them (just pad)\r\n total_matches_no_eos = torch.sum(torch.all(matches_no_eos, axis=1))\r\n total_matches_with_eos = torch.sum(torch.all(matches_with_eos, axis=1))\r\n\r\n return total_matche_no_eos, total_matches_with_eos\r\n```\r\n\r\nFor a copy task (I was debugging the original finetune behavior), where I ask T5 to \r\n1) copy src => src (i.e. just copy word for word)\r\n2) copy src => (first word of source); (i.e. just copy the first word and then generate EOS)\r\n\r\nThe model learns to complete both tasks and append an EOS token after only 15-20k training examples.\r\n\r\nSo why did this setup work? Maybe we think that the model could still be generating additional non-zero (i.e. non-pad) tokens after EOS==1 in these sequences. 
But I also seem to have verified that isn't occurring because I use this generation code during eval:\r\n```\r\ngenerated_ids = model.generate(src_ids, attention_mask=src_mask) # (batch x seq length)\r\noutputs_decoded = tokenizer.batch_decode(generated_ids, skip_special_tokens=False)\r\n```\r\nand the outputs_decoded do correctly stop where they are supposed to stop",
"No real downside using the fast tokenizers if you don't have to look into the code. ",
"You can take a look into the PR to see what one would have to change to make it work with an existing code base",
"@sshleifer , @patrickvonplaten \r\n\r\nI still don't understand why my tweaked version worked and did appropriately truncate generations (see above details). sshleifer, maybe you can see easily what finetune.py is doing differently?",
"@jsrozner I don't know either, but interested to find out.\r\n1) what is `max_src_len` \r\n2) when were you using finetune.py? Do you remember your command? T5Tokenizer started adding `</s>` to inputs a few months ago, so maybe you were before that? Can you reproduce the breakage with current finetune.py/current `transformers`?\r\nAntoher random idea: finetune.py automatically uses `config.task_specific_params['summarization']` for generation, which might be bad for your use case.",
"cc @danyaljj \r\nI'm just going to consolidate discussion from (#7796) here. (Also relevant is [HF forum](https://discuss.huggingface.co/t/issue-with-finetuning-a-seq-to-seq-model/1680/28)) \r\n\r\n`max_src_len` above is the maximum length of any input sequence, counted in...wait for it...number of characters. Whoops. That was dumb. I intended to go through and find the maximum sequence length in *tokens*. I'll fix that, but I don't think it affects other things: it turns out that `max_src, max_tgt_len = (250, 250)` for the inputs I was using. But that just means we had a lot of padding.\r\n\r\nI was using finetune.py just last month, so I don't think it was the EOS token.\r\n\r\nThe \"gibberish\" generation still occurs if I just use finetune_t5.sh as written. If I do either of the following, the outputs are correct:\r\n1) Comment out `use_task_specific_params(self.model, \"summarization\")` in finetune.py\r\n2) Add min_len to the `generate` call:\r\n```\r\n generated_ids = self.model.generate(\r\n batch[\"input_ids\"],\r\n attention_mask=batch[\"attention_mask\"],\r\n use_cache=True,\r\n decoder_start_token_id=self.decoder_start_token_id,\r\n num_beams=self.eval_beams,\r\n max_length=self.eval_max_length,\r\n min_length=0\r\n )\r\n```\r\n\r\nThis is because config.json for t5-small and t5-base has the following (@danyaljj this is also the answer to our question about where prefix is getting picked up in the HF forum)\r\n```\r\n \"task_specific_params\": {\r\n \"summarization\": {\r\n \"early_stopping\": true,\r\n \"length_penalty\": 2.0,\r\n \"max_length\": 200,\r\n \"min_length\": 30,\r\n \"no_repeat_ngram_size\": 3,\r\n \"num_beams\": 4,\r\n \"prefix\": \"summarize: \"\r\n },\r\n```\r\n\r\nBut it looks like the only param that really matters was the min_length. Beam size, max_length, prefix, etc all weren't causing the problem. I verified on both the (sent) => (sent) copy and the (sent) => (first word of sent) tasks.\r\n\r\n\r\nSo at least for my use case it seems like the tokenizer decode bug was not causing a problem? It seems like even though *we, as ~users* couldn't decode tokens correctly, the model still knew that 1==EOS and that after an EOS it should print PAD. The problem was that we were forcing it to generate at least 30 tokens, hence all the gibberish that I was seeing.\r\n\r\n@sshleifer, does this make sense with your understanding of the finetune script? i.e., that failing to decode EOS shouldn't matter?\r\n\r\n@danyaljj, given that you wanted relatively short outputs of the answers to questions, this seems like it might fix the issue for you? Give it a try and see what happens?",
"Thanks, @jsrozner! 🙏 \r\n\r\nIn theory, this explains my issue as well since my outputs were quite short. I will repeat it and report the results here! ",
"yes that makes sense and thanks for consolidating.\nwe should link to this discussion in the t5 docs ! ",
"What files should I change to update docstrings? \r\n\r\nAlso can you take a look at a few more questions / related issues so that we can clean things up? These are roughly the same questions I had in a [post in the HF thread](https://discuss.huggingface.co/t/issue-with-finetuning-a-seq-to-seq-model/1680/24?u=jsrozner)\r\n\r\n## `decoder_input_ids` vs `labels`\r\n- When would we want to pass both? \r\n- Here's an [example](https://github.com/abhimishra91/transformers-tutorials/blob/0cf2be0c81221877966d017d6f591e011174979e/transformers_summarization_wandb.ipynb) (that has been linked to in HF forums) that seems to do it wrong. In particular, passes both `decoder_input_ids` and `lm_labels` but does not right_shift the `decoder_input_ids`. This seems like it does not give the effect that we want since we never right_shift. He probably wants to pass only `labels` and omit `decoder_input_ids`?\r\n- Finally, Documentation for [T5forConditionalGeneration](https://huggingface.co/transformers/model_doc/t5.html#t5forconditionalgeneration) says that if decoder_input_ids are not provided then input_ids will be used. But actually labels will be used? \r\n\r\n## in Finetune.py\r\n- `_step` is manually right_shifting rather than letting `model` do it for us by just passing `label`. Why?\r\n- `_step` calculates the loss manually, but I want to confirm that if we had also passed `labels` into the `self(..)` call that we would have gotten the same loss output when `label_smoothing == 0`",
"@jsrozner \r\n\r\n+ Docstrings are in modeling_t5.py and https://github.com/huggingface/transformers/blob/master/docs/source/model_doc/t5.rst\r\n+ I can't think of a good reason to pass both `decoder_input_ids` and `labels`\r\n+ correct that example is wrong.\r\n+ Documentation for [T5forConditionalGeneration](https://huggingface.co/transformers/model_doc/t5.html#t5forconditionalgeneration) is wrong, as you suggest.\r\n#### in `finetune.py`\r\n+ we pass `decoder_input_ids` to avoid allowing the model to calculate the loss. The reasoning is that, in some environments (namely TPU), replacing `pad_token_id` with -100 is expensive and we do not want the loss to consider `pad_token_id`. \r\n+ You would not get the same loss if you passed `labels` to the model, because it would not ignore `pad_token_id`\r\n",
"anyone know how to add all standard special tokens like bos to t5? https://stackoverflow.com/questions/73322462/how-to-add-all-standard-special-tokens-to-my-hugging-face-tokenizer-and-model @jsrozner ?"
] | 1,592 | 1,660 | 1,605 | NONE | null | The docs recommend adding the special eos_token `</s>` to the end of each string when encoding/decoding with `T5Tokenizer`. However, this (and the other special tokens, e.g. `unk_token`, `pad_token`) aren't assigned unique ids in the lookup vocabulary (they are mapped to `{0,1,2}`, which are indices for other common words in the vocab). In practice, I find my model fails to properly produce the `eos_token` since it is associated with blank spaces, so the model produces run-ons during generation.
## To reproduce
```
>>> from transformers import T5Tokenizer
>>> tokenizer = T5Tokenizer.from_pretrained('t5-base')
>>> tokenizer.pad_token
'<pad>'
>>> tokenizer.pad_token_id
0
>>> tokenizer.eos_token
'</s>'
>>> tokenizer.eos_token_id
1
>>> tokenizer.unk_token
'<unk>'
>>> tokenizer.unk_token_id
2
```
```
>>> tokenizer.decode([0])
''
>>> tokenizer.decode([1])
''
>>> tokenizer.decode([2])
' ⁇ '
```
## Expected behavior
```
>>> tokenizer.decode([0])
'<pad>'
>>> tokenizer.decode([1])
'</s>'
>>> tokenizer.decode([2])
'<unk>'
```
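A temporary workaround, sketched here from the suggestion in this thread's comments (`model` is assumed to be the T5 model being fine-tuned), is to remap EOS to a fresh token id so it round-trips through `decode`:
```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
tokenizer.add_special_tokens({"eos_token": "[EOS]"})

# `model` is your T5 model; grow its embedding matrix for the new token.
model.resize_token_embeddings(len(tokenizer))

print(tokenizer.eos_token_id)  # 32100, a unique id
```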
## Environment info
- `transformers` version: 2.9.1
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5142/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5142/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5141 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5141/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5141/comments | https://api.github.com/repos/huggingface/transformers/issues/5141/events | https://github.com/huggingface/transformers/pull/5141 | 642,109,661 | MDExOlB1bGxSZXF1ZXN0NDM3MjA4MTYw | 5,141 | [bart-mnli] Fix class flipping bug | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5141?src=pr&el=h1) Report\n> Merging [#5141](https://codecov.io/gh/huggingface/transformers/pull/5141?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e33929ef1e98a71ba7fc411f39cbb5451396ef02&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5141?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5141 +/- ##\n=======================================\n Coverage 77.19% 77.20% \n=======================================\n Files 133 133 \n Lines 22232 22233 +1 \n=======================================\n+ Hits 17162 17164 +2 \n+ Misses 5070 5069 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5141?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5141/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `86.56% <100.00%> (+0.20%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5141/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.00% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5141/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.28% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5141/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.96% <0.00%> (+0.14%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5141?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5141?src=pr&el=footer). Last update [e33929e...70c5736](https://codecov.io/gh/huggingface/transformers/pull/5141?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | Previously, `eval_mnli/acc` was 0.35; it is now 0.90. Same issue as other models ported from fairseq.
New results of Victor's command.
```bash
wandb: eval_mnli-mm/acc 0.9001220504475184
wandb: _runtime 274.7172155380249
wandb: eval_loss 0.3496225342093929
wandb: eval_mnli/acc 0.9011716760061131
wandb: _timestamp 1592585765.2334569
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5141/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5141/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5141",
"html_url": "https://github.com/huggingface/transformers/pull/5141",
"diff_url": "https://github.com/huggingface/transformers/pull/5141.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5141.patch",
"merged_at": 1592588005000
} |
https://api.github.com/repos/huggingface/transformers/issues/5140 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5140/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5140/comments | https://api.github.com/repos/huggingface/transformers/issues/5140/events | https://github.com/huggingface/transformers/pull/5140 | 642,096,797 | MDExOlB1bGxSZXF1ZXN0NDM3MTk3OTMy | 5,140 | [pl_examples] deprecate BaseTransformer.is_logger | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5140?src=pr&el=h1) Report\n> Merging [#5140](https://codecov.io/gh/huggingface/transformers/pull/5140?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/84be482f6698fac822a5113735f2242c6d3abc76&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5140?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5140 +/- ##\n==========================================\n- Coverage 77.19% 77.19% -0.01% \n==========================================\n Files 133 133 \n Lines 22232 22232 \n==========================================\n- Hits 17162 17161 -1 \n- Misses 5070 5071 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5140?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5140/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.81% <0.00%> (-0.15%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5140?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5140?src=pr&el=footer). Last update [84be482...2aad9d1](https://codecov.io/gh/huggingface/transformers/pull/5140?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | - proc_rank is deprecated in new PL (see the sketch after this list)
- updates pl to 0.8.1
@williamFalcon suggestions more than welcome!
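Regarding the first bullet above: PL 0.8.x renamed `trainer.proc_rank` to `trainer.global_rank`, so the rank-zero check would become something like the following (a hedged sketch, not necessarily the exact diff in this PR):
```python
def is_logger(trainer) -> bool:
    # Rank-zero check; `proc_rank` is the deprecated name for `global_rank`.
    return trainer.global_rank <= 0
```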
- The test that is failing is `test_bdc_multigpu` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5140/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5140",
"html_url": "https://github.com/huggingface/transformers/pull/5140",
"diff_url": "https://github.com/huggingface/transformers/pull/5140.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5140.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5139 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5139/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5139/comments | https://api.github.com/repos/huggingface/transformers/issues/5139/events | https://github.com/huggingface/transformers/issues/5139 | 642,012,868 | MDU6SXNzdWU2NDIwMTI4Njg= | 5,139 | `facebook/bart-large-mnli` - random accuracy on MNLI | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,592 | 1,592 | 1,592 | MEMBER | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BART
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [X] an official GLUE/SQuAD task: MNLI
* [ ] my own task or dataset: (give details below)
## To reproduce
There might be something wrong with the `facebook/bart-large-mnli` checkpoint (https://huggingface.co/facebook/bart-large-mnli). I can only get a random accuracy (0.35) on MNLI:
`python examples/text-classification/run_glue.py --data_dir <GLUE DATA DIR/MNLI> --task MNLI --model_name_or_path facebook/bart-large-mnli --output_dir ./dbg/ --max_seq_length 128 --overwrite_cache --do_eval`
I checked and the checkpoint has a classification head (`'classification_head.dense.bias', 'classification_head.out_proj.weight', 'classification_head.out_proj.bias'`) i.e. the classification head is not initialized randomly.
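A quick diagnostic sketch (my addition, not part of the original report): compare the checkpoint's label ordering with the one the GLUE MNLI processor expects, since checkpoints ported from fairseq have had flipped label maps before.
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("facebook/bart-large-mnli")
print(config.id2label)
# If this ordering disagrees with the label list used by the MNLI processor,
# predictions land on the wrong class ids and accuracy looks random.
```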
cc @sshleifer as discussed | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5139/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5138 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5138/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5138/comments | https://api.github.com/repos/huggingface/transformers/issues/5138/events | https://github.com/huggingface/transformers/pull/5138 | 641,962,013 | MDExOlB1bGxSZXF1ZXN0NDM3MDk2NzQ2 | 5,138 | Fix in Reformer Config documentation | {
"login": "erickrf",
"id": 294483,
"node_id": "MDQ6VXNlcjI5NDQ4Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/294483?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erickrf",
"html_url": "https://github.com/erickrf",
"followers_url": "https://api.github.com/users/erickrf/followers",
"following_url": "https://api.github.com/users/erickrf/following{/other_user}",
"gists_url": "https://api.github.com/users/erickrf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erickrf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erickrf/subscriptions",
"organizations_url": "https://api.github.com/users/erickrf/orgs",
"repos_url": "https://api.github.com/users/erickrf/repos",
"events_url": "https://api.github.com/users/erickrf/events{/privacy}",
"received_events_url": "https://api.github.com/users/erickrf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Greath thanks for the fix :-) "
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | Fixed a variable name in the Reformer Config docstring (`lsh_chunk_length` -> `lsh_attn_chunk_length`) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5138/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5138",
"html_url": "https://github.com/huggingface/transformers/pull/5138",
"diff_url": "https://github.com/huggingface/transformers/pull/5138.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5138.patch",
"merged_at": 1592574092000
} |
https://api.github.com/repos/huggingface/transformers/issues/5137 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5137/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5137/comments | https://api.github.com/repos/huggingface/transformers/issues/5137/events | https://github.com/huggingface/transformers/issues/5137 | 641,943,611 | MDU6SXNzdWU2NDE5NDM2MTE= | 5,137 | xlm-mlm-17-1280: after run model to get embeddings shape 20000 | {
"login": "vvssttkk",
"id": 8581044,
"node_id": "MDQ6VXNlcjg1ODEwNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8581044?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vvssttkk",
"html_url": "https://github.com/vvssttkk",
"followers_url": "https://api.github.com/users/vvssttkk/followers",
"following_url": "https://api.github.com/users/vvssttkk/following{/other_user}",
"gists_url": "https://api.github.com/users/vvssttkk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vvssttkk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vvssttkk/subscriptions",
"organizations_url": "https://api.github.com/users/vvssttkk/orgs",
"repos_url": "https://api.github.com/users/vvssttkk/repos",
"events_url": "https://api.github.com/users/vvssttkk/events{/privacy}",
"received_events_url": "https://api.github.com/users/vvssttkk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"hmm, i found `(proj): Linear(in_features=1280, out_features=200000, bias=True))`\r\nwhy `out_features = 200000`?",
"That's the vocab size, which is of size 200 000. You can see it in the [config.json](https://s3.amazonaws.com/models.huggingface.co/bert/xlm-mlm-17-1280-config.json).",
"it's true, my bad\r\nand how i can get embeddings 1280 for input text? ",
"@LysandreJik maybe u can help, how i can get embeddings to sentence after run `outputs = model_xlm_mlm(input_ids, langs=langs_ru)`\r\nat the older version 2.3.0 the `outputs` had the last size <emb_dim> but no it's <vocab_size>",
"That's probably because you were using the `AutoModel` factory instead of `AutoModelWithLMHead`. The former returns the transformer embeddings of dimension `hidden_size` (1280 in your case), while the latter returns the projected embeddings on the vocabulary, of dimension `vocab_size` (200 000 in your case). \r\n\r\nChange the two lines:\r\n\r\n```py\r\nfrom transformers import AutoTokenizer, AutoModelWithLMHead\r\n\r\nmodel_xlm_mlm = AutoModelWithLMHead.from_pretrained(xlm_mlm)\r\n```\r\n\r\nto\r\n\r\n```py\r\nfrom transformers import AutoTokenizer, AutoModel\r\n\r\nmodel_xlm_mlm = AutoModel.from_pretrained(xlm_mlm)\r\n```"
] | 1,592 | 1,594 | 1,593 | NONE | null | I want to get embeddings for Russian (:ru:) text with `xlm-mlm-17-1280`,
but in the end I get outputs whose last dimension is the vocabulary size (200,000) instead of the embedding size (1,280).
Example code (using the latest versions of transformers & torch on Ubuntu):
```
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
xlm_mlm = 'xlm-mlm-17-1280'
tokenizer_xlm_mlm = AutoTokenizer.from_pretrained(xlm_mlm)
model_xlm_mlm = AutoModelWithLMHead.from_pretrained(xlm_mlm)
input_ids = torch.tensor([tokenizer_xlm_mlm.encode(my_input_text)]) # batch size of 1; my_input_text is the Russian input string
print(f'{input_ids.shape=}')
# input_ids.shape=torch.Size([1, 373])
lang_id_ru = tokenizer_xlm_mlm.lang2id['ru']
langs_ru = torch.tensor([lang_id_ru] * input_ids.shape[1]) # torch.tensor([0, 0, 0, ..., 0])
print(f'{langs_ru.shape=}')
# langs_ru.shape=torch.Size([373])
langs_ru = langs_ru.view(1, -1) # is now of shape [1, sequence_length]
print(f'{langs_ru.shape=}')
# langs_ru.shape=torch.Size([1, 373])
outputs = model_xlm_mlm(input_ids, langs=langs_ru)
outputs[0].shape
# torch.Size([1, 373, 200000])
```
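As the comments on this issue point out, the LM head projects onto the vocabulary; to get the 1280-dimensional hidden states, load the base model instead. A minimal sketch reusing `xlm_mlm`, `input_ids`, and `langs_ru` from the snippet above:
```python
from transformers import AutoModel

model = AutoModel.from_pretrained(xlm_mlm)  # base model, no LM head
outputs = model(input_ids, langs=langs_ru)
print(outputs[0].shape)
# torch.Size([1, 373, 1280]) -- hidden states, not vocabulary logits
```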
So is it a bug, or my mistake? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5137/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5136 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5136/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5136/comments | https://api.github.com/repos/huggingface/transformers/issues/5136/events | https://github.com/huggingface/transformers/issues/5136 | 641,865,374 | MDU6SXNzdWU2NDE4NjUzNzQ= | 5,136 | Transformer-XL tokenizer cannot properly tokenize brackets | {
"login": "RafaelWO",
"id": 38643099,
"node_id": "MDQ6VXNlcjM4NjQzMDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/38643099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RafaelWO",
"html_url": "https://github.com/RafaelWO",
"followers_url": "https://api.github.com/users/RafaelWO/followers",
"following_url": "https://api.github.com/users/RafaelWO/following{/other_user}",
"gists_url": "https://api.github.com/users/RafaelWO/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RafaelWO/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RafaelWO/subscriptions",
"organizations_url": "https://api.github.com/users/RafaelWO/orgs",
"repos_url": "https://api.github.com/users/RafaelWO/repos",
"events_url": "https://api.github.com/users/RafaelWO/events{/privacy}",
"received_events_url": "https://api.github.com/users/RafaelWO/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"**UPDATE**\r\nI've done some further research and discovered that the tokenization of strings containing either\r\n\r\n1. any opening bracket, e.g. `( [ {`\r\n2. words with dashes, e.g. `10-year-old`\r\n3. other symbols with no space afterwards, e.g. (`km/h` or `$3`)\r\n4. numbers, either floating point, e.g. `3.23`, or large comma separated, e.g. `5,000`\r\n\r\nresult in tokenization errors. See the following example:\r\nExample string: \r\n```\r\n\"Hello (bracket) and side-scrolled [and] Henry's $5,000 km/h with 3.34 m. What's up!?\"\r\n```\r\nEncoded and decoded again with `TransfoXLTokenizer`: \r\n```\r\nHello <unk> ) and side <unk> <unk> ] <unk> <unk> <unk> km <unk> with 3 <unk> m . <unk> up ! ?\r\n```\r\n\r\nIn the [Transformer-XL paper](http://arxiv.org/abs/1901.02860) they used the WikiText-103 dataset. The authors of the [WikiText-103 paper](http://arxiv.org/abs/1609.07843) stated that they used the *Moses tokenizer* for tokenization of the wikipedia articles which can deal with the errors stated above (except for 4. but I implemented a custom solution for it - the authors did this too for WikiText-103). This tokenizer replaces dashes with `@-@`, e.g. `10-year-old` gets `10 @-@ year @-@ old`, and dots or commas in number the same way, e.g. `3.5` gets `3 @.@ 5` or `5,000` gets `5 @,@ 000`.\r\n\r\nSince the pretrained Transformer-XL model is trained with the tokenization above it would make sense to use the same rules for the `TransfoXLTokenizer`, in my opinion. I have found a python package for the *Moses tokenizer* (see [link](https://github.com/alvations/sacremoses)) but I would understand if you do not prefer using it here.\r\n\r\nOtherwise some logic of the `BertTokenizer` could be used to because it does perfectly fine with the string above: \r\n```\r\n[CLS] hello ( bracket ) and side - scrolled [ and ] henry ' s $ 5 , 000 km / h with 3 . 34 m . what ' s up ! ? [SEP]\r\n```\r\nThen the only thing to add would be the replacements with the `@` character, from my point of view.\r\n\r\nWhat do you think?",
"Hi, yes we already have a [dependency on sacremoses](https://github.com/huggingface/transformers/blob/master/setup.py#L128) for XLM so you can use it.\r\n\r\nDo you want to try to propose a PR fixing this issue?",
"Ah ok, I didn't know that.\r\n\r\nSure, but could you please give me a hint where it's best to implement this piece of code? I lost track a bit since the tokenization refactoring and I'm not sure what method I would have to overwrite in `TransfoXLTokenizer`.",
"Yes of course, so the new API didn't touch any model-specific behavior, it was all about the user-facing up-stream methods.\r\n\r\nIn your case, I think you'll probably want to update the `_tokenize()` method of Transfo-XL tokenizer here: https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_transfo_xl.py#L339-L356\r\n\r\nThis is the method in charge of splitting words in token strings.\r\n\r\nYou can have a look at the XLM tokenizer if you want to see how people have been using sacremoses:\r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_xlm.py",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"unstale",
"Thanks :)"
] | 1,592 | 1,598 | 1,598 | CONTRIBUTOR | null | # 🐛 Bug
## Information
The `TransfoXLTokenizer` is not able to tokenize words with surrounding brackets correctly. I compared it with the `BertTokenizer` from `bert-base-uncased`, which gives the expected result. The example text is: `"Hello (bracket)"`
Model I am using: **Transformer-XL**
Language I am using the model on: **English**
The problem arises when using:
* [x] my own modified scripts
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import BertTokenizer, TransfoXLTokenizer
bert = BertTokenizer.from_pretrained('bert-base-uncased')
transfoxl = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
def test_bracket(tokenizer):
enc = tokenizer.encode("Hello (bracket)")
dec = tokenizer.decode(enc)
print(f"ORG: Hello (bracket)\nENC: {enc}\nDEC: {dec}")
```
Results:
`test_bracket(bert)` gives the following output:
```
ORG: Hello (bracket)
ENC: [101, 7592, 1006, 21605, 1007, 102]
DEC: [CLS] hello ( bracket ) [SEP]
```
`test_bracket(transfoxl)` gives the following output:
```
ORG: Hello (bracket)
ENC: [14049, 24]
DEC: Hello <unk>
```
If the parameter `add_space_before_punct_symbol=True` is passed, then the result is:
```
ORG: Hello (bracket)
ENC: [14049, 24, 21]
DEC: Hello <unk> )
```
## Expected behavior
The `TransfoXLTokenizer` should split off punctuation symbols, e.g. `(`, as separate tokens and thus give the same result as the `BertTokenizer` (except for the special tokens, of course): `hello ( bracket )`
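For reference, a minimal sketch of the Moses-style pre-tokenization discussed in the comments, using the `sacremoses` package (already a `transformers` dependency for XLM); the flags shown are from `sacremoses` and the printed output is indicative:
```python
# Sketch only: Moses-style pre-tokenization via sacremoses (pip install sacremoses).
# aggressive_dash_splits produces the WikiText-style "@-@" dash tokens mentioned
# in the comments; escape=False keeps brackets literal instead of HTML-escaped.
from sacremoses import MosesTokenizer

mt = MosesTokenizer(lang="en")
text = "Hello (bracket) and side-scrolled"
print(mt.tokenize(text, aggressive_dash_splits=True, return_str=True, escape=False))
# -> Hello ( bracket ) and side @-@ scrolled
```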
## Environment info
- `transformers` version: 2.11.0
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.6.10
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): 2.1.0 (False)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5136/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5135 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5135/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5135/comments | https://api.github.com/repos/huggingface/transformers/issues/5135/events | https://github.com/huggingface/transformers/issues/5135 | 641,800,758 | MDU6SXNzdWU2NDE4MDA3NTg= | 5,135 | src/transformers/trainer.py relies on path to infer global training steps, skips training for glue example | {
"login": "alexeifigueroa",
"id": 7851054,
"node_id": "MDQ6VXNlcjc4NTEwNTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7851054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexeifigueroa",
"html_url": "https://github.com/alexeifigueroa",
"followers_url": "https://api.github.com/users/alexeifigueroa/followers",
"following_url": "https://api.github.com/users/alexeifigueroa/following{/other_user}",
"gists_url": "https://api.github.com/users/alexeifigueroa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexeifigueroa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexeifigueroa/subscriptions",
"organizations_url": "https://api.github.com/users/alexeifigueroa/orgs",
"repos_url": "https://api.github.com/users/alexeifigueroa/repos",
"events_url": "https://api.github.com/users/alexeifigueroa/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexeifigueroa/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I have the same problem. The script shouldn't run differently depending on what naming scheme you decide to use for a model.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,604 | 1,604 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): bert-base-uncased that has been retrained and stored somewhere else, with a timestamp in the path
Language I am using the model on (English, Chinese ...): EN
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [x] an official GLUE/SQuAD task: (give the name) MRPC
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Store a pretrained model under a path with a different name, say `/funny/path/2020-06-19_120000/retrained`, with the tokenizer files inside.
2. Run the GLUE example `run_glue` on that path with `--do_train --do_eval`.
3. Training will be skipped because of [this](https://github.com/huggingface/transformers/blob/84be482f6698fac822a5113735f2242c6d3abc76/src/transformers/trainer.py#L431) line in the trainer, which bumps the global step count by parsing the trailing digits of the path, here the date (see the sketch below).
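A minimal sketch of (roughly) what the linked line does, run against the path above:
```python
# Roughly what the linked trainer.py line does to resume the step counter
# (paraphrased from the source); note that int() accepts "_" as a digit
# separator, so the timestamp parses to a huge bogus step count.
model_path = "/funny/path/2020-06-19_120000/retrained"
global_step = int(model_path.split("-")[-1].split("/")[0])  # int("19_120000")
print(global_step)  # 19120000 -> far beyond max_steps, so training is skipped
```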
## Expected behavior
I would not expect the trainer to rely on an arbitrary internal checkpoint-naming convention, nor to parse arbitrary paths in a way that can yield a spurious global step count.
## Environment info
- `transformers` version: 2.11.0
- Platform: Linux-5.4.0-29-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.5
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5135/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5134 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5134/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5134/comments | https://api.github.com/repos/huggingface/transformers/issues/5134/events | https://github.com/huggingface/transformers/issues/5134 | 641,785,023 | MDU6SXNzdWU2NDE3ODUwMjM= | 5,134 | What does adjust_logits_during_generation (formerly prepare_logits_for_generation) do? | {
"login": "caozhen-alex",
"id": 26409810,
"node_id": "MDQ6VXNlcjI2NDA5ODEw",
"avatar_url": "https://avatars.githubusercontent.com/u/26409810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/caozhen-alex",
"html_url": "https://github.com/caozhen-alex",
"followers_url": "https://api.github.com/users/caozhen-alex/followers",
"following_url": "https://api.github.com/users/caozhen-alex/following{/other_user}",
"gists_url": "https://api.github.com/users/caozhen-alex/gists{/gist_id}",
"starred_url": "https://api.github.com/users/caozhen-alex/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/caozhen-alex/subscriptions",
"organizations_url": "https://api.github.com/users/caozhen-alex/orgs",
"repos_url": "https://api.github.com/users/caozhen-alex/repos",
"events_url": "https://api.github.com/users/caozhen-alex/events{/privacy}",
"received_events_url": "https://api.github.com/users/caozhen-alex/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Pro Sam @sshleifer, Can you help here. Many thx.",
"I think it helps to force the model generate a token such as `bos_token` or `eos_token` by setting the probability of generation of all other tokens to 0 ",
"It is only used by `MarianMTModel` and `BartForConditionalGeneration`.\r\nFor Bart, what @mariamabarham said is exactly correct\r\n> it helps to force the model generate a token such as bos_token or eos_token by setting the probability of generation of all other tokens to 0.\r\n\r\nFor Marian, it does that and one extra job, it prevents the model from ever predicting `pad_token_id`. In Marian, it's important that that logic happens before the softmax so that the probabilities of other \"legal\" tokens are not super low.\r\n\r\nGreat Q!\r\n\r\n\r\n",
"> It is only used by `MarianMTModel` and `BartForConditionalGeneration`.\r\n> For Bart, what @mariamabarham said is exactly correct\r\n> \r\n> > it helps to force the model generate a token such as bos_token or eos_token by setting the probability of generation of all other tokens to 0.\r\n> \r\n> For Marian, it does that and one extra job, it prevents the model from ever predicting `pad_token_id`. In Marian, it's important that that logic happens before the softmax so that the probabilities of other \"legal\" tokens are not super low.\r\n> \r\n> Great Q!\r\n\r\n@mariamabarham @sshleifer Many thanx for your response!\r\nActually, I am learning the code https://github.com/huggingface/transformers/blob/49c5202522bdaf66e45df505b3a3c566e56134c3/src/transformers/modeling_utils.py#L816 \r\nI am still don't understand how https://github.com/huggingface/transformers/blob/49c5202522bdaf66e45df505b3a3c566e56134c3/src/transformers/modeling_utils.py#L794 can do this. It just returns the same logits back right? What kind of operation in this method? ",
"That method is now called `adjust_logits_during_generation`.\r\nIt's overwritten for bart:\r\nhttps://github.com/huggingface/transformers/blob/482a5993c20ef32512a42661bfa404516763b72e/src/transformers/modeling_bart.py#L1028\r\n\r\nand marian \r\nhttps://github.com/huggingface/transformers/blob/482a5993c20ef32512a42661bfa404516763b72e/src/transformers/modeling_marian.py#L49\r\n",
"Hi @sshleifer - Just a quick doubt. Why ```adjust_logits_during_generation``` is required for Bart. Is it because of any specific constraints the model is having?\r\n\r\nWhat I have observed, the moment we disable it, generation is going haywire. \r\nI understood, its doing nothing more than forcing everything except 0 ( bos_token_id ) th index to be small or 0.0. But what is the logical reasoning behind it for Bart Model only. Its not applicable for all models ( say T5 )."
] | 1,592 | 1,639 | 1,595 | NONE | null | https://github.com/huggingface/transformers/blob/49c5202522bdaf66e45df505b3a3c566e56134c3/src/transformers/modeling_utils.py#L794
Hi. Can anyone explain why this method is here and what it is used for?
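For reference, the base implementation at the linked line is a pass-through (a paraphrased sketch is below); subclasses such as Bart and Marian override it to force or ban specific tokens during generation:
```python
# Paraphrased from modeling_utils.py at the linked commit: the default hook
# simply returns the logits unchanged; model subclasses override it.
def prepare_logits_for_generation(self, logits, cur_len, max_length):
    return logits
```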
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5134/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5133 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5133/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5133/comments | https://api.github.com/repos/huggingface/transformers/issues/5133/events | https://github.com/huggingface/transformers/issues/5133 | 641,713,475 | MDU6SXNzdWU2NDE3MTM0NzU= | 5,133 | How to find and use fine-tuned model for GLUE-CoLA? | {
"login": "517030910405",
"id": 42196261,
"node_id": "MDQ6VXNlcjQyMTk2MjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/42196261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/517030910405",
"html_url": "https://github.com/517030910405",
"followers_url": "https://api.github.com/users/517030910405/followers",
"following_url": "https://api.github.com/users/517030910405/following{/other_user}",
"gists_url": "https://api.github.com/users/517030910405/gists{/gist_id}",
"starred_url": "https://api.github.com/users/517030910405/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/517030910405/subscriptions",
"organizations_url": "https://api.github.com/users/517030910405/orgs",
"repos_url": "https://api.github.com/users/517030910405/repos",
"events_url": "https://api.github.com/users/517030910405/events{/privacy}",
"received_events_url": "https://api.github.com/users/517030910405/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @517030910405 , you can search for the models on huggingface model hub here https://huggingface.co/models,\r\nyou can filter the models by task, CoLa is sequence classification task so you'll be able to find the model there.\r\nHere's one I found which is trained on CoLA https://huggingface.co/textattack/bert-base-uncased-CoLA.\r\n\r\nAnd to see how to use sequence classification models, check this usage guide https://huggingface.co/transformers/usage.html#sequence-classification",
"> Hi @517030910405 , you can search for the models on huggingface model hub here https://huggingface.co/models,\r\n> you can filter the models by task, CoLa is sequence classification task so you'll be able to find the model there.\r\n> Here's one I found which is trained on CoLA https://huggingface.co/textattack/bert-base-uncased-CoLA.\r\n> \r\n> And to see how to use sequence classification models, check this usage guide https://huggingface.co/transformers/usage.html#sequence-classification\r\n\r\nThank you very much. The recommended fine-tuned model works very well. "
] | 1,592 | 1,592 | 1,592 | NONE | null | # ❓ Questions & Help
## Details
I am trying to use a fine-tuned model for CoLA from GLUE. Is there a quick guide? Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5133/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5132 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5132/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5132/comments | https://api.github.com/repos/huggingface/transformers/issues/5132/events | https://github.com/huggingface/transformers/pull/5132 | 641,688,140 | MDExOlB1bGxSZXF1ZXN0NDM2ODc1ODU4 | 5,132 | fix bart doc | {
"login": "fuzihaofzh",
"id": 1419566,
"node_id": "MDQ6VXNlcjE0MTk1NjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1419566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fuzihaofzh",
"html_url": "https://github.com/fuzihaofzh",
"followers_url": "https://api.github.com/users/fuzihaofzh/followers",
"following_url": "https://api.github.com/users/fuzihaofzh/following{/other_user}",
"gists_url": "https://api.github.com/users/fuzihaofzh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fuzihaofzh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fuzihaofzh/subscriptions",
"organizations_url": "https://api.github.com/users/fuzihaofzh/orgs",
"repos_url": "https://api.github.com/users/fuzihaofzh/repos",
"events_url": "https://api.github.com/users/fuzihaofzh/events{/privacy}",
"received_events_url": "https://api.github.com/users/fuzihaofzh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5132?src=pr&el=h1) Report\n> Merging [#5132](https://codecov.io/gh/huggingface/transformers/pull/5132?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/84be482f6698fac822a5113735f2242c6d3abc76&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5132?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5132 +/- ##\n=======================================\n Coverage 77.19% 77.19% \n=======================================\n Files 133 133 \n Lines 22232 22232 \n=======================================\n Hits 17162 17162 \n Misses 5070 5070 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5132?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5132/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.25% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5132/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.81% <0.00%> (-0.15%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5132/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5132?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5132?src=pr&el=footer). Last update [84be482...992cbe3](https://codecov.io/gh/huggingface/transformers/pull/5132?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Awesome thanks a lot :-) "
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | fix bart doc
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5132/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5132",
"html_url": "https://github.com/huggingface/transformers/pull/5132",
"diff_url": "https://github.com/huggingface/transformers/pull/5132.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5132.patch",
"merged_at": 1592816308000
} |
https://api.github.com/repos/huggingface/transformers/issues/5131 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5131/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5131/comments | https://api.github.com/repos/huggingface/transformers/issues/5131/events | https://github.com/huggingface/transformers/pull/5131 | 641,686,075 | MDExOlB1bGxSZXF1ZXN0NDM2ODc0Mjg3 | 5,131 | Update BERT-of-Theseus model card to avoid confusion | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5131?src=pr&el=h1) Report\n> Merging [#5131](https://codecov.io/gh/huggingface/transformers/pull/5131?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/84be482f6698fac822a5113735f2242c6d3abc76&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5131?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5131 +/- ##\n=======================================\n Coverage 77.19% 77.19% \n=======================================\n Files 133 133 \n Lines 22232 22232 \n=======================================\n Hits 17162 17162 \n Misses 5070 5070 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5131?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.81% <0.00%> (-0.15%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5131?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5131?src=pr&el=footer). Last update [84be482...3fe3e47](https://codecov.io/gh/huggingface/transformers/pull/5131?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"The note makes it clearer! Thanks!\r\n\r\nAs discussed, I think the confusion arises because of the naming: in the hub, most of the checkpoints with a task in the checkpoint name are full checkpoint with the specific task head. Sometimes there is `finetuned` in the checkpoint name, sometimes not.\r\nThis enables a user to directly use a fine-tuned checkpoint for the task at hand out of the box (without any training).\r\n\r\nIn the long run, it might be worth it to have explicit \"guidelines\" on checkpoint naming in the hub to avoid these confusions (which should also be mentioned in the model cards if there is any ambiguity). Namely, checkpoints that are general purpose (like `bert-base-uncased`) should be clearly distinguishable from the task-specific checkpoints (like `bert-large-uncased-whole-word-masking-finetuned-squad`). What do you think @julien-c?",
"IMO, the most important thing is to allow the authors to specify the example code. A possible design would be: auto-generated example by default but authors can overwrite. We can never completely rule out the ambiguity in the name without a super complicated protocol and long names. At least a code snippet can help."
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | @VictorSanh said this checkpoint cannot be evaluated directly on MNLI. Indeed this model is for intermediate task transfer (in BERT-of-Theseus [arxiv version](https://arxiv.org/abs/2002.02925), it's called a "general-purpose" model) so it doesn't (and shouldn't) contain a classification head. This note helps prevent confusion. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5131/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5131",
"html_url": "https://github.com/huggingface/transformers/pull/5131",
"diff_url": "https://github.com/huggingface/transformers/pull/5131.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5131.patch",
"merged_at": 1592619214000
} |
https://api.github.com/repos/huggingface/transformers/issues/5130 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5130/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5130/comments | https://api.github.com/repos/huggingface/transformers/issues/5130/events | https://github.com/huggingface/transformers/pull/5130 | 641,666,969 | MDExOlB1bGxSZXF1ZXN0NDM2ODU4OTEz | 5,130 | added subtitle for recent contributors in readme | {
"login": "clmnt",
"id": 821155,
"node_id": "MDQ6VXNlcjgyMTE1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/821155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clmnt",
"html_url": "https://github.com/clmnt",
"followers_url": "https://api.github.com/users/clmnt/followers",
"following_url": "https://api.github.com/users/clmnt/following{/other_user}",
"gists_url": "https://api.github.com/users/clmnt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clmnt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clmnt/subscriptions",
"organizations_url": "https://api.github.com/users/clmnt/orgs",
"repos_url": "https://api.github.com/users/clmnt/repos",
"events_url": "https://api.github.com/users/clmnt/events{/privacy}",
"received_events_url": "https://api.github.com/users/clmnt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5130?src=pr&el=h1) Report\n> Merging [#5130](https://codecov.io/gh/huggingface/transformers/pull/5130?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/84be482f6698fac822a5113735f2242c6d3abc76&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5130?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5130 +/- ##\n=======================================\n Coverage 77.19% 77.19% \n=======================================\n Files 133 133 \n Lines 22232 22232 \n=======================================\n Hits 17162 17162 \n Misses 5070 5070 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5130?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5130/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.96% <0.00%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5130?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5130?src=pr&el=footer). Last update [84be482...4341287](https://codecov.io/gh/huggingface/transformers/pull/5130?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
":shipit: "
] | 1,592 | 1,593 | 1,593 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5130/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5130",
"html_url": "https://github.com/huggingface/transformers/pull/5130",
"diff_url": "https://github.com/huggingface/transformers/pull/5130.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5130.patch",
"merged_at": 1593435909000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5129 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5129/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5129/comments | https://api.github.com/repos/huggingface/transformers/issues/5129/events | https://github.com/huggingface/transformers/pull/5129 | 641,626,873 | MDExOlB1bGxSZXF1ZXN0NDM2ODI3MDIw | 5,129 | Add mbart-large-cc25, support translation finetuning | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1845609017,
"node_id": "MDU6TGFiZWwxODQ1NjA5MDE3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq",
"name": "seq2seq",
"color": "fef2c0",
"default": false,
"description": ""
},
{
"id": 2009457320,
"node_id": "MDU6TGFiZWwyMDA5NDU3MzIw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/translation",
"name": "translation",
"color": "b2d2f4",
"default": false,
"description": "machine translation utilities and models"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5129?src=pr&el=h1) Report\n> Merging [#5129](https://codecov.io/gh/huggingface/transformers/pull/5129?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d2a93991158f15993eba9ab421d82766b892f948&el=desc) will **increase** coverage by `0.25%`.\n> The diff coverage is `92.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5129?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5129 +/- ##\n==========================================\n+ Coverage 76.84% 77.09% +0.25% \n==========================================\n Files 141 141 \n Lines 24685 24702 +17 \n==========================================\n+ Hits 18969 19044 +75 \n+ Misses 5716 5658 -58 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5129?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.45% <92.00%> (+1.57%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.69% <0.00%> (-29.45%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `82.99% <0.00%> (-6.13%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: |\n| ... 
and [4 more](https://codecov.io/gh/huggingface/transformers/pull/5129/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5129?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5129?src=pr&el=footer). Last update [d2a9399...3685a81](https://codecov.io/gh/huggingface/transformers/pull/5129?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"> Could we test the MBart tokenizer? Or do you think it's too similar to the XLM-R tokenizer to deserve its own test suite?\r\n\r\n<3 that idea. Will do in this PR. Thanks for reminding me.",
"This does not need review at the moment. I need to fix the tests first.",
"Oh, and you have the `build_doc` failing (warnings make the build fail) with the following error:\r\n\r\n```\r\ntokenization_bart.py:docstring of transformers.MBartTokenizer.build_inputs_with_special_tokens:4:Unexpected indentation.\r\n```",
"Hi Sam, \r\nThanks for your great work to add MBART-cc25.\r\nI've been following issues and threads regarding MBART generation problems for a while now.\r\nAre those problems fixed in this PR or is the model still generating English?",
"The model is still generating english, but I'm not sure whether that's a bug. I can finetune it to achieve good scores on WMT en-ro, and nobody has replied to https://github.com/pytorch/fairseq/issues/2258, so I moved on.",
"Sad :( I was really looking forward to trying it on language pairs that Marian does not support (e.g. English->Chinese, English->Arabic). Is there any way to get MBART to work without any fine-tuning?\r\nThe problem I'm facing is it always generates BOS tokens at the first decoding step even though the tgt_lang id is provided as the starting token. I tried forcing zero probability on BOS token at each step but then it generates gibberish text...",
"I don't know how the preprocessing is done in the original setup, but I found a potential problems in generating decoder_input_ids (i.e target sentence) for training. According to the documentation, it's supposed to have the format:\r\n```\r\nlanguage code - tokens - eos\r\n```\r\n(which is supposedly different from the format of the input_ids), but the tokenizer currently generates for both source (input_ids) and target (decoder_input_ids) the current format:\r\n```\r\ntokens - eos - language code\r\n```\r\nMaybe this is why you are unable to generate? Could sb verify this?",
"This was a bug, but is fixed on master (by this PR) if you use `prepare_translation_batch`.\r\n\r\nFor example\r\n```python\r\nfrom transformers import MBartTokenizer\r\nmodel_name = 'facebook/mbart-large-cc25'\r\ntok = MBartTokenizer.from_pretrained(model_name)\r\nsrc_text = ['I think I fixed the tokenizer']\r\ntgt_text = ['Cred că am rezolvat tokenizatorul']\r\nbatch = tok.prepare_translation_batch(src_text, tgt_texts=tgt_text)\r\nbatch.input_ids # (*tokens, eos, lang_code)\r\n=> tensor([[ 87, 5351, 87, 188347, 70, 47, 1098, 52825, 2, 250004]])\r\nbatch.decoder_input_ids # (lang_code, *tokens, eos)\r\n=> tensor([[250020, 68523, 1362, 444, 102432, 18, 47, 1098, 164077, 202, 2]])\r\n```\r\n\r\nMore info: mbart knows how to switch between src and tgt modes using the `set_lang` method: https://github.com/huggingface/transformers/blob/d6eab53058015483e9cbcbfee4bf900c3a8ab772/src/transformers/tokenization_bart.py#L186"
] | 1,592 | 1,594 | 1,594 | CONTRIBUTOR | null | TODO:
- [x] fix config (max_length=100)
- [x] split test_modeling_mbart.py to new file
- [x] decoder_input_ids should start with decoder_start_token_id
- [x] AutoTokenizer
- [x] test finetuning. Requires adding `--src_lang` and `--tgt_lang` command-line args.
- [x] Documentation
- [ ] Model Card
- [x] fix mbart tokenizer tests
- [x] tokenizer docs | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5129/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5129",
"html_url": "https://github.com/huggingface/transformers/pull/5129",
"diff_url": "https://github.com/huggingface/transformers/pull/5129.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5129.patch",
"merged_at": 1594142581000
} |
https://api.github.com/repos/huggingface/transformers/issues/5128 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5128/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5128/comments | https://api.github.com/repos/huggingface/transformers/issues/5128/events | https://github.com/huggingface/transformers/pull/5128 | 641,576,176 | MDExOlB1bGxSZXF1ZXN0NDM2Nzg1MTk5 | 5,128 | Pin `sphinx-rtd-theme` | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,592 | 1,592 | 1,592 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5128/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5128",
"html_url": "https://github.com/huggingface/transformers/pull/5128",
"diff_url": "https://github.com/huggingface/transformers/pull/5128.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5128.patch",
"merged_at": 1592518079000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5127 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5127/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5127/comments | https://api.github.com/repos/huggingface/transformers/issues/5127/events | https://github.com/huggingface/transformers/issues/5127 | 641,548,394 | MDU6SXNzdWU2NDE1NDgzOTQ= | 5,127 | Cached feature files - Naming introduces confusion/model mixing | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Good point. We can do a simple hack for the time being but the cleanest way out will be to use 🤗nlp to process the data (which I'm adding at the moment but probably won't be fully operational in the examples before early next week).\r\n\r\n🤗nlp has a lot more reliable and general hash-naming caching scheme for all data processing based on the function and all the inputs/outputs when processing the dataset (using some smart serialization with `dill` and hashing algo).",
"Ok! I might go back to the old scripts to experiment in the meantime.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,592 | 1,598 | 1,598 | MEMBER | null | # 🐛 Bug
The problem arises when using:
* [X] the official example scripts: glue.py
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [X] an official GLUE/SQuAD task: any example of GLUE
* [ ] my own task or dataset: (give details below)
In `run_glue.py` (and possibly in other run_* scripts interfaced with the new Trainer), the cache naming for the pre-processed features introduces confusion between models.
https://github.com/huggingface/transformers/blob/3d3e605affb792b78c918aac48f6bc82cfbf7e3e/src/transformers/data/datasets/glue.py#L87
`tokenizer.__class__.__name__` is `BertTokenizer`, `DistilBertTokenizer`, etc., which drops the information about which pretrained vocabulary is used. For instance, `bert-base-uncased` and `bert-base-cased` would end up with the same cache file name while being potentially two very different feature files.
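A small illustration of the collision (the cache file name pattern below is simplified, not the exact one in the script):
```python
# Sketch: two different checkpoints yield the same class name, hence the
# same cached-features file name under the current naming scheme.
from transformers import BertTokenizer

for name in ("bert-base-uncased", "bert-base-cased"):
    tok = BertTokenizer.from_pretrained(name)
    print(f"cached_train_{tok.__class__.__name__}_128_mrpc")
# prints "cached_train_BertTokenizer_128_mrpc" twice -> collision
```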
In scripts not yet interfaced with the Trainer, the `model_name_or_path` is used, which prevents conflicting files from sharing the same name:
https://github.com/huggingface/transformers/blob/3d3e605affb792b78c918aac48f6bc82cfbf7e3e/examples/text-classification/run_xnli.py#L315 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5127/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5126 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5126/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5126/comments | https://api.github.com/repos/huggingface/transformers/issues/5126/events | https://github.com/huggingface/transformers/pull/5126 | 641,543,902 | MDExOlB1bGxSZXF1ZXN0NDM2NzU4NDYy | 5,126 | [fix] Move _adjust_logits above postprocess to fix Marian.generate | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5126?src=pr&el=h1) Report\n> Merging [#5126](https://codecov.io/gh/huggingface/transformers/pull/5126?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3d3e605affb792b78c918aac48f6bc82cfbf7e3e&el=desc) will **increase** coverage by `0.06%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5126?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5126 +/- ##\n==========================================\n+ Coverage 77.26% 77.32% +0.06% \n==========================================\n Files 133 133 \n Lines 22163 22163 \n==========================================\n+ Hits 17124 17138 +14 \n+ Misses 5039 5025 -14 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5126?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.25% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `88.88% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.16% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5126?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5126?src=pr&el=footer). Last update [3d3e605...c6bd992](https://codecov.io/gh/huggingface/transformers/pull/5126?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Don't really think that `_adjust_logits` is a better name. Also why the underscore `_` ? we don't do this for `prepare_input_ids_for_generation()` either",
"Because it's not user facing, but I guess that's inconsistent. What do you think about `adjust_logits`?",
"Hmm, I like `prepare_logits_for_generation` better than `adjust_logits`, since it has the name `generation` in it, so people know that this function only belongs to generation in bart & marian"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | Fixes slow gpu test failures caused by #5031
This logic was already discussed in a previous PR but, to summarize: the Marian model strongly favors predicting `pad_token_id`, and if we let pad's high score enter the softmax, every other token ends up with a very low probability.
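A minimal sketch of the idea (mirrors Marian's override; the exact integration into `generate()` differs):
```python
# Sketch: ban pad_token_id *before* the softmax so the scores of "legal"
# tokens are not crushed by pad's huge logit. Assumes logits of shape
# (batch_size, vocab_size).
import torch

def adjust_logits(logits: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    logits[:, pad_token_id] = float("-inf")  # pad can never be chosen
    return logits
```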
I also rename `prepare_logits_for_generation` -> `_adjust_logits` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5126/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5126",
"html_url": "https://github.com/huggingface/transformers/pull/5126",
"diff_url": "https://github.com/huggingface/transformers/pull/5126.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5126.patch",
"merged_at": 1592517988000
} |
https://api.github.com/repos/huggingface/transformers/issues/5125 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5125/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5125/comments | https://api.github.com/repos/huggingface/transformers/issues/5125/events | https://github.com/huggingface/transformers/pull/5125 | 641,542,854 | MDExOlB1bGxSZXF1ZXN0NDM2NzU3NTk1 | 5,125 | [tokenizers] Fix #5081 and improve backward compatibility | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5125?src=pr&el=h1) Report\n> Merging [#5125](https://codecov.io/gh/huggingface/transformers/pull/5125?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d2a7c86dc33d6def6dba44f6ed2b71e8a1644130&el=desc) will **decrease** coverage by `0.04%`.\n> The diff coverage is `33.33%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5125?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5125 +/- ##\n==========================================\n- Coverage 78.04% 77.99% -0.05% \n==========================================\n Files 138 138 \n Lines 23766 23772 +6 \n==========================================\n- Hits 18548 18541 -7 \n- Misses 5218 5231 +13 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5125?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `94.81% <ø> (ø)` | |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `91.07% <33.33%> (-0.63%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-5.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.81% <0.00%> (-0.15%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.57% <0.00%> (+1.18%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5125?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5125?src=pr&el=footer). Last update [d2a7c86...e4b0dc0](https://codecov.io/gh/huggingface/transformers/pull/5125?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | MEMBER | null | Adds a generic fallback for `get_special_tokens_mask()` when there is only one input sentence and `already_has_special_tokens=True`.
It's still recommended to use `return_special_tokens_mask=True` in any encoding method for efficiency, but this should work as well in the above-indicated cases (a usage sketch follows this record).
Removed one test related to `get_special_tokens_mask()` which didn't test its operation in a reliable manner across tokenizers. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5125/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5125",
"html_url": "https://github.com/huggingface/transformers/pull/5125",
"diff_url": "https://github.com/huggingface/transformers/pull/5125.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5125.patch",
"merged_at": 1592839544000
} |
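A minimal usage sketch for the fallback described in PR #5125 above, assuming a standard transformers install; the checkpoint name is illustrative and not part of the original record:

```python
# Hedged sketch of the two paths discussed in PR #5125 (not from the record itself).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Preferred, efficient path: request the mask while encoding.
enc = tokenizer.encode_plus("Hello world", return_special_tokens_mask=True)
print(enc["special_tokens_mask"])  # 1 marks special tokens such as [CLS]/[SEP]

# Fallback path the PR adds: a single id sequence that already holds special tokens.
mask = tokenizer.get_special_tokens_mask(
    enc["input_ids"], already_has_special_tokens=True
)
print(mask)
```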
https://api.github.com/repos/huggingface/transformers/issues/5124 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5124/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5124/comments | https://api.github.com/repos/huggingface/transformers/issues/5124/events | https://github.com/huggingface/transformers/issues/5124 | 641,531,763 | MDU6SXNzdWU2NDE1MzE3NjM= | 5,124 | SLOW GPU test Failures | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2039044877,
"node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/marian",
"name": "marian",
"color": "30cc95",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Status Badge:\r\n/badge.svg)"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | ```bash
=========================== short test summary info ============================
FAILED tests/test_modeling_marian.py::TestMarian_RU_FR::test_batch_generation_ru_fr
FAILED tests/test_modeling_marian.py::TestMarian_MT_EN::test_batch_generation_mt_en
==== 2 failed, 1110 passed, 392 skipped, 361 warnings in 1347.28s (0:22:27) ====
```
https://github.com/huggingface/transformers/runs/782480196?check_suite_focus=true | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5124/timeline | completed | null | null |
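A minimal repro sketch for the two failures reported in #5124 above, assuming the tests are run from the repository root; `RUN_SLOW=1` is the environment switch the transformers test suite uses to enable tests decorated with `@slow`:

```python
# Hedged repro sketch (not from the issue): rerun only the two failing slow Marian tests.
import os
import pytest

os.environ["RUN_SLOW"] = "1"  # must be set before the test modules are imported
exit_code = pytest.main([
    "tests/test_modeling_marian.py",
    "-k", "test_batch_generation_ru_fr or test_batch_generation_mt_en",
    "-v",
])
print("pytest exit code:", exit_code)
```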
https://api.github.com/repos/huggingface/transformers/issues/5123 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5123/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5123/comments | https://api.github.com/repos/huggingface/transformers/issues/5123/events | https://github.com/huggingface/transformers/issues/5123 | 641,525,303 | MDU6SXNzdWU2NDE1MjUzMDM= | 5,123 | Fine-Tuning GPT2 | {
"login": "apteryxlabs",
"id": 65966807,
"node_id": "MDQ6VXNlcjY1OTY2ODA3",
"avatar_url": "https://avatars.githubusercontent.com/u/65966807?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apteryxlabs",
"html_url": "https://github.com/apteryxlabs",
"followers_url": "https://api.github.com/users/apteryxlabs/followers",
"following_url": "https://api.github.com/users/apteryxlabs/following{/other_user}",
"gists_url": "https://api.github.com/users/apteryxlabs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apteryxlabs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apteryxlabs/subscriptions",
"organizations_url": "https://api.github.com/users/apteryxlabs/orgs",
"repos_url": "https://api.github.com/users/apteryxlabs/repos",
"events_url": "https://api.github.com/users/apteryxlabs/events{/privacy}",
"received_events_url": "https://api.github.com/users/apteryxlabs/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! This script has been replaced by `language-modeling/run_language_modeling.py`, because it now handles pre-training as well, not only fine-tuning. [Here's a useful link.](https://github.com/huggingface/transformers/tree/master/examples/language-modeling)"
] | 1,592 | 1,592 | 1,592 | NONE | null | This post references a page that no longer exists (run_lm_finetuning.py). Where is the updated page?
"You can look into GPT-2's training on the CLM task, which is done on WikiText-2 in this example."
_Originally posted by @LysandreJik in https://github.com/huggingface/transformers/issues/1145#issuecomment-526322616_ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5123/timeline | completed | null | null |
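A hedged sketch of how the replacement script pointed to in #5123 was typically invoked for GPT-2 fine-tuning; flag names follow the examples README of that era, and the file paths are placeholders:

```python
# Hedged sketch (not from the thread): launch the script as a subprocess,
# equivalent to running it from a shell at the repository root.
import subprocess

subprocess.run(
    [
        "python", "examples/language-modeling/run_language_modeling.py",
        "--model_type", "gpt2",
        "--model_name_or_path", "gpt2",
        "--do_train",
        "--train_data_file", "train.txt",  # placeholder: plain-text corpus
        "--output_dir", "gpt2-finetuned",  # placeholder: checkpoint destination
    ],
    check=True,
)
```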
https://api.github.com/repos/huggingface/transformers/issues/5122 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5122/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5122/comments | https://api.github.com/repos/huggingface/transformers/issues/5122/events | https://github.com/huggingface/transformers/pull/5122 | 641,523,680 | MDExOlB1bGxSZXF1ZXN0NDM2NzQxNjM5 | 5,122 | Fix #5114 | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5122?src=pr&el=h1) Report\n> Merging [#5122](https://codecov.io/gh/huggingface/transformers/pull/5122?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ca2d0f98c4a89d50b79ddb06b59b6bffc31ff137&el=desc) will **decrease** coverage by `0.12%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5122?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5122 +/- ##\n==========================================\n- Coverage 77.39% 77.27% -0.13% \n==========================================\n Files 133 133 \n Lines 22167 22167 \n==========================================\n- Hits 17157 17130 -27 \n- Misses 5010 5037 +27 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5122?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `98.33% <100.00%> (ø)` | |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-4.78%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.35% <0.00%> (-2.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.18% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.59% <0.00%> (+0.15%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5122?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5122?src=pr&el=footer). Last update [ca2d0f9...bee759a](https://codecov.io/gh/huggingface/transformers/pull/5122?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"👍 "
] | 1,592 | 1,592 | 1,592 | COLLABORATOR | null | This fixes #5114 and adds a test to make sure the regression doesn't happen again. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5122/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5122",
"html_url": "https://github.com/huggingface/transformers/pull/5122",
"diff_url": "https://github.com/huggingface/transformers/pull/5122.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5122.patch",
"merged_at": 1592522405000
} |
https://api.github.com/repos/huggingface/transformers/issues/5121 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5121/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5121/comments | https://api.github.com/repos/huggingface/transformers/issues/5121/events | https://github.com/huggingface/transformers/pull/5121 | 641,507,770 | MDExOlB1bGxSZXF1ZXN0NDM2NzI4NDU4 | 5,121 | AutoTokenizer supports mbart-large-en-ro | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5121?src=pr&el=h1) Report\n> Merging [#5121](https://codecov.io/gh/huggingface/transformers/pull/5121?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/355954ffca798bb81d9db8886e30ce10f11e8a40&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5121?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5121 +/- ##\n==========================================\n+ Coverage 77.28% 77.29% +0.01% \n==========================================\n Files 133 133 \n Lines 22134 22136 +2 \n==========================================\n+ Hits 17107 17111 +4 \n+ Misses 5027 5025 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5121?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5121/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.02% <100.00%> (ø)` | |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5121/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.61% <100.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5121/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.67% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5121/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5121/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5121?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5121?src=pr&el=footer). Last update [355954f...65b8c8a](https://codecov.io/gh/huggingface/transformers/pull/5121?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,592 | 1,592 | 1,592 | CONTRIBUTOR | null | - adds `MBartConfig` so that `MBartTokenizer` can have its own key in `TOKENIZER_MAPPING`.
- I will remove all mentions of `sshleifer/mbart-large-en-ro` right before this is merged.
Its only difference from `facebook/mbart-large-en-ro` is that `config.model_type='mbart'`. Since I didn't want to break existing MBart callers, I will do the S3 migration right before I merge. (An `AutoTokenizer` resolution sketch follows this record.)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5121/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5121",
"html_url": "https://github.com/huggingface/transformers/pull/5121",
"diff_url": "https://github.com/huggingface/transformers/pull/5121.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5121.patch",
"merged_at": 1592527658000
} |
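A short sketch of what PR #5121 enables, assuming the migrated checkpoint's config carries `model_type: "mbart"`; this is illustrative, not taken from the PR:

```python
# Hedged sketch: with MBartConfig registered, AutoTokenizer can map the
# checkpoint to MBartTokenizer through its own TOKENIZER_MAPPING entry.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/mbart-large-en-ro")
print(type(tok).__name__)  # expected: MBartTokenizer (or a fast variant)
```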