url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/7521 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7521/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7521/comments | https://api.github.com/repos/huggingface/transformers/issues/7521/events | https://github.com/huggingface/transformers/pull/7521 | 713,098,848 | MDExOlB1bGxSZXF1ZXN0NDk2NDc1MDUz | 7,521 | [s2s] trainer scripts: Remove --run_name, thanks sylvain! | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You're welcome ;-)",
"cc @patil-suraj "
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7521/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7521",
"html_url": "https://github.com/huggingface/transformers/pull/7521",
"diff_url": "https://github.com/huggingface/transformers/pull/7521.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7521.patch",
"merged_at": 1601587128000
} |
https://api.github.com/repos/huggingface/transformers/issues/7520 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7520/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7520/comments | https://api.github.com/repos/huggingface/transformers/issues/7520/events | https://github.com/huggingface/transformers/issues/7520 | 713,095,105 | MDU6SXNzdWU3MTMwOTUxMDU= | 7,520 | MultiGPU Trainer: each processes uses more memory than 1 GPU job | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2107554019,
"node_id": "MDU6TGFiZWwyMTA3NTU0MDE5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Distributed%20Training%20/%20Models",
"name": "Distributed Training / Models",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"This seems to be a duplicate of #7169, finishing something and will investigate now that I have a proper multi-GPU setup.",
"Yes, most likely a duplicate.\r\nOne clue might be the call to `DataParallel` right before the call to `DistributedDataParallel`. ",
"Investigation seems to lead to: it is normal to have slightly more memory use per GPU in distributed mode, since PyTorch keeps two copies of the gradients in that case. See [this issue](https://github.com/pytorch/pytorch/issues/37030).",
"You are correct, thank you for investigating. The difference was using `--adafactor` which is saving about 3GB.\r\nI added that option to Seq2SeqTrainer. Take it if you want it :) \r\n"
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | I tried to run an 8 GPU training job, and it OOM'd, so I investigated whether it could run on 1 GPU. It could!
So here are two commands: the first reports `14814MiB` in use on GPU 0; the second reports `15594MiB` in use on each GPU.
This doesn't happen in PL, which leads me to believe that `distributed_scalars` is to blame, but I am not sure. Has anyone run into this?
cc @sgugger @patil-suraj
### Delta between two commands
```
- CUDA_VISIBLE_DEVICES=0 python
+ CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2
```
### Command 1
```bash
CUDA_VISIBLE_DEVICES=0 python finetune_trainer.py \
--model_name_or_path student_pegasus_cnn_12_2 \
--data_dir cnn_dm \
--output_dir dpx_cnn_12_2_pl_comb_noDO --overwrite_output_dir --freeze_embeds \
--learning_rate=3e-5 \
--warmup_steps 500 --sortish_sampler \
--gradient_accumulation_steps=4 \
--per_device_train_batch_size=4 --per_device_eval_batch_size=8 --eval_beams 2 \
--num_train_epochs=5 \
--save_steps 3000 --eval_steps 3000 \
--logging_first_step \
--max_target_length 56 --val_max_target_length 142 --test_max_target_length 142 \
--do_train --do_eval --do_predict --evaluate_during_training \
--predict_with_generate --load_best_model_at_end
```
### Command 2
```bash
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 fine
tune_trainer.py \
--model_name_or_path student_pegasus_cnn_12_2 \
--data_dir cnn_dm \
--output_dir dpx_cnn_12_2_pl_comb_noDO --overwrite_output_dir --freeze_embeds \
--learning_rate=3e-5 \
--warmup_steps 500 --sortish_sampler \
--gradient_accumulation_steps=4 \
--per_device_train_batch_size=4 --per_device_eval_batch_size=8 --eval_beams 2 \
--num_train_epochs=5 \
--save_steps 3000 --eval_steps 3000 \
--logging_first_step \
--max_target_length 56 --val_max_target_length 142 --test_max_target_length 142 \
--do_train --do_eval --do_predict --evaluate_during_training \
--predict_with_generate --load_best_model_at_end
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7520/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7519 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7519/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7519/comments | https://api.github.com/repos/huggingface/transformers/issues/7519/events | https://github.com/huggingface/transformers/issues/7519 | 713,052,762 | MDU6SXNzdWU3MTMwNTI3NjI= | 7,519 | XLNet finetuning | {
"login": "alshahrani2030",
"id": 55197626,
"node_id": "MDQ6VXNlcjU1MTk3NjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/55197626?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alshahrani2030",
"html_url": "https://github.com/alshahrani2030",
"followers_url": "https://api.github.com/users/alshahrani2030/followers",
"following_url": "https://api.github.com/users/alshahrani2030/following{/other_user}",
"gists_url": "https://api.github.com/users/alshahrani2030/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alshahrani2030/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alshahrani2030/subscriptions",
"organizations_url": "https://api.github.com/users/alshahrani2030/orgs",
"repos_url": "https://api.github.com/users/alshahrani2030/repos",
"events_url": "https://api.github.com/users/alshahrani2030/events{/privacy}",
"received_events_url": "https://api.github.com/users/alshahrani2030/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello! Could you run `transformers-cli env` in your environment and paste the result here? Thank you!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,607 | 1,607 | NONE | null | I am trying to fine-tune an XLNet model; it used to work fine, but I think Hugging Face updated some classes, and I ran into this error:
RuntimeError: Trying to create tensor with negative dimension -1: [-1, 768]

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7519/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7518 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7518/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7518/comments | https://api.github.com/repos/huggingface/transformers/issues/7518/events | https://github.com/huggingface/transformers/pull/7518 | 713,036,706 | MDExOlB1bGxSZXF1ZXN0NDk2NDIxODg2 | 7,518 | Fix seq2seq example test | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,601 | 1,601 | COLLABORATOR | null | # What does this PR do?
#7490 removed the `log_history` save since it is now stored inside the `TrainerState`. I didn't catch that it was used in the seq2seq examples (it must come from a more recent PR, because the tests were passing), so this PR adapts the part of those tests that loads `log_history` to fix them. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7518/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7518",
"html_url": "https://github.com/huggingface/transformers/pull/7518",
"diff_url": "https://github.com/huggingface/transformers/pull/7518.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7518.patch",
"merged_at": 1601576009000
} |
https://api.github.com/repos/huggingface/transformers/issues/7517 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7517/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7517/comments | https://api.github.com/repos/huggingface/transformers/issues/7517/events | https://github.com/huggingface/transformers/issues/7517 | 713,018,736 | MDU6SXNzdWU3MTMwMTg3MzY= | 7,517 | Overflow error: Can't convert negative value to unsigned it [RAG Model] | {
"login": "sashank06",
"id": 8636933,
"node_id": "MDQ6VXNlcjg2MzY5MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8636933?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashank06",
"html_url": "https://github.com/sashank06",
"followers_url": "https://api.github.com/users/sashank06/followers",
"following_url": "https://api.github.com/users/sashank06/following{/other_user}",
"gists_url": "https://api.github.com/users/sashank06/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashank06/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashank06/subscriptions",
"organizations_url": "https://api.github.com/users/sashank06/orgs",
"repos_url": "https://api.github.com/users/sashank06/repos",
"events_url": "https://api.github.com/users/sashank06/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashank06/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @sashank06 - could you please post a full code snippet so that we can reproduce your error? \r\n\r\nAlso @lhoestq - this looks like one of your favorite errors haha ",
"It's already fixed on `datasets` master branch, I'm going to do a release soon :) ",
"```\r\nimport ast\r\nimport logging\r\nimport os\r\nimport sys\r\n\r\nimport pandas as pd\r\nimport torch\r\nfrom tqdm import tqdm\r\n\r\nfrom transformers import BartForConditionalGeneration, RagRetriever, RagSequenceForGeneration, RagTokenForGeneration\r\nfrom transformers import logging as transformers_logging\r\n\r\nlogger = logging.getLogger(__name__)\r\nlogging.basicConfig(level=logging.INFO)\r\n\r\ntransformers_logging.set_verbosity_info()\r\n\r\n\r\ndef infer_model_type(model_name_or_path):\r\n\tif \"token\" in model_name_or_path:\r\n\t\treturn \"rag_token\"\r\n\tif \"sequence\" in model_name_or_path:\r\n\t\treturn \"rag_sequence\"\r\n\tif \"bart\" in model_name_or_path:\r\n\t\treturn \"bart\"\r\n\treturn None\r\n\r\ndef evaluate_batch_retrieval(rag_model, questions):\r\n\tdef strip_title(title):\r\n\t\tif title.startswith('\"'):\r\n\t\t\ttitle = title[1:]\r\n\t\tif title.endswith('\"'):\r\n\t\t\ttitle = title[:-1]\r\n\t\treturn title\r\n\r\n\tretriever_input_ids = rag_model.retriever.question_encoder_tokenizer.batch_encode_plus(\r\n\t\tquestions,\r\n\t\treturn_tensors=\"pt\",\r\n\t\tpadding=True,\r\n\t\ttruncation=True,\r\n\t)[\"input_ids\"] #.to(args.device)\r\n\r\n\tquestion_enc_outputs = rag_model.rag.question_encoder(retriever_input_ids, return_dict=True)\r\n\tquestion_enc_pool_output = question_enc_outputs.pooler_output\r\n\r\n\tresult = rag_model.retriever(\r\n\t\tretriever_input_ids,\r\n\t\tquestion_enc_pool_output.cpu().detach().to(torch.float32).numpy(),\r\n\t\tprefix=rag_model.rag.generator.config.prefix,\r\n\t\tn_docs=rag_model.config.n_docs,\r\n\t\treturn_tensors=\"pt\",\r\n\t)\r\n\tall_docs = rag_model.retriever.index.get_doc_dicts(result.doc_ids)\r\n\tprovenance_strings = []\r\n\tfor docs in all_docs:\r\n\t\tprovenance = [strip_title(title) for title in docs[\"title\"]]\r\n\t\tprovenance_strings.append(\"\\t\".join(provenance))\r\n\treturn provenance_strings\r\nmodel_kwargs = {}\r\nmodel_type = \"rag\"\r\nif 
model_type.startswith(\"rag\"):\r\n\tmodel_class = RagTokenForGeneration if model_type == \"rag_token\" else RagSequenceForGeneration\r\n\tmodel_kwargs[\"n_docs\"] = 5 #args.n_docs\r\n\tindex_name = \"hf\"\r\n\tif index_name is not None:\r\n\t\tmodel_kwargs[\"index_name\"] = index_name\r\n# if args.index_path is not None:\r\n# model_kwargs[\"index_path\"] = args.index_path\r\nelse:\r\n\tmodel_class = BartForConditionalGeneration\r\n\r\ncheckpoint = \"facebook/rag-sequence-base\"\r\nif model_type.startswith(\"rag\"):\r\n\tretriever = RagRetriever.from_pretrained(checkpoint, **model_kwargs)\r\n\tmodel = model_class.from_pretrained(checkpoint, retriever=retriever, **model_kwargs)\r\n\tmodel.retriever.init_retrieval()\r\nelse:\r\n\tmodel = model_class.from_pretrained(checkpoint, **model_kwargs)\r\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\r\nmodel.to(device)\r\n\r\nquestions = []\r\nquestions.append(\"where was the first super bowl held?\".strip())\r\n\r\n\r\nevaluate_batch_retrieval(model, questions)",
"@patrickvonplaten I have attached the code above. Do let me know if I am doing something wrong with the code as well. ",
"@lhoestq When would the release be made? Would that fix the issue I am facing?",
"Tomorrow most probably.\r\nYes this will fix your issue",
"The new release is out :)\r\nYou can do\r\n```\r\npip install --upgrade datasets\r\n```",
"will test it out and let you know if that fixes my problem."
] | 1,601 | 1,604 | 1,604 | NONE | null | ## Environment info
- `transformers` version: 3.3.1
- Platform: Linux-4.14.186-146.268.amzn2.x86_64-x86_64-with-glibc2.10
- Python version: 3.7.9
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: distributed
### Who can help
@LysandreJik @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): RAG sequence base
The problem arises when using:
* [x] my own modified scripts: (give details below)
I am running a modified version of eval_rag.py. I am trying to experiment with the retriever's capability for document retrieval.
I am running into the following error when using `evaluate_batch_retrieval` in `eval_rag.py`:
```
File "retrieval.py", line 126, in <module>
evaluate_batch_retrieval(model, questions)
File "retrieval.py", line 75, in evaluate_batch_retrieval
return_tensors="pt",
File "/home/ec2-user/anaconda3/envs/retriever/lib/python3.7/site-packages/transformers/retrieval_rag.py", line 470, in __call__
retrieved_doc_embeds, doc_ids, docs = self.retrieve(question_hidden_states, n_docs)
File "/home/ec2-user/anaconda3/envs/retriever/lib/python3.7/site-packages/transformers/retrieval_rag.py", line 426, in retrieve
return retrieved_doc_embeds, doc_ids, self.index.get_doc_dicts(doc_ids)
File "/home/ec2-user/anaconda3/envs/retriever/lib/python3.7/site-packages/transformers/retrieval_rag.py", line 246, in get_doc_dicts
return [self.dataset[doc_ids[i].tolist()] for i in range(doc_ids.shape[0])]
File "/home/ec2-user/anaconda3/envs/retriever/lib/python3.7/site-packages/transformers/retrieval_rag.py", line 246, in <listcomp>
return [self.dataset[doc_ids[i].tolist()] for i in range(doc_ids.shape[0])]
File "/home/ec2-user/anaconda3/envs/retriever/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1071, in __getitem__
format_kwargs=self._format_kwargs,
File "/home/ec2-user/anaconda3/envs/retriever/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1026, in _getitem
indices_array = pa.array([int(i) for i in indices], type=pa.uint64())
File "pyarrow/array.pxi", line 269, in pyarrow.lib.array
File "pyarrow/array.pxi", line 38, in pyarrow.lib._sequence_to_array
OverflowError: can't convert negative value to unsigned int
```
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
Just experimenting with pre-trained retriever to see how well it can retrieve the documents
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7517/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7516 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7516/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7516/comments | https://api.github.com/repos/huggingface/transformers/issues/7516/events | https://github.com/huggingface/transformers/issues/7516 | 713,018,672 | MDU6SXNzdWU3MTMwMTg2NzI= | 7,516 | huggingface transformer running on CPU behind celery/redis doens't work (but works by itself) | {
"login": "HodorTheCoder",
"id": 15326703,
"node_id": "MDQ6VXNlcjE1MzI2NzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/15326703?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HodorTheCoder",
"html_url": "https://github.com/HodorTheCoder",
"followers_url": "https://api.github.com/users/HodorTheCoder/followers",
"following_url": "https://api.github.com/users/HodorTheCoder/following{/other_user}",
"gists_url": "https://api.github.com/users/HodorTheCoder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HodorTheCoder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HodorTheCoder/subscriptions",
"organizations_url": "https://api.github.com/users/HodorTheCoder/orgs",
"repos_url": "https://api.github.com/users/HodorTheCoder/repos",
"events_url": "https://api.github.com/users/HodorTheCoder/events{/privacy}",
"received_events_url": "https://api.github.com/users/HodorTheCoder/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Answer is included in the main post. Thanks.",
"Another way that \"fixed\" the problem for me was setting the number of torch threads to 1: torch.set_num_threads(1) BEFORE loading the model in the worker.",
"Thanks, man, it took me a day to find your comment as well. 😆 "
] | 1,601 | 1,665 | 1,601 | NONE | null | Hello,
I am actually creating this for posterity because it took me a day to figure it out and if anybody else has this issue, hopefully this helps.
I am running a Bert2Bert EncoderDecoderModel inside a docker container, running behind celery that is getting jobs through redis. This is in a production test environment on a machine w/o a GPU, so yes, it's slow, but it's not a deal breaker.
Anyways-- testing and everything works great when it's by itself. However, when I put it behind celery within a task, it would load the model and then get to generate some text and just hang. I couldn't figure out what the problem was until I found this thread:
https://github.com/celery/celery/issues/4113
The issue is how the CPU version of the model does forking-- the default celery configuration breaks unless you add the following to your celery config when running celery:
--pool=solo
Setting this fixes the concurrency issues with forking and everything works. So, it's a configuration issue.
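For anybody who wants to copy-paste, the flag goes on the worker's command line. A minimal sketch (the app module name `tasks` here is a placeholder, not from the original post):

```bash
# Hypothetical worker invocation -- swap `tasks` for your own Celery app module.
# --pool=solo runs tasks in the worker process itself instead of forked
# child processes, which avoids the fork-related hang described above.
celery -A tasks worker --pool=solo --loglevel=info
```

The same thing can be set in the configuration module instead of on the CLI, via the `worker_pool = 'solo'` setting (lowercase Celery 4+ setting names), if I recall correctly.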
Go forth and prosper. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7516/reactions",
"total_count": 11,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 11,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7516/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7515 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7515/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7515/comments | https://api.github.com/repos/huggingface/transformers/issues/7515/events | https://github.com/huggingface/transformers/pull/7515 | 713,002,883 | MDExOlB1bGxSZXF1ZXN0NDk2MzkzMzQ2 | 7,515 | [s2s] fix nltk pytest race condition with FileLock | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | Attempts to resolve the flaky test issue reported on Slack.
When `nltk.download('punkt')` is run in multiple processes, bad things happen. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7515/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7515/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7515",
"html_url": "https://github.com/huggingface/transformers/pull/7515",
"diff_url": "https://github.com/huggingface/transformers/pull/7515.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7515.patch",
"merged_at": 1601571070000
} |
https://api.github.com/repos/huggingface/transformers/issues/7514 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7514/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7514/comments | https://api.github.com/repos/huggingface/transformers/issues/7514/events | https://github.com/huggingface/transformers/issues/7514 | 713,002,167 | MDU6SXNzdWU3MTMwMDIxNjc= | 7,514 | [Longformer] Output both local attentions and global attentions when `output_attentions=True` -> Good Second Issue | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"I am working on a pull request to address this. I don't see any major challenge so far, but this made me realize how much `attentions` in Bert-like models and in Longformers are different. Why not replace `attentions` in the Longformer by `local_attentions`?\r\n\r\nThis means that the interface of Longformers would become incompatible with every other Transformer, but maybe it should be? I don't think that there is a way to plug Longformer `attentions` into a code that expects Bert-like `attentions` and get meaningful results, so users always have to write a special case for Longformers if they use them. As is, the risk is that they get bogus output and won't realize it until they carefully read the doc (that is not yet written).\r\n\r\nWhat are your thoughts on this @patrickvonplaten?",
"I have made the [pull request](https://github.com/huggingface/transformers/pull/7562).\r\n\r\nI checked that the Longformer tests passed with my changes, and I added one more test to check the output of attention probabilities.\r\n\r\nQuite stupidly I made the pull request to the __master__ branch, I am sorry about this. I left it as is to avoid duplicating pull requests for now. You can reject it and I will make a cleaner pull request to a separate branch.\r\n",
"sorry to have been so super inactive on this issue :-/ I will find time to solve it in ~1 week :-) . This issue is related as well: https://github.com/huggingface/transformers/pull/8007/files#r514633097.",
"No worries, there is no hurry on my side. Anyway, the issue is a little trickier than it looks because you guys have to decide how to encode attention probabilities when they are too large to be represented by a dense matrix. Let me know if there is anything I can do to help.",
"Hi @patrickvonplaten. I did not use the 🤗 Transformers since our discussion in November 2020. Today I came back to it (`transformers` version: 4.4.2) and I realized that this issue is still not completely solved. I could open a new issue, but I believe that the fix is really simple so I hope we can address it here: In some models, the global attentions are computed, stored in `outputs`, but at the very last stage they are not returned.\r\n\r\nIf I am not mistaken, the issue is in [modeling_longformer.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/longformer/modeling_longformer.py). At lines 1784-1789 the code is\r\n\r\n return LongformerMaskedLMOutput(\r\n loss=masked_lm_loss,\r\n logits=prediction_scores,\r\n hidden_states=outputs.hidden_states,\r\n attentions=outputs.attentions,\r\n )\r\n\r\nbut I think it should be\r\n\r\n return LongformerMaskedLMOutput(\r\n loss=masked_lm_loss,\r\n logits=prediction_scores,\r\n hidden_states=outputs.hidden_states,\r\n attentions=outputs.attentions,\r\n global_attentions=outputs.global_attentions, # <=====\r\n )\r\n\r\nThe same goes for lines 1876 and 2124 (but it is fine for lines 2029 and 2235).\r\n",
"This sounds correct to me! Would you mind opening a new PR? ",
"I will do it, no problem.",
"I made a minimal pull request https://github.com/huggingface/transformers/pull/10906."
] | 1,601 | 1,616 | 1,604 | MEMBER | null | # 🚀 Feature request
**Good Second Issue** - A more advanced issue for contributors who want to dive more into Longformer's attention mechanism.
Longformer currently only outputs global attentions, which is suboptimal because users might be interested in the local attentions as well. I propose to change the "output_attention" logic as follows in longformer:
`attentions` should correspond to the "local" attentions and then we'll add a new output type `global_attention` that contains the global_attentions. This is consistent with the naming of `attention_mask` and `global_attention_mask` IMO and the cleanest way to implement the feature.
Implementing this feature would mean that Longformer will require its own `ModelOutput` class =>
`BaseModelOutput,` => `LongformerBaseModelOutput` or `BaseModelOutputWithGlobalAttention` (prefer the first name though)
`BaseModelOutputWithPooling,` => ...
Also some tests will have to be adapted.
This is a slightly more difficult issue, so I'm happy to help on it. One should understand the difference between local and global attention and how Longformer's attention is different to *e.g.* Bert's attention in general.
For more detail check out discussion here: https://github.com/huggingface/transformers/issues/5646 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7514/timeline | completed | null | null |
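As context for the feature request above, here is a hedged sketch of what a Longformer-specific output class with a separate `global_attentions` field could look like. The field names follow the issue's suggestion; the real class in `transformers` may differ.

```python
# Hypothetical sketch of a Longformer-style output class carrying both local
# and global attentions, as proposed in the issue above. Names are illustrative.
from dataclasses import dataclass
from typing import Any, Optional, Tuple

@dataclass
class LongformerBaseModelOutput:
    last_hidden_state: Any = None
    hidden_states: Optional[Tuple[Any, ...]] = None
    attentions: Optional[Tuple[Any, ...]] = None         # local attentions
    global_attentions: Optional[Tuple[Any, ...]] = None  # new, separate field

out = LongformerBaseModelOutput(last_hidden_state=[0.1, 0.2],
                                global_attentions=([0.5],))
print(out.global_attentions)  # -> ([0.5],)
```

Keeping `attentions` for the local attentions and adding `global_attentions` alongside it mirrors the `attention_mask` / `global_attention_mask` naming mentioned in the issue.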
https://api.github.com/repos/huggingface/transformers/issues/7513 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7513/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7513/comments | https://api.github.com/repos/huggingface/transformers/issues/7513/events | https://github.com/huggingface/transformers/pull/7513 | 712,987,292 | MDExOlB1bGxSZXF1ZXN0NDk2MzgwNDE0 | 7,513 | [Attention Mask] Fix data type | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,601 | 1,601 | MEMBER | null | # What does this PR do?
Fix data type error introduced by PR #7474. My bad! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7513/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7513",
"html_url": "https://github.com/huggingface/transformers/pull/7513",
"diff_url": "https://github.com/huggingface/transformers/pull/7513.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7513.patch",
"merged_at": 1601568942000
} |
https://api.github.com/repos/huggingface/transformers/issues/7512 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7512/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7512/comments | https://api.github.com/repos/huggingface/transformers/issues/7512/events | https://github.com/huggingface/transformers/issues/7512 | 712,956,184 | MDU6SXNzdWU3MTI5NTYxODQ= | 7,512 | [XLNet] attention_mask / input_mask - Why two `attention_mask` inputs? | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
},
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
},
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [
"I agree! Deprecation until v4.0.0 seems like a reasonable solution,",
"(We'll need at least one release with it before 4.0.0 if we want to remove the deprecation at 4.0.0)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,607 | 1,607 | MEMBER | null | For whatever reason XLNet accepts both an `attention_mask` and an `input_mask`. As far as I understand, `attention_mask` = 1 - `input_mask`. I don't think having the same input twice (one being the inverse of the other) has any advantage. We should remove `input_mask` IMO (first deprecate, then remove). Also, `attention_mask` should be put in the 2nd position to make the model compatible with torchscript. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7512/timeline | completed | null | null |
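The `attention_mask` / `input_mask` relationship described in the XLNet issue above can be illustrated with a minimal sketch (plain Python lists, not the real XLNet API): each mask is a binary sequence, and one is the elementwise complement of the other.

```python
# Illustration of the attention_mask / input_mask duality noted in the issue:
# attention_mask marks tokens to ATTEND to (1 = keep), while input_mask marks
# tokens to IGNORE (1 = masked out), so input_mask = 1 - attention_mask.

def input_mask_from_attention_mask(attention_mask):
    """Convert a 1-means-keep attention mask into a 1-means-ignore input mask."""
    return [1 - value for value in attention_mask]

attention_mask = [1, 1, 1, 0, 0]  # three real tokens, two padding positions
input_mask = input_mask_from_attention_mask(attention_mask)
print(input_mask)  # -> [0, 0, 0, 1, 1]
```

Since either mask is fully determined by the other, accepting both adds no expressive power, which is the issue's argument for removing `input_mask`.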
https://api.github.com/repos/huggingface/transformers/issues/7511 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7511/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7511/comments | https://api.github.com/repos/huggingface/transformers/issues/7511/events | https://github.com/huggingface/transformers/issues/7511 | 712,953,553 | MDU6SXNzdWU3MTI5NTM1NTM= | 7,511 | [Transfo-XL] Impossible to pass `attention_mask` to model | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
},
{
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
},
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
},
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,607 | 1,607 | MEMBER | null | It is not possible to forward an `attention_mask` to transfo-xl because Transfo-XL's forward function does not accept an `attention_mask`. This makes it impossible to do batch generation with Transfo-XL for example.
IMO, this could be implemented quite easily. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7511/timeline | completed | null | null |
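To make the batch-generation point in the Transfo-XL issue above concrete: batching variable-length sequences requires padding plus a mask marking the real tokens. The sketch below is generic (plain Python, hypothetical `pad_id`), not Transfo-XL's actual API — the issue's point is precisely that Transfo-XL's `forward` cannot accept such a mask.

```python
# Generic sketch: pad a batch of token-id sequences and build the attention
# mask that a model would need to ignore the padding. pad_id is hypothetical.

def pad_batch(sequences, pad_id=0):
    max_len = max(len(seq) for seq in sequences)
    input_ids = [seq + [pad_id] * (max_len - len(seq)) for seq in sequences]
    attention_mask = [[1] * len(seq) + [0] * (max_len - len(seq))
                      for seq in sequences]
    return input_ids, attention_mask

ids, mask = pad_batch([[5, 6, 7], [8, 9]])
print(ids)   # -> [[5, 6, 7], [8, 9, 0]]
print(mask)  # -> [[1, 1, 1], [1, 1, 0]]
```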
https://api.github.com/repos/huggingface/transformers/issues/7510 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7510/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7510/comments | https://api.github.com/repos/huggingface/transformers/issues/7510/events | https://github.com/huggingface/transformers/issues/7510 | 712,952,126 | MDU6SXNzdWU3MTI5NTIxMjY= | 7,510 | [Reformer, Longformer, Roberta, GPT2, CTRL] attention_mask should be at second argument | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
},
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
},
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,607 | 1,607 | MEMBER | null | Reformer, Longformer and Roberta have some models where the `attention_mask` is not at the second position. IMO, this was not done on purpose, but sloppy implementation. In order to use `torchscript` with `attention_mask`, the `forward()` args should be refactored. This is a breaking change however.
Additionally, GPT2 and CTRL also don't have `attention_mask` at their 2nd position in the forward pass. One can argue that it's more intuitive to have `past_key_values` at the second position, but leaving it there would also mean that torchscript + `attention_mask` can never really be used with GPT2. I think we should re-order the arguments here as well, even though this is a big breaking change.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7510/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7510/timeline | completed | null | null |
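The argument-ordering problem described in the issue above can be seen with toy forward signatures (hypothetical sketches, not the real GPT2/Reformer signatures): positional calling conventions, such as those used when tracing, bind arguments by position, so a mask that is not the second parameter silently lands in the wrong slot.

```python
# Hypothetical forward signatures illustrating why parameter order matters
# for positional (e.g. torchscript-style traced) calls.

def forward_mask_second(input_ids, attention_mask=None, past_key_values=None):
    return ("mask", attention_mask)

def forward_mask_third(input_ids, past_key_values=None, attention_mask=None):
    return ("mask", attention_mask)

mask = [1, 1, 0]
# A positional call puts the second positional value in the second slot:
print(forward_mask_second([1, 2, 3], mask))  # -> ('mask', [1, 1, 0])
# With the mask in third position, the same call misreads it:
print(forward_mask_third([1, 2, 3], mask))   # -> ('mask', None)
```

With keyword arguments both calls would work, but traced/scripted call paths that rely on positional order are exactly what the issue is concerned about.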
https://api.github.com/repos/huggingface/transformers/issues/7509 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7509/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7509/comments | https://api.github.com/repos/huggingface/transformers/issues/7509/events | https://github.com/huggingface/transformers/pull/7509 | 712,945,290 | MDExOlB1bGxSZXF1ZXN0NDk2MzQ1MTU3 | 7,509 | [examples/s2s] clean up finetune_trainer | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Also \r\nhttps://github.com/huggingface/transformers/blob/a42f62d34f7b2acdb7298e586adbe9f0f28864ea/examples/seq2seq/seq2seq_trainer.py#L56\r\n\r\nwe are using `model.config` here, this breaks on multi-gpu, right ?\r\nand IMO we might not need this `assert` here ",
"```\r\nself.pad_token_id = self.model.config.pad_token_id\r\n```\r\nThis will also break.\r\nMaybe we could just stuff the config or (`pad_token_id`) on `data_args`?\r\n\r\n\r\n\r\n",
"I may start `metrics.py` and move rouge/bleu funcs and their helpers in there as well. ",
"> ```\r\n> self.pad_token_id = self.model.config.pad_token_id\r\n> ```\r\n> \r\n> This will also break.\r\n> Maybe we could just stuff the config or (`pad_token_id`) on `data_args`?\r\n\r\nwe could pass `config` directly to `init`",
"works for me! <3 config"
] | 1,601 | 1,601 | 1,601 | MEMBER | null | This PR
1. moves the `build_compute_metrics_fn` to `utils.py` because we need to be able to import it for `hparam` search with `Seq2SeqTrainer`. Could have made it top level, but since the rest of the helpers are in `utils.py`, moved it there.
2. Also moves the `Seq2SeqDataCollator` to `utils` as the dataset is also there.
@sshleifer
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7509/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7509/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7509",
"html_url": "https://github.com/huggingface/transformers/pull/7509",
"diff_url": "https://github.com/huggingface/transformers/pull/7509.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7509.patch",
"merged_at": 1601569169000
} |
https://api.github.com/repos/huggingface/transformers/issues/7508 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7508/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7508/comments | https://api.github.com/repos/huggingface/transformers/issues/7508/events | https://github.com/huggingface/transformers/pull/7508 | 712,876,140 | MDExOlB1bGxSZXF1ZXN0NDk2Mjg2ODQ1 | 7,508 | Fix Ray Tune progress_reporter kwarg | {
"login": "krfricke",
"id": 14904111,
"node_id": "MDQ6VXNlcjE0OTA0MTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/14904111?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krfricke",
"html_url": "https://github.com/krfricke",
"followers_url": "https://api.github.com/users/krfricke/followers",
"following_url": "https://api.github.com/users/krfricke/following{/other_user}",
"gists_url": "https://api.github.com/users/krfricke/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krfricke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krfricke/subscriptions",
"organizations_url": "https://api.github.com/users/krfricke/orgs",
"repos_url": "https://api.github.com/users/krfricke/repos",
"events_url": "https://api.github.com/users/krfricke/events{/privacy}",
"received_events_url": "https://api.github.com/users/krfricke/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sorry by the way for not catching these at the same time. This should be all for now though!",
"Oh I remembered seeing this last week and thinking: This is wrong, I should fix it... but forgot...\r\nThanks for following through!"
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | # What does this PR do?
There's a small error in the Ray Tune kwarg parsing: The expected argument name is `progress_reporter`, not `reporter`.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7508/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7508",
"html_url": "https://github.com/huggingface/transformers/pull/7508",
"diff_url": "https://github.com/huggingface/transformers/pull/7508.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7508.patch",
"merged_at": 1601562872000
} |
https://api.github.com/repos/huggingface/transformers/issues/7507 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7507/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7507/comments | https://api.github.com/repos/huggingface/transformers/issues/7507/events | https://github.com/huggingface/transformers/pull/7507 | 712,843,118 | MDExOlB1bGxSZXF1ZXN0NDk2MjU4ODEz | 7,507 | Report Tune metrics in final evaluation | {
"login": "krfricke",
"id": 14904111,
"node_id": "MDQ6VXNlcjE0OTA0MTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/14904111?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krfricke",
"html_url": "https://github.com/krfricke",
"followers_url": "https://api.github.com/users/krfricke/followers",
"following_url": "https://api.github.com/users/krfricke/following{/other_user}",
"gists_url": "https://api.github.com/users/krfricke/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krfricke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krfricke/subscriptions",
"organizations_url": "https://api.github.com/users/krfricke/orgs",
"repos_url": "https://api.github.com/users/krfricke/repos",
"events_url": "https://api.github.com/users/krfricke/events{/privacy}",
"received_events_url": "https://api.github.com/users/krfricke/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Just to be sure, is it also backward compatible with older versions of Ray Tune?",
"Yes! Return values were never required, and in fact disregarded until recently.",
"Thanks for the fix then!",
"I'm confused about this PR. Why do we only evaluate once (metrics = local_trainer.evaluate()) and then report done=True directly? Some schedulers, like PopulationBasedTraining, need to evaluate many times during training to decide if the trial is good or bad."
] | 1,601 | 1,702 | 1,601 | CONTRIBUTOR | null | # What does this PR do?
This PR makes Ray Tune's tuning objective function report all metrics, not just the objective, in the final evaluation step. It also gets rid of the (unnecessary) return value. With these changes the training objective is fully compatible with Ray Tune's recently introduced strict metric checking.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7507/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7507",
"html_url": "https://github.com/huggingface/transformers/pull/7507",
"diff_url": "https://github.com/huggingface/transformers/pull/7507.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7507.patch",
"merged_at": 1601560357000
} |
https://api.github.com/repos/huggingface/transformers/issues/7506 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7506/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7506/comments | https://api.github.com/repos/huggingface/transformers/issues/7506/events | https://github.com/huggingface/transformers/pull/7506 | 712,825,406 | MDExOlB1bGxSZXF1ZXN0NDk2MjQzODE4 | 7,506 | configuration_utils: fix handling of `id2labels` | {
"login": "Lodifice",
"id": 6838133,
"node_id": "MDQ6VXNlcjY4MzgxMzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6838133?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lodifice",
"html_url": "https://github.com/Lodifice",
"followers_url": "https://api.github.com/users/Lodifice/followers",
"following_url": "https://api.github.com/users/Lodifice/following{/other_user}",
"gists_url": "https://api.github.com/users/Lodifice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lodifice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lodifice/subscriptions",
"organizations_url": "https://api.github.com/users/Lodifice/orgs",
"repos_url": "https://api.github.com/users/Lodifice/repos",
"events_url": "https://api.github.com/users/Lodifice/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lodifice/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, as you can see in [this example NER configuration file](https://s3.amazonaws.com/models.huggingface.co/bert/dslim/bert-base-NER/config.json), the `id2label` and `label2id` are actually dictionaries. \r\n\r\nInstead of doing the change you propose, changing the documentation to reflect that would be better. Thank you!",
"Wow, seems like someone else already fixed the documentation in the mean time."
] | 1,601 | 1,602 | 1,602 | NONE | null | # What does this PR do?
The parameter `id2labels` of class `PretrainedConfig` is documented as `List[str]`, so enumerate() should be used rather
than dict.items() in the constructor.
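A minimal sketch of that dispatch (illustrative only; `normalize_id2label` is a hypothetical helper name, not the actual `configuration_utils` code):

```python
def normalize_id2label(id2label):
    # Accept either documented form: a list indexed by label id,
    # or an explicit {id: label} mapping (possibly with string keys).
    if isinstance(id2label, list):
        return dict(enumerate(id2label))  # ["NEG", "POS"] -> {0: "NEG", 1: "POS"}
    return {int(key): value for key, value in id2label.items()}
```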
Since a lot of code (including test code) passes `id2labels` as a dict, enumerate() is only used if it is indeed a list. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7506/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7506",
"html_url": "https://github.com/huggingface/transformers/pull/7506",
"diff_url": "https://github.com/huggingface/transformers/pull/7506.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7506.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7505 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7505/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7505/comments | https://api.github.com/repos/huggingface/transformers/issues/7505/events | https://github.com/huggingface/transformers/pull/7505 | 712,824,997 | MDExOlB1bGxSZXF1ZXN0NDk2MjQzNDg2 | 7,505 | added script for fine-tuning roberta for sentiment analysis task | {
"login": "DhavalTaunk08",
"id": 31320833,
"node_id": "MDQ6VXNlcjMxMzIwODMz",
"avatar_url": "https://avatars.githubusercontent.com/u/31320833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DhavalTaunk08",
"html_url": "https://github.com/DhavalTaunk08",
"followers_url": "https://api.github.com/users/DhavalTaunk08/followers",
"following_url": "https://api.github.com/users/DhavalTaunk08/following{/other_user}",
"gists_url": "https://api.github.com/users/DhavalTaunk08/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DhavalTaunk08/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DhavalTaunk08/subscriptions",
"organizations_url": "https://api.github.com/users/DhavalTaunk08/orgs",
"repos_url": "https://api.github.com/users/DhavalTaunk08/repos",
"events_url": "https://api.github.com/users/DhavalTaunk08/events{/privacy}",
"received_events_url": "https://api.github.com/users/DhavalTaunk08/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | # What does this PR do?
Added a script in community notebooks that fine-tunes RoBERTa for the sentiment analysis task.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7505/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7505/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7505",
"html_url": "https://github.com/huggingface/transformers/pull/7505",
"diff_url": "https://github.com/huggingface/transformers/pull/7505.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7505.patch",
"merged_at": 1601884636000
} |
https://api.github.com/repos/huggingface/transformers/issues/7504 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7504/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7504/comments | https://api.github.com/repos/huggingface/transformers/issues/7504/events | https://github.com/huggingface/transformers/pull/7504 | 712,822,625 | MDExOlB1bGxSZXF1ZXN0NDk2MjQxNTk1 | 7,504 | added script for fine-tuning roberta for sentiment analysis task | {
"login": "DhavalTaunk08",
"id": 31320833,
"node_id": "MDQ6VXNlcjMxMzIwODMz",
"avatar_url": "https://avatars.githubusercontent.com/u/31320833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DhavalTaunk08",
"html_url": "https://github.com/DhavalTaunk08",
"followers_url": "https://api.github.com/users/DhavalTaunk08/followers",
"following_url": "https://api.github.com/users/DhavalTaunk08/following{/other_user}",
"gists_url": "https://api.github.com/users/DhavalTaunk08/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DhavalTaunk08/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DhavalTaunk08/subscriptions",
"organizations_url": "https://api.github.com/users/DhavalTaunk08/orgs",
"repos_url": "https://api.github.com/users/DhavalTaunk08/repos",
"events_url": "https://api.github.com/users/DhavalTaunk08/events{/privacy}",
"received_events_url": "https://api.github.com/users/DhavalTaunk08/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7504/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7504/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7504",
"html_url": "https://github.com/huggingface/transformers/pull/7504",
"diff_url": "https://github.com/huggingface/transformers/pull/7504.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7504.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7503 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7503/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7503/comments | https://api.github.com/repos/huggingface/transformers/issues/7503/events | https://github.com/huggingface/transformers/issues/7503 | 712,794,608 | MDU6SXNzdWU3MTI3OTQ2MDg= | 7,503 | Turning the SQuAD dataset class into an iterator to save ram and redistribute time | {
"login": "mariusjohan",
"id": 49961316,
"node_id": "MDQ6VXNlcjQ5OTYxMzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/49961316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariusjohan",
"html_url": "https://github.com/mariusjohan",
"followers_url": "https://api.github.com/users/mariusjohan/followers",
"following_url": "https://api.github.com/users/mariusjohan/following{/other_user}",
"gists_url": "https://api.github.com/users/mariusjohan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariusjohan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariusjohan/subscriptions",
"organizations_url": "https://api.github.com/users/mariusjohan/orgs",
"repos_url": "https://api.github.com/users/mariusjohan/repos",
"events_url": "https://api.github.com/users/mariusjohan/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariusjohan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,607 | 1,607 | NONE | null | # 🚀 Feature request
When I'm using the SquadDataset class I sometimes run out of RAM in my Colab session. It also takes a lot of time to preprocess all of the data up front, so we could instead preprocess each example on the fly in `__getitem__`. I think we should turn the examples into an iterator rather than a list, and when using `__getitem__` we should then convert each example to a feature.
## Motivation
As I mentioned, this can save a lot of ram and redistribute the time, so we do the preprocessing simultaneously.
When I run this snippet in Google Colab it runs out of memory even though I've 12 GB of ram
```python
train_dataset = SquadDataset(
args=data_args,
tokenizer=tokenizer,
cache_dir=model_args.cache_dir)
```
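The lazy conversion described above could be sketched roughly like this (a framework-free illustration; `LazyFeatureDataset` and `convert_fn` are hypothetical names, not the actual `transformers` API):

```python
class LazyFeatureDataset:
    """Map-style dataset sketch: keep the cheap raw examples and turn
    each one into a feature only when __getitem__ asks for it."""

    def __init__(self, examples, convert_fn):
        self.examples = examples      # raw examples, cheap to hold in RAM
        self.convert_fn = convert_fn  # expensive example -> feature step

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        # Conversion happens lazily, one item at a time, so peak RAM
        # stays proportional to the batch size instead of the dataset.
        return self.convert_fn(self.examples[idx])


if __name__ == "__main__":
    dataset = LazyFeatureDataset(["a", "bb"], convert_fn=len)
    print(len(dataset), dataset[1])  # prints: 2 2
```

A `torch.utils.data.DataLoader` can consume an object like this directly, since a map-style dataset only needs `__len__` and `__getitem__`.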
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
I can pretty much code all of it if you believe it would be a necessary feature.
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7503/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7503/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7502 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7502/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7502/comments | https://api.github.com/repos/huggingface/transformers/issues/7502/events | https://github.com/huggingface/transformers/issues/7502 | 712,790,184 | MDU6SXNzdWU3MTI3OTAxODQ= | 7,502 | Functionality to pass first few tokens as input to the decoder in T5 model | {
"login": "ayushtiku5",
"id": 40797286,
"node_id": "MDQ6VXNlcjQwNzk3Mjg2",
"avatar_url": "https://avatars.githubusercontent.com/u/40797286?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayushtiku5",
"html_url": "https://github.com/ayushtiku5",
"followers_url": "https://api.github.com/users/ayushtiku5/followers",
"following_url": "https://api.github.com/users/ayushtiku5/following{/other_user}",
"gists_url": "https://api.github.com/users/ayushtiku5/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayushtiku5/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayushtiku5/subscriptions",
"organizations_url": "https://api.github.com/users/ayushtiku5/orgs",
"repos_url": "https://api.github.com/users/ayushtiku5/repos",
"events_url": "https://api.github.com/users/ayushtiku5/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayushtiku5/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,603 | 1,603 | CONTRIBUTOR | null | I am finetuning the T5 model on a downstream sequence to sequence task. I wanted to know if it is possible to pass first few tokens as input to the T5 decoder during inference (like it is done in text generation models) apart from the input provided to the encoder? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7502/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7502/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7501 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7501/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7501/comments | https://api.github.com/repos/huggingface/transformers/issues/7501/events | https://github.com/huggingface/transformers/pull/7501 | 712,773,557 | MDExOlB1bGxSZXF1ZXN0NDk2MjAxNDMz | 7,501 | Add GPT2ForSequenceClassification based on DialogRPT | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,602 | 1,602 | MEMBER | null | # What does this PR do?
This PR implements `GPT2ForSequenceClassification` in order to support DialogRPT.
Closes https://github.com/huggingface/transformers/issues/7493.
`GPT2ForSequenceClassification` uses the last token in order to do the classification, as other causal models (e.g. GPT-1) do.
Since it does classification on the last token, it needs to know the position of the last token. If a `pad_token_id` is defined in the configuration, it finds the last token that is not a pad token in each row. If no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the pad tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in each row of the batch).
Here's how to replicate the results shown on the [original implementation](https://github.com/golsun/DialogRPT#use-rankers-only):
```py
from transformers import GPT2Tokenizer, GPT2ForSequenceClassification
import torch
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2ForSequenceClassification.from_pretrained("directory_where_pth_config_are_saved")
model_input = tokenizer.encode("I love NLP!<|endoftext|>Here’s a free textbook (URL) in case anyone needs it.", return_tensors="pt")
result = model(model_input, return_dict=True)
final_output = torch.sigmoid(result.logits)
print(final_output)
# tensor([[0.6129]], grad_fn=<SigmoidBackward>)
```
Once this PR is merged I'll open two "Good first issues":
- Implement `GPT2ForSequenceClassification` in TF2
- Implement sequence classification models for other causal transformers | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7501/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/7501/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7501",
"html_url": "https://github.com/huggingface/transformers/pull/7501",
"diff_url": "https://github.com/huggingface/transformers/pull/7501.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7501.patch",
"merged_at": 1602019882000
} |
https://api.github.com/repos/huggingface/transformers/issues/7500 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7500/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7500/comments | https://api.github.com/repos/huggingface/transformers/issues/7500/events | https://github.com/huggingface/transformers/issues/7500 | 712,749,292 | MDU6SXNzdWU3MTI3NDkyOTI= | 7,500 | Truncated Outputs while finetuning 'bart-base' on XSUM [Summarization Task] | {
"login": "yashgupta-7",
"id": 45476875,
"node_id": "MDQ6VXNlcjQ1NDc2ODc1",
"avatar_url": "https://avatars.githubusercontent.com/u/45476875?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yashgupta-7",
"html_url": "https://github.com/yashgupta-7",
"followers_url": "https://api.github.com/users/yashgupta-7/followers",
"following_url": "https://api.github.com/users/yashgupta-7/following{/other_user}",
"gists_url": "https://api.github.com/users/yashgupta-7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yashgupta-7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yashgupta-7/subscriptions",
"organizations_url": "https://api.github.com/users/yashgupta-7/orgs",
"repos_url": "https://api.github.com/users/yashgupta-7/repos",
"events_url": "https://api.github.com/users/yashgupta-7/events{/privacy}",
"received_events_url": "https://api.github.com/users/yashgupta-7/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Try adding `decoder_start_token_id=2` to `best_tfmr/config.json` and let me know if that changes anything!",
"I just did the above-mentioned change and decoded (run_eval.py) without training any further. Still, the issue persists. ",
"Ok, you could try re-training/training more with new training code. We can't reproduce this on `master`",
"okay, thanks!"
] | 1,601 | 1,602 | 1,602 | NONE | null | Hey!
I am trying to fine-tune the bart-base model on XSUM using the standard commands. In the test-generations.txt file, the outputs I am getting after a few epochs (2-3) are arbitrarily truncated. Here is the exact command I am using:
`./finetune.sh --data_dir $XSUM_DIR --train_batch_size=8 --eval_batch_size=8 --output_dir=xsum_results --num_train_epochs 1 --model_name_or_path facebook/bart-base`
xsum_results is the directory I created and I am running this inside the examples/seq2seq directory.
I referred to these issues but could not find anything that could help me:
https://github.com/huggingface/transformers/issues/5656
https://github.com/huggingface/transformers/issues/6502
Some examples of the outputs I am getting:
German carmaker Daimler has reported a rise in sales of its cars and trucks in
Angelina Jolie has been honoured at a film festival in Bosnia, where she was the
Cuba's President Raul Castro has said he will introduce a series of reforms to the country
People who are suicidal should be given more help to stop them from jumping, a charity has
My pip freeze:
`absl-py==0.10.0
cachetools==4.1.1
certifi==2020.6.20
chardet==3.0.4
click==7.1.2
dill==0.3.2
filelock==3.0.12
future==0.18.2
gitdb==4.0.5
GitPython==3.1.8
google-auth==1.21.3
google-auth-oauthlib==0.4.1
grpcio==1.32.0
idna==2.10
joblib==0.16.0
Markdown==3.2.2
nlp==0.4.0
nltk==3.5
numpy==1.19.2
oauthlib==3.1.0
packaging==20.4
pandas==1.1.2
Pillow==7.2.0
portalocker==2.0.0
protobuf==3.13.0
pyarrow==1.0.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyparsing==2.4.7
python-dateutil==2.8.1
pytorch-lightning==0.9.0
pytz==2020.1
PyYAML==5.3.1
regex==2020.9.27
requests==2.24.0
requests-oauthlib==1.3.0
rouge-score==0.0.4
rsa==4.6
sacrebleu==1.4.14
sacremoses==0.0.43
sentencepiece==0.1.91
six==1.15.0
smmap==3.0.4
tensorboard==2.2.0
tensorboard-plugin-wit==1.7.0
tokenizers==0.8.1rc2
torch==1.6.0+cu101
torchvision==0.7.0+cu101
tqdm==4.49.0
transformers @ git+https://github.com/yashgupta-7/transformers@9e68d075a4100906509170498480823e7e61874a
urllib3==1.25.10
Werkzeug==1.0.1
xxhash==2.0.0
zipp==3.2.0`
Here is the recommended setting for XSUM.
`--max_target_length=60 --val_max_target_length=60 --test_max_target_length=100`
But the default values are all higher than these, so this cannot be a problem in my opinion.
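Since the examples above all cut off around the same length, one thing I can check locally is the generation-related fields that were saved into `best_tfmr/config.json` (a minimal sketch using only the standard library; exactly which keys are present depends on the checkpoint, so treat the key list as an assumption):

```python
import json

# Inspect generation-related settings saved in a model's config.json.
# Which of these keys exist depends on the checkpoint being inspected.
def generation_settings(config_path):
    with open(config_path) as f:
        cfg = json.load(f)
    keys = ("max_length", "min_length", "num_beams",
            "early_stopping", "decoder_start_token_id")
    return {k: cfg.get(k) for k in keys}

# Stand-in config instead of a real file, for illustration:
example = {"max_length": 20, "min_length": 0, "num_beams": 4}
print({k: example.get(k) for k in ("max_length", "min_length", "num_beams")})
# → {'max_length': 20, 'min_length': 0, 'num_beams': 4}
```

If `max_length` in the saved config were as small as 20, outputs like the ones above would be cut off regardless of the `--max_target_length` flags passed at training time.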
It would be great if someone can point me to the potential problems that may be the reason. I am looking forward to fine-tuning on a custom dataset and really want the standard XSUM to get working! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7500/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7500/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7499 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7499/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7499/comments | https://api.github.com/repos/huggingface/transformers/issues/7499/events | https://github.com/huggingface/transformers/issues/7499 | 712,736,345 | MDU6SXNzdWU3MTI3MzYzNDU= | 7,499 | german distilbert not available? | {
"login": "datistiquo",
"id": 47474379,
"node_id": "MDQ6VXNlcjQ3NDc0Mzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/47474379?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/datistiquo",
"html_url": "https://github.com/datistiquo",
"followers_url": "https://api.github.com/users/datistiquo/followers",
"following_url": "https://api.github.com/users/datistiquo/following{/other_user}",
"gists_url": "https://api.github.com/users/datistiquo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/datistiquo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/datistiquo/subscriptions",
"organizations_url": "https://api.github.com/users/datistiquo/orgs",
"repos_url": "https://api.github.com/users/datistiquo/repos",
"events_url": "https://api.github.com/users/datistiquo/events{/privacy}",
"received_events_url": "https://api.github.com/users/datistiquo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Or is this model only available for pytorch and not TF?",
"https://huggingface.co/distilbert-base-german-cased\r\n\r\n^^ currently only the Pytorch weights, but you can load into TF pretty easily (follow the README/doc). If needed we can upload the converted TF weights (cc @stefan-it who I believe trained this model? This predates our user/organization namespaces)",
"And I've added a `de` language tag so that the model is discoverable via https://huggingface.co/models?filter=de&search=distilbert\r\n\r\nThanks for reporting!",
"Oh, we've an open issue regarding to the TF checkpoint 😅\r\n\r\nhttps://github.com/dbmdz/berts/issues/8\r\n\r\n@julien-c could you convert the model and upload it (not sure if I've access to root S3), thanks :heart: \r\n\r\n",
"> but you can load into TF pretty easily (follow the README/doc).\r\n\r\nCOuld you show me where I can read about htis? Cab just find from the getting stared tour:\r\n\r\n`bert_model = TFDistilBertModel.from_pretrained('distilbert-base-german-cased', from_tf=False)\r\n`\r\n...but this does not work...",
"`from_pt=True`?",
"Ok, great. So easy.^^ "
] | 1,601 | 1,601 | 1,601 | NONE | null | I am trying to use the distilbert-base-german-cased model,
which is also listed here:
https://huggingface.co/transformers/pretrained_models.html
but I get the message that this model is not available. It also does not seem to be listed here:
https://huggingface.co/models?filter=tf,de
Is this just currently the case? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7499/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7499/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7498 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7498/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7498/comments | https://api.github.com/repos/huggingface/transformers/issues/7498/events | https://github.com/huggingface/transformers/pull/7498 | 712,708,230 | MDExOlB1bGxSZXF1ZXN0NDk2MTQ1MjI2 | 7,498 | Update README.md | {
"login": "akshayrkg",
"id": 31383758,
"node_id": "MDQ6VXNlcjMxMzgzNzU4",
"avatar_url": "https://avatars.githubusercontent.com/u/31383758?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akshayrkg",
"html_url": "https://github.com/akshayrkg",
"followers_url": "https://api.github.com/users/akshayrkg/followers",
"following_url": "https://api.github.com/users/akshayrkg/following{/other_user}",
"gists_url": "https://api.github.com/users/akshayrkg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akshayrkg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akshayrkg/subscriptions",
"organizations_url": "https://api.github.com/users/akshayrkg/orgs",
"repos_url": "https://api.github.com/users/akshayrkg/repos",
"events_url": "https://api.github.com/users/akshayrkg/events{/privacy}",
"received_events_url": "https://api.github.com/users/akshayrkg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | Making transformers readme more robust.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7498/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7498/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7498",
"html_url": "https://github.com/huggingface/transformers/pull/7498",
"diff_url": "https://github.com/huggingface/transformers/pull/7498.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7498.patch",
"merged_at": 1601552527000
} |
https://api.github.com/repos/huggingface/transformers/issues/7497 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7497/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7497/comments | https://api.github.com/repos/huggingface/transformers/issues/7497/events | https://github.com/huggingface/transformers/issues/7497 | 712,545,062 | MDU6SXNzdWU3MTI1NDUwNjI= | 7,497 | How to generate data using beam search from a custom gpt2 model? | {
"login": "nrjvarshney",
"id": 19836137,
"node_id": "MDQ6VXNlcjE5ODM2MTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/19836137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nrjvarshney",
"html_url": "https://github.com/nrjvarshney",
"followers_url": "https://api.github.com/users/nrjvarshney/followers",
"following_url": "https://api.github.com/users/nrjvarshney/following{/other_user}",
"gists_url": "https://api.github.com/users/nrjvarshney/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nrjvarshney/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nrjvarshney/subscriptions",
"organizations_url": "https://api.github.com/users/nrjvarshney/orgs",
"repos_url": "https://api.github.com/users/nrjvarshney/repos",
"events_url": "https://api.github.com/users/nrjvarshney/events{/privacy}",
"received_events_url": "https://api.github.com/users/nrjvarshney/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Here's an example using beam search with GPT-2:\r\n\r\n```py\r\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\nmodel = GPT2LMHeadModel.from_pretrained(\"gpt2\")\r\n\r\ninput_ids = tokenizer(\"The day starts with\", return_tensors='pt')['input_ids']\r\nprint(tokenizer.decode(model.generate(input_ids, num_beams=3)[0]))\r\n```\r\nResult:\r\n```\r\nThe day starts with a long walk to the top of the hill.\r\n\r\nThe first thing you\r\n```",
"[Here's the doc for the `generate` method](https://huggingface.co/transformers/main_classes/model.html#transformers.generation_utils.GenerationMixin.generate)",
"Thanks @LysandreJik for the response but I have a custom model, I want to know how can I generate using my model",
"@LysandreJik - How can I generate sentences using my custom model?",
"I would recommend you check the[ source code for the `generate` method](https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L111) and see how the beam search is implemented. It is not trivial, however.\r\n\r\nMaybe @sshleifer and @patrickvonplaten have better tips on how to best do this.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@nrjvarshney Did you find any suitable way to use `generate` function for a custom model? I am facing a similar issue with a model of mine, and would be really grateful if you could let me know how to solve the issue. "
] | 1,601 | 1,655 | 1,607 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiasts can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
I have a custom model with classification and an LM head.
```python
self.config = AutoConfig.from_pretrained("gpt2", num_labels=3)
self.base_model = AutoModel.from_pretrained("gpt2", config=self.config)
self.classifier = nn.Sequential(
    nn.Linear(self.config.hidden_size, self.config.num_labels),
)
self.lm_head = nn.Linear(
    self.base_model.config.n_embd, self.base_model.config.vocab_size, bias=False
)
```
I want to generate sentences with this model (given an initial prefix) via beam search.
How can I achieve that?
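The closest I have gotten is a hand-rolled beam search over next-token log-probabilities. Below is a framework-free sketch of the algorithm itself; `toy_next_log_probs` is a stand-in for a real forward pass through my model's `lm_head` (that callback, and its name, are made up for illustration):

```python
import math

def beam_search(next_log_probs, prefix, beam_size=3, steps=5, eos="<eos>"):
    """Generic beam search. `next_log_probs(seq)` must return a
    {token: log_prob} mapping for the next position."""
    beams = [(0.0, list(prefix))]
    for _ in range(steps):
        candidates = []
        for score, seq in beams:
            if seq[-1] == eos:          # finished hypotheses carry over
                candidates.append((score, seq))
                continue
            for tok, lp in next_log_probs(seq).items():
                candidates.append((score + lp, seq + [tok]))
        beams = sorted(candidates, key=lambda c: -c[0])[:beam_size]
    return beams[0][1]

# Toy distribution: prefers repeating the last token, or stopping.
def toy_next_log_probs(seq):
    return {seq[-1]: math.log(0.6), "<eos>": math.log(0.4)}

print(beam_search(toy_next_log_probs, ["hello"], beam_size=2, steps=3))
# → ['hello', '<eos>']
```

In the real model, the callback would run the base model plus `lm_head` on the token ids so far and return log-softmax scores for the top-k candidate tokens.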
I know that the LM-with-double-heads model exists, but it is not a fit for my use case. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7497/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7497/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7496 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7496/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7496/comments | https://api.github.com/repos/huggingface/transformers/issues/7496/events | https://github.com/huggingface/transformers/issues/7496 | 712,543,515 | MDU6SXNzdWU3MTI1NDM1MTU= | 7,496 | BertforSequenceClassification MSELoss() without normalizing using sigmoid/softmax | {
"login": "liusiyi641",
"id": 34046462,
"node_id": "MDQ6VXNlcjM0MDQ2NDYy",
"avatar_url": "https://avatars.githubusercontent.com/u/34046462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liusiyi641",
"html_url": "https://github.com/liusiyi641",
"followers_url": "https://api.github.com/users/liusiyi641/followers",
"following_url": "https://api.github.com/users/liusiyi641/following{/other_user}",
"gists_url": "https://api.github.com/users/liusiyi641/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liusiyi641/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liusiyi641/subscriptions",
"organizations_url": "https://api.github.com/users/liusiyi641/orgs",
"repos_url": "https://api.github.com/users/liusiyi641/repos",
"events_url": "https://api.github.com/users/liusiyi641/events{/privacy}",
"received_events_url": "https://api.github.com/users/liusiyi641/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"> * `transformers` version: 3.3.0\r\n> \r\n> * Platform: Darwin-18.7.0-x86_64-i386-64bit\r\n> \r\n> * Python version: 3.7.4\r\n> \r\n> * PyTorch version (GPU?): 1.6.0 (False)\r\n> \r\n> * Tensorflow version (GPU?): not installed (NA)\r\n> \r\n> * Using GPU in script?: No\r\n> \r\n> * Using distributed or parallel set-up in script?: No\r\n> \r\n> \r\n> @LysandreJik @sshleifer\r\n> \r\n> I'm trying to use BartForSequenceClassification() to do a regression, which should use a MSELoss() when (num_labels=1), as stated in the documentation. However, when I went over the source code for modeling_Bart.py, it didn't seem like the regression functionality with MSELoss() was added in the source code. Within the class BartForSequenceClassfication(), It only has CrossEntropyLoss() to do classification. I wonder if you will be adding this functionality.\r\n> \r\n> So I went to BertForSequenceClassification() class to see how it did it, and I found that it might have a problem.\r\n> \r\n> In the class BertForSequenceClassification(BertPreTrainedModel) (line 1352):\r\n> \r\n> if labels is not None:\r\n> if self.num_labels == 1:\r\n> # We are doing regression\r\n> loss_fct = MSELoss()\r\n> loss = loss_fct(logits.view(-1), labels.view(-1))\r\n> else:\r\n> loss_fct = CrossEntropyLoss()\r\n> loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))\r\n> \r\n> Here the MSELoss() and CrossEntropyLoss() are both loss functions from pytorch.\r\n> \r\n> So you passed in the logits, which are unnormalized probabilities, to both of the loss functions. It is ok to do so for the CrossEntropyLoss(), since from pytorch's documentation they expect the inputs to be unnormalized logits and from their source code they first log softmax it before actually computing the loss, but I don't think it's okay to do the same for the MSELoss() functions. 
If you look at their implementation of it, it did not normalize it first using either softmax or sigmoid, and they also indicate that the function expects unnormalized logits in their documentation. I'm not sure if this behavior is intended (which is unlikely since we want to normalize it first before loss functions or there could be gradient explosions), but I think this confusion/inconsistency from pytorch may cause a problem and you probably want to change it. Please correct me if I'm wrong and thanks for this wonderful package!\r\n> \r\n> Thanks!\r\n\r\nI am trying to use BertforSequeneClassification to do regression too and faced the same issue. Do u have any walk around? thx! ",
"@liusiyi641 Just to clarify - do you expect to rewrite that code snippet as follows:\r\n```python\r\nif labels is not None:\r\n if self.num_labels == 1:\r\n # We are doing regression\r\n loss_fct = MSELoss()\r\n normalizer = nn.Sigmoid()\r\n logits = normalizer(logits) * (B - A) + A # we expect the regression values be on [A, B] interval\r\n loss = loss_fct(logits.view(-1), labels.view(-1))\r\n else:\r\n loss_fct = CrossEntropyLoss()\r\n loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))\r\n```\r\nright?\r\n\r\nI tried this change but it doesn't have any influence on the training process in my case: the same accuracy, the same behavior of the learning curve.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,601 | 1,619 | 1,619 | NONE | null | - `transformers` version: 3.3.0
- Platform: Darwin-18.7.0-x86_64-i386-64bit
- Python version: 3.7.4
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
@LysandreJik @sshleifer
I'm trying to use BartForSequenceClassification() to do regression, which should use an MSELoss() when num_labels=1, as stated in the documentation. However, when I went over the source code in modeling_bart.py, it didn't seem like the regression functionality with MSELoss() had been added: the BartForSequenceClassification() class only has CrossEntropyLoss() for classification. I wonder if you will be adding this functionality.
So I went to the BertForSequenceClassification() class to see how it handles this, and I found that there might be a problem.
In the class BertForSequenceClassification(BertPreTrainedModel) (line 1352):
```python
if labels is not None:
    if self.num_labels == 1:
        # We are doing regression
        loss_fct = MSELoss()
        loss = loss_fct(logits.view(-1), labels.view(-1))
    else:
        loss_fct = CrossEntropyLoss()
        loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
```
Here the MSELoss() and CrossEntropyLoss() are both loss functions from pytorch.
So the logits, which are unnormalized scores, are passed to both loss functions. That is fine for CrossEntropyLoss(): PyTorch's documentation says it expects unnormalized logits, and its implementation applies log-softmax before computing the loss. But I don't think it's okay to do the same for MSELoss(): looking at its implementation, it does not first normalize its input with either softmax or sigmoid, and its documentation gives no indication that it does so. I'm not sure whether this behavior is intended (which seems unlikely, since we usually want to normalize before the loss function, or there could be gradient explosions), but I think this inconsistency may cause a problem, and you probably want to change it. Please correct me if I'm wrong, and thanks for this wonderful package!
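For what it's worth, the workaround I am considering is to squash the logit through a sigmoid and rescale it onto the target interval [A, B] before computing the MSE. A dependency-free sketch of the idea (the interval and the numbers are made up, and this is not how the library currently computes the loss):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def scaled_prediction(logit, a, b):
    """Map an unbounded logit onto the regression interval [a, b]."""
    return a + (b - a) * sigmoid(logit)

def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

# Made-up logits/targets on a 0-5 similarity scale (STS-B style).
logits = [4.2, -1.3, 0.0]
targets = [4.5, 1.0, 2.5]
preds = [scaled_prediction(l, 0.0, 5.0) for l in logits]
print(round(mse(preds, targets), 4))
```

With this scaling the prediction can never leave [A, B], so a pathological logit cannot blow up the squared error, at the cost of saturating gradients near the interval ends.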
Thanks!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7496/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7496/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7495 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7495/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7495/comments | https://api.github.com/repos/huggingface/transformers/issues/7495/events | https://github.com/huggingface/transformers/issues/7495 | 712,417,857 | MDU6SXNzdWU3MTI0MTc4NTc= | 7,495 | quick questions about the `BertModelLMHeadModel`. | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Bert is a Masked Lanuage Model. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,607 | 1,607 | NONE | null | Hello,
I have a few questions about the `BertModelLMHeadModel`:
1. Is `BertModelLMHeadModel` used to conduct causal language modeling (next-token prediction), as is the case for the `GPT2LMHeadModel`?
2. For `GPT2LMHeadModel`, I can just specify `labels = input_ids` for convenience. Can I specify the `labels` in this way for the `BertModelLMHeadModel` as well?
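(Context for question 2: with `labels = input_ids`, `GPT2LMHeadModel` shifts the labels internally, so the prediction at position i is scored against the token at position i+1. Below is a toy sketch of that shift; whether `BertModelLMHeadModel` performs exactly the same internal shift is my assumption here, not something I have verified in the source.)

```python
def causal_lm_pairs(input_ids):
    """When labels == input_ids, next-token training scores the logits at
    position i against the token at position i + 1 (the internal shift)."""
    scored_positions = input_ids[:-1]  # positions whose logits are used
    targets = input_ids[1:]            # tokens those positions must predict
    return list(zip(scored_positions, targets))

ids = [101, 7592, 2088, 102]  # made-up token ids
print(causal_lm_pairs(ids))
# → [(101, 7592), (7592, 2088), (2088, 102)]
```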
Thanks, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7495/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7495/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7494 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7494/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7494/comments | https://api.github.com/repos/huggingface/transformers/issues/7494/events | https://github.com/huggingface/transformers/issues/7494 | 712,417,025 | MDU6SXNzdWU3MTI0MTcwMjU= | 7,494 | Is the multiple-choice head for the pre-trained `LongformerForMultipleChoice` model pre-trained? | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! This depends on the checkpoint. If you're using a checkpoint pre-trained on multiple-choice, then it's very possible that it is pre-trained. If you're using a checkpoint pre-trained on another task, it might not be pre-trained.\r\n\r\nYou should be wary of the task on which the model was trained when leveraging a model with a pre-trained head, as it might not overlap with your current task."
] | 1,601 | 1,601 | 1,601 | NONE | null | Hello,
Is the multiple-choice head for the pre-trained `LongformerForMultipleChoice` model pre-trained as well?
I am asking because, for the pre-trained `GPT2DoubleHeadsModel`, the main body of the model is trained but its multiple-choice head is not.
Is the multiple-choice head of the pre-trained `LongformerForMultipleChoice` model likewise untrained, as is the case for the `GPT2DoubleHeadsModel`?
Thank you, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7494/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7494/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7493 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7493/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7493/comments | https://api.github.com/repos/huggingface/transformers/issues/7493/events | https://github.com/huggingface/transformers/issues/7493 | 712,375,714 | MDU6SXNzdWU3MTIzNzU3MTQ= | 7,493 | Sharing Microsoft's DialogRPT (new dialog ranking model) | {
"login": "golsun",
"id": 8718898,
"node_id": "MDQ6VXNlcjg3MTg4OTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8718898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/golsun",
"html_url": "https://github.com/golsun",
"followers_url": "https://api.github.com/users/golsun/followers",
"following_url": "https://api.github.com/users/golsun/following{/other_user}",
"gists_url": "https://api.github.com/users/golsun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/golsun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/golsun/subscriptions",
"organizations_url": "https://api.github.com/users/golsun/orgs",
"repos_url": "https://api.github.com/users/golsun/repos",
"events_url": "https://api.github.com/users/golsun/events{/privacy}",
"received_events_url": "https://api.github.com/users/golsun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hi @golsun! Thanks a lot for opening an issue and offering to contribute it!\r\n\r\nIndeed, there is no `GPT2ForSequenceClassification` model in the library (yet!) I'm adding it right now with the goal of supporting DialogRPT. I'll get back to you in a bit.",
"Hi @golsun! `GPT2ForSequenceClassification` has been implemented on #7501 and I verified that I obtain the same results as you do on your README using your examples.\r\n\r\nYou should only need to upload your models on the model hub now! Some helpers regarding the configuration:\r\n\r\n- You should upload a model configuration on the hub, for every model.\r\n- You can simply copy-paste the `gpt2-medium` configuration that you can find [here](https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-medium-config.json).\r\n- You will need to add a `num_labels=1` field to these configurations.\r\n- In the `architectures` field, you should put `GPT2ForSequenceClassification`",
"wow, super fast!!! \r\nthank you @LysandreJik , I'll update my repo to reflect this once the [pull](https://github.com/huggingface/transformers/pull/7501) is merged.\r\n\r\n",
"The pul request is now merged @golsun!",
"Thank you so much @LysandreJik !\r\nI just tried `GPT2ForSequenceClassification` and it works! 👍 \r\nThen I created this [model card](https://huggingface.co/microsoft/DialogRPT-updown), but `model = AutoModelForSequenceClassification.from_pretrained(\"microsoft/DialogRPT-updown\")` gives me the following error, which can be reproduced with [this Notebook](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing):\r\n```\r\n/content/transformers/src/transformers/modeling_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 1203 config.__class__,\r\n 1204 cls.__name__,\r\n-> 1205 \", \".join(c.__name__ for c in MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING.keys()),\r\n 1206 )\r\n 1207 )\r\n\r\nValueError: Unrecognized configuration class <class 'transformers.configuration_gpt2.GPT2Config'> for this kind of AutoModel: AutoModelForSequenceClassification.\r\nModel type should be one of DistilBertConfig, AlbertConfig, CamembertConfig, XLMRobertaConfig, BartConfig, LongformerConfig, RobertaConfig, SqueezeBertConfig, BertConfig, XLNetConfig, MobileBertConfig, FlaubertConfig, XLMConfig, ElectraConfig, FunnelConfig, DebertaConfig.\r\n```\r\n",
"Indeed, this should be solved by #7630.",
"thank you @LysandreJik `AutoModelForSequenceClassification` works now.\r\nThe [inference webpage](https://huggingface.co/microsoft/DialogRPT-updown) still gives the `Unrecognized configuration class` error but I guess it will sync with the latest code soon. \r\nI'm going to introduce model card in the original repo.\r\nThanks again for the help!",
"We just updated the API inference so that it uses the latest code. I've taken the liberty to add a padding token to your models, in your configuration (`pad_token_id: 50256`) and in the `special_tokens_map.json`: `pad_token: \"<|endoftext|>\"`, as it is necessary for the models to have a padding token to run in the API inference.\r\n\r\nI've taken these values from your code [here](https://github.com/golsun/DialogRPT/blob/master/src/feeder.py#L51) and [here](https://github.com/golsun/DialogRPT/blob/master/src/feeder.py#L18). \r\n\r\nModels should now work correctly in the [inference webpage :) ](https://huggingface.co/microsoft/DialogRPT-width?text=I+like+you.+I+love+you)",
"Great! Thank you for updating the config and special_tokens_map for us! :)\r\nThe inference webpage will output a score of 1 no matter what input is. I guess it's because it outputs `softmax(logits)`, which is always 1 if `num_labels==1`. Maybe the following if-else will fix it? \r\n```\r\nif num_labels == 1:\r\n return torch.sigmoid(logits)\r\nelse:\r\n return torch.softmax(logits)\r\n```\r\nthe case `num_labels==1` follows the DialogRPT code [here](https://github.com/golsun/DialogRPT/blob/master/src/model.py#L95)",
"You're correct! Solving that in #7726.",
"Also @golsun on the inference API, you can have custom label names (instead of just `LABEL_0` here) if you set your label names in your `config.json`\r\n\r\nSee https://huggingface.co/roberta-large-mnli's config.json file for an example",
"Awesome! thank you @LysandreJik @julien-c "
] | 1,601 | 1,602 | 1,602 | NONE | null | # 🌟 New model addition
## Model description
Thanks for the awesome work!
[DialogRPT](https://github.com/golsun/DialogRPT) (Dialog Ranking Pretrained Transformers) is a set of GPT-2 based dialogue ranking models recently released with an [EMNLP paper](https://arxiv.org/abs/2009.06978) by Microsoft Research. It's a follow-up to [DialoGPT](https://huggingface.co/transformers/model_doc/dialogpt.html) (thanks for hosting it!)
The architecture is pretty simple: a `GPT2Model` followed by a `torch.nn.Linear(n_embd, 1, bias=False)`, and is implemented based on a [previous HuggingFace commit](https://github.com/huggingface/transformers/commit/4d456542e9d381090f9a00b2bcc5a4cb07f6f3f7)
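For intuition, the math of that ranking head is tiny: a bias-free linear projection of the final hidden state to a single logit, squashed through a sigmoid. The sketch below is a pure-Python illustration of just that computation (the real model uses `torch.nn.Linear` on GPT-2 hidden states; the vector sizes and weights here are made-up):

```python
import math

def dialog_rank_score(hidden_state, weight):
    """Score a (context, response) pair from the transformer's final
    hidden state: a bias-free linear layer to one logit, then a sigmoid.

    hidden_state: list of floats (stand-in for the n_embd-dim vector)
    weight: list of floats (stand-in for the Linear(n_embd, 1) weight row)
    """
    logit = sum(h * w for h, w in zip(hidden_state, weight))  # dot product, no bias
    return 1.0 / (1.0 + math.exp(-logit))  # sigmoid -> score in (0, 1)

# Toy example: the two terms cancel, so the logit is 0 and the score is 0.5.
print(dialog_rank_score([1.0, 2.0], [0.5, -0.25]))  # 0.5
```

In the real model the hidden state comes from `GPT2Model`'s last layer, but the head itself really is this one dot product plus sigmoid.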
At first, I tried to create a model card for it, but then realized that no existing model architecture in HuggingFace seems to be compatible with DialogRPT. I noticed a lot of BERT-based sequence classification models, but ours is GPT-2 based.
If there's a simple fix (or I missed something) please let me know!
If an implementation in `modeling_gpt2.py` is necessary, I'm also glad to help!
## Open source status
* [x] the model implementation is available: (https://github.com/golsun/DialogRPT)
* [x] the model weights are available: (https://github.com/golsun/DialogRPT)
* [x] who are the authors: @golsun @dreasysnail
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7493/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7493/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7492 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7492/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7492/comments | https://api.github.com/repos/huggingface/transformers/issues/7492/events | https://github.com/huggingface/transformers/issues/7492 | 712,365,525 | MDU6SXNzdWU3MTIzNjU1MjU= | 7,492 | `run_squad_trainer` doesn't actually use a Rust tokenizer + errors in `squad_convert_example_to_features` when using a Rust tokenizer | {
"login": "k8si",
"id": 3207674,
"node_id": "MDQ6VXNlcjMyMDc2NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3207674?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/k8si",
"html_url": "https://github.com/k8si",
"followers_url": "https://api.github.com/users/k8si/followers",
"following_url": "https://api.github.com/users/k8si/following{/other_user}",
"gists_url": "https://api.github.com/users/k8si/gists{/gist_id}",
"starred_url": "https://api.github.com/users/k8si/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/k8si/subscriptions",
"organizations_url": "https://api.github.com/users/k8si/orgs",
"repos_url": "https://api.github.com/users/k8si/repos",
"events_url": "https://api.github.com/users/k8si/events{/privacy}",
"received_events_url": "https://api.github.com/users/k8si/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello! Indeed, the Rust tokenizers are not handled by the SQuAD data processing. This is one item we would like to resolve when refactoring the data processing methods, which will soon be implemented directly in `datasets` rather than in `transformers`.\r\n\r\nThank you for your detailed issue!",
"For what it's worth, my main issue is with the behavioral issues with the Python vs. Rust tokenizers, not with the SQuAD data processing itself (I can easily write my own SQuAD processor but writing my own tokenizers is more work--the tokenizers are part of why I've been using the library in the first place). \r\n\r\nIt places a sizeable burden on people coding against the library that different kwargs result in completely different behaviors (and different errors) between the Rust vs. Python implementations, and these differences often aren't documented. And Item # 6 seems like a fundamental error somewhere in the Rust codebase.\r\n\r\nAre there any plans to address these issues in the near future? For what it's worth, I've never had an issue with the Python tokenizers. I'd like to use the Rust ones because they're Fast, plus I can train them myself easily, but navigating the API weirdness has been a slog.",
"You're correct that there is currently a mismatch between the python and rust tokenizers, and thank you for writing such a detailed issue explaining all of your pain points. We'll keep a close eye to the issues mentioned here as we continue working and improving the compatibility between the two APIs, which is something we will be focusing on in the near future.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,608 | 1,608 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- Platform: Linux-4.14.35-1902.303.4.1.el7uek.x86_64-x86_64-with-oracle-7.8
- Python version: 3.6.8
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
and
- `transformers` version: 3.3.1
- Platform: macOS-10.15.6-x86_64-i386-64bit
- Python version: 3.8.5
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@mfuntowicz
@LysandreJik
@patil-suraj
## Information
Model I am using (Bert, XLNet ...): bert-base-uncased
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
Firstly, in `run_squad_trainer.py`, I noticed that the "use_fast" arg doesn't get propagated into the tokenizer instantiation: https://github.com/huggingface/transformers/blob/0acd1ffa09a06084efa7cfa0e4e9d97cffdda5f9/examples/question-answering/run_squad_trainer.py#L107. It probably should be:
```
tokenizer = AutoTokenizer.from_pretrained(
model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
use_fast=model_args.use_fast
)
```
However, when I make that change, the script hangs at the call to `squad_convert_examples_to_features` in SquadProcessor.
So, I did a little digging. The error is in `squad_convert_example_to_features` and seems to be due to inconsistencies in the behavior of `tokenizer.encode_plus` between the Python and Rust tokenizers, detailed below. I've also provided a [gist](https://gist.github.com/k8si/a143346dfa875c28d98e95cba1f82f1b) that hopefully elucidates & will help reproduce each of these points. I tested both BertTokenizer/BertTokenizerFast and GPT2Tokenizer/GPT2TokenizerFast.
1) Python tokenizers handle negative values for `stride`, Rust tokenizers throw an exception (`OverflowError: can't convert negative int to unsigned`)
2) For sequence pairs, Python tokenizers are fine if the first arg (`text`) is a list of ints and the second arg (`text_pair`) is a list of strings. The Rust tokenizers throw an exception `ValueError: PreTokenizedInputSequence must be Union[List[str], Tuple[str]]`. (Furthermore, the typehints for these arguments indicate that a string, a list of strings, or a list of ints are all fine.)
3) Leaving the `is_split_into_words` kwarg at its default value (`False`), then running `tokenizer.encode_plus(list of ints)` works fine for the Python tokenizers. The Rust tokenizers raise an exception `ValueError: TextInputSequence must be str`.
4) When running on a pair of sequences and setting `return_tensors=None`, the Python tokenizers return an output dict with input_ids (and other elements) as a list of ints, i.e. `input_ids = [id1, id2, ...]`, whereas the Rust tokenizers return a dict with input_ids as a list of lists of ints, i.e. `input_ids = [[id1, id2, ...]]`. I also noticed that if you set `return_tensors="pt"`, both the Python and Rust tokenizers return `input_ids = tensor([[id1, id2, ...]])`.
5) When `return_overflowing_tokens=True`, the Python tokenizers return a list of the overflowing tokens at key `overflowing_tokens` as expected. The Rust tokenizers return them at key `overflow_to_sample_mapping` which is not documented anywhere, as far as I can tell. The values seem to be different for the Python output vs. Rust output.
6) Running the same procedure on the same input twice produces the same result each time for the Python tokenizer. For the Rust tokenizer, the result of the second run is **different**. I am not familiar enough with the Rust tokenizer internals at this point to have a theory as to why this is the case. Anyway, this is the point at which I stopped debugging and decided to file an issue.
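Until the two implementations agree, a small normalization shim can paper over discrepancy (4). This is a hypothetical workaround, not part of the `transformers` API — it just unwraps the extra batch dimension the Rust tokenizers add to a single example when `return_tensors=None`:

```python
def unwrap_batch_dim(input_ids):
    """Normalize encode_plus output for a single example.

    Per the report above, Python tokenizers return [id1, id2, ...] while the
    Rust tokenizers return [[id1, id2, ...]] for the same call, so we unwrap
    a singleton outer list and pass everything else through unchanged.
    """
    if len(input_ids) == 1 and isinstance(input_ids[0], list):
        return input_ids[0]
    return input_ids

print(unwrap_batch_dim([[101, 2023, 102]]))  # [101, 2023, 102]
print(unwrap_batch_dim([101, 2023, 102]))    # unchanged: [101, 2023, 102]
```

A shim like this only masks point (4); the exceptions in points (1)-(3) and the nondeterminism in point (6) still need fixes in the tokenizers themselves.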
## To reproduce
Steps to reproduce the behavior:
1. Download squad 2.0 dataset from ["official" squad website](https://rajpurkar.github.io/SQuAD-explorer/)
2. Make fix in `run_squad_training.py` described above to correctly instantiate a Rust tokenizer
3. Run script: `python examples/question-answering/run_squad_trainer.py --model_name_or_path bert-base-uncased --use_fast --output_dir "./outputs-squad" --do_train --data_dir "./squad-data" --version_2_with_negative`
Also see gist detailing issues described above: https://gist.github.com/k8si/a143346dfa875c28d98e95cba1f82f1b
## Expected behavior
1) I expected `run_squad_trainer.py` to use a Rust tokenizer when the `use_fast` arg was set to True
2) I expected `SquadProcessor.squad_convert_example_to_features` to not raise exceptions when processing squad data when using a Rust tokenizer
3) I expected `tokenizer.encode_plus` to return the same outputs given the same inputs, regardless of whether the tokenizer is a Rust tokenizer or a Python tokenizer
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7492/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7492/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7491 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7491/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7491/comments | https://api.github.com/repos/huggingface/transformers/issues/7491/events | https://github.com/huggingface/transformers/pull/7491 | 712,337,486 | MDExOlB1bGxSZXF1ZXN0NDk1ODM2MDg1 | 7,491 | Update README.md | {
"login": "ahotrod",
"id": 44321615,
"node_id": "MDQ6VXNlcjQ0MzIxNjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/44321615?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahotrod",
"html_url": "https://github.com/ahotrod",
"followers_url": "https://api.github.com/users/ahotrod/followers",
"following_url": "https://api.github.com/users/ahotrod/following{/other_user}",
"gists_url": "https://api.github.com/users/ahotrod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahotrod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahotrod/subscriptions",
"organizations_url": "https://api.github.com/users/ahotrod/orgs",
"repos_url": "https://api.github.com/users/ahotrod/repos",
"events_url": "https://api.github.com/users/ahotrod/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahotrod/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Nice! FYI we'll have model versioning rolled out in ~1 month or so"
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | @julien-c
The model is now fine-tuned on Transformers 3.1.0. The previous model, fine-tuned on Transformers 2.3.0, is out-of-date. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7491/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7491",
"html_url": "https://github.com/huggingface/transformers/pull/7491",
"diff_url": "https://github.com/huggingface/transformers/pull/7491.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7491.patch",
"merged_at": 1601556608000
} |
https://api.github.com/repos/huggingface/transformers/issues/7490 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7490/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7490/comments | https://api.github.com/repos/huggingface/transformers/issues/7490/events | https://github.com/huggingface/transformers/pull/7490 | 712,288,413 | MDExOlB1bGxSZXF1ZXN0NDk1Nzk1MTkx | 7,490 | Clean the Trainer state | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Tests pass in a multi-GPU environment and the specific `test_distributed_trainer` passes too. There is just one test that requires a batch size not too big so I manually skip it if there are more than 2 GPUs.",
"Nice PR!!!\r\n\r\nI think it is a nice addition. Only the `global_step` won't be necessary as it is already integrated by default into the TF checkpoints. This is perfect!\r\n\r\nI plan also to add Keras callbacks into the TF Trainer that can handles few more arguments that are in this \"state\" class such as best checkpoint, and metrics.\r\n\r\nIt won't change much in the TF Trainer and could be very easily integrated as most of the \"state\" arguments work the same way for both Trainers. We can clearly imagine this state class for a much broader usage.",
"Checked the tests on TPU are passing, so can safely merge this."
] | 1,601 | 1,601 | 1,601 | COLLABORATOR | null | # What does this PR do?
This PR cleans up the fields used inside the `Trainer` to store state and gathers them all in a clear, typed class named `TrainerState`, so the user knows exactly what they can access when subclassing and overriding methods (and, for my next step, when writing callbacks).
As a result, the `log_history` does not need to be saved and loaded separately, and the hack to recover the current step from the checkpoint folder name is removed (the user can copy all the checkpoints saved by the `Trainer` in a previous training into any folder and use them to resume training). This PR adds a test of the full reproducibility of a training resumed from a checkpoint.
There is a tiny breaking change that would affect users that trained a model using an earlier version of transformers and would like to resume it with a version obtained after this commit but I don't think this matters much.
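To make the idea concrete, a state object like this can be sketched as a plain dataclass that serializes to JSON alongside each checkpoint, so resuming no longer needs to parse the folder name. The field names below are illustrative guesses, not the exact attributes of the merged `TrainerState`:

```python
import json
import os
import tempfile
from dataclasses import dataclass, field, asdict

@dataclass
class TrainerStateSketch:
    # Illustrative fields only -- the real class may differ.
    global_step: int = 0
    epoch: float = 0.0
    log_history: list = field(default_factory=list)

    def save_to_json(self, path):
        """Persist the state next to a checkpoint so training can resume."""
        with open(path, "w") as f:
            json.dump(asdict(self), f)

    @classmethod
    def load_from_json(cls, path):
        with open(path) as f:
            return cls(**json.load(f))

state = TrainerStateSketch(global_step=500, epoch=1.5, log_history=[{"loss": 2.3}])
path = os.path.join(tempfile.gettempdir(), "trainer_state.json")
state.save_to_json(path)
print(TrainerStateSketch.load_from_json(path) == state)  # True
```

Because the whole state round-trips through one JSON file, a checkpoint folder can be moved anywhere and training resumes from the recorded step.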
It also enforces that the `TrainingArguments` passed to a `Trainer` are not changed during training, to avoid subtle bugs when launching several trainings in a row. This is verified by a new test. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7490/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7490",
"html_url": "https://github.com/huggingface/transformers/pull/7490",
"diff_url": "https://github.com/huggingface/transformers/pull/7490.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7490.patch",
"merged_at": 1601572025000
} |
https://api.github.com/repos/huggingface/transformers/issues/7489 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7489/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7489/comments | https://api.github.com/repos/huggingface/transformers/issues/7489/events | https://github.com/huggingface/transformers/issues/7489 | 712,280,644 | MDU6SXNzdWU3MTIyODA2NDQ= | 7,489 | Use of global attention of Longformer when generating | {
"login": "alexyalunin",
"id": 23011284,
"node_id": "MDQ6VXNlcjIzMDExMjg0",
"avatar_url": "https://avatars.githubusercontent.com/u/23011284?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexyalunin",
"html_url": "https://github.com/alexyalunin",
"followers_url": "https://api.github.com/users/alexyalunin/followers",
"following_url": "https://api.github.com/users/alexyalunin/following{/other_user}",
"gists_url": "https://api.github.com/users/alexyalunin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexyalunin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexyalunin/subscriptions",
"organizations_url": "https://api.github.com/users/alexyalunin/orgs",
"repos_url": "https://api.github.com/users/alexyalunin/repos",
"events_url": "https://api.github.com/users/alexyalunin/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexyalunin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hey @alexyalunin - you are completely right! I'm working on a bigger generation refactor at the moment to better handle cases like this. \r\n\r\nFor this case, I would propose to modify the code yourself and do a little hack, something along the lines \r\n\r\n```python \r\nif \"global_attention_mask\" in model_kwargs:\r\n encoder_outputs: ModelOutput = encoder(input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask, return_dict=True)\r\nelse:\r\n encoder_outputs: ModelOutput = encoder(input_ids, attention_mask=attention_mask, return_dict=True) \r\n```\r\n\r\nI don't want to merge this into master because it's quite hacky and the `generate()` function needs a refactor before we start adding more and more hacks. Hope this works for you for now.",
"This should be solved soon by the new generate() design: #6949 in like ~1,2 weeks",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,609 | 1,609 | NONE | null | I'm training Longformer2Roberta, the encoder part of this Seq2Seq model is Longformer. The one feature Longformer brings is global attention, I found the use of it during training, but it is never used at the point of generation. I guess it should be used somewhere here https://github.com/huggingface/transformers/blob/03e46c1de3864b8464a1b40d2a414b35f6b7f0df/src/transformers/generation_utils.py#L402.
I guess @patrickvonplaten is working on these models. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7489/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7489/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7488 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7488/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7488/comments | https://api.github.com/repos/huggingface/transformers/issues/7488/events | https://github.com/huggingface/transformers/pull/7488 | 712,276,643 | MDExOlB1bGxSZXF1ZXN0NDk1Nzg1MTkz | 7,488 | [s2s] fix kwargs style | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7488/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7488/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7488",
"html_url": "https://github.com/huggingface/transformers/pull/7488",
"diff_url": "https://github.com/huggingface/transformers/pull/7488.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7488.patch",
"merged_at": 1601499607000
} |
https://api.github.com/repos/huggingface/transformers/issues/7487 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7487/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7487/comments | https://api.github.com/repos/huggingface/transformers/issues/7487/events | https://github.com/huggingface/transformers/pull/7487 | 712,212,260 | MDExOlB1bGxSZXF1ZXN0NDk1NzMwNTMz | 7,487 | [s2s] Fix t5 warning for distributed eval | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7487/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7487",
"html_url": "https://github.com/huggingface/transformers/pull/7487",
"diff_url": "https://github.com/huggingface/transformers/pull/7487.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7487.patch",
"merged_at": 1601499483000
} |
https://api.github.com/repos/huggingface/transformers/issues/7486 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7486/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7486/comments | https://api.github.com/repos/huggingface/transformers/issues/7486/events | https://github.com/huggingface/transformers/issues/7486 | 712,205,021 | MDU6SXNzdWU3MTIyMDUwMjE= | 7,486 | Using BERT for spelling correction | {
"login": "naturecreator",
"id": 39854185,
"node_id": "MDQ6VXNlcjM5ODU0MTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/39854185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/naturecreator",
"html_url": "https://github.com/naturecreator",
"followers_url": "https://api.github.com/users/naturecreator/followers",
"following_url": "https://api.github.com/users/naturecreator/following{/other_user}",
"gists_url": "https://api.github.com/users/naturecreator/gists{/gist_id}",
"starred_url": "https://api.github.com/users/naturecreator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/naturecreator/subscriptions",
"organizations_url": "https://api.github.com/users/naturecreator/orgs",
"repos_url": "https://api.github.com/users/naturecreator/repos",
"events_url": "https://api.github.com/users/naturecreator/events{/privacy}",
"received_events_url": "https://api.github.com/users/naturecreator/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,607 | 1,607 | NONE | null | I am currently working on the task of spelling correction. I used BERT by masking the misspelled word to get predictions with their probability scores. However, the results are not so good, so I want to fine-tune BERT. For this spelling correction task, I would like to know which method is suitable for fine-tuning BERT.
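Not from the thread itself, but one common pattern for this task is to rerank the masked-LM's candidate words by their surface similarity to the misspelled word, so that "city" beats an unrelated high-probability word for the typo "citty". The sketch below uses only the standard library; the candidate words and probabilities are made up for illustration, not real BERT output.

```python
from difflib import SequenceMatcher

def rerank_candidates(misspelled, mlm_candidates, alpha=0.5):
    """Blend a masked-LM probability with edit similarity to the
    misspelled word. `mlm_candidates` maps candidate -> MLM probability
    (illustrative numbers here, not real model output)."""
    scored = {}
    for cand, prob in mlm_candidates.items():
        sim = SequenceMatcher(None, misspelled, cand).ratio()
        scored[cand] = alpha * prob + (1 - alpha) * sim
    # best combined score first
    return sorted(scored, key=scored.get, reverse=True)

# Hypothetical MLM scores for the masked slot in "I live in a big [MASK]."
candidates = {"city": 0.40, "house": 0.35, "cty": 0.01}
ranking = rerank_candidates("citty", candidates)
```

With these numbers, "city" wins because it is both probable in context and close in spelling; `alpha` controls how much weight the language model gets versus the surface match.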
I would be glad if anyone could help me in this regard. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7486/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7485 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7485/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7485/comments | https://api.github.com/repos/huggingface/transformers/issues/7485/events | https://github.com/huggingface/transformers/issues/7485 | 712,140,918 | MDU6SXNzdWU3MTIxNDA5MTg= | 7,485 | Tensorflow Loading the saved Model For GPT2 | {
"login": "santhoshkolloju",
"id": 4193817,
"node_id": "MDQ6VXNlcjQxOTM4MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4193817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/santhoshkolloju",
"html_url": "https://github.com/santhoshkolloju",
"followers_url": "https://api.github.com/users/santhoshkolloju/followers",
"following_url": "https://api.github.com/users/santhoshkolloju/following{/other_user}",
"gists_url": "https://api.github.com/users/santhoshkolloju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/santhoshkolloju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/santhoshkolloju/subscriptions",
"organizations_url": "https://api.github.com/users/santhoshkolloju/orgs",
"repos_url": "https://api.github.com/users/santhoshkolloju/repos",
"events_url": "https://api.github.com/users/santhoshkolloju/events{/privacy}",
"received_events_url": "https://api.github.com/users/santhoshkolloju/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"~Hi! What's the problem?~ Edited your message so that we can read it.\r\n\r\n",
"Could you put the error you had, as well as the environment? i.e., everything asked in the issue template.",
"I am running it in colab CPU\r\n\r\n`TypeError` Traceback (most recent call last)\r\n<ipython-input-5-0c3c85adea42> in <module>()\r\n 5 start = time()\r\n 6 for i in range(100):\r\n----> 7 output, past = infer([context, past])\r\n 8 logits = output[0, -1, :]\r\n 9 tok = tf.argmax(logits)\r\n\r\n2 frames\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _call_with_flat_signature(self, args, kwargs, cancellation_manager)\r\n 1719 raise TypeError(\"{}: expected argument #{}(zero-based) to be a Tensor; \"\r\n 1720 \"got {} ({})\".format(self._flat_signature_summary(), i,\r\n-> 1721 type(arg).__name__, str(arg)))\r\n 1722 return self._call_flat(args, self.captured_inputs, cancellation_manager)\r\n 1723 \r\n\r\nTypeError: signature_wrapper(input_ids): expected argument #0(zero-based) to be a Tensor; got list ([<tf.Tensor: shape=(1, 10), dtype=int32, numpy=\r\narray([[2061, 389, 345, 1804, 706, 345, 423, 5201, 1762, 30]],\r\n dtype=int32)>, None])``\r\n\r\n> `",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"What is the resolution for this. Having the same issue.",
"If you can open a new issue with the issue template filled out (environment information, code that fails, expected behavior), then we can help you! Thank you."
] | 1,601 | 1,613 | 1,607 | NONE | null | ```py
from time import time
from transformers import TFGPT2LMHeadModel, GPT2Tokenizer
import tensorflow as tf
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = TFGPT2LMHeadModel.from_pretrained('gpt2')
text = "What are you doing after you have finished working?"
generated = tokenizer.encode(text)
context = tf.constant([generated])
past = None
start = time()
for i in range(100):
output, past = model([context, past])
logits = output[0, -1, :]
tok = tf.argmax(logits)
generated.append(tok.numpy())
context = tf.expand_dims(tf.expand_dims(tok, 0), 0)
sequence = tokenizer.decode(generated)
print(time() - start, sequence)
#save the model
tf.saved_model.save(model,"temp")
#loading back the model
nm = tf.saved_model.load("temp")
infer = nm.signatures['serving_default']
```
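The TypeError quoted in this thread ("expected argument #0 (zero-based) to be a Tensor; got list") suggests why the call fails: a signature obtained from `tf.saved_model.load(...).signatures` is a flat function that accepts single, named tensor arguments, not a nested Python list containing `None`. The stdlib sketch below only imitates that validation; `flat_signature` and the placeholder `infer` body are hypothetical stand-ins, not TensorFlow code.

```python
def flat_signature(fn):
    """Imitate a SavedModel concrete-function signature: each argument
    must be a single tensor-like value, never a Python list/tuple or
    None. Illustrative stand-in only, not TensorFlow."""
    def wrapper(*args, **kwargs):
        for i, arg in enumerate(list(args) + list(kwargs.values())):
            if arg is None or isinstance(arg, (list, tuple)):
                raise TypeError(
                    "expected argument #%d (zero-based) to be a Tensor; got %s"
                    % (i, type(arg).__name__)
                )
        return fn(*args, **kwargs)
    return wrapper

@flat_signature
def infer(input_ids):
    return {"logits": input_ids}  # placeholder body, not a real model

ok = infer(input_ids="fake-tensor")   # flat, named argument: accepted
try:
    infer(["fake-tensor", None])      # nested list with None: rejected
    rejected = False
except TypeError:
    rejected = True
```

In the same spirit, calling the loaded signature with a single named tensor (rather than `[context, past]`) is what this kind of wrapper expects.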
Not able to call the loaded model as shown below:
```py
text = "What are you doing after you have finished working?"
generated = tokenizer.encode(text)
context = tf.constant([generated])
past = None
start = time()
for i in range(100):
output, past = **infer**([context, past])
logits = output[0, -1, :]
tok = tf.argmax(logits)
generated.append(tok.numpy())
context = tf.expand_dims(tf.expand_dims(tok, 0), 0)
sequence = tokenizer.decode(generated)
print(time() - start, sequence)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7485/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7485/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7484 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7484/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7484/comments | https://api.github.com/repos/huggingface/transformers/issues/7484/events | https://github.com/huggingface/transformers/pull/7484 | 712,139,277 | MDExOlB1bGxSZXF1ZXN0NDk1NjY4NTA0 | 7,484 | Bump isort version. | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,601 | 1,601 | COLLABORATOR | null | # What does this PR do?
Had a problem on my local setup with isort wanting to change `test_modeling_deberta.py`. Updating to 5.5.4 (from 5.4.2) fixed the issue, so I think we should pin our setup to it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7484/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7484",
"html_url": "https://github.com/huggingface/transformers/pull/7484",
"diff_url": "https://github.com/huggingface/transformers/pull/7484.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7484.patch",
"merged_at": 1601487899000
} |
https://api.github.com/repos/huggingface/transformers/issues/7483 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7483/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7483/comments | https://api.github.com/repos/huggingface/transformers/issues/7483/events | https://github.com/huggingface/transformers/pull/7483 | 712,136,077 | MDExOlB1bGxSZXF1ZXN0NDk1NjY1ODc4 | 7,483 | Add forgotten return_dict argument in the docs | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,601 | 1,601 | COLLABORATOR | null | # What does this PR do?
The documentation wasn't updated to reflect that `return_dict=True` is not the default for all models. This PR fixes that.
Fixes #7482 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7483/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7483",
"html_url": "https://github.com/huggingface/transformers/pull/7483",
"diff_url": "https://github.com/huggingface/transformers/pull/7483.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7483.patch",
"merged_at": 1601541690000
} |
https://api.github.com/repos/huggingface/transformers/issues/7482 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7482/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7482/comments | https://api.github.com/repos/huggingface/transformers/issues/7482/events | https://github.com/huggingface/transformers/issues/7482 | 712,097,241 | MDU6SXNzdWU3MTIwOTcyNDE= | 7,482 | Issue with Summary of the tasks - Named Entity Recognition in Docs | {
"login": "Matt-Munns",
"id": 69989590,
"node_id": "MDQ6VXNlcjY5OTg5NTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/69989590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Matt-Munns",
"html_url": "https://github.com/Matt-Munns",
"followers_url": "https://api.github.com/users/Matt-Munns/followers",
"following_url": "https://api.github.com/users/Matt-Munns/following{/other_user}",
"gists_url": "https://api.github.com/users/Matt-Munns/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Matt-Munns/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Matt-Munns/subscriptions",
"organizations_url": "https://api.github.com/users/Matt-Munns/orgs",
"repos_url": "https://api.github.com/users/Matt-Munns/repos",
"events_url": "https://api.github.com/users/Matt-Munns/events{/privacy}",
"received_events_url": "https://api.github.com/users/Matt-Munns/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, this example (and all the others) is missing a `return_dict=True` in the call to `from_pretrained`. Thanks for flagging, the PR mentioned above will fix this."
] | 1,601 | 1,601 | 1,601 | NONE | null | - `transformers` version: 3.3.1
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.3
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
examples/token-classification: @stefan-it
documentation: @sgugger
## Information
I am trying to run the Pytorch version of named entity recognition from the "Summary of the tasks" section in the documentation.
## To reproduce
Steps to reproduce the behavior:
I'm running the exact example from the docs, but will attach the code below
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch
model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
label_list = [
"O", # Outside of a named entity
"B-MISC", # Beginning of a miscellaneous entity right after another miscellaneous entity
"I-MISC", # Miscellaneous entity
"B-PER", # Beginning of a person's name right after another person's name
"I-PER", # Person's name
"B-ORG", # Beginning of an organisation right after another organisation
"I-ORG", # Organisation
"B-LOC", # Beginning of a location right after another location
"I-LOC" # Location
]
sequence = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very" \
"close to the Manhattan Bridge."
tokens = tokenizer.tokenize(tokenizer.decode(tokenizer.encode(sequence)))
inputs = tokenizer.encode(sequence, return_tensors="pt")
outputs = model(inputs).logits
predictions = torch.argmax(outputs, dim=2)
```
Running this leads to the error:
```
Traceback (most recent call last):
File "test.py", line 21, in <module>
outputs = model(inputs).logits
AttributeError: 'tuple' object has no attribute 'logits'
```
## Expected behavior
I expect this to run successfully and produce predictions for the example sequence. I've run this example before and it succeeded so I'm not sure what's happening differently now. I feel like I'm making a dumb mistake somewhere, but idk.
Thanks!
Note:
changing line 21 to
`outputs = model(inputs)[0]`
seems to lead to the expected output, but this might not be the kind of behavior you all are looking for.
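The mismatch in the note above (tuple indexing vs. `.logits`) can be smoothed over with a small accessor that handles both return styles. In the sketch below, `ModelOutput` is just a namedtuple stand-in for the real transformers class, so no model download is needed:

```python
from collections import namedtuple

# Stand-in for transformers' ModelOutput, illustrative only.
ModelOutput = namedtuple("ModelOutput", ["logits", "hidden_states"])

def first_logits(output):
    """Accept both return styles: an object exposing `.logits`
    (return_dict=True) and the legacy plain tuple, where the
    logits come first."""
    if hasattr(output, "logits"):
        return output.logits
    return output[0]

dict_style = first_logits(ModelOutput(logits=[0.1, 0.9], hidden_states=None))
tuple_style = first_logits(([0.1, 0.9], "extras"))
```

Either style yields the same logits, which is why `model(inputs)[0]` works when the model returns a tuple.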
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7482/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7482/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7481 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7481/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7481/comments | https://api.github.com/repos/huggingface/transformers/issues/7481/events | https://github.com/huggingface/transformers/pull/7481 | 712,080,125 | MDExOlB1bGxSZXF1ZXN0NDk1NjIxMTQy | 7,481 | Minor dead code clean-up | {
"login": "guillaume-be",
"id": 27071604,
"node_id": "MDQ6VXNlcjI3MDcxNjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/27071604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guillaume-be",
"html_url": "https://github.com/guillaume-be",
"followers_url": "https://api.github.com/users/guillaume-be/followers",
"following_url": "https://api.github.com/users/guillaume-be/following{/other_user}",
"gists_url": "https://api.github.com/users/guillaume-be/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guillaume-be/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guillaume-be/subscriptions",
"organizations_url": "https://api.github.com/users/guillaume-be/orgs",
"repos_url": "https://api.github.com/users/guillaume-be/repos",
"events_url": "https://api.github.com/users/guillaume-be/events{/privacy}",
"received_events_url": "https://api.github.com/users/guillaume-be/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Closing as it seems the changes were added in another PR"
] | 1,601 | 1,606 | 1,606 | CONTRIBUTOR | null | Hello,
I am not sure how sensitive you generally are about dead code in the repository. I have identified a few places with dead code, where I believe a clean-up would improve readability.
- removal of a couple of unused dropouts I came across in Albert and XLNet
- removal of an unused code block for relative attention shift for XLNet
## Who can review?
@LysandreJik , @TevenLeScao
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7481/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7481/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7481",
"html_url": "https://github.com/huggingface/transformers/pull/7481",
"diff_url": "https://github.com/huggingface/transformers/pull/7481.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7481.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7480 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7480/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7480/comments | https://api.github.com/repos/huggingface/transformers/issues/7480/events | https://github.com/huggingface/transformers/issues/7480 | 712,047,707 | MDU6SXNzdWU3MTIwNDc3MDc= | 7,480 | Upload models using transformers-cli fails | {
"login": "agemagician",
"id": 6087313,
"node_id": "MDQ6VXNlcjYwODczMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agemagician",
"html_url": "https://github.com/agemagician",
"followers_url": "https://api.github.com/users/agemagician/followers",
"following_url": "https://api.github.com/users/agemagician/following{/other_user}",
"gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}",
"starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agemagician/subscriptions",
"organizations_url": "https://api.github.com/users/agemagician/orgs",
"repos_url": "https://api.github.com/users/agemagician/repos",
"events_url": "https://api.github.com/users/agemagician/events{/privacy}",
"received_events_url": "https://api.github.com/users/agemagician/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Yes this is a known issue with our current system that will be fixed in ~1 month.\r\n\r\nIn the meantime, if you can upload to a different S3 bucket I can cp the files to your account on ours. Would you be able to do this?",
"I don't have access to S3. However, I uploaded the model in my dropbox:\r\nhttps://www.dropbox.com/sh/0e7weo5l6g1uvqi/AADBZN_vuawdR3YOUOzZRo8Pa?dl=0\r\n\r\nIs it possible to download and upload it from the dropbox folder?",
"Super I'll take care of it! ",
"model is uploaded here: https://huggingface.co/Rostlab/prot_t5_xl_bfd",
"Perfect, thanks a lot @patrickvonplaten for your help.\r\nThis solves my issue 😄 \r\n\r\nI will test the model to make sure everything is working as expected.\r\n\r\nShould we close this issue as it solved my current problem, or should we leave it open until the \"transformers-cli\" uploading problem is solved?\r\n\r\nI will leave it to you.",
"Let's leave it open :-) ",
"Hi! I'm having an issue uploading a model as well. I've tried several different iterations of the CLI command to get it to work. I'm following the instructions from the [model sharing docs](https://huggingface.co/transformers/model_sharing.html). \r\n\r\nHere's the info about my setup:\r\n\r\n- transformers version: 3.3.1\r\n- Platform: Ubuntu (it's a Google Cloud Platform VM)\r\n- Python version: 3.8.5\r\n- PyTorch version (GPU?): 1.4.0 (True)\r\n- Tensorflow version (GPU?): 2.3.1 (True)\r\n- Using GPU in script?: No\r\n- Using distributed or parallel set-up in script?: No\r\n\r\nFirst, I tried `transformers-cli upload distilbert-for-food-extraction`, as it says to do in the docs. This fails because for some reason the directory is not found, even though `ls distilbert-for-food-extraction` confirms that the directory and its files exist in this location.\r\n```\r\n(hf-nlp) charlenechambliss@charlene-gpu:~/.cache/food-ner/models$ transformers-cli upload chambliss/distilbert-for-food-extraction\r\n2020-10-10 21:43:16.899194: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\nTraceback (most recent call last):\r\n File \"/home/charlenechambliss/anaconda3/envs/hf-nlp/bin/transformers-cli\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/home/charlenechambliss/anaconda3/envs/hf-nlp/lib/python3.8/site-packages/transformers/commands/transformers_cli.py\", line 33, in main\r\n service.run()\r\n File \"/home/charlenechambliss/anaconda3/envs/hf-nlp/lib/python3.8/site-packages/transformers/commands/user.py\", line 197, in run\r\n files = self.walk_dir(rel_path)\r\n File \"/home/charlenechambliss/anaconda3/envs/hf-nlp/lib/python3.8/site-packages/transformers/commands/user.py\", line 180, in walk_dir\r\n entries: List[os.DirEntry] = list(os.scandir(rel_path))\r\nFileNotFoundError: [Errno 2] No such file or directory: 'distilbert-for-food-extraction'\r\n```\r\n\r\nThen I tried nesting it under a 
directory matching my HuggingFace username, so now the path is `chambliss/distilbert-for-food-extraction`. Attempting the upload again seems to result in 3 out of 6 files being uploaded, then the process is aborted. Here is the full output I'm getting:\r\n\r\n```\r\n(hf-nlp) charlenechambliss@charlene-gpu:~/.cache/food-ner/models$ transformers-cli upload chambliss\r\n2020-10-10 21:43:28.932647: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\nAbout to upload file /home/charlenechambliss/.cache/food-ner/models/chambliss/distilbert-for-food-extraction/special_tokens_map.json to S3 under filename chambliss/distilbert-for-food-extraction/special_tokens_map.json and namespace chambliss\r\nAbout to upload file /home/charlenechambliss/.cache/food-ner/models/chambliss/distilbert-for-food-extraction/vocab.txt to S3 under filename chambliss/distilbert-for-food-extraction/vocab.txt and namespace chambliss\r\nAbout to upload file /home/charlenechambliss/.cache/food-ner/models/chambliss/distilbert-for-food-extraction/pytorch_model.bin to S3 under filename chambliss/distilbert-for-food-extraction/pytorch_model.bin and namespace chambliss\r\nAbout to upload file /home/charlenechambliss/.cache/food-ner/models/chambliss/distilbert-for-food-extraction/config.json to S3 under filename chambliss/distilbert-for-food-extraction/config.json and namespace chambliss\r\nAbout to upload file /home/charlenechambliss/.cache/food-ner/models/chambliss/distilbert-for-food-extraction/tokenizer_config.json to S3 under filename chambliss/distilbert-for-food-extraction/tokenizer_config.json and namespace chambliss\r\nAbout to upload file /home/charlenechambliss/.cache/food-ner/models/chambliss/distilbert-for-food-extraction/tf_model.h5 to S3 under filename chambliss/distilbert-for-food-extraction/tf_model.h5 and namespace chambliss\r\nProceed? [Y/n] Y\r\nUploading... 
This might take a while if files are large\r\nYour file now lives at: \r\nhttps://s3.amazonaws.com/models.huggingface.co/bert/chambliss/chambliss/distilbert-for-food-extraction/special_tokens_map.json\r\nYour file now lives at: \r\nhttps://s3.amazonaws.com/models.huggingface.co/bert/chambliss/chambliss/distilbert-for-food-extraction/vocab.txt\r\nYour file now lives at: \r\nhttps://s3.amazonaws.com/models.huggingface.co/bert/chambliss/chambliss/distilbert-for-food-extraction/pytorch_model.bin\r\n400 Client Error: Bad Request for url: https://huggingface.co/api/presign\r\nFilename invalid, model must be at exactly one level of nesting, i.e. \"user/model_name\".\r\n```\r\n\r\nIf there is not a fix available for this at the moment, would it be possible to have my model uploaded via Dropbox as well?\r\n\r\nThanks!\r\nCharlene",
"Hey @chambliss - it looks like you are uploading the wrong folder. Instead of running \r\n\r\n```\r\n~/.cache/food-ner/models$ transformers-cli upload chambliss\r\n```\r\n\r\nyou should run \r\n\r\n```\r\n~/.cache/food-ner/models/chambliss$ transformers-cli upload distilbert-for-food-extraction\r\n```\r\n\r\nI think",
"I'll second that. If `ls distilbert-for-food-extraction` works and shows the correct files, `transformers-cli upload distilbert-for-food-extraction` should work and would be able to find the correct directory.",
"@patrickvonplaten @julien-c Thanks for the response guys! I'm not sure why the directory wasn't found the first time, but I tried it again just now (from inside the /chambliss directory, so `~/.cache/food-ner/models/chambliss$ transformers-cli upload distilbert-for-food-extraction`, as suggested) and it worked. \r\n\r\nAs a user, it is a little confusing for a reference to the correct directory not to work, and to have to be exactly one level above the directory in order for the upload to succeed. The example given on the page (`transformers-cli upload path/to/awesome-name-you-picked/`) implies that you can do the upload from anywhere relative to the folder. If that is a constraint, it may be worth updating the docs to reflect it.\r\n\r\nThanks again for the help! ",
"no, it is indeed supposed to work as you describe, specifying the dir from any point in your filesystem. \r\n\r\nLet us know if that's not the case.",
"Will reopen this for clarity until the fix mentioned in https://github.com/huggingface/transformers/issues/8480#issuecomment-726731046 is deployed",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Ok, closing this for real now! 😎"
] | 1,601 | 1,610 | 1,610 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.15.0-112-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Model Cards: @julien-c
T5: @patrickvonplaten
## Information
Model I am using T5:
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Command:
`transformers-cli upload ./prot_t5_xl_bfd/ --organization Rostlab`
Error:
```
About to upload file /mnt/lsf-nas-1/lsf/job/repo/elnaggar/prot-transformers/models/transformers/prot_t5_xl_bfd/pytorch_model.bin to S3 under filename prot_t5_xl_bfd/pytorch_model.bin and namespace Rostl
ab
Proceed? [Y/n] y
Uploading... This might take a while if files are large
0%|▌ | 48242688/11276091454 [00:02<14:55, 12534308.31it/s]
Traceback (most recent call last):
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/urllib3/connectionpool.py", line 670, in urlopen
httplib_response = self._make_request(
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/urllib3/connectionpool.py", line 392, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/http/client.py", line 1255, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/http/client.py", line 1301, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/http/client.py", line 1250, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/http/client.py", line 1049, in _send_output
self.send(chunk)
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/http/client.py", line 971, in send
self.sock.sendall(data)
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/ssl.py", line 1204, in sendall
v = self.send(byte_view[count:])
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/ssl.py", line 1173, in send
return self._sslobj.write(data)
BrokenPipeError: [Errno 32] Broken pipe
Traceback (most recent call last):
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/requests/adapters.py", line 439, in send
resp = conn.urlopen(
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/urllib3/connectionpool.py", line 726, in urlopen
retries = retries.increment(
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/urllib3/util/retry.py", line 403, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/urllib3/packages/six.py", line 734, in reraise
raise value.with_traceback(tb)
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/urllib3/connectionpool.py", line 670, in urlopen
httplib_response = self._make_request(
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/urllib3/connectionpool.py", line 392, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/http/client.py", line 1255, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/http/client.py", line 1301, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/http/client.py", line 1250, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/http/client.py", line 1049, in _send_output
self.send(chunk)
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/http/client.py", line 971, in send
self.sock.sendall(data)
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/ssl.py", line 1204, in sendall
v = self.send(byte_view[count:])
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/ssl.py", line 1173, in send
return self._sslobj.write(data)
urllib3.exceptions.ProtocolError: ('Connection aborted.', BrokenPipeError(32, 'Broken pipe'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/bin/transformers-cli", line 8, in <module>
sys.exit(main())
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/transformers/commands/transformers_cli.py", line 33, in main
service.run()
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/transformers/commands/user.py", line 232, in run
access_url = self._api.presign_and_upload(
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/transformers/hf_api.py", line 167, in presign_and_upload
r = requests.put(urls.write, data=data, headers={"content-type": urls.type})
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/requests/api.py", line 134, in put
return request('put', url, data=data, **kwargs)
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/requests/adapters.py", line 498, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', BrokenPipeError(32, 'Broken pipe'))
```
## Expected behavior
I am trying to upload our T5-3B model using transformers-cli, but it always fails and gives "BrokenPipeError".
It only uploads small files like configuration files but it fails for the model files.
I have tried two different machines and both of them give the same error.
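Not part of `transformers-cli` itself — a generic, dependency-free workaround sketch for retrying a flaky PUT-style upload. The callable is injected so the retry logic can be shown without a network; all names here are hypothetical. Note that `BrokenPipeError` is a subclass of `ConnectionError` in Python 3, so one `except` clause covers it.

```python
import time

def put_with_retries(put, url, data, max_retries=5, sleep=time.sleep):
    # Retry a PUT-like callable when the connection drops; BrokenPipeError
    # is caught by the ConnectionError handler below.
    for attempt in range(max_retries):
        try:
            return put(url, data)
        except ConnectionError:
            if attempt == max_retries - 1:
                raise
            sleep(2 ** attempt)  # exponential backoff before retrying

# Demo: a fake upload that fails twice, then succeeds.
calls = []
def flaky_put(url, data):
    calls.append(url)
    if len(calls) < 3:
        raise BrokenPipeError(32, "Broken pipe")
    return "ok"

result = put_with_retries(flaky_put, "https://example.com/presigned", b"...", sleep=lambda s: None)
print(result)  # ok
```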
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7480/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7480/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7479 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7479/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7479/comments | https://api.github.com/repos/huggingface/transformers/issues/7479/events | https://github.com/huggingface/transformers/issues/7479 | 712,038,433 | MDU6SXNzdWU3MTIwMzg0MzM= | 7,479 | Loading saved model not working | {
"login": "irocks04",
"id": 20955417,
"node_id": "MDQ6VXNlcjIwOTU1NDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/20955417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/irocks04",
"html_url": "https://github.com/irocks04",
"followers_url": "https://api.github.com/users/irocks04/followers",
"following_url": "https://api.github.com/users/irocks04/following{/other_user}",
"gists_url": "https://api.github.com/users/irocks04/gists{/gist_id}",
"starred_url": "https://api.github.com/users/irocks04/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/irocks04/subscriptions",
"organizations_url": "https://api.github.com/users/irocks04/orgs",
"repos_url": "https://api.github.com/users/irocks04/repos",
"events_url": "https://api.github.com/users/irocks04/events{/privacy}",
"received_events_url": "https://api.github.com/users/irocks04/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"What do you mean it's not working? Could you provide all the information required in the template? What is your environment? What code are you running? What is the error shown? How do you use `from_pretrained`?",
"What is meant is how do we utilize the “from_pretrained” functionality after saving the pipeline? I want to load the pipeline back up to start making predictions locally the saved file(s) below:\r\n\r\n\r\nimport transformers\r\n\r\nner_original = pipeline(\"ner\")\r\nner = pipeline(\"ner\",grouped_entities=True)\r\n\r\npath = 'path to folder'\r\n\r\nner.save_pretrained(path)\r\n\r\nI am currently running transformers==2.11.0 and python 3.7.4. I have tried:\r\n\r\npipe = transformers.pipeline(task=\"ner\", model=\"pytorch_model.bin\",\r\ntokenizer=\"tokenizer_config.json\")\r\n\r\nThis gave an error: ValueError: Unrecognized model in tokenizer_config.json. Should have a `model_type` key in its config.json\r\n\r\npipe = transformers.TokenClassificationPipeline(model=\"pytorch_model.bin\",\r\ntokenizer=\"tokenizer_config.json\")\r\n\r\nThis gave an error: AttributeError: 'str' object has no attribute 'config'\r\n\r\nChanging the tokenizer to \"config.json\" yielded the following error:\r\n\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte\r\n",
"This is the code which gives the error\r\n\r\nfrom transformers import AutoModel, AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(path)\r\nmodel = AutoModel.from_pretrained(path)\r\n\r\nlabel_list = [\r\n \"O\", # Outside of a named entity\r\n \"B-MISC\", # Beginning of a miscellaneous entity right after another miscellaneous entity\r\n \"I-MISC\", # Miscellaneous entity\r\n \"B-PER\", # Beginning of a person's name right after another person's name\r\n \"I-PER\", # Person's name\r\n \"B-ORG\", # Beginning of an organisation right after another organisation\r\n \"I-ORG\", # Organisation\r\n \"B-LOC\", # Beginning of a location right after another location\r\n \"I-LOC\" # Location\r\n]\r\n\r\nsequence = \"Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very\" \\\r\n \"close to the Manhattan Bridge.\"\r\n\r\n# Bit of a hack to get the tokens with the special tokens\r\ntokens = tokenizer.tokenize(tokenizer.decode(tokenizer.encode(sequence)))\r\ninputs = tokenizer.encode(sequence, return_tensors=\"pt\")\r\n\r\noutputs = model(inputs)[0]\r\npredictions = torch.argmax(outputs, dim=2)\r\n\r\nprint([(token, label_list[prediction]) for token, prediction in zip(tokens, predictions[0].tolist())])\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,607 | 1,607 | NONE | null | Do we know how to load the saved model pipeline back up and make predictions again locally? The from_pretrained() call is not working.
Please provide a few instructions on how to load the model using from_pretrained. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7479/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7479/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7478 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7478/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7478/comments | https://api.github.com/repos/huggingface/transformers/issues/7478/events | https://github.com/huggingface/transformers/pull/7478 | 712,003,723 | MDExOlB1bGxSZXF1ZXN0NDk1NTU3NjQz | 7,478 | Alphabetize model lists | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,601 | 1,601 | COLLABORATOR | null | # What does this PR do?
The model lists have grown a bit and so, like the doc navbar, I think we'll find our way better by alphabetizing them.
Adding new models will be easy in the README since Markdown supports enumerated lists with 1. for all items. reStructuredText is more annoying, but I'll make one script generate the proper part of index.rst automatically to make sure it stays in sync with the README while I'm procrastinating something more important. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7478/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7478/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7478",
"html_url": "https://github.com/huggingface/transformers/pull/7478",
"diff_url": "https://github.com/huggingface/transformers/pull/7478.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7478.patch",
"merged_at": 1601477039000
} |
https://api.github.com/repos/huggingface/transformers/issues/7477 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7477/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7477/comments | https://api.github.com/repos/huggingface/transformers/issues/7477/events | https://github.com/huggingface/transformers/pull/7477 | 712,001,071 | MDExOlB1bGxSZXF1ZXN0NDk1NTU1NDE4 | 7,477 | [s2strainer] fix eval dataset loading | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"tiny fix, might as well do this in #7467 and close this one"
] | 1,601 | 1,601 | 1,601 | MEMBER | null | `eval_dataset` should be loaded if either `--do_eval` or`EvaluationStrategy` is not `no`
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7477/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7477",
"html_url": "https://github.com/huggingface/transformers/pull/7477",
"diff_url": "https://github.com/huggingface/transformers/pull/7477.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7477.patch",
"merged_at": 1601483953000
} |
https://api.github.com/repos/huggingface/transformers/issues/7476 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7476/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7476/comments | https://api.github.com/repos/huggingface/transformers/issues/7476/events | https://github.com/huggingface/transformers/issues/7476 | 711,988,315 | MDU6SXNzdWU3MTE5ODgzMTU= | 7,476 | RAG: Can we have a document that explains the fine-tuning mechanism? | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"https://github.com/huggingface/transformers/tree/master/examples/rag#finetuning should help you :-) ",
"Thanks"
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | I want to fine-tune RAG with a custom dataset. Please help me. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7476/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7476/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7475 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7475/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7475/comments | https://api.github.com/repos/huggingface/transformers/issues/7475/events | https://github.com/huggingface/transformers/pull/7475 | 711,968,208 | MDExOlB1bGxSZXF1ZXN0NDk1NTI3NjUz | 7,475 | Small QOL improvements to TrainingArguments | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,601 | 1,601 | 1,601 | COLLABORATOR | null | # What does this PR do?
Some small QOL improvements, as discussed on the [forum](https://discuss.huggingface.co/t/seq2seqtrainer-questions/1276/):
- make `do_eval` default to `evaluation_strategy != "no"` so there is no need to pass both
- make `run_name` default to `output_dir`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7475/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7475/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7475",
"html_url": "https://github.com/huggingface/transformers/pull/7475",
"diff_url": "https://github.com/huggingface/transformers/pull/7475.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7475.patch",
"merged_at": 1601482323000
} |
https://api.github.com/repos/huggingface/transformers/issues/7474 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7474/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7474/comments | https://api.github.com/repos/huggingface/transformers/issues/7474/events | https://github.com/huggingface/transformers/pull/7474 | 711,952,294 | MDExOlB1bGxSZXF1ZXN0NDk1NTE0MzU3 | 7,474 | [Seq2Seq] Fix a couple of bugs and clean examples | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You got a little zero-shot BLEU boost it seems!\r\n\r\nThis Branch: 34.433 (on en-de)\r\nMaster: 34.4052\r\n\r\n",
"@patrickvonplaten \r\n\r\nIs the docstring still wrong? [T5ForConditionalGeneration](https://huggingface.co/transformers/model_doc/t5.html?highlight=t5forconditional#transformers.T5ForConditionalGeneration), under `decoder_input_ids` says \"if both decoder_input_ids and decoder_inputs_embeds are both unset, decoder_input_ids takes the value of input_ids\"",
"> @patrickvonplaten\r\n> \r\n> Is the docstring still wrong? [T5ForConditionalGeneration](https://huggingface.co/transformers/model_doc/t5.html?highlight=t5forconditional#transformers.T5ForConditionalGeneration), under `decoder_input_ids` says \"if both decoder_input_ids and decoder_inputs_embeds are both unset, decoder_input_ids takes the value of input_ids\"\r\n\r\nI can't find this docstring, can you link to it?",
"> > @patrickvonplaten\r\n> > Is the docstring still wrong? [T5ForConditionalGeneration](https://huggingface.co/transformers/model_doc/t5.html?highlight=t5forconditional#transformers.T5ForConditionalGeneration), under `decoder_input_ids` says \"if both decoder_input_ids and decoder_inputs_embeds are both unset, decoder_input_ids takes the value of input_ids\"\r\n> \r\n> I can't find this docstring, can you link to it?\r\n\r\nIn T5, it's under method `forward` and argument `decoder_input_ids`. Here's [code link](https://github.com/huggingface/transformers/blob/eb3bd73ce35bfef56eeb722d697f2d39a06a8f8d/src/transformers/modeling_t5.py#L869)",
"Oh yeah you're right that was a bad copy & past probably! Do you feel like opening a PR to fix it / delete it? That would be amazing :-) "
] | 1,601 | 1,605 | 1,601 | MEMBER | null | # What does this PR do?
## **IMPORTANT** - BREAKING CHANGES
This PR changes the behavior of **T5** and **TFT5** due to 3 bugs and 1 small change in forward API to support onnx and torchscript.
It also slightly changes the behavior of **Bart** and **EncoderDecoderModel**
## Description
_1st Bug_: Due to a sloppy review on my part, these lines got merged into the PyTorch T5 model: https://github.com/huggingface/transformers/pull/5518/files#r496058908, which set `decoder_input_ids = input_ids` if `decoder_input_ids` were not provided. This is misleading and also just wrong. `decoder_input_ids` should never equal `input_ids` in T5. This is not done during training nor during inference, so these lines don't make much sense. Because T5 is mostly used either with `.generate()` or in training with `model(input_ids=input_ids, labels=labels)`, in which cases the change has no effect, we only received one issue about it recently: #7358. The change was done to make T5 work with onnx, but it is just wrong IMO.
_2nd Bug_: T5 was implemented with a small bug regarding the relative distance bias calculation for the cross-attention layer. It was spotted here: #7323. The correction leads to slightly different results when doing beam search. @sshleifer - if it's easy for you, could you maybe run a quick eval on WMT to see if BLEU improves in this PR?
_3rd Bug_: T5 currently cuts the `input_ids` to the last token when `past` is used. This is a convenient function for the user, but has the potential to lead to bugs as mentioned here: https://github.com/huggingface/transformers/issues/4368#issuecomment-630244541. It's not really in the spirit of the library to do some magic under-the-hood which make certain use-cases easier for the user, but prevents other edge cases as shown in the issue above.
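Roughly, the under-the-hood convenience looks like this — assuming it lives in `prepare_inputs_for_generation`, with plain lists standing in for tensors:

```python
def prepare_inputs_for_generation(input_ids, past=None):
    # When a cache (`past`) is supplied, keep only each sequence's last token,
    # since earlier positions are already encoded in the cache. Handy for
    # `.generate()`, surprising for anyone passing full sequences themselves.
    if past is not None:
        input_ids = [row[-1:] for row in input_ids]
    return {"input_ids": input_ids, "past_key_values": past}

out = prepare_inputs_for_generation([[5, 8, 13]], past="cached-states")
print(out["input_ids"])  # [[13]]
```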
Feature request: support torchscript and onnx. This PR allows T5 to be used with torchscript and onnx.
Now the difficult part: **This PR has breaking changes!**
For one, all three bugs lead to breaking changes. Then, in order to solve the torchscript/onnx problem we are having with T5 (and actually with all Seq2Seq models), I had to change the positional ordering of T5's forward pass slightly, which should have minimal breaking changes because I doubt anybody has used T5 with positional arguments as follows:
`tf_model(input_ids, None, None, decoder_input_ids)`.
We had a couple of issues, *e.g.* #5647, about supporting torchscript and onnx for Bart/T5. If we ever want to support onnx and torchscript in the future, I think we need to do this positional reordering. As shown by @mfuntowicz, onnx can lead to great speed improvements, and we also know now that `torchscript` can give a ~30% speed improvement on dynamic input sizes. => I would be really happy if we could accept this slight breaking change here.
I thought about this quite a bit and I think it's very important that we agree on ONE positional argument ordering for the forward pass of Seq2Seq models. At the moment the ordering of Bart, EncoderDecoder, T5, ... is not coherent and done in a way that does not support onnx and torchscript. At the moment no seq2seq model really supports torchscript (Bart does in the test, but one cannot provide `decoder_input_ids` when using torchscript which effectively makes torchscript useless for inference). The ordering should be as follows IMO:
`input_ids`
`attention_mask`
`decoder_input_ids`
`decoder_attention_mask`
`encoder_outputs`
...,
meaning that all `required` inputs should come first to comply with onnx and torchscript and optional ones should come after.
I changed the ordering of all seq2seq models to comply with this format even though we have some positional ordering breaking changes for `T5`, `Bart` and `EncoderDecoder`.
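The proposed ordering can be illustrated with a stub signature (illustrative only — not the library's actual forward):

```python
import inspect

# Required inputs first, optional ones after, so torchscript/onnx tracing
# can bind decoder_input_ids positionally.
def forward(input_ids, attention_mask=None, decoder_input_ids=None,
            decoder_attention_mask=None, encoder_outputs=None):
    return input_ids

params = list(inspect.signature(forward).parameters)
print(params)
```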
## UPDATE:
- Added tests that the encoder decoder forward signature stays the same
- Applied changes to all Seq2Seq models
- Cleaned docs
- Fixed TF slow tests | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7474/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7474/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7474",
"html_url": "https://github.com/huggingface/transformers/pull/7474",
"diff_url": "https://github.com/huggingface/transformers/pull/7474.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7474.patch",
"merged_at": 1601566730000
} |
https://api.github.com/repos/huggingface/transformers/issues/7473 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7473/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7473/comments | https://api.github.com/repos/huggingface/transformers/issues/7473/events | https://github.com/huggingface/transformers/pull/7473 | 711,867,573 | MDExOlB1bGxSZXF1ZXN0NDk1NDQyNzU4 | 7,473 | Make transformers install check positive | {
"login": "FremyCompany",
"id": 364405,
"node_id": "MDQ6VXNlcjM2NDQwNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/364405?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FremyCompany",
"html_url": "https://github.com/FremyCompany",
"followers_url": "https://api.github.com/users/FremyCompany/followers",
"following_url": "https://api.github.com/users/FremyCompany/following{/other_user}",
"gists_url": "https://api.github.com/users/FremyCompany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FremyCompany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FremyCompany/subscriptions",
"organizations_url": "https://api.github.com/users/FremyCompany/orgs",
"repos_url": "https://api.github.com/users/FremyCompany/repos",
"events_url": "https://api.github.com/users/FremyCompany/events{/privacy}",
"received_events_url": "https://api.github.com/users/FremyCompany/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | # What does this PR do?
When transformers is correctly installed, I feel you should get a positive message. It's called huggingface not angryface, after all ;-)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
Given this concerns the documentation, mentioning @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7473/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7473",
"html_url": "https://github.com/huggingface/transformers/pull/7473",
"diff_url": "https://github.com/huggingface/transformers/pull/7473.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7473.patch",
"merged_at": 1601466281000
} |
https://api.github.com/repos/huggingface/transformers/issues/7472 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7472/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7472/comments | https://api.github.com/repos/huggingface/transformers/issues/7472/events | https://github.com/huggingface/transformers/pull/7472 | 711,843,346 | MDExOlB1bGxSZXF1ZXN0NDk1NDIyNjQx | 7,472 | Number of GPUs for multi-gpu | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,601 | 1,601 | MEMBER | null | Print number of GPUs when running the multi-gpu testing suites. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7472/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7472",
"html_url": "https://github.com/huggingface/transformers/pull/7472",
"diff_url": "https://github.com/huggingface/transformers/pull/7472.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7472.patch",
"merged_at": 1601463200000
} |
https://api.github.com/repos/huggingface/transformers/issues/7471 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7471/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7471/comments | https://api.github.com/repos/huggingface/transformers/issues/7471/events | https://github.com/huggingface/transformers/pull/7471 | 711,836,290 | MDExOlB1bGxSZXF1ZXN0NDk1NDE2NjU5 | 7,471 | Fix LXMERT with DataParallel | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,601 | 1,601 | MEMBER | null | This PR fixes LXMERT when using DataParallel, similar to https://github.com/huggingface/transformers/pull/4300 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7471/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7471",
"html_url": "https://github.com/huggingface/transformers/pull/7471",
"diff_url": "https://github.com/huggingface/transformers/pull/7471.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7471.patch",
"merged_at": 1601462485000
} |
https://api.github.com/repos/huggingface/transformers/issues/7470 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7470/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7470/comments | https://api.github.com/repos/huggingface/transformers/issues/7470/events | https://github.com/huggingface/transformers/pull/7470 | 711,692,798 | MDExOlB1bGxSZXF1ZXN0NDk1MzAxMDEw | 7,470 | Seq2SeqDataset: avoid passing src_lang everywhere | {
"login": "amanpreet692",
"id": 42522643,
"node_id": "MDQ6VXNlcjQyNTIyNjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/42522643?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amanpreet692",
"html_url": "https://github.com/amanpreet692",
"followers_url": "https://api.github.com/users/amanpreet692/followers",
"following_url": "https://api.github.com/users/amanpreet692/following{/other_user}",
"gists_url": "https://api.github.com/users/amanpreet692/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amanpreet692/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amanpreet692/subscriptions",
"organizations_url": "https://api.github.com/users/amanpreet692/orgs",
"repos_url": "https://api.github.com/users/amanpreet692/repos",
"events_url": "https://api.github.com/users/amanpreet692/events{/privacy}",
"received_events_url": "https://api.github.com/users/amanpreet692/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | Changed the constructor argument for AbstractSeq2SeqDataset to **kwargs to avoid passing unwanted parameters to tokenizers,
e.g. src and tgt lang to the T5 tokenizer.
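The idea behind the change can be sketched as follows. The class and attribute names here are illustrative, not the actual examples code: language options are popped off the dataset's **kwargs before anything reaches the tokenizer, so a tokenizer that does not accept `src_lang`/`tgt_lang` (such as T5's) never sees them and no warnings are emitted.

```python
# Hypothetical sketch of a **kwargs-based constructor: language fields are
# consumed by the dataset, and only the remaining options are kept for the
# tokenizer. Names are illustrative, not the real examples/seq2seq code.
class AbstractSeq2SeqDatasetSketch:
    def __init__(self, **dataset_kwargs):
        self.src_lang = dataset_kwargs.pop("src_lang", None)
        self.tgt_lang = dataset_kwargs.pop("tgt_lang", None)
        # Only tokenizer-safe options remain here.
        self.dataset_kwargs = dataset_kwargs

ds = AbstractSeq2SeqDatasetSketch(src_lang="en", tgt_lang="ro", max_length=128)
print(ds.dataset_kwargs)  # {'max_length': 128}
```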
# What does this PR do?
tokenization_util.py was continuously generating unnecessary warnings about seq2seq batch-processing tokenizer arguments that shouldn't have been passed in the first place. Fixed this and added a test case as suggested.
Fixes #7454
## Before submitting
- This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). N/A
- Did you read the [contributor guideline]? Yes
- Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. Yes, #7454
- Did you make sure to update the documentation with your changes? N/A
- Did you write any new necessary tests? Yes, added a relevant test in examples/seq2seq/test_datasets.py
## Who can review?
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7470/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7470",
"html_url": "https://github.com/huggingface/transformers/pull/7470",
"diff_url": "https://github.com/huggingface/transformers/pull/7470.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7470.patch",
"merged_at": 1601486869000
} |
https://api.github.com/repos/huggingface/transformers/issues/7469 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7469/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7469/comments | https://api.github.com/repos/huggingface/transformers/issues/7469/events | https://github.com/huggingface/transformers/pull/7469 | 711,688,701 | MDExOlB1bGxSZXF1ZXN0NDk1Mjk4MTc3 | 7,469 | fix the first chunk's lower triangle | {
"login": "Line290",
"id": 26078517,
"node_id": "MDQ6VXNlcjI2MDc4NTE3",
"avatar_url": "https://avatars.githubusercontent.com/u/26078517?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Line290",
"html_url": "https://github.com/Line290",
"followers_url": "https://api.github.com/users/Line290/followers",
"following_url": "https://api.github.com/users/Line290/following{/other_user}",
"gists_url": "https://api.github.com/users/Line290/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Line290/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Line290/subscriptions",
"organizations_url": "https://api.github.com/users/Line290/orgs",
"repos_url": "https://api.github.com/users/Line290/repos",
"events_url": "https://api.github.com/users/Line290/events{/privacy}",
"received_events_url": "https://api.github.com/users/Line290/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Sorry, I made a mistake, ignore this.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,608 | 1,608 | CONTRIBUTOR | null | Correct the first chunk's lower triangle.
For details, please look at Page 4 in a google shared document.
Link is: https://docs.google.com/document/d/12rv879j2m5VkfTvk0F-WSOPFF5gqbE0kgk60PvY5nHc/edit?usp=sharing
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7469/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7469",
"html_url": "https://github.com/huggingface/transformers/pull/7469",
"diff_url": "https://github.com/huggingface/transformers/pull/7469.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7469.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7468 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7468/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7468/comments | https://api.github.com/repos/huggingface/transformers/issues/7468/events | https://github.com/huggingface/transformers/pull/7468 | 711,670,824 | MDExOlB1bGxSZXF1ZXN0NDk1MjgzNDcx | 7,468 | Create README.md | {
"login": "allenyummy",
"id": 36063123,
"node_id": "MDQ6VXNlcjM2MDYzMTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/36063123?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/allenyummy",
"html_url": "https://github.com/allenyummy",
"followers_url": "https://api.github.com/users/allenyummy/followers",
"following_url": "https://api.github.com/users/allenyummy/following{/other_user}",
"gists_url": "https://api.github.com/users/allenyummy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/allenyummy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/allenyummy/subscriptions",
"organizations_url": "https://api.github.com/users/allenyummy/orgs",
"repos_url": "https://api.github.com/users/allenyummy/repos",
"events_url": "https://api.github.com/users/allenyummy/events{/privacy}",
"received_events_url": "https://api.github.com/users/allenyummy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | # What does this PR do?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7468/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7468",
"html_url": "https://github.com/huggingface/transformers/pull/7468",
"diff_url": "https://github.com/huggingface/transformers/pull/7468.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7468.patch",
"merged_at": 1601556626000
} |
https://api.github.com/repos/huggingface/transformers/issues/7467 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7467/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7467/comments | https://api.github.com/repos/huggingface/transformers/issues/7467/events | https://github.com/huggingface/transformers/pull/7467 | 711,644,792 | MDExOlB1bGxSZXF1ZXN0NDk1MjYyMDgw | 7,467 | [s2sTrainer] test + code cleanup | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | - add a 10 second test for Seq2SeqTrainer
- general code cleanup
- pass `data_args` to Seq2SeqTrainer
@patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7467/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7467",
"html_url": "https://github.com/huggingface/transformers/pull/7467",
"diff_url": "https://github.com/huggingface/transformers/pull/7467.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7467.patch",
"merged_at": 1601526782000
} |
https://api.github.com/repos/huggingface/transformers/issues/7466 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7466/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7466/comments | https://api.github.com/repos/huggingface/transformers/issues/7466/events | https://github.com/huggingface/transformers/issues/7466 | 711,606,923 | MDU6SXNzdWU3MTE2MDY5MjM= | 7,466 | Seq2SeqTrainer: add a fast test that doesn't learn anything but can run on CPU | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I'll take it :)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,607 | 1,607 | CONTRIBUTOR | null | @patil-suraj do you want to take this or should I?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7466/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7466/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7465 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7465/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7465/comments | https://api.github.com/repos/huggingface/transformers/issues/7465/events | https://github.com/huggingface/transformers/issues/7465 | 711,587,084 | MDU6SXNzdWU3MTE1ODcwODQ= | 7,465 | RAG - reproducing RAG-Sequence QA score | {
"login": "acslk",
"id": 11131839,
"node_id": "MDQ6VXNlcjExMTMxODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/11131839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/acslk",
"html_url": "https://github.com/acslk",
"followers_url": "https://api.github.com/users/acslk/followers",
"following_url": "https://api.github.com/users/acslk/following{/other_user}",
"gists_url": "https://api.github.com/users/acslk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/acslk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/acslk/subscriptions",
"organizations_url": "https://api.github.com/users/acslk/orgs",
"repos_url": "https://api.github.com/users/acslk/repos",
"events_url": "https://api.github.com/users/acslk/events{/privacy}",
"received_events_url": "https://api.github.com/users/acslk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | [
"Gently pinging @ola13 here, she probably knows best which command to run to reproduce the eval results :-) ",
"Hi @acslk, thanks for your post!\r\n\r\nYou should be able to reproduce paper results for the RAG Token model (44.1 EM on NQ) by evaluating `facebook/rag-token-nq` with 20 docs.\r\n\r\nAs for the RAG Sequence model - we have lost some quality when translating the checkpoint from `fairseq` (the experimentation framework we used to obtain the original paper results) to HuggingFace. We are now working on replicating the paper numbers in HF and we'll update the official `facebook/rag-sequence-nq` model weights once we have that so stay tuned!",
"Thanks for the response, I tried the command above with RAG Token model and n_docs 20 on NQ test set and can confirm it matches paper results:\r\nINFO:__main__:F1: 51.44\r\nINFO:__main__:EM: 44.10",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,607 | 1,607 | NONE | null | I'm trying to reproduce RAG-Sequence NQ score of 44.5 presented in Table 1 of the paper at https://arxiv.org/abs/2005.11401.
I used the command in the examples/rag readme
```bash
python examples/rag/eval_rag.py \
--model_name_or_path facebook/rag-sequence-nq \
--model_type rag_sequence \
--evaluation_set path/to/test.source \
--gold_data_path path/to/gold_data \
--predictions_path path/to/e2e_preds.txt \
--eval_mode e2e \
--gold_data_mode qa \
--n_docs 5 \
--print_predictions \
--recalculate
```
For gold_data_path I used data.retriever.qas.nq-test from DPR repo, consisting of 3610 questions and answers: https://github.com/facebookresearch/DPR/blob/master/data/download_data.py#L91-L97
For evaluation_set, my understanding is that it should be the questions, so I extracted just the questions from the qas.nq-test csv file.
I tried the above command with n_docs 5 and 10, with the following results:
n_docs 5
INFO:__main__:F1: 49.67
INFO:__main__:EM: 42.58
n_docs 10
INFO:__main__:F1: 50.62
INFO:__main__:EM: 43.49
With n_docs 10 it's still 1 point below the score in the paper. What would be the proper setup to reproduce that number: a different pretrained model, a higher n_docs, or different test data?
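For reference, the EM/F1 numbers printed by `eval_rag.py` follow the standard SQuAD-style answer normalization. A minimal sketch of those metrics (my own simplification, not the script's exact code):

```python
import collections
import re
import string


def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())


def exact_match(prediction: str, gold: str) -> float:
    """1.0 if the normalized strings match exactly, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(gold))


def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1 between normalized prediction and gold answer."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(gold).split()
    common = collections.Counter(pred_tokens) & collections.Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

The reported scores are typically averaged over all test questions, taking the max over each question's gold answer aliases.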
Thanks in advance! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7465/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7465/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7464 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7464/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7464/comments | https://api.github.com/repos/huggingface/transformers/issues/7464/events | https://github.com/huggingface/transformers/pull/7464 | 711,528,345 | MDExOlB1bGxSZXF1ZXN0NDk1MTY4MTU2 | 7,464 | Remove config assumption in Trainer | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,601 | 1,601 | COLLABORATOR | null | # What does this PR do?
This PR tries to limit the access to `model.config` in `Trainer` to the minimum so that it works with regular PyTorch modules (as long as they accept dict inputs and return loss first like our models). The most challenging part was the storing/restoring of the `total_flos`, which I moved to the newly created `TrainerState`. It should work as before and be saved along the rest of the training state. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7464/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7464",
"html_url": "https://github.com/huggingface/transformers/pull/7464",
"diff_url": "https://github.com/huggingface/transformers/pull/7464.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7464.patch",
"merged_at": 1601471005000
} |
https://api.github.com/repos/huggingface/transformers/issues/7463 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7463/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7463/comments | https://api.github.com/repos/huggingface/transformers/issues/7463/events | https://github.com/huggingface/transformers/pull/7463 | 711,468,382 | MDExOlB1bGxSZXF1ZXN0NDk1MTE4MDEz | 7,463 | Trainer should not modify its TrainingArguments | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Shouldn't we have a properly-typed self.state on the Trainer instance? I always find assigning instance properties within the code a bit messy",
"We can put everything in the new `TrainerState`. I just did the same as for `self.epoch` or `self.global_step` (and there are plenty more).",
"Yes I think that’d be nice ",
"Closing this PR as this will require a bit more work then :-)"
] | 1,601 | 1,601 | 1,601 | COLLABORATOR | null | # What does this PR do?
This fixes a bug that took me some time to track down in a notebook with several trainings. The bottom line is that `Trainer` should not modify its `TrainingArguments`, so this fixes that part by saving the desired number of `max_steps` in the state instead of in the args. It also stores the number of training epochs for easy access in subclasses. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7463/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7463/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7463",
"html_url": "https://github.com/huggingface/transformers/pull/7463",
"diff_url": "https://github.com/huggingface/transformers/pull/7463.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7463.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7462 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7462/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7462/comments | https://api.github.com/repos/huggingface/transformers/issues/7462/events | https://github.com/huggingface/transformers/issues/7462 | 711,463,321 | MDU6SXNzdWU3MTE0NjMzMjE= | 7,462 | RAG - how to precompute custom document index? | {
"login": "aced125",
"id": 44452903,
"node_id": "MDQ6VXNlcjQ0NDUyOTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/44452903?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aced125",
"html_url": "https://github.com/aced125",
"followers_url": "https://api.github.com/users/aced125/followers",
"following_url": "https://api.github.com/users/aced125/following{/other_user}",
"gists_url": "https://api.github.com/users/aced125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aced125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aced125/subscriptions",
"organizations_url": "https://api.github.com/users/aced125/orgs",
"repos_url": "https://api.github.com/users/aced125/repos",
"events_url": "https://api.github.com/users/aced125/events{/privacy}",
"received_events_url": "https://api.github.com/users/aced125/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Second this. \r\n\r\nhttps://github.com/deepset-ai/haystack may be useful to you. They leverage huggingface and have an DPR implementation with an end-to-end example. Will not be surprised to see RAG implemented soon.\r\n",
"@Weilin37 Thanks. I'm also looking at the Faiss docs now (https://github.com/facebookresearch/faiss/wiki/Faiss-indexes).",
"@lhoestq can maybe help here as well",
"Yep I'm thinking of adding a script in `examples/rag` that shows how to create an indexed dataset for RAG.\r\nI'll let you know how it goes",
"@lhoestq Can you please let me know on how we can index the custom datasets? Appreciate your help on this",
"@lhoestq I have a bunch of documents to perform Q&A and currently, in the config it says, \r\ndataset (str, optional, defaults to \"wiki_dpr\") – A dataset identifier of the indexed dataset on HuggingFace AWS bucket (list all available datasets and ids using datasets.list_datasets()). So how can we create an indexed file and input that to the pretrained model for evaluation. ",
"> @lhoestq I have a bunch of documents to perform Q&A and currently, in the config it says,\r\n> dataset (str, optional, defaults to \"wiki_dpr\") – A dataset identifier of the indexed dataset on HuggingFace AWS bucket (list all available datasets and ids using datasets.list_datasets()). So how can we create an indexed file and input that to the pretrained model for evaluation.\r\n\r\nYes right... We'll have to edit the `RagRetriever` and the `HfIndex` to accept custom ones.\r\nIf you wanto to give it a try in the meantime, feel free to do so :)",
"Any progress on this @lhoestq @patrickvonplaten ? Awesome work guys :)",
"@tholor @Timoeller Do you reckon you guys could integrate this work into haystack?",
"@aced125 Yep, we will integrate RAG in Haystack soon (https://github.com/deepset-ai/haystack/issues/443).",
"> Any progress on this @lhoestq @patrickvonplaten ? Awesome work guys :)\r\n\r\nYou can expect a PR by tomorrow",
"Awesome thanks everyone @tholor @lhoestq @patrickvonplaten !!!!",
"Thank you @lhoestq . Really appreciate for getting back quickly on this issue.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hello everyone,\r\nI am interesting in studying how RAG behaves without the DPR retriever.\r\nFor example in the code below\r\n\r\n``from transformers import RagRetriever\r\nfrom transformers import RagTokenizer, RagRetriever, RagTokenForGeneration\r\n\r\nretriever = RagRetriever.from_pretrained('./rag-token-nq', indexed_dataset=dataset)\r\ntokenizer = RagTokenizer.from_pretrained(\"./rag-token-nq\")\r\nmodel = RagTokenForGeneration.from_pretrained(\"./rag-token-nq\", retriever=retriever)\r\n\r\n**input_dict = tokenizer.prepare_seq2seq_batch(\"How many people live in Paris?\", \"In Paris, there are 10 million people.\", return_tensors=\"pt\")**\r\ninput_ids = input_dict[\"input_ids\"]\r\n\r\nmodel = RagTokenForGeneration.from_pretrained(\"facebook/rag-token-nq\", retriever=retriever)\r\n\r\ngenerated_ids = model.generate(input_ids=input_ids, labels=input_dict[\"labels\"])\r\n\r\ngenerated_string = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)\r\nprint(generated_string) ``\r\n\r\nIn the line '' **input_dict = tokenizer.prepare_seq2seq_batch(\"How many people live in Paris?\", \"In Paris, there are 10 million people.\", return_tensors=\"pt\")** ``, I want to use \"How many people live in Paris ?\" as the question and \"In Paris, there are 10 million people.\" the passage / context which should be used to generate the answer.\r\n\r\nKindly let me know how to do this?\r\n\r\nIs my understanding of the code correct and if not, how to go about it?\r\n\r\nThanks,\r\nKrishanu",
"For RAG you can pass both your question as `input_ids` and your context as `context_input_ids` to `model.generate`.\r\nYou can provide several contexts for one question.\r\n\r\nYou can find more information in the documentation [here](https://huggingface.co/transformers/model_doc/rag.html#transformers.RagTokenForGeneration.generate)",
"@lhoestq Thanks for the reply.\r\nThere is this doc_score parameter in the model.generate function. Is it necessary or optional?",
"If you pass the `context_input_ids` you also need to provide the `doc_scores` indeed."
] | 1,601 | 1,611 | 1,608 | NONE | null | Was wondering if there was any code snippet / blog post showing how one could load their own documents and index them, so they can be used by the RAG retriever.
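While waiting for an official script, the core recipe is: embed each document with the DPR context encoder, then build a maximum-inner-product index over the embeddings, which is what the retriever queries at generation time. Below is a toy numpy stand-in for that search step (the class name is illustrative, not a real FAISS or `datasets` API):

```python
import numpy as np


class FlatIPIndex:
    """Toy stand-in for an exact maximum-inner-product index (like faiss.IndexFlatIP)."""

    def __init__(self, dim: int):
        self.vectors = np.zeros((0, dim), dtype=np.float32)

    def add(self, vecs) -> None:
        """Append document embedding vectors to the index."""
        self.vectors = np.vstack([self.vectors, np.asarray(vecs, dtype=np.float32)])

    def search(self, queries, k: int):
        """Return (scores, ids) of the k highest inner-product documents per query."""
        scores = np.asarray(queries, dtype=np.float32) @ self.vectors.T
        ids = np.argsort(-scores, axis=1)[:, :k]
        return np.take_along_axis(scores, ids, axis=1), ids
```

In practice you would store the embeddings as a column of a `datasets.Dataset` and use its FAISS-indexing helper instead of this toy class.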
Cheers! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7462/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7461 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7461/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7461/comments | https://api.github.com/repos/huggingface/transformers/issues/7461/events | https://github.com/huggingface/transformers/pull/7461 | 711,457,731 | MDExOlB1bGxSZXF1ZXN0NDk1MTA5NjMx | 7,461 | Distributed Trainer: 2 little fixes | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can we see when the config is accessed (in your error message)? `model.config` should be accessed as sparsely as possible in `Trainer` to work with any kind of model and I'll probably remove the requirement entirely soon.",
"`Seq2SeqTrainer` uses model.config 8 times. Mostly `pad_token_id` to avoid counting padding in the loss func.",
"It should add an assert the model is a `PreTrainedModel` at __init__ just to be clean, then for your specific problem, it should use the function `self._actual_model()` to grab the config to avoid your error (e.g., `self.model.config` -> `self._actual_model().config`).\r\n\r\n`Trainer` is on its way to fully handle models without config, see #7464.",
"OK. I reduced scope of this PR to just the `Tensor` -> `tensor`."
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | 1) fix DDP access to `model.config`. We could also set `self.config = model.config` earlier in `__init__`
2) switch torch.Tensor -> torch.tensor. The latter "infers the dtype automatically"
After which the command in #7460 works.
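For context on (2): the legacy `torch.Tensor` constructor always builds a `float32` tensor regardless of its input, while the `torch.tensor` factory infers the dtype from the data, so large integers (like flop counts) are not forced through a lossy float conversion:

```python
import torch

a = torch.Tensor([1, 2, 3])   # legacy constructor: dtype is always torch.float32
b = torch.tensor([1, 2, 3])   # factory function: dtype inferred from data (int64 here)
c = torch.tensor([1.0, 2.0])  # floating-point data stays float32
```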
CC @patil-suraj , @TevenLeScao | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7461/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7461/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7461",
"html_url": "https://github.com/huggingface/transformers/pull/7461",
"diff_url": "https://github.com/huggingface/transformers/pull/7461.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7461.patch",
"merged_at": 1601518455000
} |
https://api.github.com/repos/huggingface/transformers/issues/7460 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7460/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7460/comments | https://api.github.com/repos/huggingface/transformers/issues/7460/events | https://github.com/huggingface/transformers/issues/7460 | 711,457,401 | MDU6SXNzdWU3MTE0NTc0MDE= | 7,460 | Seq2SeqTrainer Distributed: AttributeError and the RuntimeError | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger After that bug fix, the next bug is:\r\n```\r\nRuntimeError: Precision loss when unpacking double\r\n tensorized_scalar = torch.Tensor(scalars).cuda()\r\nRuntimeError: Precision loss when unpacking double\r\nTraceback (most recent call last):\r\n File \"finetune_trainer.py\", line 442, in <module>\r\n main()\r\n File \"finetune_trainer.py\", line 383, in main\r\n model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None\r\n File \"/home/shleifer/transformers_fork/src/transformers/trainer.py\", line 809, in train\r\n self.log(logs)\r\n File \"/home/shleifer/transformers_fork/src/transformers/trainer.py\", line 1031, in log\r\n total_flos = distributed_broadcast_scalars([self.total_flos]).sum().item()\r\n File \"/home/shleifer/transformers_fork/src/transformers/trainer_utils.py\", line 206, in distributed_broadcast_scalars\r\n tensorized_scalar = torch.Tensor(scalars).cuda()\r\nRuntimeError: Precision loss when unpacking double\r\n```\r\n### Env\r\n\r\nApex installed.\r\n```\r\n- `transformers` version: 3.3.1\r\n- Platform: Linux-4.9.0-11-amd64-x86_64-with-debian-9.12\r\n- Python version: 3.7.4\r\n- PyTorch version (GPU?): 1.5.1+cu101 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n```",
"The next error is on @TevenLeScao ",
"I fixed it, will stuff into 1 PR when everything is working."
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | The following command (on 8 GPUs) fails with
```python
AttributeError: DistributedDataParallel has no attribute "config"
```
### Command
```bash
export WANDB_PROJECT=dmar
export BS=64
export GAS=1
export m=sshleifer/student_marian_en_ro_6_3
export MAX_LEN=128
python -m torch.distributed.launch --nproc_per_node=8 finetune_trainer.py \
--tokenizer_name $m --model_name_or_path $m \
--data_dir wmt_mar_pl \
--output_dir marian_en_ro_6_3 --overwrite_output_dir --predict_with_generate \
--learning_rate=3e-4 \
--warmup_steps 500 --sortish_sampler \
--fp16 \
--gradient_accumulation_steps=$GAS \
--per_device_train_batch_size=$BS --per_device_eval_batch_size=$BS \
--freeze_encoder --freeze_embeds \
--num_train_epochs=6 \
--save_steps 3000 --eval_steps 3000 \
--max_source_length $MAX_LEN --max_target_length $MAX_LEN --val_max_target_length $MAX_LEN --test_max_target_length $MAX_LEN \
--do_train --do_eval --do_predict --evaluate_during_training \
--predict_with_generate --logging_first_step \
--task translation --label_smoothing 0.1 --n_gpu 8 \
--run_name builtin_trainer_63_v8_pl \
"$@"
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7460/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7459 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7459/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7459/comments | https://api.github.com/repos/huggingface/transformers/issues/7459/events | https://github.com/huggingface/transformers/pull/7459 | 711,410,481 | MDExOlB1bGxSZXF1ZXN0NDk1MDcwNjg5 | 7,459 | Update README.md | {
"login": "mar-muel",
"id": 19345805,
"node_id": "MDQ6VXNlcjE5MzQ1ODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/19345805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mar-muel",
"html_url": "https://github.com/mar-muel",
"followers_url": "https://api.github.com/users/mar-muel/followers",
"following_url": "https://api.github.com/users/mar-muel/following{/other_user}",
"gists_url": "https://api.github.com/users/mar-muel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mar-muel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mar-muel/subscriptions",
"organizations_url": "https://api.github.com/users/mar-muel/orgs",
"repos_url": "https://api.github.com/users/mar-muel/repos",
"events_url": "https://api.github.com/users/mar-muel/events{/privacy}",
"received_events_url": "https://api.github.com/users/mar-muel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Thanks! FYI we'll have proper model versioning in ~1 month or so"
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | Update/reference v2 model | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7459/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7459",
"html_url": "https://github.com/huggingface/transformers/pull/7459",
"diff_url": "https://github.com/huggingface/transformers/pull/7459.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7459.patch",
"merged_at": 1601556686000
} |
https://api.github.com/repos/huggingface/transformers/issues/7458 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7458/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7458/comments | https://api.github.com/repos/huggingface/transformers/issues/7458/events | https://github.com/huggingface/transformers/pull/7458 | 711,330,163 | MDExOlB1bGxSZXF1ZXN0NDk1MDAzNTIw | 7,458 | Fix Trainer tests in a multiGPU env | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,601 | 1,601 | COLLABORATOR | null | # What does this PR do?
Should fix the multiple GPU CI test (tests are passing locally). Will merge as soon as the CI passes to make the CI green. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7458/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7458/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7458",
"html_url": "https://github.com/huggingface/transformers/pull/7458",
"diff_url": "https://github.com/huggingface/transformers/pull/7458.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7458.patch",
"merged_at": 1601402802000
} |
https://api.github.com/repos/huggingface/transformers/issues/7457 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7457/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7457/comments | https://api.github.com/repos/huggingface/transformers/issues/7457/events | https://github.com/huggingface/transformers/pull/7457 | 711,316,333 | MDExOlB1bGxSZXF1ZXN0NDk0OTkyMTA4 | 7,457 | Get a better error when check_copies fails | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7457?src=pr&el=h1) Report\n> Merging [#7457](https://codecov.io/gh/huggingface/transformers/pull/7457?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/52e8392b7ebd4ebc7b796e8f14b9dae271139f5f?el=desc) will **increase** coverage by `2.09%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7457?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7457 +/- ##\n==========================================\n+ Coverage 77.07% 79.17% +2.09% \n==========================================\n Files 181 181 \n Lines 35858 35858 \n==========================================\n+ Hits 27638 28391 +753 \n+ Misses 8220 7467 -753 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7457?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7457/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `32.75% <0.00%> (-66.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7457/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.38% <0.00%> (-29.59%)` | :arrow_down: |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7457/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `66.34% <0.00%> (-28.85%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7457/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `60.70% <0.00%> (-22.68%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7457/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.51% <0.00%> (-15.11%)` | 
:arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7457/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.71% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7457/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `58.52% <0.00%> (+39.78%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7457/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `89.67% <0.00%> (+68.14%)` | :arrow_up: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7457/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `94.60% <0.00%> (+77.88%)` | :arrow_up: |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7457/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <0.00%> (+78.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7457?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7457?src=pr&el=footer). Last update [52e8392...f4761cf](https://codecov.io/gh/huggingface/transformers/pull/7457?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,601 | 1,601 | 1,601 | COLLABORATOR | null | # What does this PR do?
Prints a cleaner error message when `check_copies.py` encounters a bad copy. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7457/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7457/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7457",
"html_url": "https://github.com/huggingface/transformers/pull/7457",
"diff_url": "https://github.com/huggingface/transformers/pull/7457.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7457.patch",
"merged_at": 1601453114000
} |
https://api.github.com/repos/huggingface/transformers/issues/7456 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7456/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7456/comments | https://api.github.com/repos/huggingface/transformers/issues/7456/events | https://github.com/huggingface/transformers/pull/7456 | 711,315,263 | MDExOlB1bGxSZXF1ZXN0NDk0OTkxMjEz | 7,456 | Catch import datasets common errors | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7456?src=pr&el=h1) Report\n> Merging [#7456](https://codecov.io/gh/huggingface/transformers/pull/7456?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/52e8392b7ebd4ebc7b796e8f14b9dae271139f5f?el=desc) will **decrease** coverage by `0.23%`.\n> The diff coverage is `75.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7456?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7456 +/- ##\n==========================================\n- Coverage 77.07% 76.84% -0.24% \n==========================================\n Files 181 181 \n Lines 35858 35860 +2 \n==========================================\n- Hits 27638 27555 -83 \n- Misses 8220 8305 +85 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7456?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.97% <75.00%> (-0.40%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `24.25% <0.00%> (-73.56%)` | :arrow_down: |\n| [src/transformers/modeling\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.06% <0.00%> (-69.40%)` | :arrow_down: |\n| [src/transformers/activations\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% <0.00%> (-20.84%)` 
| :arrow_down: |\n| [src/transformers/configuration\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xheW91dGxtLnB5) | `80.00% <0.00%> (-20.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `83.58% <0.00%> (-8.96%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.68% <0.00%> (-0.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.90% <0.00%> (-0.32%)` | :arrow_down: |\n| ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/7456/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7456?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7456?src=pr&el=footer). Last update [52e8392...c20be03](https://codecov.io/gh/huggingface/transformers/pull/7456?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,601 | 1,601 | 1,601 | COLLABORATOR | null | # What does this PR do?
This PR adds more checks when trying to import `datasets`, to verify that we are actually using the datasets library and not a local folder/module shadowing it.
Fixes #7430
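For illustration, a minimal sketch of the kind of stricter check described here (the helper name and the exact heuristics are assumptions, not the actual code in `file_utils.py`):

```python
import importlib
import importlib.util


def is_datasets_available() -> bool:
    """Hypothetical sketch: return True only if importing "datasets"
    resolves to the installed library, not a local folder or module
    that happens to shadow it."""
    spec = importlib.util.find_spec("datasets")
    if spec is None:
        return False
    module = importlib.import_module("datasets")
    # A local folder masquerading as the library typically lacks the
    # attributes the real package exposes.
    return hasattr(module, "__version__") and hasattr(module, "load_dataset")
```

With a check along these lines, a stray `datasets/` folder in the working directory is reported as the library being unavailable, instead of failing later with a confusing `AttributeError`.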
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7456/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7456/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7456",
"html_url": "https://github.com/huggingface/transformers/pull/7456",
"diff_url": "https://github.com/huggingface/transformers/pull/7456.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7456.patch",
"merged_at": 1601401330000
} |
https://api.github.com/repos/huggingface/transformers/issues/7455 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7455/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7455/comments | https://api.github.com/repos/huggingface/transformers/issues/7455/events | https://github.com/huggingface/transformers/pull/7455 | 711,302,449 | MDExOlB1bGxSZXF1ZXN0NDk0OTgwNjAz | 7,455 | Adding the Streamlit demo app code for the RAG model | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7455?src=pr&el=h1) Report\n> Merging [#7455](https://codecov.io/gh/huggingface/transformers/pull/7455?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9e9a1fb8c75e2ef00fea9c4c0dc511fc0178081c?el=desc) will **increase** coverage by `2.31%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7455?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7455 +/- ##\n==========================================\n+ Coverage 76.60% 78.91% +2.31% \n==========================================\n Files 181 181 \n Lines 35865 35865 \n==========================================\n+ Hits 27473 28302 +829 \n+ Misses 8392 7563 -829 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7455?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7455/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `17.46% <0.00%> (-81.13%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7455/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `18.69% <0.00%> (-55.46%)` | :arrow_down: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7455/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `81.81% <0.00%> (-18.19%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7455/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.51% <0.00%> (-15.11%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7455/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.31% <0.00%> (-10.12%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7455/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7455/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.91% <0.00%> (+0.27%)` | :arrow_up: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7455/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.61% <0.00%> (+0.50%)` | :arrow_up: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7455/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `83.33% <0.00%> (+4.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7455/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `90.00% <0.00%> (+30.00%)` | :arrow_up: |\n| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/7455/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7455?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7455?src=pr&el=footer). Last update [9e9a1fb...291cea0](https://codecov.io/gh/huggingface/transformers/pull/7455?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thank you for the demo! I added some comments with an issue I faced when running it regarding Streamlit.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,601 | 1,619 | 1,619 | MEMBER | null | # Adding RAG demo code
This PR shares the code for the RAG demo running [here](https://huggingface.co/rag/) for future reference. The code is added in `examples/rag`.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7455/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7455/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7455",
"html_url": "https://github.com/huggingface/transformers/pull/7455",
"diff_url": "https://github.com/huggingface/transformers/pull/7455.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7455.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7454 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7454/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7454/comments | https://api.github.com/repos/huggingface/transformers/issues/7454/events | https://github.com/huggingface/transformers/issues/7454 | 711,284,145 | MDU6SXNzdWU3MTEyODQxNDU= | 7,454 | Seq2seq example for T5 keeps on generating warning | {
"login": "amanpreet692",
"id": 42522643,
"node_id": "MDQ6VXNlcjQyNTIyNjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/42522643?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amanpreet692",
"html_url": "https://github.com/amanpreet692",
"followers_url": "https://api.github.com/users/amanpreet692/followers",
"following_url": "https://api.github.com/users/amanpreet692/following{/other_user}",
"gists_url": "https://api.github.com/users/amanpreet692/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amanpreet692/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amanpreet692/subscriptions",
"organizations_url": "https://api.github.com/users/amanpreet692/orgs",
"repos_url": "https://api.github.com/users/amanpreet692/repos",
"events_url": "https://api.github.com/users/amanpreet692/events{/privacy}",
"received_events_url": "https://api.github.com/users/amanpreet692/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Yes, great PR! Send it and tag me!\r\nBonus points for adding a test that used to break but now doesn't maybe to seq2seq/test_datasets.py",
"Here you go! #7470 Added a simple test case as well to test the arguments that would be sent to collate() "
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | In the latest version from master, on running finetune.sh for T5, I was getting the following warning continuously:
**Keyword arguments {'src_lang':None,'tgt_lang':None,'add_prefix_space':False} not recognized.**
I found out this is because the translation and BART parameters are being passed to `prepare_seq2seq_batch` of `T5Tokenizer`, which cannot handle them, so the tokenizer ends up emitting warnings for the unused kwargs.
I made a small change in utils.py to the constructor at https://github.com/huggingface/transformers/blob/9e9a1fb8c75e2ef00fea9c4c0dc511fc0178081c/examples/seq2seq/utils.py#L100:
```python
**dataset_kwargs
):
super().__init__()
self.src_file = Path(data_dir).joinpath(type_path + ".source")
self.tgt_file = Path(data_dir).joinpath(type_path + ".target")
self.len_file = Path(data_dir).joinpath(type_path + ".len")
if os.path.exists(self.len_file):
self.src_lens = pickle_load(self.len_file)
self.used_char_len = False
else:
self.src_lens = self.get_char_lens(self.src_file)
self.used_char_len = True
self.max_source_length = max_source_length
self.max_target_length = max_target_length
assert min(self.src_lens) > 0, f"found empty line in {self.src_file}"
self.tokenizer = tokenizer
self.prefix = prefix if prefix is not None else ""
if n_obs is not None:
self.src_lens = self.src_lens[:n_obs]
self.pad_token_id = self.tokenizer.pad_token_id
self.dataset_kwargs = dataset_kwargs
dataset_kwargs.update({'add_prefix_space' : True} if isinstance(self.tokenizer, BartTokenizer) else {})
```
since src_lang and tgt_lang weren't being used anywhere other than being passed on to prepare_seq2seq_batch as parameters.
While calling the method I used dataset_kwargs as the parameter, which sorted out the issue:
```python
self.tokenizer.prepare_seq2seq_batch(
[x["src_texts"] for x in batch],
tgt_texts=[x["tgt_texts"] for x in batch],
max_length=self.max_source_length,
max_target_length=self.max_target_length,
return_tensors="pt",
**self.dataset_kwargs
)
```
If this seems reasonable I can raise a PR and check it in?
@sshleifer @patil-suraj
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7454/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7454/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7453 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7453/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7453/comments | https://api.github.com/repos/huggingface/transformers/issues/7453/events | https://github.com/huggingface/transformers/pull/7453 | 711,168,393 | MDExOlB1bGxSZXF1ZXN0NDk0ODc0NjA4 | 7,453 | Multi-GPU Testing setup | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7453?src=pr&el=h1) Report\n> Merging [#7453](https://codecov.io/gh/huggingface/transformers/pull/7453?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1fc4de69ed024e18b88cb6f040021630599de2f7?el=desc) will **decrease** coverage by `2.51%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7453?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7453 +/- ##\n==========================================\n- Coverage 79.35% 76.83% -2.52% \n==========================================\n Files 181 181 \n Lines 35800 35800 \n==========================================\n- Hits 28410 27508 -902 \n- Misses 7390 8292 +902 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7453?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.71% <0.00%> (-77.89%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-39.79%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `74.14% <0.00%> (-18.70%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <0.00%> (-4.17%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `90.12% <0.00%> (-3.79%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.96% <0.00%> (-1.51%)` | :arrow_down: |\n| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/7453/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7453?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7453?src=pr&el=footer). Last update [1fc4de6...5a26051](https://codecov.io/gh/huggingface/transformers/pull/7453?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,601 | 1,601 | 1,601 | MEMBER | null | # What does this PR do?
This PR contributes a testing suite that runs on a multi-GPU machine.
The machine has two T4 GPUs (better than a K80 in almost every way, and cheaper), and the testing suite is identical to the single-GPU machine testing suite. Two jobs are run:
- One job on each commit to the `master` branch
- One job on a scheduled basis, that additionally runs all the slow tests. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7453/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7453",
"html_url": "https://github.com/huggingface/transformers/pull/7453",
"diff_url": "https://github.com/huggingface/transformers/pull/7453.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7453.patch",
"merged_at": 1601459615000
} |
https://api.github.com/repos/huggingface/transformers/issues/7452 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7452/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7452/comments | https://api.github.com/repos/huggingface/transformers/issues/7452/events | https://github.com/huggingface/transformers/pull/7452 | 711,077,815 | MDExOlB1bGxSZXF1ZXN0NDk0ODA0MjIx | 7,452 | LayoutLM: add exception handling for bbox values | {
"login": "av-maslov",
"id": 71869629,
"node_id": "MDQ6VXNlcjcxODY5NjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/71869629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/av-maslov",
"html_url": "https://github.com/av-maslov",
"followers_url": "https://api.github.com/users/av-maslov/followers",
"following_url": "https://api.github.com/users/av-maslov/following{/other_user}",
"gists_url": "https://api.github.com/users/av-maslov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/av-maslov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/av-maslov/subscriptions",
"organizations_url": "https://api.github.com/users/av-maslov/orgs",
"repos_url": "https://api.github.com/users/av-maslov/repos",
"events_url": "https://api.github.com/users/av-maslov/events{/privacy}",
"received_events_url": "https://api.github.com/users/av-maslov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7452?src=pr&el=h1) Report\n> Merging [#7452](https://codecov.io/gh/huggingface/transformers/pull/7452?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1fc4de69ed024e18b88cb6f040021630599de2f7?el=desc) will **decrease** coverage by `1.65%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7452?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7452 +/- ##\n==========================================\n- Coverage 79.35% 77.70% -1.66% \n==========================================\n Files 181 181 \n Lines 35800 35801 +1 \n==========================================\n- Hits 28410 27819 -591 \n- Misses 7390 7982 +592 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7452?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7452/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `94.47% <100.00%> (+69.40%)` | :arrow_up: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7452/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `18.69% <0.00%> (-74.15%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7452/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.05% <0.00%> (-63.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7452/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yYWcucHk=) | `25.32% <0.00%> (-51.72%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7452/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` 
| :arrow_down: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7452/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `81.81% <0.00%> (-18.19%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7452/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `83.74% <0.00%> (-14.14%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7452/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `76.94% <0.00%> (-9.53%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7452/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <0.00%> (-4.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7452/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `95.16% <0.00%> (-2.42%)` | :arrow_down: |\n| ... and [11 more](https://codecov.io/gh/huggingface/transformers/pull/7452/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7452?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7452?src=pr&el=footer). Last update [1fc4de6...6162c88](https://codecov.io/gh/huggingface/transformers/pull/7452?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | # What does this PR do?
Fixes an unhandled error when trying to use bbox values greater than the maximum allowed threshold of 1000. To replicate the error:
- In `test_modeling_layoutlm.py` set `range_bbox=1025`, i.e. greater than 1024
- Run `pytest tests/test_modeling_layoutlm.py`
The requirement for bbox values to be within the range 0-1000 is documented,
but if it is violated, the error message does not make the issue clear.
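For illustration, a minimal sketch of the kind of early check this is about (the helper name and message wording are assumptions, not the exact code added to `modeling_layoutlm.py`):

```python
def validate_bbox(bbox, max_2d_position_embeddings=1024):
    # Hypothetical helper: fail fast with a clear message when a bounding-box
    # coordinate falls outside the documented 0-1000 range, instead of the
    # opaque indexing error raised later by the position-embedding lookup.
    for box in bbox:
        for value in box:
            if not 0 <= value < max_2d_position_embeddings:
                raise ValueError(
                    f"Bbox coordinate value {value} is outside the range "
                    f"0-{max_2d_position_embeddings - 1} expected by LayoutLM."
                )


validate_bbox([[0, 0, 500, 500]])  # in-range boxes pass silently
```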
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@sgugger
@liminghao1630 @vblagoje | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7452/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7452",
"html_url": "https://github.com/huggingface/transformers/pull/7452",
"diff_url": "https://github.com/huggingface/transformers/pull/7452.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7452.patch",
"merged_at": 1601885835000
} |
https://api.github.com/repos/huggingface/transformers/issues/7451 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7451/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7451/comments | https://api.github.com/repos/huggingface/transformers/issues/7451/events | https://github.com/huggingface/transformers/issues/7451 | 711,075,550 | MDU6SXNzdWU3MTEwNzU1NTA= | 7,451 | T5 unsupervised training | {
"login": "amlarraz",
"id": 17318832,
"node_id": "MDQ6VXNlcjE3MzE4ODMy",
"avatar_url": "https://avatars.githubusercontent.com/u/17318832?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amlarraz",
"html_url": "https://github.com/amlarraz",
"followers_url": "https://api.github.com/users/amlarraz/followers",
"following_url": "https://api.github.com/users/amlarraz/following{/other_user}",
"gists_url": "https://api.github.com/users/amlarraz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amlarraz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amlarraz/subscriptions",
"organizations_url": "https://api.github.com/users/amlarraz/orgs",
"repos_url": "https://api.github.com/users/amlarraz/repos",
"events_url": "https://api.github.com/users/amlarraz/events{/privacy}",
"received_events_url": "https://api.github.com/users/amlarraz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"__UPDATE:__\r\n\r\nI'm trying to make my own masking_function for this task. According with the T5 original paper, if you have two consecutive tokens to masking you must mask them using only one sentinel token so I need a function that searches the consecutive tokens (\"rachas\" in my language) in the random indices choosed. Here you have my code:\r\n\r\n```\r\ndef racha_detection(lista):\r\n # It returns a list of lists where each sub-list contains the consecutive tokens in the list\r\n rachas = []\r\n racha = []\r\n for i, element in enumerate(lista):\r\n if (i<len(lista)-1) and (lista[i+1] == element+1):\r\n racha.append(element)\r\n else:\r\n if len(racha)>0:\r\n rachas.append(racha + [element]) \r\n else:# (i!=len(lista)-1):\r\n rachas.append([element])\r\n racha = []\r\n return rachas\r\n\r\ndef masking(tokenized_sentence, rachas):\r\n # Function to mask a tokenized_sentence (token ids) following the rachas described in rachas\r\n # Only one sentinel_token per racha\r\n sent_token_id = 0\r\n enmascared = tokenized_sentence.copy()\r\n for racha in rachas:\r\n sent_token = f'<extra_id_{sent_token_id}>'\r\n sent_id = tokenizer.encode(sent_token)[0]\r\n for i, idx in enumerate(racha):\r\n if i==0:\r\n enmascared[idx] = sent_id\r\n else:\r\n enmascared[idx] = -100\r\n sent_token_id += 1\r\n \r\n enmascared = [t for t in enmascared if t!=-100] \r\n\r\n return enmascared\r\n\r\ndef add_noise(sentence, tokenizer, percent=0.15):\r\n # Function that takes a sentence, tokenizer and a noise percentage and returns\r\n # the masked input_ids and masked target_ids accordling with the T5 paper and HuggingFace docs\r\n # To see the process working uncomment all the prints ;)\r\n tokenized_sentence = tokenizer.encode(sentence)\r\n #print('PRE-MASKED:')\r\n #print('INPUT: {}'.format(tokenizer.convert_ids_to_tokens(tokenized_sentence)))\r\n \r\n idxs_2_mask = sorted(random.sample(range(len(tokenized_sentence)), \r\n int(len(tokenized_sentence)*percent)))\r\n rachas = 
racha_detection(idxs_2_mask)\r\n enmascared_input = masking(tokenized_sentence, rachas)\r\n #print('RACHAS INPUT: {}'.format(rachas))\r\n idxs_2_mask = [idx for idx in range(len(tokenized_sentence)) if idx not in idxs_2_mask]\r\n rachas = racha_detection(idxs_2_mask)\r\n enmascared_target = masking(tokenized_sentence, rachas)\r\n #print('RACHAS TARGET: {}'.format(rachas))\r\n \r\n #print('POST-MASKED:')\r\n #print('INPUT: {}'.format(tokenizer.convert_ids_to_tokens(enmascared_input)))\r\n #print('TARGET: {}'.format(tokenizer.convert_ids_to_tokens(enmascared_target)))\r\n\r\n return enmascared_input, enmascared_target\r\n```\r\n\r\nI dont know if it is correct but it generates sequences like the sequences in the examples. What do you think?",
"Another question comes to my mind, is it necessary to add the pad token at the beginning of the label in this task too? I'm using the \"labels\" argument to add the targets_ids to the model, I mean:\r\n\r\n`model(input_ids=input_ids, labels=target_ids)`\r\n\r\nThank you in advance!",
"Hey @amlarraz,\r\n\r\nAs far as I know there is no pre-written function or script for unsupervised \"sentinel masking\" for T5. But it shouldn't be too difficult to do so. The innovation of T5's sentinel masking is exactly that you can mask multiple tokens with a single masking token which has been shown to yield better results as norrmal single token masking (a la BERT). \r\n\r\nSo to answer your questions:\r\n1) The data should be pre-processed as described in the paper and in the example in the docs, here: https://huggingface.co/transformers/model_doc/t5.html#training . The forum: http://discuss.huggingface.co/ is probably a better place to ask more specific questions about your code.\r\n\r\n2) You don't need to add a padding token to the labels - this is done automatically here: https://github.com/huggingface/transformers/blob/2977bd528f06bada54afcf740219e65afd1c0883/src/transformers/modeling_t5.py#L638",
"Hi @patrickvonplaten!\r\n\r\nMany thanks for answering me. As you said, I've moved the question to the HuggingFace forum. If anybody is interested in following this topic, here is the [link to the conversation](https://discuss.huggingface.co/t/train-t5-from-scratch/1781?u=amlarraz)."
] | 1,601 | 1,603 | 1,603 | NONE | null | I want to train T5 in a new language from scratch, and I think the best way to do this is through the unsupervised denoising task (because you can use all the text you want! no labels required! hurray!)
However, I have some doubts. I've seen [the HuggingFace documentation about this](https://huggingface.co/transformers/model_doc/t5.html#training) and I wonder how to create the training data; I mean, is there any function in the library to add the sentinel tokens? In my own research I've worked with [the original T5 library](https://github.com/google-research/text-to-text-transfer-transformer) and I've seen that it has some functions for this, but those functions do not apply the sentinel_tokens to the text; instead they replace the "noising" tokens by " always.
My questions are:
1. Does a function to do that already exist in HuggingFace?
2. If the answer to question 1 is no, does anybody have one?
3. If the answers to questions 1 and 2 are no, what rules must I follow to create this function? Rules like:
- How many consecutive tokens must each sentinel token mask? I mean, in the sentence "The cute dog walks in the park" they put the sentinel token over "cute dog", and my question is: why this choice of words? Can I always take just 1 token?
- In each sentence, must I start from sentinel token number 1?
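For reference, here is a rough sketch (my own illustration, not a HuggingFace function) of the masking scheme described in the paper: each run of consecutive masked tokens becomes a single sentinel in the input, while the target lists each sentinel followed by the tokens it covers, plus a closing sentinel:

```python
import random

def t5_span_corrupt(tokens, noise_density=0.15, seed=0):
    """Sketch of T5-style span corruption on a list of word tokens."""
    rng = random.Random(seed)
    n_mask = max(1, int(len(tokens) * noise_density))
    masked = set(rng.sample(range(len(tokens)), n_mask))
    inp, tgt, sentinel = [], [], 0
    prev_masked = False
    for i, tok in enumerate(tokens):
        if i in masked:
            if not prev_masked:               # one sentinel per run of masked tokens
                inp.append(f"<extra_id_{sentinel}>")
                tgt.append(f"<extra_id_{sentinel}>")
                sentinel += 1
            tgt.append(tok)
            prev_masked = True
        else:
            inp.append(tok)
            prev_masked = False
    tgt.append(f"<extra_id_{sentinel}>")      # closing sentinel, as in the paper
    return inp, tgt

print(t5_span_corrupt("The cute dog walks in the park".split(), noise_density=0.3, seed=1))
```

A useful sanity check is that the original sentence can always be rebuilt by substituting each sentinel in the input with the span it maps to in the target.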
Thank you in advance! :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7451/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7450 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7450/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7450/comments | https://api.github.com/repos/huggingface/transformers/issues/7450/events | https://github.com/huggingface/transformers/pull/7450 | 710,977,160 | MDExOlB1bGxSZXF1ZXN0NDk0NzI2OTg1 | 7,450 | deleted | {
"login": "av-maslov",
"id": 71869629,
"node_id": "MDQ6VXNlcjcxODY5NjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/71869629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/av-maslov",
"html_url": "https://github.com/av-maslov",
"followers_url": "https://api.github.com/users/av-maslov/followers",
"following_url": "https://api.github.com/users/av-maslov/following{/other_user}",
"gists_url": "https://api.github.com/users/av-maslov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/av-maslov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/av-maslov/subscriptions",
"organizations_url": "https://api.github.com/users/av-maslov/orgs",
"repos_url": "https://api.github.com/users/av-maslov/repos",
"events_url": "https://api.github.com/users/av-maslov/events{/privacy}",
"received_events_url": "https://api.github.com/users/av-maslov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7450?src=pr&el=h1) Report\n> Merging [#7450](https://codecov.io/gh/huggingface/transformers/pull/7450?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1fc4de69ed024e18b88cb6f040021630599de2f7?el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7450?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7450 +/- ##\n==========================================\n- Coverage 79.35% 79.35% -0.01% \n==========================================\n Files 181 181 \n Lines 35800 35801 +1 \n==========================================\n Hits 28410 28410 \n- Misses 7390 7391 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7450?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7450/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.00% <0.00%> (-0.07%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7450/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7450/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.37% <0.00%> (+0.24%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7450?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7450?src=pr&el=footer). 
Last update [1fc4de6...c5759b1](https://codecov.io/gh/huggingface/transformers/pull/7450?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | Deleted | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7450/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7450",
"html_url": "https://github.com/huggingface/transformers/pull/7450",
"diff_url": "https://github.com/huggingface/transformers/pull/7450.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7450.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7449 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7449/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7449/comments | https://api.github.com/repos/huggingface/transformers/issues/7449/events | https://github.com/huggingface/transformers/issues/7449 | 710,942,212 | MDU6SXNzdWU3MTA5NDIyMTI= | 7,449 | What's the most straightforward way to initialise BertForSequenceClassification for different token rather than [CLS]? | {
"login": "amirj",
"id": 1645137,
"node_id": "MDQ6VXNlcjE2NDUxMzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1645137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amirj",
"html_url": "https://github.com/amirj",
"followers_url": "https://api.github.com/users/amirj/followers",
"following_url": "https://api.github.com/users/amirj/following{/other_user}",
"gists_url": "https://api.github.com/users/amirj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amirj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amirj/subscriptions",
"organizations_url": "https://api.github.com/users/amirj/orgs",
"repos_url": "https://api.github.com/users/amirj/repos",
"events_url": "https://api.github.com/users/amirj/events{/privacy}",
"received_events_url": "https://api.github.com/users/amirj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,607 | 1,607 | NONE | null | # ❓ Questions & Help
## Details
BertForSequenceClassification uses the [CLS] token's representation to feed a linear classifier. I want to leverage another token (say [X] in the input sequence) rather than [CLS]. What's the most straightforward way to implement that in Transformers?
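One straightforward route is to subclass the model (or use the bare `BertModel` plus your own head) and change which position is pooled before the classifier. The index selection itself is simple; here is a framework-free sketch (the helper name and toy values are mine) that picks each sequence's hidden vector at the first occurrence of a chosen token id instead of position 0:

```python
def pool_at_token(hidden_states, input_ids, pool_token_id):
    """For each sequence, return the hidden vector at the first occurrence
    of pool_token_id, instead of the [CLS] vector at position 0."""
    pooled = []
    for vectors, ids in zip(hidden_states, input_ids):
        position = ids.index(pool_token_id)   # assumes the token is present
        pooled.append(vectors[position])
    return pooled

# toy batch: 2 sequences, 3 tokens each, hidden size 2
hidden = [[[0.1, 0.1], [0.5, 0.5], [0.9, 0.9]],
          [[0.2, 0.2], [0.6, 0.6], [0.8, 0.8]]]
ids = [[101, 999, 102],   # 999 plays the role of the special [X] token
       [101, 102, 999]]
print(pool_at_token(hidden, ids, 999))   # -> [[0.5, 0.5], [0.8, 0.8]]
```

With real tensors the same selection can be done with `torch.gather` on the positions of [X], and the rest of the classification head stays unchanged.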
**A link to original question on the forum/Stack Overflow**: https://stackoverflow.com/questions/64094098/how-to-initialize-bertforsequenceclassification-for-different-input-rather-than | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7449/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7448 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7448/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7448/comments | https://api.github.com/repos/huggingface/transformers/issues/7448/events | https://github.com/huggingface/transformers/issues/7448 | 710,940,069 | MDU6SXNzdWU3MTA5NDAwNjk= | 7,448 | v3.3.0 - Issue with name conflict in transformers & datasets - AttributeError: module 'datasets' has no attribute '__version__' | {
"login": "nreimers",
"id": 10706961,
"node_id": "MDQ6VXNlcjEwNzA2OTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/10706961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nreimers",
"html_url": "https://github.com/nreimers",
"followers_url": "https://api.github.com/users/nreimers/followers",
"following_url": "https://api.github.com/users/nreimers/following{/other_user}",
"gists_url": "https://api.github.com/users/nreimers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nreimers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nreimers/subscriptions",
"organizations_url": "https://api.github.com/users/nreimers/orgs",
"repos_url": "https://api.github.com/users/nreimers/repos",
"events_url": "https://api.github.com/users/nreimers/events{/privacy}",
"received_events_url": "https://api.github.com/users/nreimers/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed we'll fix this and release a patch soon.",
"The bug has been fixed in #7456 and v3.3.1 is out with this fix. The problem should be solved for now, let us know if that's not the case!",
"Great, thanks for the quick fix and release of a new version.\r\n\r\nIt is now working fine in my case :)",
"I had the same error even though my setup only included a `data/` folder, but now **it works fine** with version `3.3.1`."
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | Version 3.3.0 tries to import the module [datasets](https://pypi.org/project/datasets/):
https://github.com/huggingface/transformers/blob/v3.3.0/src/transformers/file_utils.py#L69
However, this can cause some undesirable behavior if there is a "datasets" folder in the same folder.
An example to re-produce the error:
```
datasets/ <= Folder that contains your own data files
myscript.py
```
myscript.py with the following content:
```
import transformers
```
This produces the following error:
```
python myscript.py
Traceback (most recent call last):
File "myscript.py", line 1, in <module>
import transformers
File "/home/user/miniconda3/envs/sberttest/lib/python3.7/site-packages/transformers/__init__.py", line 22, in <module>
from .integrations import ( # isort:skip
File "/home/user/miniconda3/envs/sberttest/lib/python3.7/site-packages/transformers/integrations.py", line 42, in <module>
from .trainer_utils import PREFIX_CHECKPOINT_DIR, BestRun # isort:skip
File "/home/user/miniconda3/envs/sberttest/lib/python3.7/site-packages/transformers/trainer_utils.py", line 6, in <module>
from .file_utils import is_tf_available, is_torch_available, is_torch_tpu_available
File "/home/user/miniconda3/envs/sberttest/lib/python3.7/site-packages/transformers/file_utils.py", line 72, in <module>
logger.debug(f"Succesfully imported datasets version {datasets.__version__}")
AttributeError: module 'datasets' has no attribute '__version__'
```
The issue lies in Python's import logic. The datasets folder is treated as a module, and transformers tries to load this module. This obviously fails, since we are dealing here with the datasets folder and not the [datasets package](https://pypi.org/project/datasets/).
As *datasets* is quite a common folder name in many setups for holding one's own dataset files, I can imagine that this name collision will appear frequently. As soon as there is a datasets folder, you can no longer import transformers.
## Solution
I am not sure what the best solution is for this. One quick fix would be to change:
https://github.com/huggingface/transformers/blob/v3.3.0/src/transformers/file_utils.py#L74
to
```
except:
_datasets_available = False
```
This would catch all exceptions. Old scripts that have a `datasets/` folder would then still work.
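The underlying mechanism is easy to demonstrate with any non-installed name: a plain directory on `sys.path` imports as a namespace package, which has a `__path__` but no `__version__` (sketch with a made-up folder name):

```python
import importlib
import os
import sys
import tempfile

d = tempfile.mkdtemp()
os.makedirs(os.path.join(d, "mydatafiles"))   # bare folder, no __init__.py
sys.path.insert(0, d)                         # mimics "script's directory comes first"
mod = importlib.import_module("mydatafiles")  # imports fine, as a namespace package
print(hasattr(mod, "__version__"))            # False -> AttributeError in file_utils
```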
## Environment info
- `transformers` version: 3.3.0
- Platform: Linux-4.15.0-39-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.6.0 (False)
- datasets package is not installed
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7448/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7448/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7447 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7447/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7447/comments | https://api.github.com/repos/huggingface/transformers/issues/7447/events | https://github.com/huggingface/transformers/issues/7447 | 710,916,689 | MDU6SXNzdWU3MTA5MTY2ODk= | 7,447 | Getting Bert Embeddings in Batch | {
"login": "ChawlaAvi",
"id": 36801774,
"node_id": "MDQ6VXNlcjM2ODAxNzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/36801774?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChawlaAvi",
"html_url": "https://github.com/ChawlaAvi",
"followers_url": "https://api.github.com/users/ChawlaAvi/followers",
"following_url": "https://api.github.com/users/ChawlaAvi/following{/other_user}",
"gists_url": "https://api.github.com/users/ChawlaAvi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChawlaAvi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChawlaAvi/subscriptions",
"organizations_url": "https://api.github.com/users/ChawlaAvi/orgs",
"repos_url": "https://api.github.com/users/ChawlaAvi/repos",
"events_url": "https://api.github.com/users/ChawlaAvi/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChawlaAvi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,607 | 1,607 | NONE | null | Hi all
I have a list of sentences(a batch during training) and for every word in each sentence, I need an aligned bert embedding which should be the mean of every word-piece that word was split into.
Right now, I am doing it sentence by sentence and obtain the aligned embedding for every word by reiterating over the sentence, tokenise the individual word, note the number of word-pieces it was split into and lookup into the Bert embedding matrix to average out those rows of the matrix. Following is the code to get the aligned embeddings:
`
def get_bert_aligned_embeddings(self, last_hidden_states, tokens):
count = 0
aligned_embeddings = []
for i in tokens:
tokenisation_length = len(self.tokenizer.tokenize(i))
emb = torch.mean(last_hidden_states[count:count+tokenisation_length], axis = 0)
count += tokenisation_length
aligned_embeddings.append(emb)
aligned_embeddings = torch.stack(aligned_embeddings)
return aligned_embeddings
`
`tokens` is the list of words in a sentence; `last_hidden_states` are the embeddings I obtained from BERT.
So the above function runs for every sentence, and each sentence is passed to the BERT model one by one.
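One way to avoid re-tokenizing every word is to tokenize each sentence once, keep a word-piece-to-word index map (fast tokenizers expose this as `word_ids()`), and mean-pool hidden states per word id. Here is a framework-free sketch of the pooling step (the helper name and list-based vectors are my own simplification):

```python
def mean_pool_by_word(hidden, word_ids):
    """hidden: one vector (list of floats) per word-piece; word_ids: parallel
    list mapping each word-piece to its word index (None for special tokens).
    Returns one mean vector per word, in word order."""
    sums, counts = {}, {}
    for vec, w in zip(hidden, word_ids):
        if w is None:                      # skip [CLS]/[SEP]/padding positions
            continue
        acc = sums.setdefault(w, [0.0] * len(vec))
        for j, v in enumerate(vec):
            acc[j] += v
        counts[w] = counts.get(w, 0) + 1
    return [[v / counts[w] for v in sums[w]] for w in sorted(sums)]
```

With tensors, the same grouping can be done for a whole batch via `index_add_` or a scatter-mean over the word-id matrix, avoiding the per-word Python loop entirely.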
I want to know if there is any faster way of doing this. Can this entire process be done in batches? Any suggestions that could help me speed up this process would be great. Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7447/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7446 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7446/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7446/comments | https://api.github.com/repos/huggingface/transformers/issues/7446/events | https://github.com/huggingface/transformers/pull/7446 | 710,878,098 | MDExOlB1bGxSZXF1ZXN0NDk0NjQ5OTkx | 7,446 | Adding gradient checkpointing to GPT2 | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7446?src=pr&el=h1) Report\n> Merging [#7446](https://codecov.io/gh/huggingface/transformers/pull/7446?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7dfdf793bb5e3a865f33ed597b10fc4526364af9?el=desc) will **decrease** coverage by `1.92%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7446?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7446 +/- ##\n==========================================\n- Coverage 80.98% 79.06% -1.93% \n==========================================\n Files 181 181 \n Lines 35750 35757 +7 \n==========================================\n- Hits 28953 28271 -682 \n- Misses 6797 7486 +689 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7446?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `55.70% <ø> (ø)` | |\n| [src/transformers/configuration\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.36% <100.00%> (+0.07%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `87.03% <100.00%> (+0.20%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yYWcucHk=) | `25.39% <0.00%> (-51.59%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `65.26% <0.00%> (-33.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `74.14% <0.00%> (-18.70%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <0.00%> (-4.17%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `90.12% <0.00%> (-3.79%)` | :arrow_down: |\n| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7446/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7446?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7446?src=pr&el=footer). Last update [7dfdf79...6139c24](https://codecov.io/gh/huggingface/transformers/pull/7446?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"The slow tests are passing - I've also added a test for generation with checkpointing, although of course to be sure, one should also check the contents of the backwards pass."
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | This PR adds gradient checkpointing capabilities to GPT-2, imitating the Longformer and Bert checkpointing code. It also disables `find_unused_parameters` in Trainer if the model is using gradient checkpointing, since per #4659 the two are incompatible. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7446/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7446",
"html_url": "https://github.com/huggingface/transformers/pull/7446",
"diff_url": "https://github.com/huggingface/transformers/pull/7446.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7446.patch",
"merged_at": 1601396787000
} |
https://api.github.com/repos/huggingface/transformers/issues/7445 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7445/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7445/comments | https://api.github.com/repos/huggingface/transformers/issues/7445/events | https://github.com/huggingface/transformers/pull/7445 | 710,829,161 | MDExOlB1bGxSZXF1ZXN0NDk0NjEwMTE0 | 7,445 | Add is_split_into_words as an argument to tokenize | {
"login": "madaan",
"id": 1304693,
"node_id": "MDQ6VXNlcjEzMDQ2OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1304693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/madaan",
"html_url": "https://github.com/madaan",
"followers_url": "https://api.github.com/users/madaan/followers",
"following_url": "https://api.github.com/users/madaan/following{/other_user}",
"gists_url": "https://api.github.com/users/madaan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/madaan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/madaan/subscriptions",
"organizations_url": "https://api.github.com/users/madaan/orgs",
"repos_url": "https://api.github.com/users/madaan/repos",
"events_url": "https://api.github.com/users/madaan/events{/privacy}",
"received_events_url": "https://api.github.com/users/madaan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The failing test `FAILED tests/test_pipelines.py::MonoColumnInputTestCase::test_tf_text_generation` passes locally for me.",
"The `is_split_into_words` flag means that instead of passing a string defining a sequence: `This happened to me`, you're instead passing an array of words: `['This', 'happened', 'to', 'me']`.\r\n\r\nIf instead of passing strings to the tokenizers you passed an array of words, do you get the same behaviour?\r\n\r\nSomething like:\r\n\r\n```py\r\nfrom transformers import RobertaTokenizer\r\ntokenizer = RobertaTokenizer.from_pretrained('roberta-large')\r\nprint(tokenizer.encode([\"happened\"], is_split_into_words=True))\r\nprint(tokenizer.encode_plus([\"happened\"], is_split_into_words = True))\r\nprint(tokenizer.batch_encode_plus([[\"happened\"]], is_split_into_words=True))\r\n```",
">If instead of passing strings to the tokenizers you passed an array of words, do you get the same behaviour?\r\n\r\nAh yes, that works. I guess I was just using these functions wrong :) Thanks! ",
"No worries, thanks a lot for opening a PR and proposing a fix!"
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | # What does this PR do?
Two calls to `self.tokenize` in `tokenization_utils.py` were missing the argument `is_split_into_words`. Since `is_split_into_words` is not present in the `kwargs`, `self.tokenize` resorts to the default behavior of not adding a space before every word.
This PR fixes this issue by adding the missing arguments in the calls to self.tokenize().
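The bug class this PR fixes can be illustrated with a short sketch (plain Python, not the real transformers code, and the function names below are stand-ins): a wrapper that accepts `**kwargs` but forgets to forward a keyword silently falls back to the callee's default.

```python
def tokenize(text, is_split_into_words=False):
    # Stand-in for the real tokenizer: prefix a space when the caller says
    # the input is already split into words (byte-level BPE convention).
    return " " + text if is_split_into_words else text

def encode_buggy(text, **kwargs):
    # Bug: kwargs are accepted but never forwarded, so the flag is ignored.
    return tokenize(text)

def encode_fixed(text, **kwargs):
    # Fix: explicitly forward the keyword, as this PR does for self.tokenize.
    return tokenize(text, is_split_into_words=kwargs.get("is_split_into_words", False))

print(encode_buggy("happened", is_split_into_words=True))  # "happened" (flag ignored)
print(encode_fixed("happened", is_split_into_words=True))  # " happened"
```

The sketch shows why the symptom is silent: Python happily swallows the unused keyword, so no error is raised and tokenization just behaves as if the flag were never set.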
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Environment info
- `transformers` version: 3.3.0
- Platform: Linux-4.15.0-99-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.3.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?:No
- Using distributed or parallel set-up in script?: No
### Who can help
tokenizers: @mfuntowicz
Trainer: @sgugger (because this might be related to https://github.com/huggingface/transformers/pull/7236)
## Information
`tokenizer.encode`, `tokenizer.encode_plus`, and `tokenizer.batch_encode_plus` ignore the flag `is_split_into_words`.
## To reproduce
Steps to reproduce the behavior:
```py
from transformers import RobertaTokenizer
tokenizer = RobertaTokenizer.from_pretrained('roberta-large')
print(tokenizer.encode("happened", is_split_into_words=True))
print(tokenizer.encode_plus("happened", is_split_into_words=True))
print(tokenizer.batch_encode_plus(["happened"], is_split_into_words=True))
```
## Actual behavior
The word `happened` is tokenized without the space prefix that would be expected given `is_split_into_words=True`:
```
[0, 298, 3340, 4490, 2]
{'input_ids': [0, 298, 3340, 4490, 2], 'attention_mask': [1, 1, 1, 1, 1]}
{'input_ids': [[0, 298, 3340, 4490, 2]], 'attention_mask': [[1, 1, 1, 1, 1]]}
```
## Expected behavior
A space should be prefixed to "happened" before tokenization, giving the following outputs:
```
[0, 1102, 2]
{'input_ids': [0, 1102, 2], 'attention_mask': [1, 1, 1]}
{'input_ids': [[0, 1102, 2]], 'attention_mask': [[1, 1, 1]]}
```
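As the maintainers point out in the comments, the flag only takes effect when the input is actually pre-split into a list of words rather than a plain string. A pure-Python mimic of that contract (an assumed simplification with made-up logic, not the real BPE preprocessing):

```python
def encode(inputs, is_split_into_words=False):
    # Pre-split words each get a leading space re-attached before encoding,
    # matching how they would appear mid-sentence in running text.
    if is_split_into_words and isinstance(inputs, list):
        return "".join(" " + word for word in inputs)
    # A plain string passes through unchanged, which is the behaviour the
    # issue reporter observed.
    return inputs

print(encode("happened", is_split_into_words=True))    # 'happened'
print(encode(["happened"], is_split_into_words=True))  # ' happened'
```

So `tokenizer.encode(["happened"], is_split_into_words=True)` (a list input) produces the expected single-token encoding, while passing the bare string does not.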
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7445/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7445",
"html_url": "https://github.com/huggingface/transformers/pull/7445",
"diff_url": "https://github.com/huggingface/transformers/pull/7445.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7445.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/7444 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7444/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7444/comments | https://api.github.com/repos/huggingface/transformers/issues/7444/events | https://github.com/huggingface/transformers/pull/7444 | 710,819,893 | MDExOlB1bGxSZXF1ZXN0NDk0NjAyNTQw | 7,444 | Update README.md | {
"login": "gianfrancobarone",
"id": 18675023,
"node_id": "MDQ6VXNlcjE4Njc1MDIz",
"avatar_url": "https://avatars.githubusercontent.com/u/18675023?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gianfrancobarone",
"html_url": "https://github.com/gianfrancobarone",
"followers_url": "https://api.github.com/users/gianfrancobarone/followers",
"following_url": "https://api.github.com/users/gianfrancobarone/following{/other_user}",
"gists_url": "https://api.github.com/users/gianfrancobarone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gianfrancobarone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gianfrancobarone/subscriptions",
"organizations_url": "https://api.github.com/users/gianfrancobarone/orgs",
"repos_url": "https://api.github.com/users/gianfrancobarone/repos",
"events_url": "https://api.github.com/users/gianfrancobarone/events{/privacy}",
"received_events_url": "https://api.github.com/users/gianfrancobarone/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7444?src=pr&el=h1) Report\n> Merging [#7444](https://codecov.io/gh/huggingface/transformers/pull/7444?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/74d8d69bd42c253c255dc69904ee1fbd1eece0cf?el=desc) will **increase** coverage by `0.92%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7444?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7444 +/- ##\n==========================================\n+ Coverage 77.73% 78.65% +0.92% \n==========================================\n Files 181 181 \n Lines 35800 35800 \n==========================================\n+ Hits 27830 28160 +330 \n+ Misses 7970 7640 -330 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7444?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7444/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7444/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <0.00%> (-71.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7444/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `20.38% <0.00%> (-67.72%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7444/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7444/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.38% <0.00%> (-29.59%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7444/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.51% <0.00%> (-15.11%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7444/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.37% <0.00%> (+0.24%)` | :arrow_up: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7444/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `87.04% <0.00%> (+1.03%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7444/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.58% <0.00%> (+1.59%)` | :arrow_up: |\n| [src/transformers/configuration\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7444/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JhZy5weQ==) | `100.00% <0.00%> (+2.22%)` | :arrow_up: |\n| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7444/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7444?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7444?src=pr&el=footer). Last update [74d8d69...fb96b01](https://codecov.io/gh/huggingface/transformers/pull/7444?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | Hi, just corrected the example code, added 2 links, and fixed some typos.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7444/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7444/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7444",
"html_url": "https://github.com/huggingface/transformers/pull/7444",
"diff_url": "https://github.com/huggingface/transformers/pull/7444.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7444.patch",
"merged_at": 1601363882000
} |
https://api.github.com/repos/huggingface/transformers/issues/7443 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7443/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7443/comments | https://api.github.com/repos/huggingface/transformers/issues/7443/events | https://github.com/huggingface/transformers/issues/7443 | 710,797,171 | MDU6SXNzdWU3MTA3OTcxNzE= | 7,443 | Error training GPT-2 from scratch on Hindi | {
"login": "parthplc",
"id": 35425925,
"node_id": "MDQ6VXNlcjM1NDI1OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/35425925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parthplc",
"html_url": "https://github.com/parthplc",
"followers_url": "https://api.github.com/users/parthplc/followers",
"following_url": "https://api.github.com/users/parthplc/following{/other_user}",
"gists_url": "https://api.github.com/users/parthplc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/parthplc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parthplc/subscriptions",
"organizations_url": "https://api.github.com/users/parthplc/orgs",
"repos_url": "https://api.github.com/users/parthplc/repos",
"events_url": "https://api.github.com/users/parthplc/events{/privacy}",
"received_events_url": "https://api.github.com/users/parthplc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You seem to have a `\"model_type\": \"albert\"` in your config.json which should be a `gpt2`.\r\n\r\nAlso, I would suggest using the Trainer directly instead of shelling out to `run_language_modeling.py`, as described in https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb",
"Finally, this question's probably better suited to the [Forum](http://discuss.huggingface.co/), please ask over there!"
] | 1,601 | 1,601 | 1,601 | NONE | null | I was trying to retrain GPT-2 from scratch. I was able to train a tokenizer but was facing issues while running run_language_modeling.py.
```
2020-09-29 06:00:14.450718: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
/usr/local/lib/python3.6/dist-packages/transformers/training_args.py:299: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options)
FutureWarning,
09/29/2020 06:00:16 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: False
09/29/2020 06:00:16 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='/content/data', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=False, evaluate_during_training=True, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=2, per_device_eval_batch_size=2, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=5.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Sep29_06-00-16_fcba31604e1d', logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=2, no_cuda=False, seed=108, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=None, disable_tqdm=False, remove_unused_columns=True, label_names=None)
Traceback (most recent call last):
File "/content/transformers/examples/language-modeling/run_language_modeling.py", line 313, in <module>
main()
File "/content/transformers/examples/language-modeling/run_language_modeling.py", line 205, in main
tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir)
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_auto.py", line 251, in from_pretrained
return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 1428, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 1575, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_albert.py", line 155, in __init__
self.sp_model.Load(vocab_file)
File "/usr/local/lib/python3.6/dist-packages/sentencepiece.py", line 367, in Load
return self.LoadFromFile(model_file)
File "/usr/local/lib/python3.6/dist-packages/sentencepiece.py", line 177, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
TypeError: not a string
```
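The traceback above ends inside the Albert tokenizer even though the goal was GPT-2; as the maintainer's comment explains, this happens because `AutoTokenizer` dispatches on the `"model_type"` field of `config.json`. A toy sketch of that dispatch idea (the mapping below uses assumed, illustrative entries, not the real transformers internals):

```python
TOKENIZER_BY_MODEL_TYPE = {
    "albert": "AlbertTokenizer (SentencePiece, expects a .model file)",
    "gpt2": "GPT2Tokenizer (byte-level BPE, expects vocab.json + merges.txt)",
}

def resolve_tokenizer(config: dict) -> str:
    # AutoTokenizer-style dispatch: the config's model_type picks the class.
    return TOKENIZER_BY_MODEL_TYPE[config["model_type"]]

# The config in the colab said "albert", so the SentencePiece loader ran and
# choked on the GPT-2 vocab files ("TypeError: not a string"):
print(resolve_tokenizer({"model_type": "albert"}))
# With the corrected config, the byte-level BPE tokenizer is selected instead:
print(resolve_tokenizer({"model_type": "gpt2"}))
```

In short, the fix is to set `"model_type": "gpt2"` in `config.json` so the auto classes resolve to the GPT-2 tokenizer and model.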
Here is the link to my colab file. https://colab.research.google.com/drive/1rWHwWCB_U_rTOnfGyXdgZ9Kb4HpbNdVs?usp=sharing | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7443/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7442 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7442/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7442/comments | https://api.github.com/repos/huggingface/transformers/issues/7442/events | https://github.com/huggingface/transformers/issues/7442 | 710,778,943 | MDU6SXNzdWU3MTA3Nzg5NDM= | 7,442 | Setting up transformers/examples/seq2seq | {
"login": "jsrozner",
"id": 1113285,
"node_id": "MDQ6VXNlcjExMTMyODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1113285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jsrozner",
"html_url": "https://github.com/jsrozner",
"followers_url": "https://api.github.com/users/jsrozner/followers",
"following_url": "https://api.github.com/users/jsrozner/following{/other_user}",
"gists_url": "https://api.github.com/users/jsrozner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jsrozner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jsrozner/subscriptions",
"organizations_url": "https://api.github.com/users/jsrozner/orgs",
"repos_url": "https://api.github.com/users/jsrozner/repos",
"events_url": "https://api.github.com/users/jsrozner/events{/privacy}",
"received_events_url": "https://api.github.com/users/jsrozner/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You can install those libraries by doing `pip install -r requirements.txt` in the `transformers/examples` folder!",
"Aha, thank you! \r\n\r\nCould I submit a pull request with updates to the README as I go through and encounter these little snags?\r\n\r\nAlso, is there a reason that the README in the examples/seq2seq directory doesn't show up here? https://huggingface.co/transformers/model_doc/t5.html#t5forconditionalgeneration \r\n\r\nI couldn't find the (quite helpful) README (in the examples/seq2seq dir) and example scripts for training a seq2seq model until I started exploring the github repository!",
"I think all we are missing is some pointer to examples/README.md. Would that have helped you?",
"I would add\r\n- mention to both pip install -e ., pip instal -r requirements.txt\r\n- reference the seq2seq readme from the transformers T5 conditional generation page\r\n\r\nThe only other bug I've encountered so far, #7426, was fixed already!\r\n\r\nfinetune.py ended up being a great script for me. But I wasn't sure if I needed to do any other special things. Is there a place where it would be good to write up the couple of steps I did need? For example:\r\n- load tokenizer /model and modify the vocab (special tokens), then resave them locally (then use load_from_pretrained() on these files)\r\n- prep data files ({val|test|train}.{source|target}) and put into directory\r\n- mod the finetune.sh command\r\n\r\nAnd I do have one other question:\r\nIt seems sort of weird that these models (e.g. SummarizationModel) are hidden in the examples directory. Is it because you guys don't generally expect them to be subclassed? I would have expected them to be in, say, transformers/language_generation or similar. ",
"`SummarizationModule` is a pytorch_lightning.Module. The models under src/ are `nn.Module`.\r\n\r\nThe main package under `src/` does not depend on pytorch_lightning, or have scripts to train things. Everything that does is under examples/\r\n\r\nYou could write a forums post with your steps needed: would be super helpful!\r\nhttps://discuss.huggingface.co/",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,607 | 1,607 | CONTRIBUTOR | null | @sshleifer
I'm following the instructions in the README for seq2seq. In particular, I forked, cloned, and then ran "pip install -e ."
But then when I tried to run finetune.sh, a number of libraries had not been installed. I had to just manually install them with pip (e.g. rouge_score, git, sacrebleu). Should these all have been included automatically? Or was there a better way to get them?
Is this still the best way to go about finetuning a seq2seq model on a custom task (in my case I am doing T5)?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7442/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7441 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7441/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7441/comments | https://api.github.com/repos/huggingface/transformers/issues/7441/events | https://github.com/huggingface/transformers/issues/7441 | 710,739,386 | MDU6SXNzdWU3MTA3MzkzODY= | 7,441 | Faced the TypeError:forward() got an unexpected keyword argument 'output_all_encoded_layers' | {
"login": "SUFEHeisenberg",
"id": 44188955,
"node_id": "MDQ6VXNlcjQ0MTg4OTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/44188955?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SUFEHeisenberg",
"html_url": "https://github.com/SUFEHeisenberg",
"followers_url": "https://api.github.com/users/SUFEHeisenberg/followers",
"following_url": "https://api.github.com/users/SUFEHeisenberg/following{/other_user}",
"gists_url": "https://api.github.com/users/SUFEHeisenberg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SUFEHeisenberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SUFEHeisenberg/subscriptions",
"organizations_url": "https://api.github.com/users/SUFEHeisenberg/orgs",
"repos_url": "https://api.github.com/users/SUFEHeisenberg/repos",
"events_url": "https://api.github.com/users/SUFEHeisenberg/events{/privacy}",
"received_events_url": "https://api.github.com/users/SUFEHeisenberg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I think you're using a script that's intended to be used with a different library. The argument `output_all_encoded_layers ` does not exist with `transformers`, it is named `output_hidden_states`.",
"Thanks a Lot, I will check it.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,607 | 1,607 | NONE | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.7.3
- PyTorch version (GPU?): 1.1.0 (True)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Trainer: @sgugger
TransfoXL/XLNet: @TevenLeScao
-->
## Information
Model I am using: XLNet
The problem arises when using:
* [ ] my own modified scripts, like the repo below:
[chinese-bert-pytorch](https://github.com/649453932/Bert-Chinese-Text-Classification-Pytorch)
The task I am working on is:
Chinese text classification on my own dataset
## To reproduce
Steps to reproduce the behavior:
1. First, load the XLNet model and tokenizer:
```python
from pytorch_transformers import XLNetModel,XLNetTokenizer,XLNetConfig
class Model(nn.Module):
def __init__(self, config):
super(Model, self).__init__()
self.bert =XLNetModel.from_pretrained(config.bert_path)
for param in self.bert.parameters():
param.requires_grad = True
self.fc = nn.Linear(config.hidden_size, config.num_classes)
def forward(self, x):
        context = x[0]  # the input sentence
        mask = x[2]  # mask over the padding; same size as the sentence, with padding marked by 0, e.g. [1, 1, 1, 1, 0, 0]
_, pooled = self.bert(context, attention_mask=mask, output_all_encoded_layers=False)
out = self.fc(pooled)
return out
```
2. Then start fine-tuning and evaluating:
```
def train(config, model,model_name, train_iter, dev_iter, test_iter):
model.train()
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}]
# optimizer = torch.optim.Adam(model.parameters(), lr=config.learning_rate)
optimizer = BertAdam(optimizer_grouped_parameters, lr=config.learning_rate,warmup=0.05,t_total=len(train_iter) * config.num_epochs)
total_batch = 0
dev_best_loss = float('inf')
last_improve = 0
flag = False
model.train()
for epoch in range(config.num_epochs):
print('Epoch [{}/{}]'.format(epoch + 1, config.num_epochs))
for i, (trains, labels) in enumerate(train_iter):
outputs = model(trains)
model.zero_grad()
loss = F.cross_entropy(outputs, labels)
loss.backward()
optimizer.step()
if total_batch % 100 == 0:
true = labels.data.cpu()
predic = torch.max(outputs.data, 1)[1].cpu()
train_acc = metrics.accuracy_score(true, predic)
dev_acc, dev_loss = evaluate(config, model, dev_iter)
```
3. But an error occurred:
```
$ python run.py --model xlnet_base
Loading data...
401it [00:01, 225.08it/s]
140it [00:00, 260.37it/s]
135it [00:00, 240.91it/s]
Time usage: 0:00:03
Epoch [1/1]
Traceback (most recent call last):
File "run.py", line 40, in <module>
train(config, model,model_name, train_iter, dev_iter, test_iter)
File "F:\PycharmProjects\Bert-Chinese-Text-Classification-Pytorch-master\train_eval.py", line 52, in train
outputs = model(trains)
File "D:\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "F:\PycharmProjects\Bert-Chinese-Text-Classification-Pytorch-master\models\xlnet_base.py", line 47, in forward
_, pooled = self.bert(context, attention_mask=mask, output_all_encoded_layers=False)
File "D:\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'output_all_encoded_layers'
```
4. I googled the error and found this [issue](https://github.com/huggingface/transformers/issues/3541) in transformers:
So I changed the model-loading code as below:
```
class Model(nn.Module):
    def __init__(self, config):
        super(Model, self).__init__()
        model_config = XLNetConfig.from_pretrained(config.bert_path, output_hidden_states=False)
        self.bert = XLNetModel.from_pretrained(config.bert_path, config=model_config)
        for param in self.bert.parameters():
            param.requires_grad = True
        self.fc = nn.Linear(config.hidden_size, config.num_classes)

    def forward(self, x):
        context = x[0]
        mask = x[2]
        _, pooled = self.bert(context, attention_mask=mask, output_all_encoded_layers=False)
        out = self.fc(pooled)
        return out
```
But I still encounter the same problem, and I don't know why.
Hoping for your reply. Thanks a lot!
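A note for readers hitting the same error: `output_all_encoded_layers` was an argument of the old `pytorch-pretrained-bert` models, and newer `transformers` forwards (including `XLNetModel.forward`) simply do not declare it, so changing the config alone cannot help; the keyword has to be removed from the call. As a hedged, library-free sketch (the function names below are placeholders, not `transformers` APIs), one way to guard against such stale keywords is to filter them against the target signature:

```python
import inspect

def call_with_supported_kwargs(fn, *args, **kwargs):
    # Keep only keyword arguments that fn actually declares; this drops
    # stale flags such as the removed `output_all_encoded_layers`.
    params = inspect.signature(fn).parameters
    supported = {k: v for k, v in kwargs.items() if k in params}
    return fn(*args, **supported)

def new_forward(context, attention_mask=None):
    # Stand-in for an updated forward() that no longer accepts the old flag
    # (hypothetical minimal signature for illustration only).
    return ("last_hidden_state", attention_mask)

out = call_with_supported_kwargs(
    new_forward, "ids", attention_mask=[1, 1, 0], output_all_encoded_layers=False
)
print(out[0])  # last_hidden_state
```

In practice the simplest fix is just to delete `output_all_encoded_layers=False` from the `self.bert(...)` call and unpack whatever outputs the new `forward` actually returns.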
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7441/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7440 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7440/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7440/comments | https://api.github.com/repos/huggingface/transformers/issues/7440/events | https://github.com/huggingface/transformers/pull/7440 | 710,722,053 | MDExOlB1bGxSZXF1ZXN0NDk0NTIwMTkw | 7,440 | creating readme for bert-base-mongolian-uncased | {
"login": "bayartsogt-ya",
"id": 43239645,
"node_id": "MDQ6VXNlcjQzMjM5NjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/43239645?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bayartsogt-ya",
"html_url": "https://github.com/bayartsogt-ya",
"followers_url": "https://api.github.com/users/bayartsogt-ya/followers",
"following_url": "https://api.github.com/users/bayartsogt-ya/following{/other_user}",
"gists_url": "https://api.github.com/users/bayartsogt-ya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bayartsogt-ya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bayartsogt-ya/subscriptions",
"organizations_url": "https://api.github.com/users/bayartsogt-ya/orgs",
"repos_url": "https://api.github.com/users/bayartsogt-ya/repos",
"events_url": "https://api.github.com/users/bayartsogt-ya/events{/privacy}",
"received_events_url": "https://api.github.com/users/bayartsogt-ya/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7440?src=pr&el=h1) Report\n> Merging [#7440](https://codecov.io/gh/huggingface/transformers/pull/7440?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/74d8d69bd42c253c255dc69904ee1fbd1eece0cf?el=desc) will **decrease** coverage by `0.88%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7440?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7440 +/- ##\n==========================================\n- Coverage 77.73% 76.85% -0.89% \n==========================================\n Files 181 181 \n Lines 35800 35800 \n==========================================\n- Hits 27830 27513 -317 \n- Misses 7970 8287 +317 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7440?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7440/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7440/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.71% <0.00%> (-77.89%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7440/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-74.53%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7440/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7440/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7440/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `90.12% <0.00%> (-3.52%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7440/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.37% <0.00%> (+0.24%)` | :arrow_up: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7440/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `87.04% <0.00%> (+1.03%)` | :arrow_up: |\n| [src/transformers/configuration\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7440/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JhZy5weQ==) | `100.00% <0.00%> (+2.22%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7440/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.22% <0.00%> (+2.23%)` | :arrow_up: |\n| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7440/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7440?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7440?src=pr&el=footer). Last update [74d8d69...38f3ad5](https://codecov.io/gh/huggingface/transformers/pull/7440?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@julien-c \r\nHi, I am wondering if you guys are accepting new model cards?",
"If you'd like, it'd be awesome if you could add default input texts in Mongolian for https://github.com/huggingface/widgets-server/blob/master/DefaultWidget.ts (you can open a PR)\r\n\r\nso the inference widget on your model pages is correctly populated"
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | I am adding the model card for bert-base mongolian uncased.
Can you review this for me please! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7440/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7440",
"html_url": "https://github.com/huggingface/transformers/pull/7440",
"diff_url": "https://github.com/huggingface/transformers/pull/7440.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7440.patch",
"merged_at": 1601556323000
} |
https://api.github.com/repos/huggingface/transformers/issues/7439 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7439/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7439/comments | https://api.github.com/repos/huggingface/transformers/issues/7439/events | https://github.com/huggingface/transformers/pull/7439 | 710,720,918 | MDExOlB1bGxSZXF1ZXN0NDk0NTE5MjQ5 | 7,439 | Creating readme for bert-base-mongolian-cased | {
"login": "bayartsogt-ya",
"id": 43239645,
"node_id": "MDQ6VXNlcjQzMjM5NjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/43239645?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bayartsogt-ya",
"html_url": "https://github.com/bayartsogt-ya",
"followers_url": "https://api.github.com/users/bayartsogt-ya/followers",
"following_url": "https://api.github.com/users/bayartsogt-ya/following{/other_user}",
"gists_url": "https://api.github.com/users/bayartsogt-ya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bayartsogt-ya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bayartsogt-ya/subscriptions",
"organizations_url": "https://api.github.com/users/bayartsogt-ya/orgs",
"repos_url": "https://api.github.com/users/bayartsogt-ya/repos",
"events_url": "https://api.github.com/users/bayartsogt-ya/events{/privacy}",
"received_events_url": "https://api.github.com/users/bayartsogt-ya/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7439?src=pr&el=h1) Report\n> Merging [#7439](https://codecov.io/gh/huggingface/transformers/pull/7439?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/74d8d69bd42c253c255dc69904ee1fbd1eece0cf?el=desc) will **increase** coverage by `0.99%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7439?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7439 +/- ##\n==========================================\n+ Coverage 77.73% 78.72% +0.99% \n==========================================\n Files 181 181 \n Lines 35800 35800 \n==========================================\n+ Hits 27830 28185 +355 \n+ Misses 7970 7615 -355 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7439?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7439/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7439/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `24.25% <0.00%> (-73.56%)` | :arrow_down: |\n| [src/transformers/modeling\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7439/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.06% <0.00%> (-69.40%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7439/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/activations\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7439/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% 
<0.00%> (-20.84%)` | :arrow_down: |\n| [src/transformers/configuration\\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7439/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xheW91dGxtLnB5) | `80.00% <0.00%> (-20.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7439/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.79% <0.00%> (-6.04%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7439/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7439/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.05% <0.00%> (-0.54%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7439/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.27% <0.00%> (+0.16%)` | :arrow_up: |\n| ... and [15 more](https://codecov.io/gh/huggingface/transformers/pull/7439/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7439?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7439?src=pr&el=footer). Last update [74d8d69...b3a55c8](https://codecov.io/gh/huggingface/transformers/pull/7439?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | I am adding pretrained BERT-base models to model hub.
Please review this for me
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7439/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7439",
"html_url": "https://github.com/huggingface/transformers/pull/7439",
"diff_url": "https://github.com/huggingface/transformers/pull/7439.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7439.patch",
"merged_at": 1601556387000
} |
https://api.github.com/repos/huggingface/transformers/issues/7438 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7438/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7438/comments | https://api.github.com/repos/huggingface/transformers/issues/7438/events | https://github.com/huggingface/transformers/issues/7438 | 710,660,603 | MDU6SXNzdWU3MTA2NjA2MDM= | 7,438 | CUDA out of memory (ALBERT) - run_squad.py ignores --per_gpu_train_batch_size | {
"login": "chbensch",
"id": 7063207,
"node_id": "MDQ6VXNlcjcwNjMyMDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7063207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chbensch",
"html_url": "https://github.com/chbensch",
"followers_url": "https://api.github.com/users/chbensch/followers",
"following_url": "https://api.github.com/users/chbensch/following{/other_user}",
"gists_url": "https://api.github.com/users/chbensch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chbensch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chbensch/subscriptions",
"organizations_url": "https://api.github.com/users/chbensch/orgs",
"repos_url": "https://api.github.com/users/chbensch/repos",
"events_url": "https://api.github.com/users/chbensch/events{/privacy}",
"received_events_url": "https://api.github.com/users/chbensch/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Do you get the same error when using a batch size of 1?",
"Thanks, with a batch size of 1 it works! \r\n\r\nI never thought that even the \"A light BERT\" models are so big. :)",
"The `large` \"light BERT\" model is quite large indeed ;) The `base` model is smaller if you want to use bigger batch sizes."
] | 1,601 | 1,601 | 1,601 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.0
- Platform: Colab Pro (P100) / Anaconda (Windows / 2080 Ti)
- Python version: 3.6
- PyTorch version (GPU?): 1.6.0 / CUDA 10.2
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
## Information
Model I am using (Bert, XLNet ...): ALBERT
The problem arises when using:
* [x] the official example scripts: run_squad.py
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SQUaD 2.0
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Fine-tune ALBERT-xlarge or xxlarge and set --per_gpu_train_batch_size 8 or 10
2. try to finetune
```
!python transformers\examples\question-answering\run_squad.py \
--model_type albert \
--model_name_or_path albert-large-v2 \
--do_train \
--do_eval \
--do_lower_case \
--train_file train-v2.0.json \
--predict_file dev-v2.0.json \
--per_gpu_train_batch_size 8 \
--learning_rate 3e-5 \
--num_train_epochs 1.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /content/model_output \
--save_steps 1000 \
--threads 4 \
--version_2_with_negative \
--overwrite_output_dir
```
## Expected behavior
The model should be training, but despite the 8gb limit there is an out of memory error:
```
RuntimeError: CUDA out of memory. Tried to allocate 36.00 MiB (GPU 0; 15.90 GiB total capacity; 15.01 GiB already allocated; 7.88 MiB free; 15.03 GiB reserved in total by PyTorch)
``` | {
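Related to the fix discussed in the comments (a batch size of 1 works): when only a tiny per-step batch fits in memory, gradient accumulation can restore the original effective batch size. This is a minimal arithmetic sketch, assuming the usual meaning of run_squad.py's `--per_gpu_train_batch_size` and `--gradient_accumulation_steps` flags; the values are hypothetical:

```python
# Hypothetical values: shrink the per-step batch to fit GPU memory, then
# accumulate gradients so each optimizer step still sees 8 examples.
per_gpu_train_batch_size = 1
gradient_accumulation_steps = 8
effective_batch_size = per_gpu_train_batch_size * gradient_accumulation_steps
print(effective_batch_size)  # 8
```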
"url": "https://api.github.com/repos/huggingface/transformers/issues/7438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7438/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7437 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7437/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7437/comments | https://api.github.com/repos/huggingface/transformers/issues/7437/events | https://github.com/huggingface/transformers/issues/7437 | 710,603,700 | MDU6SXNzdWU3MTA2MDM3MDA= | 7,437 | RAG Retriever (NameError: name 'load_dataset' is not defined in retrieval_rag.py) | {
"login": "kliuPIDS",
"id": 69916756,
"node_id": "MDQ6VXNlcjY5OTE2NzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/69916756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kliuPIDS",
"html_url": "https://github.com/kliuPIDS",
"followers_url": "https://api.github.com/users/kliuPIDS/followers",
"following_url": "https://api.github.com/users/kliuPIDS/following{/other_user}",
"gists_url": "https://api.github.com/users/kliuPIDS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kliuPIDS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kliuPIDS/subscriptions",
"organizations_url": "https://api.github.com/users/kliuPIDS/orgs",
"repos_url": "https://api.github.com/users/kliuPIDS/repos",
"events_url": "https://api.github.com/users/kliuPIDS/events{/privacy}",
"received_events_url": "https://api.github.com/users/kliuPIDS/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Try with `pip install transformers datasets faiss-cpu psutil` (or see the [requirements.txt](https://github.com/huggingface/transformers/blob/master/examples/rag/requirements.txt) file).\r\n\r\nHad the same issue and it fixed it for me.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,607 | 1,607 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.0
- Platform: Linux-4.19.0-11-cloud-amd64-x86_64-with-debian-10.6
- Python version: 3.7.3
- PyTorch version (GPU?): 1.6.0+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help
@sshleifer
RAG model is not on the list, but this is summarization related
## Information
Model I am using: RAG
The problem arises when using:
* [x] the official example scripts: (give details below)
```python
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration
import torch
tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True)
# initialize with RagRetriever to do everything in one forward call
model = RagSequenceForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)
```
The tasks I am working on is:
Model coudln't load, didn't perform any task
## To reproduce
Steps to reproduce the behavior:
1. run the code
```python
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration
import torch
tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True)
# initialize with RagRetriever to do everything in one forward call
model = RagSequenceForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)
```
## Expected behavior
Instead of loading, a `NameError` is raised: `load_dataset` is not defined.
```python
NameError Traceback (most recent call last)
<ipython-input-6-752205d4a1c8> in <module>
3
4 tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
----> 5 retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True)
6 # initialize with RagRetriever to do everything in one forward call
7 model = RagSequenceForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)
/mnt/disks/nlp/env_nlp_main/lib/python3.7/site-packages/transformers/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs)
307 generator_tokenizer = rag_tokenizer.generator
308 return cls(
--> 309 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer
310 )
311
/mnt/disks/nlp/env_nlp_main/lib/python3.7/site-packages/transformers/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer)
287 config.retrieval_vector_size,
288 config.index_path,
--> 289 config.use_dummy_dataset,
290 )
291 )
/mnt/disks/nlp/env_nlp_main/lib/python3.7/site-packages/transformers/retrieval_rag.py in __init__(self, dataset_name, dataset_split, index_name, vector_size, index_path, use_dummy_dataset)
218
219 logger.info("Loading passages from {}".format(self.dataset_name))
--> 220 self.dataset = load_dataset(
221 self.dataset_name, with_index=False, split=self.dataset_split, dummy=self.use_dummy_dataset
222 )
NameError: name 'load_dataset' is not defined
```
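As one commenter notes, the `NameError` disappears after `pip install transformers datasets faiss-cpu psutil`: in this version the `load_dataset` import appears to be guarded by an availability check, so the name is never defined when the optional `datasets` package is missing. A small, hedged sketch for checking those optional dependencies up front (the helper below is illustrative, not part of `transformers`; package names follow examples/rag/requirements.txt):

```python
import importlib.util

def missing_rag_deps():
    # Optional packages the RAG retriever needs at runtime; `faiss` is the
    # import name installed by either faiss-cpu or faiss-gpu.
    required = ("datasets", "faiss", "psutil")
    return [name for name in required if importlib.util.find_spec(name) is None]

missing = missing_rag_deps()
if missing:
    print("Install before building RagRetriever:", ", ".join(missing))
```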
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7437/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/7437/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7436 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7436/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7436/comments | https://api.github.com/repos/huggingface/transformers/issues/7436/events | https://github.com/huggingface/transformers/pull/7436 | 710,578,064 | MDExOlB1bGxSZXF1ZXN0NDk0NDAyMTE4 | 7,436 | Create README.md | {
"login": "typicasoft",
"id": 19326612,
"node_id": "MDEyOk9yZ2FuaXphdGlvbjE5MzI2NjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/19326612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/typicasoft",
"html_url": "https://github.com/typicasoft",
"followers_url": "https://api.github.com/users/typicasoft/followers",
"following_url": "https://api.github.com/users/typicasoft/following{/other_user}",
"gists_url": "https://api.github.com/users/typicasoft/gists{/gist_id}",
"starred_url": "https://api.github.com/users/typicasoft/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/typicasoft/subscriptions",
"organizations_url": "https://api.github.com/users/typicasoft/orgs",
"repos_url": "https://api.github.com/users/typicasoft/repos",
"events_url": "https://api.github.com/users/typicasoft/events{/privacy}",
"received_events_url": "https://api.github.com/users/typicasoft/received_events",
"type": "Organization",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7436?src=pr&el=h1) Report\n> Merging [#7436](https://codecov.io/gh/huggingface/transformers/pull/7436?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a1a8ffa5126ced93c12dfb677cbe3a069f48dcf3?el=desc) will **increase** coverage by `1.67%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7436?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7436 +/- ##\n==========================================\n+ Coverage 76.85% 78.52% +1.67% \n==========================================\n Files 181 181 \n Lines 35800 35800 \n==========================================\n+ Hits 27513 28112 +599 \n+ Misses 8287 7688 -599 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7436?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7436/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/tokenization\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7436/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnVubmVsLnB5) | `62.79% <0.00%> (-34.89%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7436/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.38% <0.00%> (-29.59%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7436/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `60.70% <0.00%> (-22.68%)` | :arrow_down: |\n| [src/transformers/tokenization\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7436/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmFnLnB5) | `53.33% <0.00%> (-17.78%)` | 
:arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7436/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.51% <0.00%> (-15.11%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7436/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |\n| [src/transformers/tokenization\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7436/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `81.66% <0.00%> (-13.34%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7436/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.04% <0.00%> (-12.69%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7436/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.36% <0.00%> (-0.56%)` | :arrow_down: |\n| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/7436/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7436?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7436?src=pr&el=footer). Last update [a1a8ffa...66b5582](https://codecov.io/gh/huggingface/transformers/pull/7436?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"done!",
"Thanks!"
] | 1,601 | 1,601 | 1,601 | NONE | null | MagBERT-NER : Added widget (Text)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7436/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7436/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7436",
"html_url": "https://github.com/huggingface/transformers/pull/7436",
"diff_url": "https://github.com/huggingface/transformers/pull/7436.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7436.patch",
"merged_at": 1601331925000
} |
https://api.github.com/repos/huggingface/transformers/issues/7435 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7435/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7435/comments | https://api.github.com/repos/huggingface/transformers/issues/7435/events | https://github.com/huggingface/transformers/pull/7435 | 710,552,319 | MDExOlB1bGxSZXF1ZXN0NDk0MzgwNTc3 | 7,435 | [s2s] consistent output format across eval scripts | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7435?src=pr&el=h1) Report\n> Merging [#7435](https://codecov.io/gh/huggingface/transformers/pull/7435?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7f4115c0990b5121878e38069d386f168fac6b7b?el=desc) will **increase** coverage by `2.05%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7435?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7435 +/- ##\n==========================================\n+ Coverage 76.89% 78.94% +2.05% \n==========================================\n Files 181 181 \n Lines 35800 35800 \n==========================================\n+ Hits 27530 28264 +734 \n+ Misses 8270 7536 -734 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7435?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yYWcucHk=) | `25.32% <0.00%> (-51.72%)` | :arrow_down: |\n| [src/transformers/tokenization\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/7435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `46.03% <0.00%> (-49.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/7435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.69% <0.00%> (-34.60%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.04% <0.00%> (-12.69%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `83.58% <0.00%> (-8.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/7435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| [src/transformers/configuration\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JhZy5weQ==) | `97.77% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `86.01% <0.00%> (-1.04%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.68% <0.00%> (-0.67%)` | :arrow_down: |\n| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/7435/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7435?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7435?src=pr&el=footer). Last update [7f4115c...59133e7](https://codecov.io/gh/huggingface/transformers/pull/7435?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7435/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7435",
"html_url": "https://github.com/huggingface/transformers/pull/7435",
"diff_url": "https://github.com/huggingface/transformers/pull/7435.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7435.patch",
"merged_at": 1601349604000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/7434 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7434/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7434/comments | https://api.github.com/repos/huggingface/transformers/issues/7434/events | https://github.com/huggingface/transformers/pull/7434 | 710,547,176 | MDExOlB1bGxSZXF1ZXN0NDk0Mzc2MzY0 | 7,434 | Document new features of make fixup | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7434?src=pr&el=h1) Report\n> Merging [#7434](https://codecov.io/gh/huggingface/transformers/pull/7434?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a1a8ffa5126ced93c12dfb677cbe3a069f48dcf3?el=desc) will **increase** coverage by `1.15%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7434?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7434 +/- ##\n==========================================\n+ Coverage 76.85% 78.00% +1.15% \n==========================================\n Files 181 181 \n Lines 35800 35800 \n==========================================\n+ Hits 27513 27926 +413 \n+ Misses 8287 7874 -413 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7434?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/7434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnNtdC5weQ==) | `20.34% <0.00%> (-74.90%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.38% <0.00%> (-29.59%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `60.70% <0.00%> (-22.68%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.51% <0.00%> (-15.11%)` | 
:arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.04% <0.00%> (-12.69%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.95% <0.00%> (-5.27%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `90.12% <0.00%> (-3.52%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.36% <0.00%> (-0.56%)` | :arrow_down: |\n| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/7434/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7434?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7434?src=pr&el=footer). Last update [a1a8ffa...a4e4de6](https://codecov.io/gh/huggingface/transformers/pull/7434?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,601 | 1,601 | 1,601 | COLLABORATOR | null | # What does this PR do?
This is a small follow-up on #7403 documenting the behavior it introduced, as instructed by @stas00.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7434/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7434/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7434",
"html_url": "https://github.com/huggingface/transformers/pull/7434",
"diff_url": "https://github.com/huggingface/transformers/pull/7434.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7434.patch",
"merged_at": 1601366218000
} |
https://api.github.com/repos/huggingface/transformers/issues/7433 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7433/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7433/comments | https://api.github.com/repos/huggingface/transformers/issues/7433/events | https://github.com/huggingface/transformers/pull/7433 | 710,545,471 | MDExOlB1bGxSZXF1ZXN0NDk0Mzc0OTcx | 7,433 | Add a code of conduct | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7433?src=pr&el=h1) Report\n> Merging [#7433](https://codecov.io/gh/huggingface/transformers/pull/7433?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a1a8ffa5126ced93c12dfb677cbe3a069f48dcf3?el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7433?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7433 +/- ##\n==========================================\n- Coverage 76.85% 76.84% -0.02% \n==========================================\n Files 181 181 \n Lines 35800 35800 \n==========================================\n- Hits 27513 27509 -4 \n- Misses 8287 8291 +4 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7433?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `66.34% <0.00%> (-28.85%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.37% <0.00%> (+0.24%)` | :arrow_up: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/7433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `96.82% <0.00%> (+39.68%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7433?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7433?src=pr&el=footer). 
Last update [a1a8ffa...f8d87a2](https://codecov.io/gh/huggingface/transformers/pull/7433?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"(before merging, let's check out the thread we had internally on this) "
] | 1,601 | 1,601 | 1,601 | COLLABORATOR | null | # What does this PR do?
This PR adds a code of conduct to the project, inspired by the [Contributor Covenant](https://www.contributor-covenant.org/). To make it clearly visible it also adds:
- a badge that displays under Transformers on the main README.
- a link in the contributing guide. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7433/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7433/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7433",
"html_url": "https://github.com/huggingface/transformers/pull/7433",
"diff_url": "https://github.com/huggingface/transformers/pull/7433.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7433.patch",
"merged_at": 1601401127000
} |
https://api.github.com/repos/huggingface/transformers/issues/7432 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7432/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7432/comments | https://api.github.com/repos/huggingface/transformers/issues/7432/events | https://github.com/huggingface/transformers/issues/7432 | 710,529,305 | MDU6SXNzdWU3MTA1MjkzMDU= | 7,432 | Fine-tune BERTForMaskedLM | {
"login": "naturecreator",
"id": 39854185,
"node_id": "MDQ6VXNlcjM5ODU0MTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/39854185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/naturecreator",
"html_url": "https://github.com/naturecreator",
"followers_url": "https://api.github.com/users/naturecreator/followers",
"following_url": "https://api.github.com/users/naturecreator/following{/other_user}",
"gists_url": "https://api.github.com/users/naturecreator/gists{/gist_id}",
"starred_url": "https://api.github.com/users/naturecreator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/naturecreator/subscriptions",
"organizations_url": "https://api.github.com/users/naturecreator/orgs",
"repos_url": "https://api.github.com/users/naturecreator/repos",
"events_url": "https://api.github.com/users/naturecreator/events{/privacy}",
"received_events_url": "https://api.github.com/users/naturecreator/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Can you try removing spaces between `--model_type`, `=` and `bert`? Same for `--model_name_or_path `, `=` and `bert-base-cased`",
"@LysandreJik Yes, it works now. Thank you :). \r\n\r\nI tried the [example ](https://github.com/huggingface/transformers/tree/master/examples/language-modeling) as it is with the same dataset specified, but, now I am facing GPU out of memory issue. Do you know how can I change the batch size in \"run_language_modeling.py\". Here is the snippet of the error:\r\n\r\n`09/29/2020 13:11:35 - INFO - filelock - Lock 2508759984840 acquired on C:\\\\Users\\\\ravida6d\\\\Desktop\\\\spellcheck\\\\wikitext\\cached_lm_BertTokenizer_510_wiki.train.raw.lock\r\n09/29/2020 13:11:35 - INFO - filelock - Lock 2508759984840 released on C:\\\\Users\\\\ravida6d\\\\Desktop\\\\spellcheck\\\\wikitext\\cached_lm_BertTokenizer_510_wiki.train.raw.lock\r\n09/29/2020 13:11:35 - INFO - filelock - Lock 2508759984560 acquired on C:\\\\Users\\\\ravida6d\\\\Desktop\\\\spellcheck\\\\wikitext\\cached_lm_BertTokenizer_510_wiki.test.raw.lock\r\n09/29/2020 13:11:36 - INFO - filelock - Lock 2508759984560 released on C:\\\\Users\\\\ravida6d\\\\Desktop\\\\spellcheck\\\\wikitext\\cached_lm_BertTokenizer_510_wiki.test.raw.lock\r\nC:\\Users\\ravida6d\\AppData\\Local\\Continuum\\anaconda3\\envs\\spellcheck\\lib\\site-packages\\transformers\\trainer.py:266: FutureWarning: Passing `prediction_loss_only` as a keyword argument is deprecated and won't be possible in a future version. Use `args.prediction_loss_only` instead.\r\n FutureWarning,\r\nYou are instantiating a Trainer but Tensorboard is not installed. 
You should consider installing it.\r\nEpoch: 0%| | 0/3 [00:00<?, ?it/s]\r\nIteration: 0%| | 0/583 [00:00<?, ?it/s]\r\nIteration: 0%|▏ | 1/583 [00:01<11:16, 1.16s/it]Traceback (most recent call last):\r\n File \"fine_tune.py\", line 313, in <module>\r\n main()\r\n File \"fine_tune.py\", line 277, in main\r\n trainer.train(model_path=model_path)\r\n File \"C:\\Users\\ravida6d\\AppData\\Local\\Continuum\\anaconda3\\envs\\spellcheck\\lib\\site-packages\\transformers\\trainer.py\", line 755, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"C:\\Users\\ravida6d\\AppData\\Local\\Continuum\\anaconda3\\envs\\spellcheck\\lib\\site-packages\\transformers\\trainer.py\", line 1081, in training_step\r\n loss.backward()\r\n File \"C:\\Users\\ravida6d\\AppData\\Local\\Continuum\\anaconda3\\envs\\spellcheck\\lib\\site-packages\\torch\\tensor.py\", line 198, in backward\r\n torch.autograd.backward(self, gradient, retain_graph, create_graph)\r\n File \"C:\\Users\\ravida6d\\AppData\\Local\\Continuum\\anaconda3\\envs\\spellcheck\\lib\\site-packages\\torch\\autograd\\__init__.py\", line 100, in backward\r\n allow_unreachable=True) # allow_unreachable flag\r\nRuntimeError: CUDA out of memory. Tried to allocate 454.00 MiB (GPU 0; 11.00 GiB total capacity; 8.60 GiB already allocated; 132.32 MiB free; 8.70 GiB reserved in total by PyTorch) (malloc at ..\\c10\\cuda\\CUDACachingAllocator.cpp:289)\r\n(no backtrace available)\r\nEpoch: 0%| | 0/3 [00:01<?, ?it/s]\r\nIteration: 0%|▏ | 1/583 [00:01<13:12, 1.36s/it]`\r\n\r\n\r\n\r\nAnd also I would like to know, which argument defines that we are training or fine-tuning in \"run_langauge_modeling.py\".",
"Hello @LysandreJik, \r\n\r\nI reduced the --per_gpu_train_batch_size to 1, then I could fine-tune the BERT model. The result was stored as pytorch_model.bin. I wanted to load the model using Autotokenizer.from_pretrained class method but I faced this error:\r\n\r\n ```\r\nTraceback (most recent call last):\r\n File \"C:/Users/ravida6d/Desktop/Darshan/spell_correction/contextualSpellCheck/contextualSpellCheck.py\", line 587, in <module>\r\n checker = ContextualSpellCheck(model_name=\"C:/Users/ravida6d/Desktop/Darshan/spell_correction/contextualSpellCheck/pytorch_model.bin\", debug=True, max_edit_dist=3)\r\n File \"C:/Users/ravida6d/Desktop/Darshan/spell_correction/contextualSpellCheck/contextualSpellCheck.py\", line 113, in _init_\r\n self.BertTokenizer = AutoTokenizer.from_pretrained(self.model_name)\r\n File \"C:\\Users\\ravida6d\\AppData\\Local\\Continuum\\anaconda3\\envs\\contextualSpellCheck\\lib\\site-packages\\transformers\\tokenization_auto.py\", line 210, in from_pretrained\r\n config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)\r\n File \"C:\\Users\\ravida6d\\AppData\\Local\\Continuum\\anaconda3\\envs\\contextualSpellCheck\\lib\\site-packages\\transformers\\configuration_auto.py\", line 303, in from_pretrained\r\n config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n File \"C:\\Users\\ravida6d\\AppData\\Local\\Continuum\\anaconda3\\envs\\contextualSpellCheck\\lib\\site-packages\\transformers\\configuration_utils.py\", line 357, in get_config_dict\r\n config_dict = cls._dict_from_json_file(resolved_config_file)\r\n File \"C:\\Users\\ravida6d\\AppData\\Local\\Continuum\\anaconda3\\envs\\contextualSpellCheck\\lib\\site-packages\\transformers\\configuration_utils.py\", line 439, in _dict_from_json_file\r\n text = reader.read()\r\n File \"C:\\Users\\ravida6d\\AppData\\Local\\Continuum\\anaconda3\\envs\\contextualSpellCheck\\lib\\codecs.py\", line 321, in decode\r\n (result, consumed) = 
self._buffer_decode(data, self.errors, final)\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte\r\n\r\n```\r\n\r\nCan you please help me with this?\r\n",
"I got it worked and the following files must be in the same folder and the path should be projected to the folder (not to the pytorch_model.bin):\r\n\r\nvocab.txt - vocabulary file\r\npytorch_model.bin - the Pytorch-compatible (and converted) model \r\nconfig.json - json-based model configuration",
"While fine-tuning, we can only see loss and perplexity which is useful. \r\nIs it also possible to see the accuracy of the model and also the tensorboard when using the \"run_language_modeling.py\" script? It would be really helpful if anyone could explain how the \"loss\" is calculated for BERTForMaskedLM task (as there are no labels provided while fine-tuning). ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"hi,dear\r\nhow to use Spelling Error Correction with this rp?\r\ncould you pls help me ?\r\n"
] | 1,601 | 1,669 | 1,607 | NONE | null | Hello,
I am doing a project on spelling correction. I used the pre-trained "bert-base-cased" model. However, the results are not that accurate, so I planned to fine-tune BERT for the Masked LM task. I couldn't find any examples of fine-tuning a BERT model for Masked LM, so I tried to use "run_language_modeling.py" for fine-tuning. But I came across the following error:
```
C:\Users\ravida6d\spell_correction\transformers\examples\language-modeling>python run_language_modeling.py --output_dir ="C:\\Users\\ravida6d\\spell_correction\\contextualSpellCheck\\fine_tune\\" --model_type = bert --model_name_or_path = bert-base-cased --do_train --train_data_file =$TRAIN_FILE --do_eval --eval_data_file =$TEST_FILE –mlm
C:\Users\ravida6d\AppData\Local\Continuum\anaconda3\envs\contextualSpellCheck\lib\site-packages\transformers\training_args.py:291: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options)
FutureWarning,
Traceback (most recent call last):
File "run_language_modeling.py", line 313, in <module>
main()
File "run_language_modeling.py", line 153, in main
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
File "C:\Users\ravida6d\AppData\Local\Continuum\anaconda3\envs\contextualSpellCheck\lib\site-packages\transformers\hf_argparser.py", line 151, in parse_args_into_dataclasses
raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}")
ValueError: Some specified arguments are not used by the HfArgumentParser: ['bert', 'bert-base-cased']
```
I do not understand how to use this script. Can anyone give some information on fine-tuning BERT for Masked LM?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7432/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7431 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7431/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7431/comments | https://api.github.com/repos/huggingface/transformers/issues/7431/events | https://github.com/huggingface/transformers/pull/7431 | 710,528,036 | MDExOlB1bGxSZXF1ZXN0NDk0MzYwNTI2 | 7,431 | Add automatic best model loading to Trainer | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7431?src=pr&el=h1) Report\n> Merging [#7431](https://codecov.io/gh/huggingface/transformers/pull/7431?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f62f2ffdcc2df75cf01438bebc7ae281d921d21d?el=desc) will **increase** coverage by `0.53%`.\n> The diff coverage is `76.74%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7431?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7431 +/- ##\n==========================================\n+ Coverage 78.17% 78.71% +0.53% \n==========================================\n Files 181 181 \n Lines 35800 35858 +58 \n==========================================\n+ Hits 27986 28224 +238 \n+ Misses 7814 7634 -180 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7431?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7431/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `63.23% <73.43%> (+7.80%)` | :arrow_up: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7431/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `63.30% <80.00%> (+2.66%)` | :arrow_up: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/7431/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `91.72% <100.00%> (+0.45%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7431/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `24.25% <0.00%> (-73.56%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7431/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7431/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yYWcucHk=) | `25.32% <0.00%> (-51.72%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7431/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `58.52% <0.00%> (-34.74%)` | :arrow_down: |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7431/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `61.53% <0.00%> (-33.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7431/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7431/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `74.14% <0.00%> (-18.70%)` | :arrow_down: |\n| ... and [25 more](https://codecov.io/gh/huggingface/transformers/pull/7431/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7431?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7431?src=pr&el=footer). Last update [f62f2ff...738935a](https://codecov.io/gh/huggingface/transformers/pull/7431?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"IMO this closes #4186",
"@sgugger how does this work together with `save_total_limit` ? If it is set might it happen that the best model gets deleted?\r\n\r\nwell - see here https://github.com/huggingface/transformers/issues/7556",
"The best model is not deleted with `save_total_limit`. It is always put at the top of the list after sorting the chceckpoints."
] | 1,601 | 1,601 | 1,601 | COLLABORATOR | null | # What does this PR do?
This PR cleans up the part of `Trainer` that saves the training state and adds an API that tracks which model was the best during any of the evaluation phases, so it can be loaded back at the end.
When fine-tuning a model on a dataset it can easily overfit, it's quite common for the last model not to be the best one (in terms of metrics). This PR adds a `TrainingArgument` named `load_best_model_at_end` that triggers the following behavior:
- `save_steps` gets ignored and the model is saved every time there is an evaluation (determined by `evaluation_strategy` and `eval_steps`)
- It keeps track in a `TrainerState` of when the best model was encountered (that state is saved along the checkpoints so it can work with resuming a training)
- The best model is determined by the new `TrainingArguments` `metric_for_best_model` (defaults to the loss) and `greater_is_better` (defaults to False for the loss, True otherwise).
- The best model is loaded once the training is finished.
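The selection these arguments drive can be sketched in a few lines — a simplified illustration, not the actual `Trainer` code:

```python
def best_checkpoint(eval_history, greater_is_better=False):
    """Pick the (global_step, metric) pair of the best evaluation.

    eval_history: list of (global_step, metric) pairs, one per evaluation.
    greater_is_better: False for loss-like metrics, True otherwise,
        mirroring the defaults described above.
    """
    pick = max if greater_is_better else min
    return pick(eval_history, key=lambda pair: pair[1])

history = [(500, 0.91), (1000, 0.74), (1500, 0.80)]      # (step, eval loss)
print(best_checkpoint(history))                          # -> (1000, 0.74)
print(best_checkpoint(history, greater_is_better=True))  # -> (500, 0.91)
```

Because the best step is recorded in the saved `TrainerState`, the same comparison keeps working after a training resume.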
In passing I've added some tests of the saving API in Trainer and made sure it can handle both `PreTrainedModel` and regular `nn.Module` (a feature asked for in #6901). Both are now tested in the CI, as is the new API.
Fixes #6901
Those newly introduced arguments and APIs can then be leveraged to have early stopping supported in `Trainer`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7431/reactions",
"total_count": 6,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7431/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7431",
"html_url": "https://github.com/huggingface/transformers/pull/7431",
"diff_url": "https://github.com/huggingface/transformers/pull/7431.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7431.patch",
"merged_at": 1601390479000
} |
https://api.github.com/repos/huggingface/transformers/issues/7430 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7430/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7430/comments | https://api.github.com/repos/huggingface/transformers/issues/7430/events | https://github.com/huggingface/transformers/issues/7430 | 710,467,782 | MDU6SXNzdWU3MTA0Njc3ODI= | 7,430 | import error in version 3.3.0, conflict with local directory "datasets" | {
"login": "nhsjgczryf",
"id": 46837856,
"node_id": "MDQ6VXNlcjQ2ODM3ODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/46837856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nhsjgczryf",
"html_url": "https://github.com/nhsjgczryf",
"followers_url": "https://api.github.com/users/nhsjgczryf/followers",
"following_url": "https://api.github.com/users/nhsjgczryf/following{/other_user}",
"gists_url": "https://api.github.com/users/nhsjgczryf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nhsjgczryf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nhsjgczryf/subscriptions",
"organizations_url": "https://api.github.com/users/nhsjgczryf/orgs",
"repos_url": "https://api.github.com/users/nhsjgczryf/repos",
"events_url": "https://api.github.com/users/nhsjgczryf/events{/privacy}",
"received_events_url": "https://api.github.com/users/nhsjgczryf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sadly that is how python works, it will try to import the datasets library from a local folder if you have a folder named like this in the path your are working in. However, this should only work if there is a `__init__.py` in your folder named datasets. Removing that file should then solve the bug.",
"This change just broke [DeepChem](https://github.com/deepchem/deepchem). In the short term we can work around it by pinning to an older version, but that's not a reasonable long term solution. Directories called \"datasets\" are very common, and this will impact a lot of people. Using a common, generic word as the top level package violates the [PEP 423](https://www.python.org/dev/peps/pep-0423/) guidelines for package naming.",
"Indeed, we are working on a fix and will release soon.",
"Great, thanks!",
"The patched release is on PyPi, tell us if you have any issue.",
"Works perfectly. Thanks so much for the super fast fix!"
] | 1,601 | 1,601 | 1,601 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.0
- Platform: Google Colab
Model I am using: Bert
## To reproduce
Steps to reproduce the behavior:
Traceback (most recent call last):
File "train.py", line 19, in <module>
from mydataset import load_data,dist_load_data,load_data2
File "/content/drive/My Drive/mrc4ner/mydataset.py", line 5, in <module>
from transformers import BertTokenizer
File "/usr/local/lib/python3.6/dist-packages/transformers/__init__.py", line 22, in <module>
from .integrations import ( # isort:skip
File "/usr/local/lib/python3.6/dist-packages/transformers/integrations.py", line 42, in <module>
from .trainer_utils import PREFIX_CHECKPOINT_DIR, BestRun # isort:skip
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_utils.py", line 6, in <module>
from .file_utils import is_tf_available, is_torch_available, is_torch_tpu_available
File "/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py", line 72, in <module>
logger.debug(f"Succesfully imported datasets version {datasets.__version__}")
AttributeError: module 'datasets' has no attribute '__version__'
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
My code worked well before, and there is a "datasets" folder in my working directory. When my transformers version was upgraded to 3.3.0, I got this error. If I rename the "datasets" folder or downgrade transformers to version 3.2.0, the error goes away.
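For illustration, the shadowing mechanism behind this error can be reproduced with just the standard library — here `colorsys` and the marker string stand in for the `datasets` folder, purely for the demo:

```python
import os
import sys
import tempfile

# A local folder that has an __init__.py and shares a module's name wins
# over the installed module, because the script's directory sits first on
# sys.path. "colorsys" plays the role of the "datasets" folder here.
workdir = tempfile.mkdtemp()
os.makedirs(os.path.join(workdir, "colorsys"))
with open(os.path.join(workdir, "colorsys", "__init__.py"), "w") as f:
    f.write("MARKER = 'local folder, not the real module'\n")

sys.path.insert(0, workdir)        # what running a script from workdir does
sys.modules.pop("colorsys", None)  # drop any cached import
import colorsys

print(colorsys.MARKER)  # -> local folder, not the real module
```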
Is this a bug? Because it doesn't allow me to use "datasets" as a folder name. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7430/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7429 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7429/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7429/comments | https://api.github.com/repos/huggingface/transformers/issues/7429/events | https://github.com/huggingface/transformers/pull/7429 | 710,441,241 | MDExOlB1bGxSZXF1ZXN0NDk0Mjg5NTM1 | 7,429 | Update README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7429?src=pr&el=h1) Report\n> Merging [#7429](https://codecov.io/gh/huggingface/transformers/pull/7429?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f62f2ffdcc2df75cf01438bebc7ae281d921d21d?el=desc) will **decrease** coverage by `1.31%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7429?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7429 +/- ##\n==========================================\n- Coverage 78.17% 76.85% -1.32% \n==========================================\n Files 181 181 \n Lines 35800 35800 \n==========================================\n- Hits 27986 27515 -471 \n- Misses 7814 8285 +471 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7429?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.71% <0.00%> (-77.89%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-74.53%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `74.14% <0.00%> (-18.70%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <0.00%> (-4.17%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.46% <0.00%> (-1.51%)` | :arrow_down: |\n| ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/7429/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7429?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7429?src=pr&el=footer). Last update [f62f2ff...1defbb1](https://codecov.io/gh/huggingface/transformers/pull/7429?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"There should be a special metadata block for this at some point! (\"Fine-tune button\" instead of GitHub's \"Fork\")"
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | Add links to models fine-tuned on a downstream task
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7429/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7429",
"html_url": "https://github.com/huggingface/transformers/pull/7429",
"diff_url": "https://github.com/huggingface/transformers/pull/7429.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7429.patch",
"merged_at": 1601314810000
} |
https://api.github.com/repos/huggingface/transformers/issues/7428 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7428/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7428/comments | https://api.github.com/repos/huggingface/transformers/issues/7428/events | https://github.com/huggingface/transformers/pull/7428 | 710,426,530 | MDExOlB1bGxSZXF1ZXN0NDk0Mjc3MTE2 | 7,428 | Train T5 in Tensoflow 2 Community Notebook | {
"login": "HarrisDePerceptron",
"id": 17620536,
"node_id": "MDQ6VXNlcjE3NjIwNTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/17620536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HarrisDePerceptron",
"html_url": "https://github.com/HarrisDePerceptron",
"followers_url": "https://api.github.com/users/HarrisDePerceptron/followers",
"following_url": "https://api.github.com/users/HarrisDePerceptron/following{/other_user}",
"gists_url": "https://api.github.com/users/HarrisDePerceptron/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HarrisDePerceptron/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HarrisDePerceptron/subscriptions",
"organizations_url": "https://api.github.com/users/HarrisDePerceptron/orgs",
"repos_url": "https://api.github.com/users/HarrisDePerceptron/repos",
"events_url": "https://api.github.com/users/HarrisDePerceptron/events{/privacy}",
"received_events_url": "https://api.github.com/users/HarrisDePerceptron/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello!\r\n\r\nThanks a lot for your awesome notebook! Just two tiny updates, can you clean your list of imports and use `datasets` instead of `tfds`?",
"@jplu much appreciated. i will clean up the imports. by datasets do you mean an alias for tensorflow datasets instead of tfds?",
"No, I mean using https://github.com/huggingface/datasets instead.",
"@jplu i have done the necessary changes. I have also switched to [datasets](https://github.com/huggingface/datasets) as requested. for this i have created a [new colab notebook](https://colab.research.google.com/github/snapthat/TF-T5-text-to-text/blob/master/snapthatT5/notebooks/TF-T5-Datasets%20Training.ipynb) which uses datasets as its primary source.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7428?src=pr&el=h1) Report\n> Merging [#7428](https://codecov.io/gh/huggingface/transformers/pull/7428?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f62f2ffdcc2df75cf01438bebc7ae281d921d21d?el=desc) will **increase** coverage by `0.48%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7428?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7428 +/- ##\n==========================================\n+ Coverage 78.17% 78.66% +0.48% \n==========================================\n Files 181 181 \n Lines 35800 35800 \n==========================================\n+ Hits 27986 28161 +175 \n+ Misses 7814 7639 -175 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7428?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7428/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `24.25% <0.00%> (-73.56%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7428/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7428/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yYWcucHk=) | `25.32% <0.00%> (-51.72%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7428/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `58.52% <0.00%> (-34.74%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7428/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | 
:arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7428/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `74.14% <0.00%> (-18.70%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7428/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `83.58% <0.00%> (-8.96%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7428/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <0.00%> (-4.17%)` | :arrow_down: |\n| [src/transformers/configuration\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7428/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JhZy5weQ==) | `97.77% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7428/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.10% <0.00%> (-0.51%)` | :arrow_down: |\n| ... and [19 more](https://codecov.io/gh/huggingface/transformers/pull/7428/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7428?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7428?src=pr&el=footer). Last update [f62f2ff...d9df829](https://codecov.io/gh/huggingface/transformers/pull/7428?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Awesome!! Thanks a lot for your notebook!!",
"Thanks a lot @HarrisDePerceptron the community has been asking about such a notebook for a long time :-) ",
"Thanks @patrickvonplaten !! it was long over due :)"
] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | # What does this PR do?
This adds a link to the **Training T5 in Tensorflow 2 Community Notebook** under the notebooks/Readme.md community notebook section.
This notebook demonstrates how to train T5 for any task using Tensorflow 2, illustrated with a question-answering task on SQuAD.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). **Yes**
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section? **Yes**
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. [Forum Link](https://discuss.huggingface.co/t/how-to-train-t5-with-tensorflow/641)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). **Not Applicable**
- [ ] Did you write any new necessary tests? **Not Applicable**
## Who can review?
@patrickvonplaten @jplu
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7428/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7428/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7428",
"html_url": "https://github.com/huggingface/transformers/pull/7428",
"diff_url": "https://github.com/huggingface/transformers/pull/7428.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7428.patch",
"merged_at": 1601564069000
} |
https://api.github.com/repos/huggingface/transformers/issues/7427 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7427/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7427/comments | https://api.github.com/repos/huggingface/transformers/issues/7427/events | https://github.com/huggingface/transformers/issues/7427 | 710,411,316 | MDU6SXNzdWU3MTA0MTEzMTY= | 7,427 | Problem while using tokenizer.encode_plus for sentence pairs | {
"login": "rxlian",
"id": 35382484,
"node_id": "MDQ6VXNlcjM1MzgyNDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/35382484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rxlian",
"html_url": "https://github.com/rxlian",
"followers_url": "https://api.github.com/users/rxlian/followers",
"following_url": "https://api.github.com/users/rxlian/following{/other_user}",
"gists_url": "https://api.github.com/users/rxlian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rxlian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rxlian/subscriptions",
"organizations_url": "https://api.github.com/users/rxlian/orgs",
"repos_url": "https://api.github.com/users/rxlian/repos",
"events_url": "https://api.github.com/users/rxlian/events{/privacy}",
"received_events_url": "https://api.github.com/users/rxlian/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Could you provide some `seq0` and `seq1` values so that we may investigate further?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,607 | 1,607 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
Hi, thanks for this great work.
I was trying to use tokenizer.encode_plus to encode sentence pairs. The code looks like
```py
training_encoded_dict = tokenizer.encode_plus(
seq0,
seq1,
add_spicial_tokens = True,
max_length = 256,
truncation_strategy = 'only_second',
pad_to_max_length = True,
return_attention_mask = True,
return_token_type_ids = True,
return_tensors = 'pt',
)
```
However, the problem looks like
```
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-17-ae3ce93d62ba> in <module>()
24 return_attention_mask = True,
25 return_token_type_ids = True,
---> 26 return_tensors = 'pt',
27
28 )
2 frames
/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py in truncate_sequences(self, ids, pair_ids, num_tokens_to_remove, truncation_strategy, stride)
2068 ids = ids[:-num_tokens_to_remove]
2069 elif truncation_strategy == "only_second":
-> 2070 assert pair_ids is not None and len(pair_ids) > num_tokens_to_remove
2071 window_len = min(len(pair_ids), stride + num_tokens_to_remove)
2072 overflowing_tokens = pair_ids[-window_len:]
AssertionError:
```
This issue doesn't occur when using truncation_strategy = 'longest_first', but it happens with other truncation strategies such as 'only_second' and 'only_first'.
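A simplified sketch — not the library's actual implementation — of why `'only_second'` trips that assertion: the strategy may only drop tokens from the second sequence, so that sequence must contain more tokens than need removing:

```python
def truncate_only_second(ids, pair_ids, num_tokens_to_remove):
    # Mirrors the failing check: 'only_second' may only shorten pair_ids,
    # so pair_ids must hold more tokens than we need to remove.
    if num_tokens_to_remove <= 0:
        return ids, pair_ids
    if pair_ids is None or len(pair_ids) <= num_tokens_to_remove:
        raise ValueError(
            "second sequence too short for 'only_second' truncation; "
            "try truncation_strategy='longest_first' instead"
        )
    return ids, pair_ids[:-num_tokens_to_remove]

print(truncate_only_second([1, 2, 3, 4], [5, 6, 7], 2))  # -> ([1, 2, 3, 4], [5])
```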
I was wondering if anyone has the same issue or any idea about how to fix it? Thanks a lot in advance.
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7427/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/7426 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7426/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7426/comments | https://api.github.com/repos/huggingface/transformers/issues/7426/events | https://github.com/huggingface/transformers/issues/7426 | 710,394,262 | MDU6SXNzdWU3MTAzOTQyNjI= | 7,426 | [T5] Automatic setting of decoder_input_ids is misleading and does not correspond to the expected behavior of T5 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,601 | 1,601 | 1,601 | MEMBER | null | These lines: https://github.com/huggingface/transformers/blob/f62f2ffdcc2df75cf01438bebc7ae281d921d21d/src/transformers/modeling_t5.py#L1020 were added in this PR: https://github.com/huggingface/transformers/pull/5518 .
@mfuntowicz - do we need this hack to make onnx work? I would prefer to revert this change.
The lines do not make much sense IMO and should be deleted. Also the T5 error message when only `input_ids` are passed should be updated to be clearer.
And the docs should be cleaned as well: https://huggingface.co/transformers/model_doc/t5.html#t5model.
Also see:https://github.com/huggingface/transformers/issues/7358 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7426/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7426/timeline | completed | null | null |
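The T5 discussion in the issue above hinges on how `decoder_input_ids` are derived: T5 builds them by shifting the target sequence one position to the right and prepending `decoder_start_token_id` (the pad token for T5). Below is a minimal pure-Python sketch of that shift; it is an illustrative reimplementation with made-up token ids, not the actual `modeling_t5.py` code:

```python
def shift_right(labels, decoder_start_token_id=0):
    """Shift a sequence of label ids one position to the right,
    prepending the decoder start token, mirroring how T5 derives
    decoder_input_ids from labels."""
    return [decoder_start_token_id] + labels[:-1]

# Hypothetical label ids for some target sentence
labels = [37, 2335, 10, 3, 1]
decoder_input_ids = shift_right(labels)
print(decoder_input_ids)  # [0, 37, 2335, 10, 3]
```

Passing only `input_ids` to `T5Model` fails precisely because nothing performs this shift for the decoder side; callers must supply `decoder_input_ids` (or `labels`, from which they can be derived) themselves.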
https://api.github.com/repos/huggingface/transformers/issues/7425 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7425/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7425/comments | https://api.github.com/repos/huggingface/transformers/issues/7425/events | https://github.com/huggingface/transformers/issues/7425 | 710,367,950 | MDU6SXNzdWU3MTAzNjc5NTA= | 7,425 | Getting Import error ImportError: cannot import name 'quantize' from 'transformers.convert_graph_to_onnx' (/opt/conda/lib/python3.7/site-packages/transformers/convert_graph_to_onnx.py) | {
"login": "pinakimishra95",
"id": 23132843,
"node_id": "MDQ6VXNlcjIzMTMyODQz",
"avatar_url": "https://avatars.githubusercontent.com/u/23132843?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pinakimishra95",
"html_url": "https://github.com/pinakimishra95",
"followers_url": "https://api.github.com/users/pinakimishra95/followers",
"following_url": "https://api.github.com/users/pinakimishra95/following{/other_user}",
"gists_url": "https://api.github.com/users/pinakimishra95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pinakimishra95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pinakimishra95/subscriptions",
"organizations_url": "https://api.github.com/users/pinakimishra95/orgs",
"repos_url": "https://api.github.com/users/pinakimishra95/repos",
"events_url": "https://api.github.com/users/pinakimishra95/events{/privacy}",
"received_events_url": "https://api.github.com/users/pinakimishra95/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"Any Updates on the above issue ?",
"@mfuntowicz any updates on this ?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,601 | 1,608 | 1,608 | NONE | null | ---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-19-60bf04c6de64> in <module>
----> 1 from simpletransformers.classification import MultiLabelClassificationModel
2
3
4 model = MultiLabelClassificationModel('roberta', 'roberta-base', num_labels=6, args={'train_batch_size':2, 'gradient_accumulation_steps':16, 'learning_rate': 3e-5, 'num_train_epochs': 3, 'max_seq_length': 512})
/opt/conda/lib/python3.7/site-packages/simpletransformers/classification/__init__.py in <module>
----> 1 from simpletransformers.classification.classification_model import ClassificationModel
2 from simpletransformers.classification.multi_label_classification_model import MultiLabelClassificationModel
3 from simpletransformers.classification.multi_modal_classification_model import MultiModalClassificationModel
4 from simpletransformers.config.model_args import (
5 ClassificationArgs,
/opt/conda/lib/python3.7/site-packages/simpletransformers/classification/classification_model.py in <module>
62 get_linear_schedule_with_warmup,
63 )
---> 64 from transformers.convert_graph_to_onnx import convert, quantize
65
66 from simpletransformers.classification.classification_utils import (
ImportError: cannot import name 'quantize' from 'transformers.convert_graph_to_onnx' (/opt/conda/lib/python3.7/site-packages/transformers/convert_graph_to_onnx.py) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7425/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7425/timeline | completed | null | null |
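The `ImportError` in the issue above comes from `simpletransformers` importing `quantize` from a `transformers` release whose `convert_graph_to_onnx` module does not export it. One generic way to degrade gracefully is to feature-detect the symbol instead of importing it unconditionally. This is only a sketch; the skip-quantization fallback is an assumption, not what `simpletransformers` actually does:

```python
import importlib

def optional_attr(module_name, attr):
    """Return module.attr if both the module and the attribute exist,
    else None, so callers can feature-detect instead of crashing."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return None
    return getattr(module, attr, None)

quantize = optional_attr("transformers.convert_graph_to_onnx", "quantize")
if quantize is None:
    print("quantize() unavailable in this transformers version; "
          "skipping ONNX quantization")
```

In practice the simpler fix reported for this error is pinning compatible `transformers` and `simpletransformers` versions; the helper above merely avoids the hard crash at import time.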
https://api.github.com/repos/huggingface/transformers/issues/7424 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7424/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7424/comments | https://api.github.com/repos/huggingface/transformers/issues/7424/events | https://github.com/huggingface/transformers/pull/7424 | 710,293,907 | MDExOlB1bGxSZXF1ZXN0NDk0MTY2NDQ5 | 7,424 | [draft] codecov no comment | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,601 | 1,601 | 1,601 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7424/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7424",
"html_url": "https://github.com/huggingface/transformers/pull/7424",
"diff_url": "https://github.com/huggingface/transformers/pull/7424.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7424.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/7423 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7423/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7423/comments | https://api.github.com/repos/huggingface/transformers/issues/7423/events | https://github.com/huggingface/transformers/pull/7423 | 710,285,440 | MDExOlB1bGxSZXF1ZXN0NDk0MTU5NDAz | 7,423 | Reorganize documentation navbar | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Cool, thanks @sgugger"
] | 1,601 | 1,601 | 1,601 | COLLABORATOR | null | With the library containing so many models now, the documentation navigation bar was starting to get unreadable, especially if someone was looking for a specific model. To make this cleaner I:
- removed the PACKAGE REFERENCE caption (not happy about this, but there is no way to have a caption with empty content :-( )
- added three captions to separate the documentation into main classes, models and internals
- sorted the sections alphabetically
Also, made the background color of the section headers a bit darker to make the distinction with the rest of the toc clearer.
The result can be checked [here](https://92916-155220641-gh.circle-artifacts.com/0/docs/_build/html/index.html). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7423/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7423",
"html_url": "https://github.com/huggingface/transformers/pull/7423",
"diff_url": "https://github.com/huggingface/transformers/pull/7423.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7423.patch",
"merged_at": 1601302979000
} |
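The "sorted alphabetically" step from PR #7423 above can be sketched as a plain per-caption sort of the toctree entries. The captions and page names below are illustrative, not the real `index.rst` contents:

```python
# Hypothetical toctree: captions mapping to their documentation pages.
toctree = {
    "MAIN CLASSES": ["Trainer", "Configuration", "Model", "Logging"],
    "MODELS": ["T5", "BERT", "GPT-2", "ALBERT", "Longformer"],
    "INTERNAL HELPERS": ["Pipelines utilities", "Generation utilities"],
}

# Sort the pages within each caption; the captions themselves keep
# their manually chosen order.
sorted_toctree = {caption: sorted(pages) for caption, pages in toctree.items()}

for caption, pages in sorted_toctree.items():
    print(caption, "->", pages)
```

Note that `sorted()` here is case-sensitive; the real navbar grouping is maintained by hand in the Sphinx `toctree` directives, so this is only a sketch of the intent.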
https://api.github.com/repos/huggingface/transformers/issues/7422 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/7422/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/7422/comments | https://api.github.com/repos/huggingface/transformers/issues/7422/events | https://github.com/huggingface/transformers/pull/7422 | 710,113,170 | MDExOlB1bGxSZXF1ZXN0NDk0MDE5OTQy | 7,422 | Custom TF weights loading | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7422?src=pr&el=h1) Report\n> Merging [#7422](https://codecov.io/gh/huggingface/transformers/pull/7422?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/95f792afb0f0ce5a7b4f0e8df108b10157a69134?el=desc) will **decrease** coverage by `0.21%`.\n> The diff coverage is `87.50%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/7422?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #7422 +/- ##\n==========================================\n- Coverage 78.51% 78.30% -0.22% \n==========================================\n Files 184 181 -3 \n Lines 36734 35917 -817 \n==========================================\n- Hits 28843 28125 -718 \n+ Misses 7891 7792 -99 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7422?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.37% <50.00%> (-1.94%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.06% <92.30%> (+0.84%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.91% <100.00%> (+<0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.59% <0.00%> (-72.35%)` | :arrow_down: |\n| 
[src/transformers/configuration\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX21vYmlsZWJlcnQucHk=) | `26.47% <0.00%> (-70.59%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `23.51% <0.00%> (-65.93%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `58.52% <0.00%> (-34.74%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `69.91% <0.00%> (-20.82%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `63.30% <0.00%> (-5.35%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `63.23% <0.00%> (-1.59%)` | :arrow_down: |\n| ... and [24 more](https://codecov.io/gh/huggingface/transformers/pull/7422/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7422?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7422?src=pr&el=footer). Last update [95f792a...6f52cc9](https://codecov.io/gh/huggingface/transformers/pull/7422?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Just merged your suggestions :)",
"@patrickvonplaten I have done some updates, let me know if it looks like what you have in mind.",
"There is an issue with Longformer apparently.",
"Ok, I found why, and I should have thought about this much before.... 😣 \r\n\r\nWe cannot have `None` into a tuple, the logic works only when the `return_dict` is True.",
"@LysandreJik are we able to merge?",
"Good to merge for me",
"Ran the slow tests, they pass."
] | 1,601 | 1,686 | 1,601 | CONTRIBUTOR | null | This PR provides a custom weight loading function in order to take into account dynamic model architecture building. More precisely, the brand new loading function takes into account the `authorized_unexpected_keys` and `authorized_missing_keys` class attributes enabling the possibility to ignore some layers in the models. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/7422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/7422/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/7422",
"html_url": "https://github.com/huggingface/transformers/pull/7422",
"diff_url": "https://github.com/huggingface/transformers/pull/7422.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/7422.patch",
"merged_at": 1601906325000
} |
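The `authorized_missing_keys` / `authorized_unexpected_keys` mechanism described in PR #7422 above boils down to filtering the key-mismatch lists against per-class regex patterns before any warning is raised. A standalone sketch, with pattern and key names made up for illustration:

```python
import re

def filter_keys(keys, authorized_patterns):
    """Drop any key matching one of the authorized regex patterns,
    so only genuinely problematic keys are reported."""
    return [k for k in keys
            if not any(re.search(p, k) for p in authorized_patterns)]

missing = ["bert.embeddings.position_ids", "classifier.weight"]
authorized_missing_keys = [r"position_ids"]
print(filter_keys(missing, authorized_missing_keys))  # ['classifier.weight']
```

A model class that dynamically drops a layer can then list that layer's pattern in `authorized_unexpected_keys`, and the weight-loading code will stay silent about it while still flagging everything else.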