Dataset columns:

| Column | Type | Lengths / values |
|---|---|---|
| url | string | lengths 62–66 |
| repository_url | string | 1 value |
| labels_url | string | lengths 76–80 |
| comments_url | string | lengths 71–75 |
| events_url | string | lengths 69–73 |
| html_url | string | lengths 50–56 |
| id | int64 | 377M–2.15B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–29.2k |
| title | string | lengths 1–487 |
| user | dict | n/a |
| labels | list | n/a |
| state | string | 2 values |
| locked | bool | 2 classes |
| assignee | dict | n/a |
| assignees | list | n/a |
| comments | sequence | n/a |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k |
| author_association | string | 4 values |
| active_lock_reason | string | 2 values |
| body | string | lengths 0–234k |
| reactions | dict | n/a |
| timeline_url | string | lengths 71–75 |
| state_reason | string | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | n/a |
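For readers who want to work with these columns programmatically, here is a minimal sketch of how a handful of them could be declared with the `datasets` library (illustrative only; the nested `user`, `labels`, and `reactions` structures are omitted for brevity, and the dataset itself already carries the full schema):

```python
from datasets import Features, Sequence, Value

# Partial, hand-written schema covering a few of the flat columns listed above.
features = Features(
    {
        "url": Value("string"),
        "id": Value("int64"),
        "number": Value("int64"),
        "title": Value("string"),
        "state": Value("string"),
        "locked": Value("bool"),
        "comments": Sequence(Value("string")),
        "body": Value("string"),
    }
)
print(features)
```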
https://api.github.com/repos/huggingface/transformers/issues/10332
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10332/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10332/comments
https://api.github.com/repos/huggingface/transformers/issues/10332/events
https://github.com/huggingface/transformers/issues/10332
813,571,608
MDU6SXNzdWU4MTM1NzE2MDg=
10,332
bug in bert pretraining
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Duplicate of https://github.com/huggingface/transformers/issues/10285" ]
1,614
1,614
1,614
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.2 - Platform: linux - Python version: 3.7 - PyTorch version (GPU?): 1.7 - Tensorflow version (GPU?): - - Using GPU in script?: - - Using distributed or parallel set-up in script?: - ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> - albert, bert, xlm: @LysandreJik ## Information In this line https://github.com/huggingface/transformers/blob/e73a3e1891775a915846cc0f24b7e9a26d6688fb/src/transformers/data/data_collator.py#L381 you need to change 0.5 to 0.1 to match the description written that only in 10% you would like to change tokens with randomly selected tokens. ## To reproduce Nothing to reproduce. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior probs should match BERT paper.
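For context, below is a self-contained sketch of the usual 80/10/10 masking cascade used for BERT-style masked-language-model pretraining (illustrative variable names and toy tensors, not the library's exact code). Note that the 0.5 draw is applied only to positions that were not already replaced by `[MASK]` in the 80% step, so it selects half of the remaining 20%, i.e. 10% of all chosen positions overall.

```python
import torch

# Toy inputs; the values are made up for the example.
vocab_size, mask_token_id = 30000, 103
inputs = torch.randint(vocab_size, (2, 16))
labels = inputs.clone()

# Choose 15% of positions for prediction and ignore the rest in the loss.
masked_indices = torch.bernoulli(torch.full(labels.shape, 0.15)).bool()
labels[~masked_indices] = -100

# 80% of the chosen positions are replaced with [MASK].
indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
inputs[indices_replaced] = mask_token_id

# Of the remaining 20%, a 0.5 draw picks half (10% overall) to receive a random token.
indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
inputs[indices_random] = torch.randint(vocab_size, labels.shape)[indices_random]
# The final ~10% of chosen positions are left unchanged.
```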
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10332/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10332/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10331
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10331/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10331/comments
https://api.github.com/repos/huggingface/transformers/issues/10331/events
https://github.com/huggingface/transformers/pull/10331
813,535,532
MDExOlB1bGxSZXF1ZXN0NTc3NjY4MDk2
10,331
Add note to resize token embeddings matrix when adding new tokens to voc
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,614
1,614
1,614
MEMBER
null
Closes https://github.com/huggingface/transformers/issues/10319
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10331/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10331/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10331", "html_url": "https://github.com/huggingface/transformers/pull/10331", "diff_url": "https://github.com/huggingface/transformers/pull/10331.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10331.patch", "merged_at": 1614005301000 }
https://api.github.com/repos/huggingface/transformers/issues/10330
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10330/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10330/comments
https://api.github.com/repos/huggingface/transformers/issues/10330/events
https://github.com/huggingface/transformers/issues/10330
813,515,833
MDU6SXNzdWU4MTM1MTU4MzM=
10,330
[DeepSpeed] strange learning rate schedule in linear_schedule_with_warmup
{ "login": "tomohideshibata", "id": 16042472, "node_id": "MDQ6VXNlcjE2MDQyNDcy", "avatar_url": "https://avatars.githubusercontent.com/u/16042472?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomohideshibata", "html_url": "https://github.com/tomohideshibata", "followers_url": "https://api.github.com/users/tomohideshibata/followers", "following_url": "https://api.github.com/users/tomohideshibata/following{/other_user}", "gists_url": "https://api.github.com/users/tomohideshibata/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomohideshibata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomohideshibata/subscriptions", "organizations_url": "https://api.github.com/users/tomohideshibata/orgs", "repos_url": "https://api.github.com/users/tomohideshibata/repos", "events_url": "https://api.github.com/users/tomohideshibata/events{/privacy}", "received_events_url": "https://api.github.com/users/tomohideshibata/received_events", "type": "User", "site_admin": false }
[ { "id": 2659267025, "node_id": "MDU6TGFiZWwyNjU5MjY3MDI1", "url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed", "name": "DeepSpeed", "color": "4D34F7", "default": false, "description": "" } ]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "Incidentally a bug fix was just merged as part of: https://github.com/huggingface/transformers/pull/10310\r\n- the scheduler step was getting run twice.\r\n\r\nCould you please re-test with `transformers` master?\r\n\r\nThank you!\r\n", "Thank you for your response.\r\n\r\nI have tested the latest version, but the following error occurred. There is something wrong in the lr initialization.\r\n```\r\n..\r\n File \"run_clm.py\", line 376, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/transformers/trainer.py\", line 1054, in train\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n.. \r\n File \"/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/deepspeed/runtime/lr_schedules.py\", line 728, in get_last_lr\r\n assert getattr(self, '_last_lr', None) is not None, \"need to call step() first\"\r\nAssertionError: need to call step() first\r\n``` ", "Thank you for testing with the master version.\r\n\r\nPlease always post the full backtrace and the full command line you used - otherwise it's impossible to reproduce the problem and know how to fix it.\r\n\r\nThank you.", "Sorry. I just ran the same command shown above, and the full error is as follows:\r\n\r\n```\r\n File \"run_clm.py\", line 417, in <module>\r\n main()\r\n File \"run_clm.py\", line 376, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/transformers/trainer.py\", line 1054, in train\r\nTraceback (most recent call last):\r\n File \"run_clm.py\", line 417, in <module>\r\nTraceback (most recent call last):\r\n File \"run_clm.py\", line 417, in <module>\r\n self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)\r\n File \"/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/transformers/trainer.py\", line 1135, in _maybe_log_save_evaluate\r\nTraceback (most recent call last):\r\n File \"run_clm.py\", line 417, in <module>\r\n main()\r\n File \"run_clm.py\", line 376, in main\r\n if version.parse(torch.__version__) >= version.parse(\"1.4\")\r\n File \"/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/deepspeed/runtime/lr_schedules.py\", line 728, in get_last_lr\r\n main()\r\n File \"run_clm.py\", line 376, in main\r\n assert getattr(self, '_last_lr', None) is not None, \"need to call step() first\"\r\nAssertionErrortrain_result = trainer.train(resume_from_checkpoint=checkpoint): need to call step() first\r\n\r\n File \"/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/transformers/trainer.py\", line 1054, in train\r\n main()\r\n File \"run_clm.py\", line 376, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/transformers/trainer.py\", line 1054, in train\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/transformers/trainer.py\", line 1054, in train\r\n self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)\r\n File \"/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/transformers/trainer.py\", line 1135, in _maybe_log_save_evaluate\r\n self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)\r\n File 
\"/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/transformers/trainer.py\", line 1135, in _maybe_log_save_evaluate\r\n self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)\r\n File \"/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/transformers/trainer.py\", line 1135, in _maybe_log_save_evaluate\r\n if version.parse(torch.__version__) >= version.parse(\"1.4\")\r\n File \"/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/deepspeed/runtime/lr_schedules.py\", line 728, in get_last_lr\r\n if version.parse(torch.__version__) >= version.parse(\"1.4\")\r\n File \"/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/deepspeed/runtime/lr_schedules.py\", line 728, in get_last_lr\r\n if version.parse(torch.__version__) >= version.parse(\"1.4\")\r\n File \"/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/deepspeed/runtime/lr_schedules.py\", line 728, in get_last_lr\r\n assert getattr(self, '_last_lr', None) is not None, \"need to call step() first\"\r\nAssertionError: need to call step() first\r\n assert getattr(self, '_last_lr', None) is not None, \"need to call step() first\"\r\nAssertionError: need to call step() first\r\n assert getattr(self, '_last_lr', None) is not None, \"need to call step() first\"\r\nAssertionError: need to call step() first\r\n 1%|▋ | 10/1095 [00:08<16:13, 1.11it/s]\r\n```\r\n\r\nThanks.", "Great, thank you, I'm able to reproduce this problem. Let me investigate and I will get back to you with a solution. ", "I understand the problem.\r\n\r\nThe optimizer doesn't kick in until a much later step, so `lr_scheduler` doesn't get its first `step()` yet and `._maybe_log_save_evaluate` fails to retrieve `get_last_lr` since there wasn't any yet.\r\n\r\nA quick workaround is to add `\"initial_scale_power\": 1,`, which will force the optimizer to churn from step one.\r\n```\r\n \"fp16\": {\r\n \"enabled\": true,\r\n \"loss_scale\": 0,\r\n \"loss_scale_window\": 1000,\r\n \"initial_scale_power\": 1,\r\n \"hysteresis\": 2,\r\n \"min_loss_scale\": 1\r\n },\r\n```\r\nbut it might not be an optimal solution. https://www.deepspeed.ai/docs/config-json/#fp16-training-options\r\n\r\nI will think of how to resolve this correctly, but meanwhile please let me know if that resolves the scheduler issue.\r\n\r\nto explain - when you use deepspeed's fp16 it skips the optimizer/scheduler calls until the OVERFLOW is no more. And you'd see the following in the log:\r\n\r\n```\r\nOVERFLOW! Rank 0 Skipping step. Attempted loss scale: 4294967296, reducing to 4294967296\r\nOVERFLOW! Rank 0 Skipping step. Attempted loss scale: 4294967296, reducing to 2147483648.0\r\nOVERFLOW! Rank 0 Skipping step. Attempted loss scale: 2147483648.0, reducing to 1073741824.0\r\n```\r\n\r\nThis is also probably why you see an odd behavior as you reported originally (besides the double step bug I fixed)", "OK, this PR should work too if you would like to try it instead: https://github.com/huggingface/transformers/pull/10362\r\n", "Thank you for your work.\r\n\r\nI set `\"initial_scale_power\": 1`, re-ran the command, and the training finished without errors.\r\n \r\n![image](https://user-images.githubusercontent.com/16042472/108927816-622fea00-7684-11eb-8071-c3f101d21c9c.png)\r\n\r\nAfter the PR #10362 is merged into master, I will try it. 
Thanks.", "FYI, it has been merged.", "Thanks.\r\n\r\nI have tested the latest version (without setting `\"initial_scale_power\": 1`), and the learning rate behavior is as expected! \r\n\r\n![image](https://user-images.githubusercontent.com/16042472/108942404-47687000-769a-11eb-82af-a68bf1815345.png)\r\n\r\nThanks for your work. It is very useful to use deepspeed in transformers.", "Thank you for your feedback and supporting this problem fixing process, @tomohideshibata " ]
1,614
1,614
1,614
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.2 - Platform: Linux - Python version: 3.7.3 - PyTorch version (GPU?): 1.7 (yes) - Tensorflow version (GPU?): N/A - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes (DeepSpeed) ### Who can help @stas00 <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): GPT-2 The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce I am trying using deepspeed for run_clm.py to train GPT-2 (from scratch). I want to use the same scheduler (`linear_schedule_with_warmup`) and `optimizer` as ones used in run_clm.py. So, the `scheduler` and `optimizer` sections are removed in `examples/tests/deepspeed/ds_config.json`, and the original ones are used. My `ds_config.json` is as follows: ``` { "fp16": { "enabled": true, "loss_scale": 0, "loss_scale_window": 1000, "hysteresis": 2, "min_loss_scale": 1 }, "zero_optimization": { "stage": 2, "allgather_partitions": true, "allgather_bucket_size": 2e8, "overlap_comm": true, "reduce_scatter": true, "reduce_bucket_size": 2e8, "contiguous_gradients": true, "cpu_offload": true }, "zero_allow_untested_optimizer": true, "steps_per_print": 2000, "wall_clock_breakdown": false } ``` I ran the following command (using 4GPUs in one node): $ cd examples/language-modeling/ $ deepspeed run_clm.py \ --output_dir=/somewhere \ --model_type=gpt2 \ --do_train \ --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 \ --tokenizer_name gpt2 \ --block_size=512 \ --num_train_epochs=5 \ --warmup_steps=100 \ --learning_rate=2e-5 \ --per_device_train_batch_size=32 \ --per_device_eval_batch_size=32 \ --save_steps=10000 \ --save_total_limit=5 \ --dataloader_drop_last \ --deepspeed ds_config.json \ --logging_steps=10 The learning rate schedule was strange. The following is a screenshot of tensorboard. ![image](https://user-images.githubusercontent.com/16042472/108629197-2fc69700-74a2-11eb-9d05-c8efd476c489.png) The initial learning rate was 1e-5, which should be 0. 
The learning rate went up to 2e-5 (it was OK), and went down to 0 around the middle (before the end), which was strange. I tested a `WarmupDecayLR` scheduler in `deepspeed` (without `transformers`), and it seemed OK. So, I think the utilization of this scheduler in `transformers` is strange. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior The learning rate schedule through `deepspeed` should be the same as the original one used in `run_clm.py`. <!-- A clear and concise description of what you would expect to happen. -->
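For reference, a minimal sketch of the shape `linear_schedule_with_warmup` is expected to produce (a re-implementation of the schedule formula for illustration, not the library code): the learning rate climbs linearly from 0 to the peak over the warmup steps, then decays linearly to 0 at the final training step.

```python
def linear_schedule_with_warmup(step, num_warmup_steps, num_training_steps, peak_lr):
    """Expected schedule shape: linear warmup to peak_lr, then linear decay to 0."""
    if step < num_warmup_steps:
        return peak_lr * step / max(1, num_warmup_steps)
    return peak_lr * max(0.0, (num_training_steps - step) / max(1, num_training_steps - num_warmup_steps))

# Example: 2e-5 peak, 100 warmup steps, 1000 total steps.
for step in (0, 50, 100, 550, 1000):
    print(step, linear_schedule_with_warmup(step, 100, 1000, 2e-5))
```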
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10330/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10330/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10329
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10329/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10329/comments
https://api.github.com/repos/huggingface/transformers/issues/10329/events
https://github.com/huggingface/transformers/issues/10329
813,500,123
MDU6SXNzdWU4MTM1MDAxMjM=
10,329
Raise an error instead of a warning when model files are not loaded correctly
{ "login": "hasansalimkanmaz", "id": 49716619, "node_id": "MDQ6VXNlcjQ5NzE2NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/49716619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hasansalimkanmaz", "html_url": "https://github.com/hasansalimkanmaz", "followers_url": "https://api.github.com/users/hasansalimkanmaz/followers", "following_url": "https://api.github.com/users/hasansalimkanmaz/following{/other_user}", "gists_url": "https://api.github.com/users/hasansalimkanmaz/gists{/gist_id}", "starred_url": "https://api.github.com/users/hasansalimkanmaz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hasansalimkanmaz/subscriptions", "organizations_url": "https://api.github.com/users/hasansalimkanmaz/orgs", "repos_url": "https://api.github.com/users/hasansalimkanmaz/repos", "events_url": "https://api.github.com/users/hasansalimkanmaz/events{/privacy}", "received_events_url": "https://api.github.com/users/hasansalimkanmaz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Today, I have encountered a bug in our codebase due to this issue. The model loading wasn't ok for some reason and the code just logged the warnings. I would like to get an error instead of a warning. I can't monitor thousands of models that are running in production for this particular warning message. Could you help me to work on this issue?" ]
1,614
1,649
1,619
CONTRIBUTOR
null
# 🚀 Feature request Currently, when I initialize a model and my pre-trained model files don't fully match my model architecture, the code silently logs the event and warns the user. I think it would be better to have a flag that stops training if the model weights are not loaded as expected. ## Motivation With the current implementation, a user can train a model from scratch without realizing it if they don't read the logs carefully (even though they should). ## Your contribution I am open to working on this issue. If you have any idea about how to implement it, let me know. I can start working on it in the coming weeks (not right now).
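Until such a flag exists, one way to fail fast is to inspect the loading report yourself. Below is a sketch using the `output_loading_info=True` keyword of `from_pretrained`, which returns the lists of missing and unexpected weight names alongside the model (note that benign mismatches, such as the checkpoint's pretraining head or a newly initialized task head, will also show up here, so the check may need filtering in practice):

```python
from transformers import AutoModel

model, loading_info = AutoModel.from_pretrained("bert-base-uncased", output_loading_info=True)

# Turn the silent warning into a hard failure if the checkpoint and architecture disagree.
if loading_info["missing_keys"] or loading_info["unexpected_keys"]:
    raise RuntimeError(f"Model weights were not fully loaded: {loading_info}")
```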
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10329/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10329/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10328
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10328/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10328/comments
https://api.github.com/repos/huggingface/transformers/issues/10328/events
https://github.com/huggingface/transformers/pull/10328
813,460,260
MDExOlB1bGxSZXF1ZXN0NTc3NjA1MDA1
10,328
DeBERTa-v2 fixes
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,613
1,613
1,613
MEMBER
null
Applying @BigBird01's fixes.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10328/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10328/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10328", "html_url": "https://github.com/huggingface/transformers/pull/10328", "diff_url": "https://github.com/huggingface/transformers/pull/10328.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10328.patch", "merged_at": 1613997918000 }
https://api.github.com/repos/huggingface/transformers/issues/10327
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10327/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10327/comments
https://api.github.com/repos/huggingface/transformers/issues/10327/events
https://github.com/huggingface/transformers/issues/10327
813,413,439
MDU6SXNzdWU4MTM0MTM0Mzk=
10,327
mBART 50 models not found in model shortcut name list
{ "login": "codingnoobneedshelp", "id": 39620284, "node_id": "MDQ6VXNlcjM5NjIwMjg0", "avatar_url": "https://avatars.githubusercontent.com/u/39620284?v=4", "gravatar_id": "", "url": "https://api.github.com/users/codingnoobneedshelp", "html_url": "https://github.com/codingnoobneedshelp", "followers_url": "https://api.github.com/users/codingnoobneedshelp/followers", "following_url": "https://api.github.com/users/codingnoobneedshelp/following{/other_user}", "gists_url": "https://api.github.com/users/codingnoobneedshelp/gists{/gist_id}", "starred_url": "https://api.github.com/users/codingnoobneedshelp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/codingnoobneedshelp/subscriptions", "organizations_url": "https://api.github.com/users/codingnoobneedshelp/orgs", "repos_url": "https://api.github.com/users/codingnoobneedshelp/repos", "events_url": "https://api.github.com/users/codingnoobneedshelp/events{/privacy}", "received_events_url": "https://api.github.com/users/codingnoobneedshelp/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "Hi @codingnoobneedshelp , thank for reporting this issue. Right now MBart50Tokenizer does not work with `AutoTokenizer`.\r\nThere will be a new script for translation in the next ~2 weeks that will handle this issue. For now, you could just modify the script to use `MBart50Tokenizer`, instead of `AutoTokenizer`.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Hi @codingnoobneedshelp \r\n\r\nThis is now resolved, the `run_translation.py` script now supports fine-tuning mBART-50." ]
1,613
1,618
1,618
NONE
null
Transformers version: 4.4.0.dev0 Hello, I'm trying to fine-tune mBART 50 with your seq2seq examples. Getting this error: Model name 'facebook/mbart-large-50' not found in model shortcut name list (facebook/mbart-large-en-ro, facebook/mbart-large-cc25). Traceback (most recent call last): File "/content/transformers/examples/seq2seq/run_seq2seq.py", line 668, in <module> main() File "/content/transformers/examples/seq2seq/run_seq2seq.py", line 349, in main use_auth_token=True if model_args.use_auth_token else None, File "/usr/local/lib/python3.6/dist-packages/transformers/models/auto/tokenization_auto.py", line 399, in from_pretrained return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 1789, in from_pretrained resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 1806, in _from_pretrained **(copy.deepcopy(kwargs)), File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 1860, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/models/mbart/tokenization_mbart.py", line 109, in __init__ self.set_src_lang_special_tokens(kwargs.get("src_lang", "en_XX")) File "/usr/local/lib/python3.6/dist-packages/transformers/models/mbart/tokenization_mbart.py", line 199, in set_src_lang_special_tokens self.cur_lang_code = self.lang_code_to_id[src_lang] KeyError: None Any Ideas on how to fix this? Thanks
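Following the suggestion in the comments, a minimal sketch of loading the mBART-50 tokenizer directly rather than through `AutoTokenizer` (the language codes are illustrative and should be set to your actual source and target languages):

```python
from transformers import MBart50TokenizerFast

tokenizer = MBart50TokenizerFast.from_pretrained(
    "facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO"
)
inputs = tokenizer("Hello world", return_tensors="pt")
```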
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10327/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10327/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10326
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10326/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10326/comments
https://api.github.com/repos/huggingface/transformers/issues/10326/events
https://github.com/huggingface/transformers/issues/10326
813,348,494
MDU6SXNzdWU4MTMzNDg0OTQ=
10,326
[DeepSpeed] unable to increase batch size from 4 for T5-3b with 2x 32GB V100 GPUs
{ "login": "saichandrapandraju", "id": 41769919, "node_id": "MDQ6VXNlcjQxNzY5OTE5", "avatar_url": "https://avatars.githubusercontent.com/u/41769919?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saichandrapandraju", "html_url": "https://github.com/saichandrapandraju", "followers_url": "https://api.github.com/users/saichandrapandraju/followers", "following_url": "https://api.github.com/users/saichandrapandraju/following{/other_user}", "gists_url": "https://api.github.com/users/saichandrapandraju/gists{/gist_id}", "starred_url": "https://api.github.com/users/saichandrapandraju/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/saichandrapandraju/subscriptions", "organizations_url": "https://api.github.com/users/saichandrapandraju/orgs", "repos_url": "https://api.github.com/users/saichandrapandraju/repos", "events_url": "https://api.github.com/users/saichandrapandraju/events{/privacy}", "received_events_url": "https://api.github.com/users/saichandrapandraju/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,613
1,614
1,614
NONE
null
Hi, I'm trying T5-3b with DeepSpeed on 2 V100-32GB GPU's. But I'm unable to increase batch size beyond 4 with max i/p sequence length of 512 and max o/p sequence length of 4. Previously I tried with t5.parallelize() [ i.e, without DeepSpeed ] on same setup and was able to train with batch size of 2. Below are my training args and DeepSpeed's config - ``` training_args = Seq2SeqTrainingArguments( output_dir='./seq_out/results', overwrite_output_dir=True, evaluation_strategy="epoch", per_device_train_batch_size=4, per_device_eval_batch_size=4, learning_rate=3e-5, weight_decay=0.01, num_train_epochs=2, warmup_steps=500, logging_dir='./seq_out/logs', logging_steps=10, load_best_model_at_end=True, deepspeed='ds_config.json' ) ``` ``` { "fp16": { "enabled": true, "loss_scale": 0, "loss_scale_window": 1000, "hysteresis": 2, "min_loss_scale": 1 }, "zero_optimization": { "stage": 2, "allgather_partitions": true, "allgather_bucket_size": 1.5e8, "overlap_comm": true, "reduce_scatter": true, "reduce_bucket_size": 1.5e8, "contiguous_gradients": true, "cpu_offload": true }, "zero_allow_untested_optimizer": true, "optimizer": { "type": "AdamW", "params": { "lr": 3e-5, "betas": [ 0.8, 0.999 ], "eps": 1e-8, "weight_decay": 3e-7 } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": 0, "warmup_max_lr": 3e-5, "warmup_num_steps": 500 } }, "steps_per_print": 2000, "wall_clock_breakdown": false } ``` Observed similar behavior with T5-large as well . I was able to train with 14 batch size ( same i/p, o/p seq length and config.json as above ) on single GPU ( 32 GB V100 ) BUT when executing the same on 2x 32 GB GPUs, I was not able to go beyond 14 batch size( which I was able to train with 1 GPU itself) and memory from both the GPUs was consumed (31 GB and 28 GB). Reducing `allgather_bucket_size ` and `reduce_bucket_size` didn't help in increasing batch size. But I expected more batch size with DeepSpeed and CPU offloading. Is this fine or am I making something wrong which is hindering deepspeed's capability..?
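Separately from the ZeRO settings, one common way to raise the effective batch size when per-device memory is the limiting factor is gradient accumulation. A sketch (not taken from the thread; the numbers are illustrative):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./seq_out/results",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,  # effective batch size = 4 * 8 * number_of_gpus
    deepspeed="ds_config.json",
)
```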
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10326/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10326/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10325
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10325/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10325/comments
https://api.github.com/repos/huggingface/transformers/issues/10325/events
https://github.com/huggingface/transformers/issues/10325
813,322,833
MDU6SXNzdWU4MTMzMjI4MzM=
10,325
Input mismatch with TFDistilBert training from scratch inspite of cross checking input dimensions
{ "login": "DarshanDeshpande", "id": 39432636, "node_id": "MDQ6VXNlcjM5NDMyNjM2", "avatar_url": "https://avatars.githubusercontent.com/u/39432636?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DarshanDeshpande", "html_url": "https://github.com/DarshanDeshpande", "followers_url": "https://api.github.com/users/DarshanDeshpande/followers", "following_url": "https://api.github.com/users/DarshanDeshpande/following{/other_user}", "gists_url": "https://api.github.com/users/DarshanDeshpande/gists{/gist_id}", "starred_url": "https://api.github.com/users/DarshanDeshpande/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DarshanDeshpande/subscriptions", "organizations_url": "https://api.github.com/users/DarshanDeshpande/orgs", "repos_url": "https://api.github.com/users/DarshanDeshpande/repos", "events_url": "https://api.github.com/users/DarshanDeshpande/events{/privacy}", "received_events_url": "https://api.github.com/users/DarshanDeshpande/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello!\r\n\r\nAs first, I can see several issues on the way you want to train the model:\r\n1. The way you build your dataset is not correct. More precisely, in the `tokenize` function, the first element of the tuple (`a.ids`) is taken as the input, and the second (`a.attention_mask`) is taken as the label. Hence the error you get.\r\n2. When you instantiate your `tf.keras.models.Model` you define the `inputs` and the `outputs` to be the same, this is not correct either, you have to run the model once and then give this output.", "@jplu I realized my mistake and I changed the code to this\r\n\r\n```\r\ndef tokenize(sentence):\r\n sentence = sentence.numpy().decode('utf-8')\r\n a = tokenizer.encode(sentence)\r\n return tf.constant(a.ids,tf.int32), tf.constant(a.attention_mask, tf.int32)\r\n\r\ndef get_tokenized(sentence):\r\n return tf.py_function(tokenize, inp=[sentence], Tout=[tf.int32,tf.int32])\r\n\r\ndef get_tokenized_final(a,b):\r\n return (a,b), None\r\n\r\ndataset = tf.data.Dataset.from_tensor_slices(lines)\r\ndataset = dataset.map(get_tokenized, num_parallel_calls=tf.data.AUTOTUNE).map(get_tokenized_final, num_parallel_calls=tf.data.AUTOTUNE)\r\n\r\nimport tensorflow as tf\r\n\r\nconfig = DistilBertConfig(vocab_size=30000)\r\nmodel = TFDistilBertForMaskedLM(config)\r\ninp1 = tf.keras.layers.Input(shape=(128,), dtype=tf.int32, name=\"input_ids\")\r\ninp2 = tf.keras.layers.Input(shape=(128,), dtype=tf.int32, name=\"attention_mask\")\r\nop = model([inp1,inp2])\r\nmodel = tf.keras.models.Model(inputs=[inp1, inp2], outputs=model.output)\r\n```\r\nNow the model throws two warnings \r\n```\r\nWARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nWARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\n```\r\nand then throws the final error\r\n```\r\nValueError: in user code:\r\n\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:805 train_function *\r\n return step_function(self, iterator)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:795 step_function **\r\n outputs = model.distribute_strategy.run(run_step, args=(data,))\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:1259 run\r\n return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica\r\n return self._call_for_each_replica(fn, args, kwargs)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica\r\n return fn(*args, **kwargs)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:788 run_step **\r\n outputs = model.train_step(data)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:757 train_step\r\n self.optimizer.minimize(loss, self.trainable_variables, tape=tape)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:498 minimize\r\n return self.apply_gradients(grads_and_vars, name=name)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:598 apply_gradients\r\n grads_and_vars = 
optimizer_utils.filter_empty_gradients(grads_and_vars)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/utils.py:79 filter_empty_gradients\r\n ([v.name for _, v in grads_and_vars],))\r\n\r\n ValueError: No gradients provided for any variable: ['tf_distil_bert_for_masked_lm_1/distilbert/embeddings/word_embeddings/weight:0', 'tf_distil_bert_for_masked_lm_1/distilbert/embeddings/position_embeddings/embeddings:0', 'tf_distil_bert_for_masked_lm_1/distilbert/embeddings/LayerNorm/gamma:0', 'tf_distil_bert_for_masked_lm_1/distilbert/embeddings/LayerNorm/beta:0', 'tf_distil_bert_for_masked_lm_1/distilbert/transformer/layer_._0/attention/q_lin/kernel:0', 'tf_distil_bert_for_masked_lm_1/distilbert/transformer/layer_._0/attention/q_lin/bias:0', 'tf_distil_bert_for_masked_lm_1/distilbert/transformer/layer_._0/attention/k_lin/kernel:0', 'tf_distil_bert_for_masked_lm_1/distilbert/transformer/layer_._0/attention/k_lin/bias:0', 'tf_distil_bert_for_masked_lm_1/distilbert/transformer/layer_._0/attention/v_lin/kernel:0', 'tf_distil_bert_for_masked_lm_1/distilbert/transformer/layer_._0/attention/v_lin/bias:0', 'tf_distil_bert_for_masked_lm_1/distilbert/transformer/layer_._0/attention/out_lin/kernel:0', 'tf_distil_bert_for_masked_lm_1/distilbert/transformer/layer_._0/attention/out_lin/bias:0', 'tf_distil_bert_for_masked_lm_1/distilbert/transformer/layer_._0/sa_layer_norm/gamma:0', 'tf_distil_bert_for_masked_lm_1/distilbert/transformer/layer_._0/sa_layer_norm/beta:0', 'tf_distil_bert_for_masked_lm_1/distilbert/transformer/layer_._0/ffn/lin1/kernel:0', 'tf_distil_bert_for_masked_lm_1/distilbert/transformer/layer_._0/ffn/lin1/bias:0', 'tf_distil_bert_for_masked_lm_1/distilbert/tra...\r\n```\r\nAny idea what I am doing wrong?", "You cannot do `model.output`, as said in my previous message you have to run the model once to get how the output looks like :)", "@jplu Could you tell me exactly what you mean by \"run\" the model? If I pass a sample array with all ones, it gives me a Broadcasting error as follows \r\n```\r\nconfig = DistilBertConfig(vocab_size=30000)\r\nmodel = TFDistilBertForMaskedLM(config)\r\ninp1 = tf.keras.layers.Input(shape=(128,), dtype=tf.int32, name=\"input_ids\")\r\ninp2 = tf.keras.layers.Input(shape=(128,), dtype=tf.int32, name=\"attention_mask\")\r\n_ = model([inp1,inp2])\r\n\r\n# Error is thrown for this call\r\na = tf.ones((128,),dtype=tf.int32)\r\nmodel((a,a))\r\n```\r\nError is as attached\r\n```\r\nInvalidArgumentError: Incompatible shapes: [512,768] vs. 
[128,768] [Op:BroadcastTo]\r\n```\r\nMore specifically the error is raised in `modeling_tf_distilbert.py `\r\n```\r\n 183 if position_ids is None:\r\n--> 184 position_embeds = self.position_embeddings(position_ids=inputs_embeds)\r\n 185 else:\r\n 186 position_embeds = self.position_embeddings(position_ids=position_ids)\r\n```\r\n-----------------------------------------------------------------------------------------\r\nIf by \"run\" you mean calling fit on the model then it raises the same gradient error\r\n", "Here a dummy example:\r\n```python\r\nimport tensorflow as tf\r\nfrom transformers import TFDistilBertForMaskedLM, DistilBertTokenizer, DistilBertConfig\r\n\r\nconfig = DistilBertConfig(vocab_size=30000)\r\nmodel = TFDistilBertForMaskedLM(config)\r\ninp1 = tf.keras.layers.Input(shape=(128,), dtype=tf.int32, name=\"input_ids\")\r\ninp2 = tf.keras.layers.Input(shape=(128,), dtype=tf.int32, name=\"attention_mask\")\r\noutput = model([inp1,inp2])\r\nmodel = tf.keras.models.Model(inputs=[inp1,inp2], outputs=[output])\r\ntokenizer = DistilBertTokenizer.from_pretrained(\"distilbert-base-uncased\")\r\ndata = tokenizer([\"Hello1\", \"Hello2\", \"Hello3\"], truncation=True, max_length=128, padding=\"max_length\", return_tensors=\"tf\")\r\nlabels = tf.ones((3, 128), dtype=tf.int32)\r\nX = tf.data.Dataset.from_tensor_slices((dict(data), labels)).batch(1)\r\nloss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)\r\nmodel.compile(loss=loss, optimizer=\"adam\")\r\nmodel.fit(X, epochs=1)\r\n```", "@jplu Thanks for this but this tokenizes the data and then loads it as a tf.data.Dataset. I was looking for an implementation where the tokenization can be integrated in the pipeline itself and can be done on the fly. I found [this](https://github.com/tensorflow/tensorflow/issues/38762) issue on tensorflow but there are no fixes for it yet. Do you have any idea how to do this because my dataset is big enough to fit in colab memory but cannot be fully tokenized in memory?", "Sorry you cannot do this.", "Okay. Thanks for all the help!" ]
1,613
1,614
1,614
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.2 - Platform: Colab - Python version: 3.6 - PyTorch version (GPU?): None - Tensorflow version (GPU?): 2.4.1 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @jplu <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): TFDistilBert The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ``` tokenizer = tokenizers.BertWordPieceTokenizer("/content/drive/Shareddrives/Darshan's Shared Driver/NewTrainingData/Tokenizer/vocab.txt", strip_accents=False) tokenizer.enable_padding(length=128) tokenizer.enable_truncation(max_length=128) def tokenize(sentence): sentence = sentence.numpy().decode('utf-8') a = tokenizer.encode(sentence) return tf.constant(a.ids,tf.int32), tf.constant(a.attention_mask, tf.int32) def get_tokenized(sentence): return tf.py_function(tokenize, inp=[sentence], Tout=[tf.int32,tf.int32]) with open("TextFile.txt") as f: lines = f.readlines() dataset = tf.data.Dataset.from_tensor_slices(lines) dataset = dataset.map(get_tokenized, num_parallel_calls=tf.data.AUTOTUNE) config = DistilBertConfig(vocab_size=30000) model = TFDistilBertForMaskedLM(config) inp1 = tf.keras.layers.Input(shape=(128,), dtype=tf.int32, name="input_ids") inp2 = tf.keras.layers.Input(shape=(128,), dtype=tf.int32, name="attention_mask") op = model([inp1, inp2]) model = tf.keras.models.Model(inputs=[inp1, inp2], outputs=op) model.compile(tf.keras.optimizers.Adam(1e-4)) model.fit(dataset.batch(32).prefetch(tf.data.AUTOTUNE), epochs=1) ``` Error: ``` /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:805 train_function * return step_function(self, iterator) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:795 step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:1259 run return 
self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica return fn(*args, **kwargs) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:788 run_step ** outputs = model.train_step(data) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:754 train_step y_pred = self(x, training=True) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py:998 __call__ input_spec.assert_input_compatibility(self.input_spec, inputs, self.name) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/input_spec.py:207 assert_input_compatibility ' input tensors. Inputs received: ' + str(inputs)) ValueError: Layer model expects 2 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=<unknown> dtype=int32>] ``` I have cross checked the output shape and input dimensions. If this is not the correct way then how exactly do I train a TF DistilBert model from scratch? <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Training should start as soon as fit is called <!-- A clear and concise description of what you would expect to happen. -->
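As the maintainer's comments work out, the `fit` error comes from the dataset yielding a plain `(ids, mask)` tuple, which Keras interprets as `(inputs, labels)` and therefore feeds only one tensor to a model that expects two. Below is a sketch of a mapping that keys both tensors by the `Input` layer names and returns a separate label tensor (it reuses `dataset` and `get_tokenized` from the snippet above; the label choice is a placeholder for illustration only):

```python
import tensorflow as tf

def pack(input_ids, attention_mask):
    # Dict keys are matched to the named Input layers, so both tensors are treated as inputs;
    # the second tuple element is the label (placeholder: the token ids themselves).
    return {"input_ids": input_ids, "attention_mask": attention_mask}, input_ids

dataset = dataset.map(get_tokenized, num_parallel_calls=tf.data.AUTOTUNE)
dataset = dataset.map(pack, num_parallel_calls=tf.data.AUTOTUNE)
```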
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10325/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10324
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10324/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10324/comments
https://api.github.com/repos/huggingface/transformers/issues/10324/events
https://github.com/huggingface/transformers/pull/10324
813,257,914
MDExOlB1bGxSZXF1ZXN0NTc3NDM1MjQ3
10,324
[PretrainedFeatureExtractor] + Wav2Vec2FeatureExtractor, Wav2Vec2Processor, Wav2Vec2Tokenizer
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> This approach looks great and doesn't seem limiting at all. Implementing it for Wav2Vec2/SpeechToTextTransformer and refactoring/upstreaming methods down the road seems like a good implementation roadmap.\r\n> \r\n> Regarding the implementation of `FeatureProcessors`, what do you have in mind regarding understandability/explicitness? Do you expect something like models, where we aim for maximum accessibility, with copy/pastes and single-file containers, or do you expect something like tokenizers, where some tokenizers inherit from others while modifying certain aspects, and some level of abstraction, making them harder to decypher?\r\n> \r\n> I'm asking because I think it's relevant to the different preprocessing that can be handled by the feature processors. For example, normalizing or converting to MFCCs seems like it would be something quite widespread among speech-based feature processors, do we want to have that in each implementation (abstraction-free) or will the goal be to upstream these methods in the parent class once we identify similarities among feature processors?\r\n\r\n\r\nYeah good question! To be honest, I'm not really sure yet. I would like to enforce the rule that feature extractors can only inherit from `PreTrainedFeatureExtractor` and no other feature extractor. IMO, the best approach to begin with is to limit (as you've suggested) the user-facing API for `FeatureProcessor` to `__call__`, `from_pretrained()`, `save_pretrained()` and maybe something like `from_file()` and then in the beginning. \r\nI think a method like `pad()` is general enough to have this method in the beginning be implemented in `PreTrainedFeatureExtractor` because every extractor will need to do padding no matter what. \r\n\r\nFor pretty much all other methods (actually including `normalization()`), I would copy-paste them into each feature processor and make sure that they are private methods `_normalize()` so that we can later still do some refactoring here if needed.\r\n\r\nSo in general my strategy would be to have as little abstraction as possible - *e.g.* copy-paste classes such as those: https://github.com/huggingface/transformers/blob/19c14579f0c7f5f15c5a5115b2fd18582e61ac3b/src/transformers/models/speech_to_text_transformer/tokenization_speech_to_text_transformer.py#L239 to each feature extractor - and then when having more models maybe move some things upstream into the `PretrainedFeatureExtractor` file ", "Thanks for explaining, sounds like a good approach to me! Thanks for drafting the proposal." ]
1,613
1,614
1,614
MEMBER
null
# 🚨🚨🚨**IMPORTANT** Wav2Vec2 repositories that were added before 4.4 should make sure to manually add a feature extractor class. This can be done as easily as doing: ``` git clone <your/repo/> cd <your/repo/> ``` ```python from transformers import Wav2Vec2FeatureExtractor feat_extract = Wav2Vec2FeatureExtractor() # or feat_extract = Wav2Vec2FeatureExtractor(return_attention_mask=True) for lv60 models feat_extract.save_pretrained("./") ``` ``` git add . && git commit -m "add feature processor file" && git push ``` # What does this PR do? This is a new design for how to handle the feature extraction + tokenization functionality for speech models in a single class. Speech models connect the two different formats `speech` and `text`. In order to have more flexibility when extending Transformers to speech tasks, such as ASR, I propose a composite `Processor` class that has both a `tokenizer` and a `feature_extractor` attribute, similar to how composite tokenizer are currently handled for models, such as RAG, [see](https://github.com/huggingface/transformers/blob/88605f37a6fe7bde336f52700229d619b5ffa0f6/src/transformers/models/rag/tokenization_rag.py#L28). For ASR models the output of the model is text so that a `tokenizer` is required and the input is a sequence of `feature_vectors` (which includes raw waveform features) so that a `feature_extractor` is required. The tokenizer is hereby of the exact same format as our current tokenizer implementations (*e.g.* Speech2TextTransformer models train their tokenizers the same way NLP models train their tokenizers, see section 4.1 [here](https://arxiv.org/pdf/2007.10310.pdf)). Feature processors on the other hand are of a completely new format and therefore deserve a `PreTrainedFeatureExtractor` class that mostly handles the loading & saving for all feature extractors and in addition, provides padding functionality. Since feature extractors are deterministic by nature (feature extractors are not trained, as tokenizers can be), we only need a single `feature_extractor_config.json` file to load and save the class IMO. To meet the demands of a single model processing class that can handle both the text and speech modality while being flexible enough for different kinds of speech models, I propose to add a composite `SpeechProcessor` class for each speech model that has both a `tokenizer` and `feature_extractor` attribute and in short, would look as follows for Wav2Vec2: ```python Wav2Vec2Processor: def __init__(feature_extractor: Wav2Vec2FeatureExtractor, tokenizer: Wav2Vec2CTCTokenizer): self.feature_extractor = feature_extractor self.tokenizer = tokenizer Wav2Vec2CTCTokenizer(PreTrainedTokenizer): ... Wav2Vec2FeatureExtractor(PreTrainedFeatureExtractor): ... ``` So this means we leverage all the existing functionalities of the tokenizers for the tokenizer part of the speech models and create a new `PreTrainedFeatureExtractor` to handle general feature extraction functionality. 
The composite `Wav2Vec2Processor` is then in style very similar to `RagTokenizer` and would provide the following functionality to the user: ```python from transformers import Wav2Vec2SpeechProcessor, Wav2Vec2ForCTC model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h") processor = Wav2Vec2SpeechProcessor.from_pretrained("facebook/wav2vec2-base-960h") inputs = processor(raw_waveform, return_tensors="pt", padding="longest") logits = model(**inputs) predicted_ids = torch.argmax(logits, dim=-1) pred_transcription = model.batch_decode(predicted_ids) # Also the processor can then later be used to encode & decode labels, *e.g.* with processor.as_tokenizer(): label_ids = processor(label_str) ``` A couple of advantages of this design: - It makes sense logically. When we add multi-modal models, it is quite natural for me to add composite `...Processor` classes to the library as well - It is general enough to handle a bunch of different use cases. E.g. `Speech2TextTransformers` will have more or less the same feature extractor for the different tasks it was trained on, but will have different tokenizers depending on whether the model was trained on Librispeech/Must-C or Covost (cc @patil-suraj). The current design can handle this very nicely by simply changing the tokenizer - We only need to create a `PretrainedFeatureExtractor` class, all the Speech model's tokenization functionality is handled by the already existing `PreTrainedTokenizer` class. - It's general enough to handle all speech models IMO ## Backwards breaking compatibility `Wav2Vec2Tokenizer` is deprecated and is replaced by a better `Wav2Vec2CTCTokenizer` class that actually can inherit the full tokenizer test suite. `Wav2Vec2Tokenizer` can still be used but is not found in the docs anymore. It was made sure that the tokenizer configs stay the same for backwards compatibility so that I only had to add files for the `Wav2Vec2FeatureProcessor` (see: https://huggingface.co/facebook/wav2vec2-base-960h/commit/dbdb8c54a01c6b0ca8ec79f811970214fb72cecc). Essentially, one is advised to replace `Wav2Vec2Tokenizer` with `Wav2Vec2Processor` in all scripts from now on, whereas the API of `Wav2Vec2Processor` is identical to the API of the old `Wav2Vec2Tokenizer`. **The only big breaking change is that the AutoTokenizer now loads `Wav2Vec2CTCTokenizer` instead of `Wav2Vec2Tokenizer`** ## Review @LysandreJik, @patil-suraj, @sgugger, @thomwolf - this PR is now ready for a complete review. @patil-suraj, it would be very nice if you could do a very thorough review and make sure that this design is 100% compatible with the `Speech2TextTransformersProcessor` that we'll add soon.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10324/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10324/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10324", "html_url": "https://github.com/huggingface/transformers/pull/10324", "diff_url": "https://github.com/huggingface/transformers/pull/10324.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10324.patch", "merged_at": 1614264166000 }
https://api.github.com/repos/huggingface/transformers/issues/10321
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10321/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10321/comments
https://api.github.com/repos/huggingface/transformers/issues/10321/events
https://github.com/huggingface/transformers/issues/10321
812,961,618
MDU6SXNzdWU4MTI5NjE2MTg=
10,321
[Tensor Parallelism] Megatron-LM to transformers
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 2760822153, "node_id": "MDU6TGFiZWwyNzYwODIyMTUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Tensor%20Parallel", "name": "Tensor Parallel", "color": "1AD0A8", "default": false, "description": "" }, { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "@stas00 thanks for starting this thread!\r\n\r\nI guess, in order for everyone to be on the same page, a brief explanation of horizontal parallelism is needed. This would be a good place for future reference and introduce other contributors to the core concepts.\r\n\r\n**NOTE for everyone reading:** If you find any of the explanations below confusing, you can read about Megatron-LM in much more detail in its original paper: https://arxiv.org/pdf/1909.08053.pdf\r\n\r\n\r\n## The core idea\r\n\r\nThe main thing that separates Megatron-style (horizontal) parallelism from vertical parallelism is the way that it splits the model layers between GPUs without the need for idle time during training/inference (i.e. waiting while the previous GPUs complete their work on the previous layers of the model). This makes the whole process much more asynchronous, just like in MapReduce. Here's my rough sketch of how it looks:\r\n![Model parallelism](https://user-images.githubusercontent.com/26864830/108723414-4833c180-7535-11eb-8304-827846e64fec.png)\r\n\r\nNow the question is, how do we split the computation of those layers so that the parallelized model weights would be equivalent to the CPU ones?\r\n\r\n## Parallelized layers\r\nLet's start with a simple building block of any transformer: a fully connected layer (nn.Linear) followed by a nonlinear activation (GeLU). Following the Megatron's paper notation, we can write the dot-product part of it as `Y = GeLU(XA)`, where `X` and `Y` are the input and output vectors, and `A` is the weight matrix.\r\n\r\nIf we look at the computation in matrix form, it's easy to see how the matrix multiplication can be split between multiple GPUs:\r\n![Parallel GEMM (1)](https://user-images.githubusercontent.com/26864830/108731050-36eeb300-753d-11eb-850d-37a095a2fddf.png)\r\nBasically, if we split the weight matrix `A` column-wise across `N` GPUs and perform matrix multiplications `XA_1` through `XA_n` in parallel, then we will end up with `N` output vectors `Y_1, Y_2, ..., Y_n` which can be fed into GeLU independently:\r\n![image](https://user-images.githubusercontent.com/26864830/108733438-90f07800-753f-11eb-9e50-1f03f2687262.png)\r\n\r\nUsing this principle, we can update an MLP of arbitrary depth, without the need for any synchronization between GPUs until the very end, where we need to reconstruct the output vector from shards. The authors provide a helpful illustration for that:\r\n![image](https://user-images.githubusercontent.com/26864830/108734085-21c75380-7540-11eb-9d3e-83882a24ea09.png)\r\n\r\n### Quick note on self-attention\r\nParallelizing the multiheaded attention layers is even simpler, since they are already inherently parallel, due to having multiple independent heads! \r\n![image](https://user-images.githubusercontent.com/26864830/108734544-a2864f80-7540-11eb-92c8-82e08077ec33.png)\r\n\r\n## Practical implementation\r\nIf you want to just dive right in, here are the basic building blocks implemented in Megatron-LM:\r\n\r\n- [ColumnParallelLinear](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/mpu/layers.py#L195)\r\n- [RowParallelLinear](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/mpu/layers.py#L290)\r\n- [ParallelMLP](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/transformer.py#L58)\r\n- [ParallelSelfAttention](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/transformer.py#L112)\r\n\r\nAll of these rely on basic `Scatter`, `Gather` and `Reduce` ops to split and aggregate the weight matrices. 
Thanks to [PyTorch Distributed](https://pytorch.org/tutorials/intermediate/dist_tuto.html), we can use `torch.distributed.all_reduce` and `all_gather` for that, without having to worry about GPU synchronization. The scatter and gather layers just have to define appropriate forward and backward passes like so:\r\n```python\r\ndef _split(input_):\r\n world_size = get_tensor_model_parallel_world_size()\r\n input_list = split_tensor_along_last_dim(input_, world_size)\r\n rank = get_tensor_model_parallel_rank()\r\n output = input_list[rank].contiguous()\r\n return output\r\n\r\ndef _gather(input_):\r\n world_size = get_tensor_model_parallel_world_size()\r\n last_dim = input_.dim() - 1\r\n rank = get_tensor_model_parallel_rank()\r\n tensor_list = [torch.empty_like(input_) for _ in range(world_size)]\r\n tensor_list[rank] = input_\r\n torch.distributed.all_gather(tensor_list, input_, group=get_tensor_model_parallel_group())\r\n output = torch.cat(tensor_list, dim=last_dim).contiguous()\r\n return output\r\n\r\nclass ScatterToModelParallelRegion(torch.autograd.Function):\r\n def forward(ctx, input_):\r\n return _split(input_)\r\n\r\n def backward(ctx, grad_output):\r\n return _gather(grad_output)\r\n\r\nclass GatherFromModelParallelRegion(torch.autograd.Function):\r\n def forward(ctx, input_):\r\n return _gather(input_)\r\n\r\n def backward(ctx, grad_output):\r\n return _split(grad_output)\r\n```\r\n\r\nIn a single transformer layer, there are 4 communication operations in total, for the forward and backward passes:\r\n![image](https://user-images.githubusercontent.com/26864830/108743258-a9659000-7549-11eb-8e3d-445157c5660e.png)\r\n\r\n\r\n## Other things to consider\r\n\r\n#### Parallelized embeddings and output logits\r\nSince the weights of input and output embeddings of BERT/GPT2 are tied, they require a coordinated modification. In the original implementation, the input embedding matrix is parallelized along the vocabulary dimension (column-wise), and the output embeddings' matrix multiplications is parallelized _together with the cross-entropy loss_ to reduce the communication size (see end of section 3 in the paper):\r\n\r\n- [VocabParallelEmbedding](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/mpu/layers.py#L123)\r\n- [parallel_lm_logits](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/language_model.py#L28)\r\n#### Model parallelism-aware Dropout\r\nTransformers have dropout layers outside the model parallel regions before residual connections and within model parallel regions in the self attention block. Because some dropout layers are in a model parallel region, while others are not, we need to treat random number generation carefully to ensure dropout works correctly. See appendix B.2 in the paper for reference.\r\nThe necessary RNG state tracking is implemented in [random.py](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/mpu/random.py)\r\n\r\n#### Hybrid model and data parallelism\r\nCombining horizontal parallelism with data parallelism requires grouping the GPUs in a specific way, as described in appendix B.1:\r\n![image](https://user-images.githubusercontent.com/26864830/108745996-cd76a080-754c-11eb-8a24-1034b573ceec.png)\r\n\r\n", "Phew! 
That felt like a start of a whole blog post :smile: \r\n\r\nAs for porting all of this, I would follow [fairseq's example](https://github.com/pytorch/fairseq/blob/master/fairseq/model_parallel/models/transformer.py) and copy Megatron-LM's parallel layers verbatim into an existing (but separate) implementation of `BertModel` or `GPT2Model` as a proof-of-concept and then work from there. \r\n\r\nAfter the first semi-working prototype we could figure out how to implement the switching mechanism between a homogeneous model and a parallelized one, but it's too early to think about that, IMO. What do you think, @stas00 ?", "Amazing! Thank you for this awesome presentation, @anton-l! This could totally be a great blog post - I agree!\r\n\r\nLet me study the information you shared and I will follow up then!\r\n\r\nUntil then I have a quick suggestion: Do you have an easy access to 2 gpus? That would be enough to make a\r\nPoC work and then we can find a larger cluster with more gpus to experiment on and eventually port the 8 splits from fairseq. \r\n\r\nI suppose it'd be easier to implement this for Megatron-LM, but the main use would be t5 and gpt2 where we have most huge models at the moment. So we could start there as well. If it works for you. Which also can be worked on independently of your Megatron-LM PR.", "Regarding the setup: I can borrow a second gpu for the time being, that shouldn't be a problem :)\n\nAs for the models, I think GPT2 is a good candidate for our experiments, since the transformers' implementation is already stable and has multiple smaller checkpoints for quick demos.\n\nAlso, I don't think we should even be too concerned about porting the 8 original splits of fairseq's megatron, since I've already concatenated them for the model's PR. If everything was done correctly, this potentially allows us to create an arbitrary split across 2^n devices, not just 8.", "Sounds good on all accounts. GPT2 would be perfect, @anton-l! \r\n\r\nI had the same thought about just splitting your merged model if needed.\r\n\r\nPlease let us know how we can support you in this endeavor.\r\n\r\njust for you to be aware, I mentioned in the other thread the DeepSpeed version of their Megatron-LM port - perhaps theirs is newer - I haven't had a chance to study it yet. https://github.com/jeffra/DSE/tree/master/megatron-lm . You can diff the different versions against the baseline - that is I assume it has been changed - perhaps it hasn't. If you want to have a look, if not, it is good too. It will be good to start anywhere.\r\n\r\n", "@anton-l Thanks for the great work on this, its really nice to be able to load the pretrained model so thanks for that too! Did you have any progress on fine-tuning across multiple GPUs? Would love to see if the results get any better with some fine-tuning...", "@anton-l, let's do it if you have resources and interest? Let me know how I can be of help.\r\n\r\nNow having used Megatron-LM in [big science experiments](https://github.com/bigscience-workshop/bigscience/blob/master/experiments/gpt2.md) it's time to port it to transformers.", "@stas00 @anton-l Just curious, is Megatron-LM now ported to transformers? 
Or the proof of concept mentioned in: \r\n\r\n> As for porting all of this, I would follow [fairseq's example](https://github.com/pytorch/fairseq/blob/master/fairseq/model_parallel/models/transformer.py) and copy Megatron-LM's parallel layers verbatim into an existing (but separate) implementation of `BertModel` or `GPT2Model` as a proof-of-concept and then work from there.\r\n\r\nI would love to work on this issue, if there is anything I could do!\r\n", "Thanks for the nice overview.\r\n\r\nHaving read [the paper](https://arxiv.org/pdf/1909.08053.pdf), I disagree with the following statement (emphasis mine)\r\n\r\n> Using this principle, we can update an MLP of *arbitrary depth*, without the need for any synchronization between GPUs until the very end \r\n\r\nIf you split one layer's inputs across rows, then the outputs are split across columns, so you need to split the second layer's weights across rows, then you need to gather outputs before applying a non-linearity. This is explained in Section 3 of the paper.\r\n" ]
1,613
1,706
null
CONTRIBUTOR
null
# 🚀 Feature request Splitting the discussion that started here: https://github.com/huggingface/transformers/pull/10301#issuecomment-782917393 to add the potential future feature of transformers and its Tensor Parallelism (Horizontal Model Parallelism) - for bigger context please see [Parallelism notes](https://github.com/huggingface/transformers/issues/9766). Let's start with an important clarification: MP can mean many different things 1. Vertical MP - slice the layers vertically - one or more full layers placed on each gpu = Vertical MP - in which case VertMP is a simple version of PP with chunks=1 2. Horizontal MP - slice the layers horizontally - place a slice of a full model on each gpu - Example Megatron-LM At the moment I think it's only Megatron-LM that implements Horizontal MP. @anton-l has ported that model to `transformers`, except the Horizontal MP parts, since currently `transformers` doesn't yet have support for it. There is already naive Vertical MP in t5 and gpt2 thanks to @alexorona's work; I ported Bart too but it's unmerged, and there is an ongoing effort to figure out how to implement the Pipeline. All these will have to co-operate with each other and also share common tools. @anton-l [started sharing](https://github.com/huggingface/transformers/pull/10301#issuecomment-782917393) what needs to be done to make that important feature available - and then down the road potentially make it available to other (all?) `transformers` models. @anton-l, the floor is yours.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10321/reactions", "total_count": 7, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/huggingface/transformers/issues/10321/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/10320
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10320/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10320/comments
https://api.github.com/repos/huggingface/transformers/issues/10320/events
https://github.com/huggingface/transformers/issues/10320
812,904,488
MDU6SXNzdWU4MTI5MDQ0ODg=
10,320
BERT for speech
{ "login": "arunraja-hub", "id": 43485111, "node_id": "MDQ6VXNlcjQzNDg1MTEx", "avatar_url": "https://avatars.githubusercontent.com/u/43485111?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arunraja-hub", "html_url": "https://github.com/arunraja-hub", "followers_url": "https://api.github.com/users/arunraja-hub/followers", "following_url": "https://api.github.com/users/arunraja-hub/following{/other_user}", "gists_url": "https://api.github.com/users/arunraja-hub/gists{/gist_id}", "starred_url": "https://api.github.com/users/arunraja-hub/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arunraja-hub/subscriptions", "organizations_url": "https://api.github.com/users/arunraja-hub/orgs", "repos_url": "https://api.github.com/users/arunraja-hub/repos", "events_url": "https://api.github.com/users/arunraja-hub/events{/privacy}", "received_events_url": "https://api.github.com/users/arunraja-hub/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nAlso, [here's the doc](https://huggingface.co/transformers/model_doc/wav2vec2.html#transformers.Wav2Vec2ForCTC) for `Wav2Vec2ForCTC` which seems to be the model you're interested in.\r\n\r\nThanks!" ]
1,613
1,614
1,614
NONE
null
How can I use HF's BERT models for speech-to-text training?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10320/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10319
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10319/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10319/comments
https://api.github.com/repos/huggingface/transformers/issues/10319/events
https://github.com/huggingface/transformers/issues/10319
812,902,286
MDU6SXNzdWU4MTI5MDIyODY=
10,319
[Question] Add a new token to tokenizer and bart model
{ "login": "mwang98", "id": 35629082, "node_id": "MDQ6VXNlcjM1NjI5MDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35629082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mwang98", "html_url": "https://github.com/mwang98", "followers_url": "https://api.github.com/users/mwang98/followers", "following_url": "https://api.github.com/users/mwang98/following{/other_user}", "gists_url": "https://api.github.com/users/mwang98/gists{/gist_id}", "starred_url": "https://api.github.com/users/mwang98/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mwang98/subscriptions", "organizations_url": "https://api.github.com/users/mwang98/orgs", "repos_url": "https://api.github.com/users/mwang98/repos", "events_url": "https://api.github.com/users/mwang98/events{/privacy}", "received_events_url": "https://api.github.com/users/mwang98/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello! You should use the [resize_token_embeddings](https://huggingface.co/transformers/main_classes/model.html?highlight=resize_token_embeddings#transformers.PreTrainedModel.resize_token_embeddings) method for that. Will add that to the documentation." ]
1,613
1,614
1,614
NONE
null
Hi, I have extended the word embedding of a tokenizer and a bart model through `tokenizer.add_token()` and `model.resize_token_embeddings(len(tokenizer))`. Because the ground truth consists of a newly added token, the dimension of the decoder output should be extended as well. But I can't figure out how to extend the model. Can anyone give me some help? Thanks in advance! 💯
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10319/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10319/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10318
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10318/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10318/comments
https://api.github.com/repos/huggingface/transformers/issues/10318/events
https://github.com/huggingface/transformers/issues/10318
812,890,565
MDU6SXNzdWU4MTI4OTA1NjU=
10,318
Guidance for continued pre-training of BART with de-noising.
{ "login": "griff4692", "id": 12277915, "node_id": "MDQ6VXNlcjEyMjc3OTE1", "avatar_url": "https://avatars.githubusercontent.com/u/12277915?v=4", "gravatar_id": "", "url": "https://api.github.com/users/griff4692", "html_url": "https://github.com/griff4692", "followers_url": "https://api.github.com/users/griff4692/followers", "following_url": "https://api.github.com/users/griff4692/following{/other_user}", "gists_url": "https://api.github.com/users/griff4692/gists{/gist_id}", "starred_url": "https://api.github.com/users/griff4692/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/griff4692/subscriptions", "organizations_url": "https://api.github.com/users/griff4692/orgs", "repos_url": "https://api.github.com/users/griff4692/repos", "events_url": "https://api.github.com/users/griff4692/events{/privacy}", "received_events_url": "https://api.github.com/users/griff4692/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "also it is my questions, thanks ", "denoising function is a part of T5 pretraining as well, is there a denoising function implementation in Huggingface repo ? Any advice is appreciated. thanks ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,613
1,619
1,619
NONE
null
# 🚀 Feature request An example of continued pre-training of BART with de-noising. ## Motivation I'm using the run causal LM [script](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py), but it seems on line 340, it's simply copying input to output (learning the identity function). 1. Would line 340 be the best place to add the de-noising or should I introduce it as part of the collator? 2. Is there any code which implements de-noising in HuggingFace? BART defines 4-5 main operations which should be easy to reproduce - I just don't want to introduce new code if well-tested code already exists. Thanks!!!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10318/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10317
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10317/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10317/comments
https://api.github.com/repos/huggingface/transformers/issues/10317/events
https://github.com/huggingface/transformers/issues/10317
812,868,377
MDU6SXNzdWU4MTI4NjgzNzc=
10,317
ForTokenClassification head on BART
{ "login": "jonatasgrosman", "id": 5097052, "node_id": "MDQ6VXNlcjUwOTcwNTI=", "avatar_url": "https://avatars.githubusercontent.com/u/5097052?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonatasgrosman", "html_url": "https://github.com/jonatasgrosman", "followers_url": "https://api.github.com/users/jonatasgrosman/followers", "following_url": "https://api.github.com/users/jonatasgrosman/following{/other_user}", "gists_url": "https://api.github.com/users/jonatasgrosman/gists{/gist_id}", "starred_url": "https://api.github.com/users/jonatasgrosman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonatasgrosman/subscriptions", "organizations_url": "https://api.github.com/users/jonatasgrosman/orgs", "repos_url": "https://api.github.com/users/jonatasgrosman/repos", "events_url": "https://api.github.com/users/jonatasgrosman/events{/privacy}", "received_events_url": "https://api.github.com/users/jonatasgrosman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It would be great to have a `BartForTokenClassification`. Does it use the same head as `BertForTokenClassification`, etc.? \r\n\r\nFeel free to open a PR :) ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,613
1,619
1,619
CONTRIBUTOR
null
# 🚀 Feature request Hello guys! I'm trying to reproduce the token classification experiments from the [BART paper](https://arxiv.org/abs/1910.13461) using the HF/Transformers and found that a token classification head is missing on the current BART model HF implementation. The current BART implementation only has the "BartForConditionalGeneration" and "BartForSequenceClassification". Are there any plans to add a "BartForTokenClassification" head too?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10317/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10317/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10316
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10316/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10316/comments
https://api.github.com/repos/huggingface/transformers/issues/10316/events
https://github.com/huggingface/transformers/pull/10316
812,853,423
MDExOlB1bGxSZXF1ZXN0NTc3MTA5OTY0
10,316
fix typo in conversion script
{ "login": "tagucci", "id": 12934276, "node_id": "MDQ6VXNlcjEyOTM0Mjc2", "avatar_url": "https://avatars.githubusercontent.com/u/12934276?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tagucci", "html_url": "https://github.com/tagucci", "followers_url": "https://api.github.com/users/tagucci/followers", "following_url": "https://api.github.com/users/tagucci/following{/other_user}", "gists_url": "https://api.github.com/users/tagucci/gists{/gist_id}", "starred_url": "https://api.github.com/users/tagucci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tagucci/subscriptions", "organizations_url": "https://api.github.com/users/tagucci/orgs", "repos_url": "https://api.github.com/users/tagucci/repos", "events_url": "https://api.github.com/users/tagucci/events{/privacy}", "received_events_url": "https://api.github.com/users/tagucci/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Wonderful! Thank you for this fix, @tagucci! \r\n\r\n(I tweaked your PR to run `make style` to appease to auto-formatters to have CI pass)" ]
1,613
1,613
1,613
CONTRIBUTOR
null
# What does this PR do? Fix typo in `convert_fsmt_original_pytorch_checkpoint_to_pytorch.py` ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. @stas00 <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10316/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10316/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10316", "html_url": "https://github.com/huggingface/transformers/pull/10316", "diff_url": "https://github.com/huggingface/transformers/pull/10316.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10316.patch", "merged_at": 1613922867000 }
https://api.github.com/repos/huggingface/transformers/issues/10315
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10315/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10315/comments
https://api.github.com/repos/huggingface/transformers/issues/10315/events
https://github.com/huggingface/transformers/issues/10315
812,838,555
MDU6SXNzdWU4MTI4Mzg1NTU=
10,315
Huggingface mt5 does not reach the performance of original mt5 on paws-x
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @dorost1234, \r\n\r\ncould you post your question on the [forum](https://discuss.huggingface.co/) and see whether you can get help from the community there? We try to keep GitHub issues for bug reports mostly. \r\n\r\nIt would also be very important that you attach a notebook or something that allows people to understand what you have done and what you might have missed...", "Hi Patrick\nthanks, sure, I posted here because I mainly wanted to ask if you have in\nthe past compared the performance of original mt5 with the HuggingFace\nmodel side by side on one setting?\nthis is unfortunately a lot of codes for me to share, since I build on top\nof my codebase, and this is not very easy to make a small example showing\nthese differences.\nOverall, knowing if in the past such comparison is done is great.\nthanks.\n\nOn Mon, Feb 22, 2021 at 3:02 PM Patrick von Platen <[email protected]>\nwrote:\n\n> Hey @dorost1234 <https://github.com/dorost1234>,\n>\n> could you post your question on the forum\n> <https://discuss.huggingface.co/> and see whether you can get help from\n> the community there? We try to keep GitHub issues for bug reports mostly.\n>\n> It would also be very important that you attach a notebook or something\n> that allows people to understand what you have done and what you might have\n> missed...\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/10315#issuecomment-783396814>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS37NMXVTAFKQLNWCBFUG43TAJPXBANCNFSM4X65FR7Q>\n> .\n>\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,613
1,619
1,619
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.2 - Platform: linux - Python version: 3.7 - PyTorch version (GPU?): 1.7 - Tensorflow version (GPU?): - - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: Library: Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> t5: @patrickvonplaten, @patil-suraj ## Information Hi, I ran mT5-small on paws-x in a zero-shot cross-lingual setup where we tune on English and evaluate on all languages in the paws-x dataset and obtain only 80.2, while the reported score for the original mt5-small on this dataset is 82.4 (see table 2 in the mt5 paper). I used the setup in the mt5 paper; are there any details missing from the original mt5 work in the huggingface implementation? thanks ## Expected behavior reaching the performance of the original model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10315/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10315/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10314
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10314/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10314/comments
https://api.github.com/repos/huggingface/transformers/issues/10314/events
https://github.com/huggingface/transformers/pull/10314
812,834,466
MDExOlB1bGxSZXF1ZXN0NTc3MDk2Mjg5
10,314
ConvBERT fix torch <> tf weights conversion
{ "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'll push the fixed weights and remove from_pt in test.", "> Ok with the change!\r\n> \r\n> Should we do a patch release for this?\r\n\r\nThink it's a good idea", "Thanks @patrickvonplaten ", "Ok will do a patch this afternoon", "cc @stefan-it and @mrm8488 that have been playing with the model. We'll release v4.3.3 very soon which will contain that patch.", "Should I re-convert the TF model then :thinking: ", "I think you should, once v4.3.3 is released (in a few minutes)", "v4.3.3 has been released!" ]
1,613
1,614
1,614
MEMBER
null
(from @patrickvonplaten): This PR corrects the shape of the grouped linear layer weight so that the general conversion function does not have to be changed. All models are tested to work correctly as follows: ```python from transformers import ConvBertModel, TFConvBertModel import tensorflow as tf import torch input_ids = [[1, 2, 3, 4, 5]] tf_input_ids = tf.convert_to_tensor(input_ids) pt_input_ids = torch.tensor(input_ids) for name in ["conv-bert-base", "conv-bert-medium-small", "conv-bert-small"]: model_tf = TFConvBertModel.from_pretrained(f"YituTech/{name}", from_pt=True) model = ConvBertModel.from_pretrained(f"YituTech/{name}") assert abs(model_tf(tf_input_ids)[0].cpu().numpy().sum() - model(pt_input_ids)[0].cpu().numpy().sum()) < 1e-2, "Error" ``` Changing the size and the name of a weight means that all tf weights have to be updated, but I think this is ok here since the TF models (if I understood correctly) were not behaving as expected before anyways. I also checked that the conversion the other way around works as expected (`...from_tf=True`)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10314/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10314", "html_url": "https://github.com/huggingface/transformers/pull/10314", "diff_url": "https://github.com/huggingface/transformers/pull/10314.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10314.patch", "merged_at": 1614167735000 }
https://api.github.com/repos/huggingface/transformers/issues/10313
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10313/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10313/comments
https://api.github.com/repos/huggingface/transformers/issues/10313/events
https://github.com/huggingface/transformers/issues/10313
812,813,439
MDU6SXNzdWU4MTI4MTM0Mzk=
10,313
ValueError: too many values to unpack (expected 2)
{ "login": "sadanyh", "id": 56395363, "node_id": "MDQ6VXNlcjU2Mzk1MzYz", "avatar_url": "https://avatars.githubusercontent.com/u/56395363?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sadanyh", "html_url": "https://github.com/sadanyh", "followers_url": "https://api.github.com/users/sadanyh/followers", "following_url": "https://api.github.com/users/sadanyh/following{/other_user}", "gists_url": "https://api.github.com/users/sadanyh/gists{/gist_id}", "starred_url": "https://api.github.com/users/sadanyh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sadanyh/subscriptions", "organizations_url": "https://api.github.com/users/sadanyh/orgs", "repos_url": "https://api.github.com/users/sadanyh/repos", "events_url": "https://api.github.com/users/sadanyh/events{/privacy}", "received_events_url": "https://api.github.com/users/sadanyh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please post the full error stacktrace. Which version of the script are you using?", "Hi thank you for your reply. I did post the trace back for the error.\n\nLooking forward to your reply\n\nHadeel\n\n\n\n> On 21 Feb 2021, at 10:01 am, cronoik <[email protected]> wrote:\n> \n> \n> Please post the full error stacktrace. Which version of the script are you using?\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n", "Please use the [forums](https://discuss.huggingface.co/) to debug custom code with help from the community and only when you have isolated a part that is a bug in the library use a GitHub issue. \r\nIn this particular case, you should include the full stack trace, as pointed out before (it's not in your message, just the last frame) and the code that created your model: the error seems to point out that it only returns one value when you need two (since you write `loss, logits = model(...)`). You should debug that by looking at the model return on one batch.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,613
1,619
1,619
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> `transformers` version: 3.0.2 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.7.0+cu101 (True) - Tensorflow version (GPU?): 2.4.1 (True) - Using GPU in script?: True - Using distributed or parallel set-up in script?: No ### Who can help albert, bert, xlm: @LysandreJik trainer: @sgugger ## Information Model I am using (XLMRobertaForSequenceClassification): The problem arises when using: * [ *] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [* ] my own task or dataset: (give details below) ## To reproduce # This training code is based on the `run_glue.py` script here: # https://github.com/huggingface/transformers/blob/5bfcd0485ece086ebcbed2d008813037968a9e58/examples/run_glue.py#L128 for epoch_i in range(0, epochs): # ======================================== # Training # ======================================== # Perform one full pass over the training set. print("") print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs)) print('Training...') # Measure how long the training epoch takes. t0 = time.time() # Reset the total loss for this epoch. total_train_loss = 0 # Put the model into training mode. Don't be mislead--the call to # `train` just changes the *mode*, it doesn't *perform* the training. # `dropout` and `batchnorm` layers behave differently during training # vs. test (source: https://stackoverflow.com/questions/51433378/what-does-model-train-do-in-pytorch) model.train() # For each batch of training data... for step, batch in enumerate(train_dataloader): # Progress update every 40 batches. if step % 40 == 0 and not step == 0: # Calculate elapsed time in minutes. elapsed = format_time(time.time() - t0) # Report progress. print(' Batch {:>5,} of {:>5,}. Elapsed: {:}.'.format(step, len(train_dataloader), elapsed)) # Unpack this training batch from our dataloader. # # As we unpack the batch, we'll also copy each tensor to the GPU using the # `to` method. # # `batch` contains three pytorch tensors: # [0]: input ids # [1]: attention masks # [2]: labels b_input_ids = batch[0].to(device) b_input_mask = batch[1].to(device) b_labels = batch[2].to(device) # Always clear any previously calculated gradients before performing a # backward pass. PyTorch doesn't do this automatically because # accumulating the gradients is "convenient while training RNNs". # (source: https://stackoverflow.com/questions/48001598/why-do-we-need-to-call-zero-grad-in-pytorch) model.zero_grad() # Perform a forward pass (evaluate the model on this training batch). # The documentation for this `model` function is here: # https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification # It returns different numbers of parameters depending on what arguments # arge given and what flags are set. For our useage here, it returns # the loss (because we provided labels) and the "logits"--the model # outputs prior to activation. loss, logits = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) # Accumulate the training loss over all of the batches so that we can # calculate the average loss at the end. 
`loss` is a Tensor containing a # single value; the `.item()` function just returns the Python value # from the tensor. total_train_loss += loss.item() # Perform a backward pass to calculate the gradients. loss.backward() # Clip the norm of the gradients to 1.0. # This is to help prevent the "exploding gradients" problem. torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) # Update parameters and take a step using the computed gradient. # The optimizer dictates the "update rule"--how the parameters are # modified based on their gradients, the learning rate, etc. optimizer.step() # Update the learning rate. scheduler.step() # Calculate the average loss over all of the batches. avg_train_loss = total_train_loss / len(train_dataloader) # Measure how long this epoch took. training_time = format_time(time.time() - t0) print("") print(" Average training loss: {0:.2f}".format(avg_train_loss)) print(" Training epcoh took: {:}".format(training_time)) # ======================================== # Validation # ======================================== # After the completion of each training epoch, measure our performance on # our validation set. print("") print("Running Validation...") t0 = time.time() # Put the model in evaluation mode--the dropout layers behave differently # during evaluation. model.eval() # Tracking variables total_eval_accuracy = 0 total_eval_loss = 0 nb_eval_steps = 0 # Evaluate data for one epoch for batch in validation_dataloader: # Unpack this training batch from our dataloader. # # As we unpack the batch, we'll also copy each tensor to the GPU using # the `to` method. # # `batch` contains three pytorch tensors: # [0]: input ids # [1]: attention masks # [2]: labels b_input_ids = batch[0].to(device) b_input_mask = batch[1].to(device) b_labels = batch[2].to(device) # Tell pytorch not to bother with constructing the compute graph during # the forward pass, since this is only needed for backprop (training). with torch.no_grad(): # Forward pass, calculate logit predictions. # token_type_ids is the same as the "segment ids", which # differentiates sentence 1 and 2 in 2-sentence tasks. # The documentation for this `model` function is here: # https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification # Get the "logits" output by the model. The "logits" are the output # values prior to applying an activation function like the softmax. (loss, logits) = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) # Accumulate the validation loss. total_eval_loss += loss.item() # Move logits and labels to CPU logits = logits.detach().cpu().numpy() label_ids = b_labels.to('cpu').numpy() # Calculate the accuracy for this batch of test sentences, and # accumulate it over all batches. total_eval_accuracy += flat_accuracy(logits, label_ids) # Report the final accuracy for this validation run. avg_val_accuracy = total_eval_accuracy / len(validation_dataloader) print(" Accuracy: {0:.2f}".format(avg_val_accuracy)) # Calculate the average loss over all of the batches. avg_val_loss = total_eval_loss / len(validation_dataloader) # Measure how long the validation run took. validation_time = format_time(time.time() - t0) print(" Validation Loss: {0:.2f}".format(avg_val_loss)) print(" Validation took: {:}".format(validation_time)) # Record all statistics from this epoch. training_stats.append( { 'epoch': epoch_i + 1, 'Training Loss': avg_train_loss, 'Valid. Loss': avg_val_loss, 'Valid. 
Accur.': avg_val_accuracy, 'Training Time': training_time, 'Validation Time': validation_time } ) print("") print("Training complete!") print("Total training took {:} (h:mm:ss)".format(format_time(time.time()-total_t0))) ## Expected behavior THE ERROR: ======== Epoch 1 / 4 ======== Training... --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-92-840aefe69c26> in <module>() 85 token_type_ids=None, 86 attention_mask=b_input_mask, ---> 87 labels=b_labels) 88 89 # Accumulate the training loss over all of the batches so that we can ValueError: too many values to unpack (expected 2) I am working an a four-label classification task (on an Arabic dataset). I have run this code before but it is not working now. I get this error: ValueError: too many values to unpack (expected 2). I have not changed any of the steps or the preprocessing but it raises up this error this time. The tensor for each label instance is [0,1,2,3]. The error points to the labels parameter. Could you please suggest solutions?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10313/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10313/timeline
completed
null
null
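A note on the `ValueError: too many values to unpack (expected 2)` reported in the issue above: starting with transformers v4, model forward passes return a `ModelOutput` object by default instead of a `(loss, logits)` tuple, so the tuple unpacking in the quoted notebook fails. Below is a minimal sketch of the two usual fixes, reusing the variable names from the notebook quoted in the issue (`model`, `b_input_ids`, `b_input_mask`, `b_labels` are assumed to be defined as in that notebook):

```python
# Option 1: ask the model for a plain tuple so the old unpacking keeps working
outputs = model(b_input_ids,
                token_type_ids=None,
                attention_mask=b_input_mask,
                labels=b_labels,
                return_dict=False)
loss, logits = outputs[:2]

# Option 2: use the named fields of the ModelOutput object
outputs = model(b_input_ids,
                token_type_ids=None,
                attention_mask=b_input_mask,
                labels=b_labels)
loss = outputs.loss
logits = outputs.logits
```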
https://api.github.com/repos/huggingface/transformers/issues/10312
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10312/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10312/comments
https://api.github.com/repos/huggingface/transformers/issues/10312/events
https://github.com/huggingface/transformers/issues/10312
812,812,987
MDU6SXNzdWU4MTI4MTI5ODc=
10,312
LayoutLM Tensorflow model
{ "login": "atahmasb", "id": 25216362, "node_id": "MDQ6VXNlcjI1MjE2MzYy", "avatar_url": "https://avatars.githubusercontent.com/u/25216362?v=4", "gravatar_id": "", "url": "https://api.github.com/users/atahmasb", "html_url": "https://github.com/atahmasb", "followers_url": "https://api.github.com/users/atahmasb/followers", "following_url": "https://api.github.com/users/atahmasb/following{/other_user}", "gists_url": "https://api.github.com/users/atahmasb/gists{/gist_id}", "starred_url": "https://api.github.com/users/atahmasb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/atahmasb/subscriptions", "organizations_url": "https://api.github.com/users/atahmasb/orgs", "repos_url": "https://api.github.com/users/atahmasb/repos", "events_url": "https://api.github.com/users/atahmasb/events{/privacy}", "received_events_url": "https://api.github.com/users/atahmasb/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Sure, I can guide you if you want.\r\n\r\nAs LayoutLM is only a slight adaptation from BERT, I guess you can define `modeling_tf_layoutlm.py` based on `modeling_tf_bert.py`. Note that all layers should be renamed, e.g. `TFBertEmbeddings` -> `TFLayoutLMEmbeddings`. LayoutLM adds position embeddings for the tokens based on the bounding boxes, so I guess this is the only thing that needs to be added in the [embedding layer](https://github.com/huggingface/transformers/blob/88605f37a6fe7bde336f52700229d619b5ffa0f6/src/transformers/models/bert/modeling_tf_bert.py#L131). \r\n\r\nIn PyTorch, we have:\r\nhttps://github.com/huggingface/transformers/blob/88605f37a6fe7bde336f52700229d619b5ffa0f6/src/transformers/models/layoutlm/modeling_layoutlm.py#L65-L68\r\n\r\nSo this will need to be added to the `TFLayoutLMEmbeddings` class. Regarding a conversion script to convert the PyTorch weights into the TF version, there's a general script to convert PyTorch weights to TF 2 models: https://github.com/huggingface/transformers/blob/master/src/transformers/convert_pytorch_checkpoint_to_tf2.py\r\n\r\nThis works if all weights names are identical between the PT and TF implementations.", "@NielsRogge Thanks for the guidance, I think I know where to start. \r\nI'll comment here if I had more questions. I am hoping to have a PR by the end of this week ", "Should I upload TF weights under `https://huggingface.co/microsoft/` in the same place PT weights are stored?", "Yes, pinging @julien-c here to give you access", "pinging @LysandreJik and @sgugger ", "We don't have granular write access to model repos so (unless you're affiliated with Microsoft in some way) I would suggest you upload the files to a model repo under your HF user account and then we (or them) can copy the relevant files to the main repos!\r\n\r\nLet me know if this is a suitable workflow.", "@julien-c Thanks, the suggested workflow sounds good.", "closing this issue as the code is already merged." ]
1,613
1,616
1,616
CONTRIBUTOR
null
# 🚀 Feature request It would be great if there was a TF version of LayoutLM. I see there are scripts in the repo to convert PyTorch checkpoints to TF models, but I think the requirement is to have a TF model architecture to be able to load the PyTorch model's weights into it. ## Motivation We are using TF in production and we'd love to be able to use LayoutLM. ## Your contribution I am happy to tackle the conversion. I was wondering if there are instructions on how to do the conversion properly so that it can be added to the repo.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10312/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10312/timeline
completed
null
null
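For the TF LayoutLM port discussed above, the comments point at the embedding layer as the only piece that really differs from `TFBertEmbeddings`: it adds embeddings for the token bounding boxes. A rough sketch of that extra piece is shown below, mirroring the parameter names of the PyTorch module quoted in the comments; the class name is just for illustration and this is not the merged implementation:

```python
import tensorflow as tf

class TFLayoutLM2DPositionEmbeddings(tf.keras.layers.Layer):
    """Sketch of the 2D position embeddings LayoutLM adds on top of BERT's
    usual word / position / token-type embeddings."""

    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        self.x_position_embeddings = tf.keras.layers.Embedding(
            config.max_2d_position_embeddings, config.hidden_size, name="x_position_embeddings"
        )
        self.y_position_embeddings = tf.keras.layers.Embedding(
            config.max_2d_position_embeddings, config.hidden_size, name="y_position_embeddings"
        )
        self.h_position_embeddings = tf.keras.layers.Embedding(
            config.max_2d_position_embeddings, config.hidden_size, name="h_position_embeddings"
        )
        self.w_position_embeddings = tf.keras.layers.Embedding(
            config.max_2d_position_embeddings, config.hidden_size, name="w_position_embeddings"
        )

    def call(self, word_embeddings, bbox):
        # bbox: (batch, seq_len, 4) with [x0, y0, x1, y1] box coordinates per token
        left = self.x_position_embeddings(bbox[:, :, 0])
        upper = self.y_position_embeddings(bbox[:, :, 1])
        right = self.x_position_embeddings(bbox[:, :, 2])
        lower = self.y_position_embeddings(bbox[:, :, 3])
        height = self.h_position_embeddings(bbox[:, :, 3] - bbox[:, :, 1])
        width = self.w_position_embeddings(bbox[:, :, 2] - bbox[:, :, 0])
        return word_embeddings + left + upper + right + lower + height + width
```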
https://api.github.com/repos/huggingface/transformers/issues/10311
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10311/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10311/comments
https://api.github.com/repos/huggingface/transformers/issues/10311/events
https://github.com/huggingface/transformers/issues/10311
812,799,578
MDU6SXNzdWU4MTI3OTk1Nzg=
10,311
Matrix multiplication error for ReformerModelWithLMHead when tie_word_embeddings is True
{ "login": "Bluelari2", "id": 59641842, "node_id": "MDQ6VXNlcjU5NjQxODQy", "avatar_url": "https://avatars.githubusercontent.com/u/59641842?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bluelari2", "html_url": "https://github.com/Bluelari2", "followers_url": "https://api.github.com/users/Bluelari2/followers", "following_url": "https://api.github.com/users/Bluelari2/following{/other_user}", "gists_url": "https://api.github.com/users/Bluelari2/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bluelari2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bluelari2/subscriptions", "organizations_url": "https://api.github.com/users/Bluelari2/orgs", "repos_url": "https://api.github.com/users/Bluelari2/repos", "events_url": "https://api.github.com/users/Bluelari2/events{/privacy}", "received_events_url": "https://api.github.com/users/Bluelari2/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @xe442,\r\n\r\nActually Reformer cannot make use of `tie_word_embeddings=True` because the output word embedding layer is twice as big as the input layer (because of Reformer's architecture, see section 3) in this blog: https://huggingface.co/blog/reformer", "But, we should in this case give a better error message! Feel free to open a PR to add such an error message :-)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,613
1,619
1,619
NONE
null
## Environment info - `transformers` version: 4.3.2 - Platform: Windows 10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.7.0 cpu-only - Tensorflow version (GPU?): not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help - longformer, reformer, transfoxl, xlnet: @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): Reformer The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Run the following script: ```python import torch from transformers import ReformerConfig, ReformerModelWithLMHead config = ReformerConfig(is_decoder=True, tie_word_embeddings=True) model = ReformerModelWithLMHead(config) inp = torch.randint(0, 100, (1, 4096)) out = model(inp) ``` 2. The error: ``` Traceback (most recent call last): File "./test.py", line 8, in <module> out = model(inp) File "C:\Users\xe442\anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "C:\Users\xe442\anaconda3\envs\pytorch\lib\site-packages\transformers\models\reformer\modeling_reformer.py", line 2248, in forward logits = self.lm_head(sequence_output) File "C:\Users\xe442\anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "C:\Users\xe442\anaconda3\envs\pytorch\lib\site-packages\transformers\models\reformer\modeling_reformer.py", line 1761, in forward return apply_chunking_to_forward(self.forward_chunk, self.chunk_size_lm_head, self.seq_len_dim, hidden_states) File "C:\Users\xe442\anaconda3\envs\pytorch\lib\site-packages\transformers\modeling_utils.py", line 1787, in apply_chunking_to_forward return forward_fn(*input_tensors) File "C:\Users\xe442\anaconda3\envs\pytorch\lib\site-packages\transformers\models\reformer\modeling_reformer.py", line 1764, in forward_chunk hidden_states = self.decoder(hidden_states) File "C:\Users\xe442\anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "C:\Users\xe442\anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\linear.py", line 93, in forward return F.linear(input, self.weight, self.bias) File "C:\Users\xe442\anaconda3\envs\pytorch\lib\site-packages\torch\nn\functional.py", line 1692, in linear output = input.matmul(weight.t()) RuntimeError: mat1 and mat2 shapes cannot be multiplied (4096x512 and 256x320) ``` ## Expected behavior There should be no errors.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10311/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10311/timeline
completed
null
null
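As the comments above explain, Reformer's LM head projects from a hidden state that is twice the embedding size (the reversible layers concatenate two streams), so the output projection cannot share weights with the input embeddings. A small sketch of the configuration that avoids the shape mismatch, mirroring the reproduction script from the issue:

```python
import torch
from transformers import ReformerConfig, ReformerModelWithLMHead

# Same setup as in the issue, but with weight tying disabled: Reformer's LM head
# maps 2 * hidden_size -> vocab_size, so it cannot reuse the input embedding matrix.
config = ReformerConfig(is_decoder=True, tie_word_embeddings=False)
model = ReformerModelWithLMHead(config)

inp = torch.randint(0, 100, (1, 4096))
out = model(inp)  # runs without the matrix-multiplication error
```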
https://api.github.com/repos/huggingface/transformers/issues/10310
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10310/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10310/comments
https://api.github.com/repos/huggingface/transformers/issues/10310/events
https://github.com/huggingface/transformers/pull/10310
812,794,898
MDExOlB1bGxSZXF1ZXN0NTc3MDY3OTI0
10,310
[Trainer] implement gradient_accumulation_steps support in DeepSpeed integration
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 2659267025, "node_id": "MDU6TGFiZWwyNjU5MjY3MDI1", "url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed", "name": "DeepSpeed", "color": "4D34F7", "default": false, "description": "" } ]
closed
false
null
[]
[ "> Cool that they added it! This all looks pretty good to me!\r\n\r\nWell, it has been there all this time, this PR just bolts it on correctly.\r\n\r\n> Absolutely no problems with moving the regression trainer somewhere accessible, it was just in `test_trainer` because only used there.\r\n\r\nAh, that makes sense. I will rework it then next time I touch on this code.\r\n\r\nThank you for the feedback, @sgugger \r\n" ]
1,613
1,614
1,614
CONTRIBUTOR
null
This PR: Fixes a bug: - `lr_scheduler.step()` shouldn't be called under DeepSpeed - it's already called in its `optimizer.step()` internally - so it was moving through the scheduler rate change at twice the speed :( Adds support for `gradient_accumulation_steps`: * makes `gradient_accumulation_steps` work with deepspeed - for nuances see: https://github.com/microsoft/DeepSpeed/issues/776 - it required a lot of `if` / `if nots` - not helping the readability of the trainer - and took a lot of trial and error to figure out - but what to do * adds a corresponding doc * adds a first serious quality test for DeepSpeed that measures that `gradient_accumulation_steps` works - modelled after `test_trainer.py`'s own `test_gradient_accumulation` and extends it to compare loss as well, and also tests that the optimizer actually kicked in - with fp16 deepspeed it normally takes a few dozen steps before it kicks in with dynamic scaling enabled. * extends `testing_utils` with a `mockenv_context` which is similar to `@mockenv`, but which can be used inside the test as a context manager if multiple env vars need to be tested - `@mockenv` is only useful as a decorator. At the end I think I don't really need it as using the same env worked for all tests, but it might come in handy if ports don't get released fast enough and then the test will use different ports - I'm concerned about CIs. And it's easier to re-use the class-wide env, rather than hardcoding or creating a global variable - so it's just cleaner too. Suggestion/Question: * `get_regression_trainer` is very awesome! But importing it from a test file is not great - probably should move it and its components into a utilities file - `testing_utils.py` or create a new one `testing_training_utils.py` and in the future add other trainer-testing specific utils in there? Though this should be dealt with in a separate PR. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10310/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10310/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10310", "html_url": "https://github.com/huggingface/transformers/pull/10310", "diff_url": "https://github.com/huggingface/transformers/pull/10310.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10310.patch", "merged_at": 1614021359000 }
https://api.github.com/repos/huggingface/transformers/issues/10309
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10309/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10309/comments
https://api.github.com/repos/huggingface/transformers/issues/10309/events
https://github.com/huggingface/transformers/issues/10309
812,733,551
MDU6SXNzdWU4MTI3MzM1NTE=
10,309
[Example] Using label_smoothing_factor raises an error when evaluating the model
{ "login": "dunglt2015", "id": 12955068, "node_id": "MDQ6VXNlcjEyOTU1MDY4", "avatar_url": "https://avatars.githubusercontent.com/u/12955068?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dunglt2015", "html_url": "https://github.com/dunglt2015", "followers_url": "https://api.github.com/users/dunglt2015/followers", "following_url": "https://api.github.com/users/dunglt2015/following{/other_user}", "gists_url": "https://api.github.com/users/dunglt2015/gists{/gist_id}", "starred_url": "https://api.github.com/users/dunglt2015/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dunglt2015/subscriptions", "organizations_url": "https://api.github.com/users/dunglt2015/orgs", "repos_url": "https://api.github.com/users/dunglt2015/repos", "events_url": "https://api.github.com/users/dunglt2015/events{/privacy}", "received_events_url": "https://api.github.com/users/dunglt2015/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Can reproduce locally, here is a short reproducer from the root of the repo:\r\n```\r\npython examples/token-classification/run_ner.py \\\r\n --model_name_or_path bert-base-uncased \\\r\n --train_file tests/fixtures/tests_samples/conll/sample.json \\\r\n --validation_file tests/fixtures/tests_samples/conll/sample.json \\\r\n --output_dir /tmp/test-ner \\\r\n --overwrite_output_dir \\\r\n --do_train \\\r\n --do_eval \\\r\n --label_smoothing_factor 0.1\r\n```\r\n\r\nWill look into it tomorrow." ]
1,613
1,614
1,614
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.2 - Platform: Ubuntu 20.04 - Python version: 3.8 - PyTorch version (GPU): 1.6.0 ### Who can help Library: - pipelines: @LysandreJik Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj ## Information Model I am using BERT: The problem arises when using: * [x] the official example scripts: https://github.com/huggingface/transformers/blob/master/examples/legacy/token-classification/run_ner.py * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. I run the old script run_ner.py with default label_smoothing_factor = 0.0. It works well. 2. I add label_smoothing_factor = 0.1 to JSON config file. `{ "data_dir": "/home/dzungle/NER/data/", "train_file": "/home/dzungle/NER/data/train.csv", "validation_file": "/home/dzungle/data/dev.csv", "model_name_or_path": "emilyalsentzer/Bio_ClinicalBERT", "output_dir": "/home/dzungle/NER/models/", "label_smoothing_factor": 0.1, "max_seq_length": 256, "num_train_epochs": 1, "per_device_train_batch_size": 8, "gradient_accumulation_steps": 4, "per_device_eval_batch_size": 1, "save_steps": 1000, "eval_steps" : 50, "save_total_limit":1, "seed": 1, "do_train": true, "do_eval": true, "do_predict": true, "overwrite_output_dir" : true, "evaluate_during_training" : true }` 3. I run the script and it works well for training but got an error when evaluating. **Error:** ``` Traceback (most recent call last): File "run_ner.py", line 333, in <module> main() File "run_ner.py", line 282, in main result = trainer.evaluate() File "/home/dzungle/miniconda3/envs/hppi/lib/python3.8/site-packages/transformers/trainer.py", line 1604, in evaluate output = self.prediction_loop( File "/home/dzungle/miniconda3/envs/hppi/lib/python3.8/site-packages/transformers/trainer.py", line 1742, in prediction_loop loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys) File "/home/dzungle/miniconda3/envs/hppi/lib/python3.8/site-packages/transformers/trainer.py", line 1874, in prediction_step labels = nested_detach(tuple(inputs.get(name) for name in self.label_names)) File "/home/dzungle/miniconda3/envs/hppi/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 111, in nested_detach return type(tensors)(nested_detach(t) for t in tensors) File "/home/dzungle/miniconda3/envs/hppi/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 111, in <genexpr> return type(tensors)(nested_detach(t) for t in tensors) File "/home/dzungle/miniconda3/envs/hppi/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 112, in nested_detach return tensors.detach() AttributeError: 'NoneType' object has no attribute 'detach' ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> As I known, label_smoothing_factor is a new feature of recent transformers version. I would expect that the script with label_smoothing_factor=0.1 works well as using default value 0.0.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10309/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10308
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10308/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10308/comments
https://api.github.com/repos/huggingface/transformers/issues/10308/events
https://github.com/huggingface/transformers/pull/10308
812,709,424
MDExOlB1bGxSZXF1ZXN0NTc3MDA4MzMx
10,308
[ci] don't fail when there are no zombies
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,613
1,613
1,613
CONTRIBUTOR
null
fixes: ``` Run pkill -f tests; pkill -f examples 4 Error: Process completed with exit code 1. ``` Didn't think that it'd `exit(1)` when there is nothing to kill @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10308/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10308", "html_url": "https://github.com/huggingface/transformers/pull/10308", "diff_url": "https://github.com/huggingface/transformers/pull/10308.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10308.patch", "merged_at": 1613856523000 }
https://api.github.com/repos/huggingface/transformers/issues/10307
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10307/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10307/comments
https://api.github.com/repos/huggingface/transformers/issues/10307/events
https://github.com/huggingface/transformers/issues/10307
812,702,719
MDU6SXNzdWU4MTI3MDI3MTk=
10,307
pretraining objective of T5 model
{ "login": "dorooddorood606", "id": 79288051, "node_id": "MDQ6VXNlcjc5Mjg4MDUx", "avatar_url": "https://avatars.githubusercontent.com/u/79288051?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorooddorood606", "html_url": "https://github.com/dorooddorood606", "followers_url": "https://api.github.com/users/dorooddorood606/followers", "following_url": "https://api.github.com/users/dorooddorood606/following{/other_user}", "gists_url": "https://api.github.com/users/dorooddorood606/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorooddorood606/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorooddorood606/subscriptions", "organizations_url": "https://api.github.com/users/dorooddorood606/orgs", "repos_url": "https://api.github.com/users/dorooddorood606/repos", "events_url": "https://api.github.com/users/dorooddorood606/events{/privacy}", "received_events_url": "https://api.github.com/users/dorooddorood606/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,613
1,619
1,619
NONE
null
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> Hi, it would be great to have pretraining of the T5 model implemented. Currently, the run_mlm.py script does not support it. ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> T5 is a SOTA model, and having pretraining support would be very helpful to the community.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10307/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10307/timeline
completed
null
null
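For reference on the feature request above: T5 is pretrained with a span-corruption objective rather than BERT-style MLM, which is why `run_mlm.py` does not cover it. A minimal sketch of what a single span-corruption example looks like, following the sentinel-token format from the T5 documentation (`t5-small` is used here just as an example checkpoint):

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Corrupted spans in the input are replaced by sentinel tokens <extra_id_n>;
# the target reconstructs the dropped spans in order.
input_ids = tokenizer(
    "The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt"
).input_ids
labels = tokenizer(
    "<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt"
).input_ids

loss = model(input_ids=input_ids, labels=labels).loss  # denoising loss for this example
```

A full pretraining script would additionally need the random span-masking logic that builds such pairs from raw text.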
https://api.github.com/repos/huggingface/transformers/issues/10306
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10306/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10306/comments
https://api.github.com/repos/huggingface/transformers/issues/10306/events
https://github.com/huggingface/transformers/issues/10306
812,684,885
MDU6SXNzdWU4MTI2ODQ4ODU=
10,306
Issue Loading bert-base-german-cased
{ "login": "George-Ogden", "id": 38294960, "node_id": "MDQ6VXNlcjM4Mjk0OTYw", "avatar_url": "https://avatars.githubusercontent.com/u/38294960?v=4", "gravatar_id": "", "url": "https://api.github.com/users/George-Ogden", "html_url": "https://github.com/George-Ogden", "followers_url": "https://api.github.com/users/George-Ogden/followers", "following_url": "https://api.github.com/users/George-Ogden/following{/other_user}", "gists_url": "https://api.github.com/users/George-Ogden/gists{/gist_id}", "starred_url": "https://api.github.com/users/George-Ogden/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/George-Ogden/subscriptions", "organizations_url": "https://api.github.com/users/George-Ogden/orgs", "repos_url": "https://api.github.com/users/George-Ogden/repos", "events_url": "https://api.github.com/users/George-Ogden/events{/privacy}", "received_events_url": "https://api.github.com/users/George-Ogden/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Which code caused this error?", "This issue comes from the hosted API https://huggingface.co/bert-base-german-cased?text=Ich+bin+%5BMASK%5D", "@tholor The url above currently loads for me, but to be future-proof should we cp the files currently loaded from that S3 bucket to the corresponding model repo (here, https://huggingface.co/bert-base-german-cased)?\r\n\r\ncc'ing @LysandreJik ", "@julien-c Sure, let's copy them from our S3 to the model repo. ", "copied to the model repo in \r\nhttps://huggingface.co/bert-base-german-cased/commit/876457621368b8c955478cfe1cdee634f47ea34c\r\n\r\nChanged hardcoded url in https://github.com/huggingface/transformers/pull/10353", "@Narsil could you please check if the inference widget works for this model when you get a chance to upgrade the transformers dependency in the API? Thanks!", "⚠️ Can't load tokenizer using from_pretrained, please update its configuration: 400 Client Error: Bad Request for url: https://int-deepset-models-bert.s3.eu-central-1.amazonaws.com/pytorch/bert-base-german-cased-vocab.txt\r\nIt's still not working!", "Hi @George-Ogden, the change was merged two days ago and is therefore available on the `master` branch, but not yet in a release.\r\n\r\nDo you still get the same error when installing from source?", "This is on the inference API on the website I haven't tried it from source." ]
1,613
1,614
1,614
NONE
null
Message on the website is: Can't load tokenizer using from_pretrained, please update its configuration: 400 Client Error: Bad Request for url: https://int-deepset-models-bert.s3.eu-central-1.amazonaws.com/pytorch/bert-base-german-cased-vocab.txt
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10306/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10306/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10305
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10305/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10305/comments
https://api.github.com/repos/huggingface/transformers/issues/10305/events
https://github.com/huggingface/transformers/issues/10305
812,671,303
MDU6SXNzdWU4MTI2NzEzMDM=
10,305
Documentation of the decode method is missing
{ "login": "cronoik", "id": 18630848, "node_id": "MDQ6VXNlcjE4NjMwODQ4", "avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cronoik", "html_url": "https://github.com/cronoik", "followers_url": "https://api.github.com/users/cronoik/followers", "following_url": "https://api.github.com/users/cronoik/following{/other_user}", "gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}", "starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cronoik/subscriptions", "organizations_url": "https://api.github.com/users/cronoik/orgs", "repos_url": "https://api.github.com/users/cronoik/repos", "events_url": "https://api.github.com/users/cronoik/events{/privacy}", "received_events_url": "https://api.github.com/users/cronoik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This has been fixed a few days ago, I believe. Look at the [master doc tokenizer page](https://huggingface.co/transformers/master/main_classes/tokenizer.html) (the stable documentation is only updated at each release).", "Yes, you are right." ]
1,613
1,613
1,613
CONTRIBUTOR
null
The tokenizer documentation [page](https://huggingface.co/transformers/main_classes/tokenizer.html) is generated from the following files: - tokenization_utils_base.py - tokenization_utils_fast.py - tokenization_utils.py At least the documentation of the decode method is missing, even though it is properly documented in the [source file](https://github.com/huggingface/transformers/blob/9a7e63729f3ff6ddf065fd0d443421e46b1a2ffb/src/transformers/tokenization_utils_base.py#L3099). @sgugger Could you please have a look?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10305/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10305/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10304
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10304/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10304/comments
https://api.github.com/repos/huggingface/transformers/issues/10304/events
https://github.com/huggingface/transformers/pull/10304
812,668,607
MDExOlB1bGxSZXF1ZXN0NTc2OTc5MTcw
10,304
fixes #10303
{ "login": "cronoik", "id": 18630848, "node_id": "MDQ6VXNlcjE4NjMwODQ4", "avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cronoik", "html_url": "https://github.com/cronoik", "followers_url": "https://api.github.com/users/cronoik/followers", "following_url": "https://api.github.com/users/cronoik/following{/other_user}", "gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}", "starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cronoik/subscriptions", "organizations_url": "https://api.github.com/users/cronoik/orgs", "repos_url": "https://api.github.com/users/cronoik/repos", "events_url": "https://api.github.com/users/cronoik/events{/privacy}", "received_events_url": "https://api.github.com/users/cronoik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for fixing!" ]
1,613
1,613
1,613
CONTRIBUTOR
null
# What does this PR do? Fixes #10303 ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. Documentation: @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10304/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10304", "html_url": "https://github.com/huggingface/transformers/pull/10304", "diff_url": "https://github.com/huggingface/transformers/pull/10304.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10304.patch", "merged_at": 1613852493000 }
https://api.github.com/repos/huggingface/transformers/issues/10303
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10303/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10303/comments
https://api.github.com/repos/huggingface/transformers/issues/10303/events
https://github.com/huggingface/transformers/issues/10303
812,665,794
MDU6SXNzdWU4MTI2NjU3OTQ=
10,303
convert_tokens_to_string documentation bug
{ "login": "cronoik", "id": 18630848, "node_id": "MDQ6VXNlcjE4NjMwODQ4", "avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cronoik", "html_url": "https://github.com/cronoik", "followers_url": "https://api.github.com/users/cronoik/followers", "following_url": "https://api.github.com/users/cronoik/following{/other_user}", "gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}", "starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cronoik/subscriptions", "organizations_url": "https://api.github.com/users/cronoik/orgs", "repos_url": "https://api.github.com/users/cronoik/repos", "events_url": "https://api.github.com/users/cronoik/events{/privacy}", "received_events_url": "https://api.github.com/users/cronoik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,613
1,613
1,613
CONTRIBUTOR
null
The [documentation](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.convert_tokens_to_string) states that convert_tokens_to_string would convert _a sequence of token ids in a single string._ That is actually not correct as it converts a sequence of tokens. The method that converts a sequence of token ids is the decode method. Documentation: @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10303/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10303/timeline
completed
null
null
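To make the distinction raised in the two documentation issues above concrete: `convert_tokens_to_string` operates on tokens, while `decode` operates on token ids.

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

ids = tokenizer.encode("Hello world", add_special_tokens=False)   # e.g. [7592, 2088]
tokens = tokenizer.convert_ids_to_tokens(ids)                     # ['hello', 'world']

tokenizer.convert_tokens_to_string(tokens)  # takes tokens    -> "hello world"
tokenizer.decode(ids)                       # takes token ids -> "hello world"
```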
https://api.github.com/repos/huggingface/transformers/issues/10302
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10302/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10302/comments
https://api.github.com/repos/huggingface/transformers/issues/10302/events
https://github.com/huggingface/transformers/issues/10302
812,650,571
MDU6SXNzdWU4MTI2NTA1NzE=
10,302
TensorFlow not found but I can import it
{ "login": "fedor360139", "id": 45128881, "node_id": "MDQ6VXNlcjQ1MTI4ODgx", "avatar_url": "https://avatars.githubusercontent.com/u/45128881?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fedor360139", "html_url": "https://github.com/fedor360139", "followers_url": "https://api.github.com/users/fedor360139/followers", "following_url": "https://api.github.com/users/fedor360139/following{/other_user}", "gists_url": "https://api.github.com/users/fedor360139/gists{/gist_id}", "starred_url": "https://api.github.com/users/fedor360139/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fedor360139/subscriptions", "organizations_url": "https://api.github.com/users/fedor360139/orgs", "repos_url": "https://api.github.com/users/fedor360139/repos", "events_url": "https://api.github.com/users/fedor360139/events{/privacy}", "received_events_url": "https://api.github.com/users/fedor360139/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "What is the output of:\r\n```\r\nimport tensorflow\r\nprint(tensorflow.__version__)\r\n```\r\n?", "> What is the output of:\r\n> \r\n> ```\r\n> import tensorflow\r\n> print(tensorflow.__version__)\r\n> ```\r\n> \r\n> ?\r\n\r\n'2.4.0-rc0'", "Hello!\r\n\r\nWe currently don't support other implementations than the Google's pypi versions. The reason is because we don't tests on other versions and then we cannot guarantee it will works on those \"extra\" versions.\r\n\r\nTo make `transformers` works on Mac I suggest you to use the official version of TensorFlow as proposed in their documentation https://www.tensorflow.org/install/pip", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,613
1,619
1,619
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4.0.dev0 - Platform: macOS-11.2.1-arm64-arm-64bit - Python version: 3.8.6 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> @jplu, @patrickvonplaten, @LysandreJik ## Description When I import transformers I get the message below. ## To reproduce Steps to reproduce the behavior: 1. Install TF for Mac M1 (https://github.com/apple/tensorflow_macos) 2. Install transformers 3. Import transformers Message when I import transformers: ``` None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used. ``` ## Expected behavior I'd like to make transformers find TensorFlow.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10302/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10302/timeline
completed
null
null
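A possible explanation for the issue above: the availability check in transformers looks up installed distributions by package name rather than attempting `import tensorflow`, and (this is an assumption about the version in question) the Apple fork's distribution name is not among the names it probes. A quick way to see what is installed under which name:

```python
from importlib import metadata

# Check which TensorFlow distributions pip actually registered on this machine.
for name in ("tensorflow", "tensorflow-cpu", "tensorflow-gpu", "tensorflow-macos"):
    try:
        print(name, metadata.version(name))
    except metadata.PackageNotFoundError:
        print(name, "not installed")
```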
https://api.github.com/repos/huggingface/transformers/issues/10301
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10301/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10301/comments
https://api.github.com/repos/huggingface/transformers/issues/10301/events
https://github.com/huggingface/transformers/pull/10301
812,641,342
MDExOlB1bGxSZXF1ZXN0NTc2OTU5MDY2
10,301
[WIP] Add Megatron-11B
{ "login": "anton-l", "id": 26864830, "node_id": "MDQ6VXNlcjI2ODY0ODMw", "avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anton-l", "html_url": "https://github.com/anton-l", "followers_url": "https://api.github.com/users/anton-l/followers", "following_url": "https://api.github.com/users/anton-l/following{/other_user}", "gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}", "starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anton-l/subscriptions", "organizations_url": "https://api.github.com/users/anton-l/orgs", "repos_url": "https://api.github.com/users/anton-l/repos", "events_url": "https://api.github.com/users/anton-l/events{/privacy}", "received_events_url": "https://api.github.com/users/anton-l/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
null
[]
[ "That's very neat, @anton-l! thank you for the port\r\n\r\nYou demonstrated a very good creativity by finding a way to recompose the model shards!\r\n\r\n> This one will probably be fun to test with DeepSpeed, as @stas00 mentioned it's referenced a lot in its docs \r\n\r\nAs you correctly noticed studying Megatron-LM's horizontal model parallel sharding is on my TODO list. \r\n\r\nI suppose since `transformers` currently doesn't provide this feature you didn't port that part of the model, correct? i.e. you unsharded it. I had a brief read through the PR and didn't see anything of a sort - unless I somehow missed it? And without this feature, this is like any other `transformers` model - It's its horizontal model parallel feature that is needed to complete 3D parallelism with Deepspeed. Your PR is an excellent start.\r\n\r\nI think the part that deals with sharding is here in the original:\r\nhttps://github.com/jeffra/DSE/blob/79888e162425e8d64043a9597ee14751bd4b53d1/megatron/data/realm_index.py\r\nThough this is the NVIDIA version. \r\n\r\nSo if the horizontal MP is eventually re-ported (I hope it will be so) the model will need to know when to load the flattened version and when the sharded one. But `transformers` doesn't even have a framework for loading multiple-part models at the moment, so I guess we will cross that bridge when we get to it.\r\n\r\nI'm just just thinking aloud here, considering different options, not making any requests ;)\r\n\r\n-------\r\n\r\nThe fp32 weights are ~41GB https://huggingface.co/anton-l/megatron-11b/tree/main - i.e. it's quite similar to t5-11b, so it should be possible to load it on a 40GB gpu w/ DeepSpeed ZeRO-Offload if there are some 256GB of RAM available.\r\n\r\n-----\r\n\r\nAlso, FYI, Deepspeed are making a new port of Megatron-LM to work with DeepSpeed. https://github.com/jeffra/DSE/tree/master/megatron-lm\r\n", "@stas00 you're correct, I didn't port the model-parallel implementation. Fairseq uses an older Megatron-LM version as a submodule [here](https://github.com/pytorch/fairseq/tree/master/fairseq/model_parallel) for its MP map-reduce fuctions. This makes it quite cumbersome to reproduce, since it requires compiling an older `apex` library among other dependencies with broken versioning. It would also require a patched version of faiseq's state loader, since right now it requires exactly 8 GPUs available to load the sharded checkpoint correctly.\r\n\r\nHowever, on the surface it seems like adding support for model parallelism comes down to porting `VocabParallelEmbedding`, `ColumnParallelLinear` and `RowParallelLinear` layers as implemented [here](https://github.com/ngoyal2707/Megatron-LM/blob/adb23324c222aad0aad89308e70302d996a5eaeb/mpu/layers.py). This seems doable, but I don't have multiple GPUs to test it out :(\r\n\r\nI guess a proper MP implementation should also take care of splitting the checkpointed layers regardless of how many GPUs are available (i.e. 2, 4 or 8). That would remove the requirement to have a full DGX setup if the user is willing to use gradient checkpointing/accumulation instead.\r\n\r\n", "@anhon-l, in order not to make your and reviewers' lives unnecessarily difficult, let's take the discussion of the Horizontal MP to a dedicated issue, since it could take some time to figure and none of is required for you to complete this PR and I trust @patil-suraj and @patrickvonplaten will support you at completing this awesome effort. 
\r\n\r\nSo if you could re-post your last comment here: https://github.com/huggingface/transformers/issues/10321 and I will follow up there. Thank you!", "> ```\r\n> ['Before boarding your rocket to Mars, remember to pack these items: 1. A parachute.',\r\n> 'Before boarding your rocket to Mars, remember to pack these items: 1. A parachute $100 bill2. A copy of your passport3. A copy of your passport444',\r\n> 'Before boarding your rocket to Mars, remember to pack these items: 1. A parachute $1 million dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars']\r\n> ```\r\n> \r\n> To be honest, I'm not too impressed with its text-generation power. 😄 I guess it's either that the model was too large to train it for enough steps, or I missed something during the conversion. The original implementation does not have a text-generation script (or any non-wikitext results, for that matter), so I'm kinda in the dark here.\r\n\r\nThis is amazing work, big kudos! The seemingly low text-generation quality surprises me though, because of the crazy good output you get from https://inferkit.com/ which is also just Megatron11b, according to their docs (https://inferkit.com/docs/generation). Their output seems to be much better than GPT2.", "@anton-l, would you like to complete this PR? For it to be reviewed it needs to be a normal PR and not a draft.\r\n\r\nI marked it as WIP so that the stale bot won't try to close it.\r\n\r\nThank you.", "pinging @anton-l - let's revisit this? Please let us know what you need.\r\n\r\nI know meanwhile someone else did the porting of the original GPT2-345M checkpoint https://huggingface.co/nvidia/megatron-gpt2-345m and I see from the docs they use straight GPT2 transformers model to operate it. \r\nhttps://huggingface.co/nvidia/megatron-gpt2-345m#text-generation\r\n\r\nAll they have is a conversion script:\r\nhttps://github.com/huggingface/transformers/tree/master/src/transformers/models/megatron_gpt2\r\nCan the same be done with the fairseq version - i.e. reuse some of the existing models for that? or is it unique enough to warrant its own?\r\n\r\nPlease bear with me, I'm just starting to figure out Megatron-LM and its variants (there is also a Deepspeed variant), so I'm just slightly above clueless at the moment - should have a better understanding in a few days once I had a chance working with it.", "@stas00 sorry for the late reply!\r\n\r\nIt's great that someone figured out a way to post the original megatron models. When I was looking into that, it wasn't exactly straightforward due to the differences between the attention block implementations in HF GPT2 and Megatron, which was probably patched/parameterized in the meantime.\r\n\r\nI chose to implement a separate model for the fairseq megatron because the model uses the same code as the existing MBART & FSMT, but there's only an encoder model, without the decoder. However, we could take a different route and convert the fairseq weights to fit GPT2, since it's clearly possible now. I'll try that tomorrow, and if it works out, we can discard this PR and just add a simple conversion script :+1: ", "This PR seems very promising and I know the model would be really useful to many.\r\n\r\nAs it was earlier pointed out, the converted model doesn't seem to have the same quality of generation as the model elsewhere. Perhaps the conversion script could have caused it somehow? 
Just curious if there was any success with converting the fairseq weights to fit GPT2. " ]
1,613
1,648
1,648
MEMBER
null
# What does this PR do? Fixes #9560 This PR introduces the Megatron model as described in https://github.com/pytorch/fairseq/blob/master/examples/megatron_11b/README.md This one will probably be fun to test with DeepSpeed, as @stas00 mentioned it's referenced a lot in its docs :smile: It's important to mention that there are actually two independent implementations of [Megatron-LM](https://arxiv.org/pdf/1909.08053.pdf): * The one described in the original paper belongs to NVIDIA (https://github.com/NVIDIA/Megatron-LM), but they released only a 345M checkpoint. It's also based on a rewrite of GPT2 and is not compatible with the current huggingface implementation due to minor changes, like LayerNorm reordering (see https://github.com/NVIDIA/Megatron-LM/issues/37). * [Fairseq](https://github.com/pytorch/fairseq/blob/master/examples/megatron_11b/README.md), on the other hand, uses its own GPT2 version based on their encoder-decoder framework (with the encoder removed) and it does release the colossal 11B pretrained model. After some tinkering I realized that fairseq's checkpoint is already pretty compatible with the existing BART port. So, based on that and the fact that NVIDIA doesn't plan on releasing the 3B and 8B checkpoints, **I chose to port only the fairseq version**. **NOTE:** The original fairseq implementation requires an 8-GPU server to even load the model weights, so I just load the checkpoints manually one by one and merge the model-parallelized tensors into single-model ones. ### How to reproduce the conversion 1. First, find a server with _at least 85GB of RAM_, this model is huge! 2. Next, download and untar the checkpoint: ``` # WARNING: this file is 19GB wget https://dl.fbaipublicfiles.com/fairseq/models/model_parallel/megatron_11b.tar.gz tar -xzvf megatron_11b.tar.gz wget -P ./megatron_11b/ 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json' wget -P ./megatron_11b/ 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe' ``` 3. Run the conversion script ``` python convert_megatron_original_pytorch_checkpoint_to_pytorch.py --fairseq_path /path/to/megatron_11b --pytorch_dump_path /path/to/megatron_hf_dump ``` 4. The conversion script will load the model-parallel shards of the checkpoint, group the sharded parameters and concatenate the weights, so that the [fairseq.ModelParallelTransformerLanguageModel](https://github.com/pytorch/fairseq/blob/3b27ed7996b0315f471c795cf9b7dfcc18467cbe/fairseq/model_parallel/models/transformer_lm.py) `state_dict` can be easily loaded into a CPU-compatible [faiseq.TransformerLanguageModel](https://github.com/pytorch/fairseq/blob/3b27ed7996b0315f471c795cf9b7dfcc18467cbe/fairseq/models/transformer_lm.py). The de-parallelisation is based on ParlAI's [conversion script](https://github.com/facebookresearch/ParlAI/blob/abfb771ac4ed2966d6f3ea22c7a38e4ebc9cc0f0/parlai/agents/bart/convert_fairseq_to_parlai.py#L258-L307). 5. Then the script will initialize the huggingface Megatron model and load the converted `state_dict` into it. ### Here's how Megatron differs from the existing BART/MBART implemenations: 1. The most controversial difference, IMO, is the missing encoder, since it's a decoder-only model. For now, I decided to remove the encoder parts inherited from MBART, bit left the encoder-dependent parts in the decoder (e.g. `encoder_hidden_states`, `encoder_attention_mask`) and the cross-attention to simplify the review process on your end. 2. 
Megatron uses `SinusoidalPositionalEmbedding` instead of learned ones, so I just yanked those from FSMT :smile: 3. Megatron does not have a `layernorm_embedding` 4. Minor detail: the `self_attn_layer_norm` is applied before self-attention (like in MBART) instead of after (like in BART). ### Important questions regarding the API: 1. What should be done about the missing encoder? I think the `decoder` variable can be left as is, since it's compatible with the fairseq checkpoint keys, but the `encoder_*` references in the code bother me a lot. We need to somehow strike a balance between `Copied from` and removing the unused parts. 2. I think the position of `self_attn_layer_norm` should be a parameter in the config, similar to `decoder_normalize_before=True` in faiseq. This will close the not-so-obvious difference between BART and MBART. 3. The existence of `layernorm_embedding` can also be parametrized, similar to `layernorm_embedding=False` in fairseq. ### Quick LM test You can test out the model's capabilities like so (again, you'll probably need _at least 85GB RAM_, there's some weird memory duplication happening somewhere, this should not need more than 50): ``` from transformers import MegatronForCausalLM, MegatronTokenizer, TextGenerationPipeline tokenizer = MegatronTokenizer.from_pretrained("megatron-11b") model = MegatronForCausalLM.from_pretrained("anton-l/megatron-11b") def generate(prompt, max_length=40, num_beams=5, num_return=3): input_ids = tokenizer(prompt, return_tensors="pt").input_ids outputs = model.generate( input_ids=input_ids, num_beams=num_beams, num_return_sequences=num_return, max_length=max_length ) decoded = tokenizer.batch_decode(outputs, skip_special_tokens=True) return decoded print(generate("Before boarding your rocket to Mars, remember to pack these items: ")) ``` ``` ['Before boarding your rocket to Mars, remember to pack these items: 1. A parachute.', 'Before boarding your rocket to Mars, remember to pack these items: 1. A parachute $100 bill2. A copy of your passport3. A copy of your passport444', 'Before boarding your rocket to Mars, remember to pack these items: 1. A parachute $1 million dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars dollars'] ``` To be honest, I'm not too impressed with its text-generation power. :smile: I guess it's either that the model was too large to train it for enough steps, or I missed something during the conversion. The original implementation does not have a text-generation script (or any non-wikitext results, for that matter), so I'm kinda in the dark here. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patrickvonplaten, @patil-suraj
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10301/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10301/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10301", "html_url": "https://github.com/huggingface/transformers/pull/10301", "diff_url": "https://github.com/huggingface/transformers/pull/10301.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10301.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/10300
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10300/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10300/comments
https://api.github.com/repos/huggingface/transformers/issues/10300/events
https://github.com/huggingface/transformers/issues/10300
812,636,157
MDU6SXNzdWU4MTI2MzYxNTc=
10,300
unexpected keyword argument 'forced_bos_token_id' when using mbart-large-50-many-to-many-mmt
{ "login": "IamAdiSri", "id": 9818842, "node_id": "MDQ6VXNlcjk4MTg4NDI=", "avatar_url": "https://avatars.githubusercontent.com/u/9818842?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IamAdiSri", "html_url": "https://github.com/IamAdiSri", "followers_url": "https://api.github.com/users/IamAdiSri/followers", "following_url": "https://api.github.com/users/IamAdiSri/following{/other_user}", "gists_url": "https://api.github.com/users/IamAdiSri/gists{/gist_id}", "starred_url": "https://api.github.com/users/IamAdiSri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IamAdiSri/subscriptions", "organizations_url": "https://api.github.com/users/IamAdiSri/orgs", "repos_url": "https://api.github.com/users/IamAdiSri/repos", "events_url": "https://api.github.com/users/IamAdiSri/events{/privacy}", "received_events_url": "https://api.github.com/users/IamAdiSri/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "hi @IamAdiSri \r\n\r\nWhat is your Transformers version ? mBART-50 currently only works on master.", "@patil-suraj I'm on version 4.3.2, but I tried it with the modules in master branch. I searched through the repository but as far as I can tell, none of the relevant mbart modules take `forced_bos_token_id` as a parameter in their generate function.\r\n\r\nI'm looking the example on [this](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) page btw.", "`forced_bos_token_id` is included on master, https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L549\r\n\r\nyou should install from source to use mBART-50", "Oh okay, thank you." ]
1,613
1,614
1,614
NONE
null
When I try to run the example on the model card, I get this error; ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-9-88d049aaf9c0> in <module> 5 tokenizer.src_lang = "hi_IN" 6 encoded_hi = tokenizer(article_hi, return_tensors="pt") ----> 7 generated_tokens = model.generate( 8 **encoded_hi, 9 forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"] ~/opt/Python-3.8.2/lib/python3.8/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs) 24 def decorate_context(*args, **kwargs): 25 with self.__class__(): ---> 26 return func(*args, **kwargs) 27 return cast(F, decorate_context) 28 ~/opt/Python-3.8.2/lib/python3.8/site-packages/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, **model_kwargs) 831 if self.config.is_encoder_decoder: 832 # add encoder_outputs to model_kwargs --> 833 model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs) 834 835 # set input_ids as decoder_input_ids ~/opt/Python-3.8.2/lib/python3.8/site-packages/transformers/generation_utils.py in _prepare_encoder_decoder_kwargs_for_generation(self, input_ids, model_kwargs) 376 argument: value for argument, value in model_kwargs.items() if not argument.startswith("decoder_") 377 } --> 378 model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs) 379 return model_kwargs 380 ~/opt/Python-3.8.2/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), TypeError: forward() got an unexpected keyword argument 'forced_bos_token_id' ``` Looking at the code in the master repository, I can't see the generate function taking that argument anywhere at all so I'm unsure how to proceed with this. _Originally posted by @IamAdiSri in https://github.com/huggingface/tokenizers/issues/633#issuecomment-781689632_
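As the comments above note, `forced_bos_token_id` only exists on master at the time of this issue, so the model-card example fails on 4.3.2. A minimal sketch of the intended usage once transformers is installed from source, following the model card; the Hindi input string is a placeholder.

```python
# pip install git+https://github.com/huggingface/transformers  (master at the time of this issue)
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")

article_hi = "..."  # placeholder Hindi source sentence
tokenizer.src_lang = "hi_IN"
encoded_hi = tokenizer(article_hi, return_tensors="pt")
generated_tokens = model.generate(
    **encoded_hi,
    forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"],  # force French as the target language
)
print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True))
```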
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10300/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10300/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10299
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10299/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10299/comments
https://api.github.com/repos/huggingface/transformers/issues/10299/events
https://github.com/huggingface/transformers/issues/10299
812,609,813
MDU6SXNzdWU4MTI2MDk4MTM=
10,299
Object of type 'int64' is not JSON serializable in Trainer.save_checkpoint
{ "login": "arthurbra", "id": 27431781, "node_id": "MDQ6VXNlcjI3NDMxNzgx", "avatar_url": "https://avatars.githubusercontent.com/u/27431781?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arthurbra", "html_url": "https://github.com/arthurbra", "followers_url": "https://api.github.com/users/arthurbra/followers", "following_url": "https://api.github.com/users/arthurbra/following{/other_user}", "gists_url": "https://api.github.com/users/arthurbra/gists{/gist_id}", "starred_url": "https://api.github.com/users/arthurbra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arthurbra/subscriptions", "organizations_url": "https://api.github.com/users/arthurbra/orgs", "repos_url": "https://api.github.com/users/arthurbra/repos", "events_url": "https://api.github.com/users/arthurbra/events{/privacy}", "received_events_url": "https://api.github.com/users/arthurbra/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false } ]
[ "I too ran into this problem and its caused by turning on evaluation strategy which then adds metrics in the log_history of the models state, which is using numpy data types and causes the JSON encoder issue. That was the case with 4.3.3. There appear to be a bunch of changes in the trainer in the works, whether this has been fixed as a result of those i've not checked.", "As a temporary work around you can modify trainer.py at line 1260 \"output = {**logs, **{\"step\": self.state.global_step}}\" and add the following three lines after. If the metrics are being calculated the same in the latest code as in 4.3.3 then something like this may also be needed going forward, or things calling the log method will need to ensure they safely cast data points beforehand if its going to be added to the trainer state still.\r\n\r\n```\r\n for k,v in output.items():\r\n if isinstance(v, np.generic):\r\n output[k]=v.item()\r\n```", "I confirm I can reproduce in master. Will investigate more tomorrow.", "My only comment on the fix submitted is that it targets the metrics output, but will not stop others putting things into the log history in the model state which later on cause the same problem if serializing the state to json. " ]
1,613
1,615
1,615
NONE
null
I am using the recent run_ner.py example script to train an NER model. I want to evaluate the performance of the model during training and use the following command for training: ``` python3 run_ner.py --model_name_or_path bert-base-uncased --dataset_name conll2003 --return_entity_level_metrics --output_dir conll-tmp --overwrite_output_dir --do_train --do_eval --do_predict --evaluation_strategy steps --logging_steps 10 --eval_steps 10 --load_best_model_at_end ``` I run the command in the current docker image huggingface/transformers-pytorch-gpu However, I get the following error: ``` Traceback (most recent call last): File "run_ner.py", line 470, in main() File "run_ner.py", line 404, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 983, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch) File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1062, in _maybe_log_save_evaluate self._save_checkpoint(model, trial, metrics=metrics) File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1126, in _save_checkpoint self.state.save_to_json(os.path.join(output_dir, "trainer_state.json")) File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_callback.py", line 95, in save_to_json json_string = json.dumps(dataclasses.asdict(self), indent=2, sort_keys=True) + "\n" File "/usr/lib/python3.6/json/__init__.py", line 238, in dumps **kw).encode(obj) File "/usr/lib/python3.6/json/encoder.py", line 201, in encode chunks = list(chunks) File "/usr/lib/python3.6/json/encoder.py", line 430, in _iterencode yield from _iterencode_dict(o, _current_indent_level) File "/usr/lib/python3.6/json/encoder.py", line 404, in _iterencode_dict yield from chunks File "/usr/lib/python3.6/json/encoder.py", line 325, in _iterencode_list yield from chunks File "/usr/lib/python3.6/json/encoder.py", line 404, in _iterencode_dict yield from chunks File "/usr/lib/python3.6/json/encoder.py", line 437, in _iterencode o = _default(o) File "/usr/lib/python3.6/json/encoder.py", line 180, in default o.__class__.__name__) TypeError: Object of type 'int64' is not JSON serializable -- ```
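A self-contained sketch of the workaround discussed in the comments above: cast numpy scalar types to Python builtins before (or during) JSON serialization. The metric names and values below are only examples.

```python
import json
import numpy as np

def numpy_safe(obj):
    """json.dumps fallback that converts numpy scalar types to Python builtins."""
    if isinstance(obj, np.generic):
        return obj.item()
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

log_entry = {"eval_loss": np.float64(0.1234), "step": np.int64(10)}  # example values
print(json.dumps(log_entry, default=numpy_safe))        # serializes fine
# json.dumps(log_entry) without default= raises:
#   TypeError: Object of type int64 is not JSON serializable
```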
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10299/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10299/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10298
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10298/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10298/comments
https://api.github.com/repos/huggingface/transformers/issues/10298/events
https://github.com/huggingface/transformers/issues/10298
812,564,384
MDU6SXNzdWU4MTI1NjQzODQ=
10,298
Converting fairseq NMT to transformers misses model weight
{ "login": "tagucci", "id": 12934276, "node_id": "MDQ6VXNlcjEyOTM0Mjc2", "avatar_url": "https://avatars.githubusercontent.com/u/12934276?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tagucci", "html_url": "https://github.com/tagucci", "followers_url": "https://api.github.com/users/tagucci/followers", "following_url": "https://api.github.com/users/tagucci/following{/other_user}", "gists_url": "https://api.github.com/users/tagucci/gists{/gist_id}", "starred_url": "https://api.github.com/users/tagucci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tagucci/subscriptions", "organizations_url": "https://api.github.com/users/tagucci/orgs", "repos_url": "https://api.github.com/users/tagucci/repos", "events_url": "https://api.github.com/users/tagucci/events{/privacy}", "received_events_url": "https://api.github.com/users/tagucci/received_events", "type": "User", "site_admin": false }
[ { "id": 2357479466, "node_id": "MDU6TGFiZWwyMzU3NDc5NDY2", "url": "https://api.github.com/repos/huggingface/transformers/labels/fsmt", "name": "fsmt", "color": "d0e884", "default": false, "description": "" } ]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "Pinging @stas00 here", "Thank you for the ping, @NielsRogge \r\n\r\n@tagucci, when you file an issue you will find a list of who to tag for what topic, so please use it to tag the right people. Otherwise it's hard for everybody to try to follow all issues.\r\n\r\nalso when you link to a line of code in github, always hit `y` first to get the exact sha (it rewrites the url to embed the current git sha). Otherwise your links quickly become invalid, e.g. I have no idea where you were trying to link to in your link to [transformer_wmt_en_de](https://github.com/pytorch/fairseq/blob/master/fairseq/models/transformer.py#L1046) as the code was modified today.\r\n\r\n--------------------------------\r\n\r\nOK, could you first clarify where do you get \"decoder.embed_out weight is missing\" - the command line and the backtrace please. Also a dump of the model (i.e. `print(model)`.\r\n\r\nNow to the guess work.\r\n\r\nDoes your model miss `output_projection` weight key?\r\n\r\nThe context is here:\r\nhttps://github.com/pytorch/fairseq/blob/ab560669cd9baaa4009e1fd01c970f8ffccd1ee0/fairseq/models/transformer.py#L950-L960\r\n\r\nfairseq has different versions of their code, and some have keys renamed or added, that's why they have all that logic.\r\n\r\nYou can see that it's a simple alias - i.e. in fsmt decoder embed and output are always shared.\r\n\r\nhttps://github.com/huggingface/transformers/blob/461e8cacf94d1f76367cc9ba2cfd5b9bd3641c81/src/transformers/models/fsmt/modeling_fsmt.py#L651\r\n\r\nSo if it's missing you can assign it in the conversion script:\r\n```\r\n model_state_dict[\"model.decoder.output_projection.weight\"] = model_state_dict[\"model.decoder.embed_tokens.weight\"]\r\n```\r\nadd this to this line:\r\nhttps://github.com/huggingface/transformers/blob/461e8cacf94d1f76367cc9ba2cfd5b9bd3641c81/src/transformers/models/fsmt/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py#L247\r\n\r\nbut again I could have guessed wrong and will need to see the model dump to tell you more.\r\n\r\nYou can see the dump of original model I converted from here: https://github.com/stas00/porting/blob/master/transformers/fairseq-wmt19/nbs/config.ipynb\r\n", "@NielsRogge\r\nThanks for pinging @stas00!\r\n@stas00\r\nSorry for the inconvenience of linking the code.\r\nFollowing your advice, my model args and model dump are as below.\r\n\r\n> in fsmt decoder embed and output are always shared.\r\n\r\nAs you said, fsmt does not have decoder embed and output seperately, my fairseq `transformer_wmt_en_de` without `share_decoder_input_output_embed` cannot fit fsmt in transformers. In this case, do I need to retrain fairseq model with `share_decoder_input_output_embed` or modify [FSMTDecoer](https://github.com/huggingface/transformers/blob/461e8cacf94d1f76367cc9ba2cfd5b9bd3641c81/src/transformers/models/fsmt/modeling_fsmt.py#L622)? 
\r\n\r\n\r\n```python\r\nimport torch\r\nfrom pprint import pprint\r\nchkpt = torch.load(\"model/checkpoint_best.pt\")\r\nmodel = chkpt[\"model\"]\r\npprint(vars(chkpt[\"args\"]))\r\nprint(\"\\n\".join(model.keys()))\r\n```\r\n\r\n```\r\n# args\r\n{'activation_dropout': 0.0,\r\n 'activation_fn': 'relu',\r\n 'adam_betas': '(0.9, 0.98)',\r\n 'adam_eps': 1e-08,\r\n 'adaptive_input': False,\r\n 'adaptive_softmax_cutoff': None,\r\n 'adaptive_softmax_dropout': 0,\r\n 'arch': 'transformer_wmt_en_de',\r\n 'attention_dropout': 0.0,\r\n 'best_checkpoint_metric': 'loss',\r\n 'bpe': None,\r\n 'bucket_cap_mb': 25,\r\n 'clip_norm': 0.0,\r\n 'cpu': False,\r\n 'criterion': 'label_smoothed_cross_entropy',\r\n 'cross_self_attention': False,\r\n 'curriculum': 0,\r\n 'data': './data/data.src_trg',\r\n 'dataset_impl': None,\r\n 'ddp_backend': 'no_c10d',\r\n 'decoder_attention_heads': 8,\r\n 'decoder_embed_dim': 512,\r\n 'decoder_embed_path': None,\r\n 'decoder_ffn_embed_dim': 2048,\r\n 'decoder_input_dim': 512,\r\n 'decoder_layerdrop': 0,\r\n 'decoder_layers': 6,\r\n 'decoder_layers_to_keep': None,\r\n 'decoder_learned_pos': False,\r\n 'decoder_normalize_before': False,\r\n 'decoder_output_dim': 512,\r\n 'device_id': 0,\r\n 'disable_validation': False,\r\n 'distributed_backend': 'nccl',\r\n 'distributed_init_method': 'tcp://localhost:16441',\r\n 'distributed_no_spawn': False,\r\n 'distributed_port': -1,\r\n 'distributed_rank': 0,\r\n 'distributed_world_size': 4,\r\n 'dropout': 0.1,\r\n 'empty_cache_freq': 0,\r\n 'encoder_attention_heads': 8,\r\n 'encoder_embed_dim': 512,\r\n 'encoder_embed_path': None,\r\n 'encoder_ffn_embed_dim': 2048,\r\n 'encoder_layerdrop': 0,\r\n 'encoder_layers': 6,\r\n 'encoder_layers_to_keep': None,\r\n 'encoder_learned_pos': False,\r\n 'encoder_normalize_before': False,\r\n 'fast_stat_sync': False,\r\n 'find_unused_parameters': False,\r\n 'fix_batches_to_gpus': False,\r\n 'fixed_validation_seed': None,\r\n 'fp16': False,\r\n 'fp16_init_scale': 128,\r\n 'fp16_scale_tolerance': 0.0,\r\n 'fp16_scale_window': None,\r\n 'keep_interval_updates': 20,\r\n 'keep_last_epochs': -1,\r\n 'label_smoothing': 0.1,\r\n 'layer_wise_attention': False,\r\n 'layernorm_embedding': False,\r\n 'lazy_load': False,\r\n 'left_pad_source': True,\r\n 'left_pad_target': False,\r\n 'load_alignments': False,\r\n 'log_format': 'json',\r\n 'log_interval': 50,\r\n 'lr': [0.0007],\r\n 'lr_scheduler': 'inverse_sqrt',\r\n 'max_epoch': 100,\r\n 'max_sentences': None,\r\n 'max_sentences_valid': None,\r\n 'max_source_positions': 1024,\r\n 'max_target_positions': 1024,\r\n 'max_tokens': 4096,\r\n 'max_tokens_valid': 4096,\r\n 'max_update': 0,\r\n 'maximize_best_checkpoint_metric': False,\r\n 'memory_efficient_fp16': False,\r\n 'min_loss_scale': 0.0001,\r\n 'min_lr': 1e-09,\r\n 'no_cross_attention': False,\r\n 'no_epoch_checkpoints': True,\r\n 'no_last_checkpoints': False,\r\n 'no_progress_bar': True,\r\n 'no_save': False,\r\n 'no_save_optimizer_state': False,\r\n 'no_scale_embedding': False,\r\n 'no_token_positional_embeddings': False,\r\n 'num_workers': 1,\r\n 'optimizer': 'adam',\r\n 'optimizer_overrides': '{}',\r\n 'raw_text': False,\r\n 'required_batch_size_multiple': 8,\r\n 'reset_dataloader': False,\r\n 'reset_lr_scheduler': False,\r\n 'reset_meters': False,\r\n 'reset_optimizer': False,\r\n 'restore_file': 'checkpoint_last.pt',\r\n 'save_dir': './data/models',\r\n 'save_interval': 1,\r\n 'save_interval_updates': 1000,\r\n 'seed': 1,\r\n 'sentence_avg': False,\r\n 'share_all_embeddings': False,\r\n 
'share_decoder_input_output_embed': False,\r\n 'skip_invalid_size_inputs_valid_test': True,\r\n 'source_lang': 'src',\r\n 'target_lang': 'trg',\r\n 'task': 'translation',\r\n 'tensorboard_logdir': '',\r\n 'threshold_loss_scale': None,\r\n 'tokenizer': None,\r\n 'train_subset': 'train',\r\n 'truncate_source': False,\r\n 'update_freq': [16],\r\n 'upsample_primary': 1,\r\n 'use_bmuf': False,\r\n 'user_dir': None,\r\n 'valid_subset': 'valid',\r\n 'validate_interval': 1,\r\n 'warmup_init_lr': 1e-07,\r\n 'warmup_updates': 4000,\r\n 'weight_decay': 0.0}\r\n```\r\n\r\n```\r\n# model dump\r\nencoder.version\r\nencoder.embed_tokens.weight\r\nencoder.embed_positions._float_tensor\r\nencoder.layers.0.self_attn.k_proj.weight\r\nencoder.layers.0.self_attn.k_proj.bias\r\nencoder.layers.0.self_attn.v_proj.weight\r\nencoder.layers.0.self_attn.v_proj.bias\r\nencoder.layers.0.self_attn.q_proj.weight\r\nencoder.layers.0.self_attn.q_proj.bias\r\nencoder.layers.0.self_attn.out_proj.weight\r\nencoder.layers.0.self_attn.out_proj.bias\r\nencoder.layers.0.self_attn_layer_norm.weight\r\nencoder.layers.0.self_attn_layer_norm.bias\r\nencoder.layers.0.fc1.weight\r\nencoder.layers.0.fc1.bias\r\nencoder.layers.0.fc2.weight\r\nencoder.layers.0.fc2.bias\r\nencoder.layers.0.final_layer_norm.weight\r\nencoder.layers.0.final_layer_norm.bias\r\nencoder.layers.1.self_attn.k_proj.weight\r\nencoder.layers.1.self_attn.k_proj.bias\r\nencoder.layers.1.self_attn.v_proj.weight\r\nencoder.layers.1.self_attn.v_proj.bias\r\nencoder.layers.1.self_attn.q_proj.weight\r\nencoder.layers.1.self_attn.q_proj.bias\r\nencoder.layers.1.self_attn.out_proj.weight\r\nencoder.layers.1.self_attn.out_proj.bias\r\nencoder.layers.1.self_attn_layer_norm.weight\r\nencoder.layers.1.self_attn_layer_norm.bias\r\nencoder.layers.1.fc1.weight\r\nencoder.layers.1.fc1.bias\r\nencoder.layers.1.fc2.weight\r\nencoder.layers.1.fc2.bias\r\nencoder.layers.1.final_layer_norm.weight\r\nencoder.layers.1.final_layer_norm.bias\r\nencoder.layers.2.self_attn.k_proj.weight\r\nencoder.layers.2.self_attn.k_proj.bias\r\nencoder.layers.2.self_attn.v_proj.weight\r\nencoder.layers.2.self_attn.v_proj.bias\r\nencoder.layers.2.self_attn.q_proj.weight\r\nencoder.layers.2.self_attn.q_proj.bias\r\nencoder.layers.2.self_attn.out_proj.weight\r\nencoder.layers.2.self_attn.out_proj.bias\r\nencoder.layers.2.self_attn_layer_norm.weight\r\nencoder.layers.2.self_attn_layer_norm.bias\r\nencoder.layers.2.fc1.weight\r\nencoder.layers.2.fc1.bias\r\nencoder.layers.2.fc2.weight\r\nencoder.layers.2.fc2.bias\r\nencoder.layers.2.final_layer_norm.weight\r\nencoder.layers.2.final_layer_norm.bias\r\nencoder.layers.3.self_attn.k_proj.weight\r\nencoder.layers.3.self_attn.k_proj.bias\r\nencoder.layers.3.self_attn.v_proj.weight\r\nencoder.layers.3.self_attn.v_proj.bias\r\nencoder.layers.3.self_attn.q_proj.weight\r\nencoder.layers.3.self_attn.q_proj.bias\r\nencoder.layers.3.self_attn.out_proj.weight\r\nencoder.layers.3.self_attn.out_proj.bias\r\nencoder.layers.3.self_attn_layer_norm.weight\r\nencoder.layers.3.self_attn_layer_norm.bias\r\nencoder.layers.3.fc1.weight\r\nencoder.layers.3.fc1.bias\r\nencoder.layers.3.fc2.weight\r\nencoder.layers.3.fc2.bias\r\nencoder.layers.3.final_layer_norm.weight\r\nencoder.layers.3.final_layer_norm.bias\r\nencoder.layers.4.self_attn.k_proj.weight\r\nencoder.layers.4.self_attn.k_proj.bias\r\nencoder.layers.4.self_attn.v_proj.weight\r\nencoder.layers.4.self_attn.v_proj.bias\r\nencoder.layers.4.self_attn.q_proj.weight\r\nencoder.layers.4.self_attn.q_proj.bias\r\nencoder.layers.4.self_at
tn.out_proj.weight\r\nencoder.layers.4.self_attn.out_proj.bias\r\nencoder.layers.4.self_attn_layer_norm.weight\r\nencoder.layers.4.self_attn_layer_norm.bias\r\nencoder.layers.4.fc1.weight\r\nencoder.layers.4.fc1.bias\r\nencoder.layers.4.fc2.weight\r\nencoder.layers.4.fc2.bias\r\nencoder.layers.4.final_layer_norm.weight\r\nencoder.layers.4.final_layer_norm.bias\r\nencoder.layers.5.self_attn.k_proj.weight\r\nencoder.layers.5.self_attn.k_proj.bias\r\nencoder.layers.5.self_attn.v_proj.weight\r\nencoder.layers.5.self_attn.v_proj.bias\r\nencoder.layers.5.self_attn.q_proj.weight\r\nencoder.layers.5.self_attn.q_proj.bias\r\nencoder.layers.5.self_attn.out_proj.weight\r\nencoder.layers.5.self_attn.out_proj.bias\r\nencoder.layers.5.self_attn_layer_norm.weight\r\nencoder.layers.5.self_attn_layer_norm.bias\r\nencoder.layers.5.fc1.weight\r\nencoder.layers.5.fc1.bias\r\nencoder.layers.5.fc2.weight\r\nencoder.layers.5.fc2.bias\r\nencoder.layers.5.final_layer_norm.weight\r\nencoder.layers.5.final_layer_norm.bias\r\ndecoder.embed_out\r\ndecoder.version\r\ndecoder.embed_tokens.weight\r\ndecoder.embed_positions._float_tensor\r\ndecoder.layers.0.self_attn.k_proj.weight\r\ndecoder.layers.0.self_attn.k_proj.bias\r\ndecoder.layers.0.self_attn.v_proj.weight\r\ndecoder.layers.0.self_attn.v_proj.bias\r\ndecoder.layers.0.self_attn.q_proj.weight\r\ndecoder.layers.0.self_attn.q_proj.bias\r\ndecoder.layers.0.self_attn.out_proj.weight\r\ndecoder.layers.0.self_attn.out_proj.bias\r\ndecoder.layers.0.self_attn_layer_norm.weight\r\ndecoder.layers.0.self_attn_layer_norm.bias\r\ndecoder.layers.0.encoder_attn.k_proj.weight\r\ndecoder.layers.0.encoder_attn.k_proj.bias\r\ndecoder.layers.0.encoder_attn.v_proj.weight\r\ndecoder.layers.0.encoder_attn.v_proj.bias\r\ndecoder.layers.0.encoder_attn.q_proj.weight\r\ndecoder.layers.0.encoder_attn.q_proj.bias\r\ndecoder.layers.0.encoder_attn.out_proj.weight\r\ndecoder.layers.0.encoder_attn.out_proj.bias\r\ndecoder.layers.0.encoder_attn_layer_norm.weight\r\ndecoder.layers.0.encoder_attn_layer_norm.bias\r\ndecoder.layers.0.fc1.weight\r\ndecoder.layers.0.fc1.bias\r\ndecoder.layers.0.fc2.weight\r\ndecoder.layers.0.fc2.bias\r\ndecoder.layers.0.final_layer_norm.weight\r\ndecoder.layers.0.final_layer_norm.bias\r\ndecoder.layers.1.self_attn.k_proj.weight\r\ndecoder.layers.1.self_attn.k_proj.bias\r\ndecoder.layers.1.self_attn.v_proj.weight\r\ndecoder.layers.1.self_attn.v_proj.bias\r\ndecoder.layers.1.self_attn.q_proj.weight\r\ndecoder.layers.1.self_attn.q_proj.bias\r\ndecoder.layers.1.self_attn.out_proj.weight\r\ndecoder.layers.1.self_attn.out_proj.bias\r\ndecoder.layers.1.self_attn_layer_norm.weight\r\ndecoder.layers.1.self_attn_layer_norm.bias\r\ndecoder.layers.1.encoder_attn.k_proj.weight\r\ndecoder.layers.1.encoder_attn.k_proj.bias\r\ndecoder.layers.1.encoder_attn.v_proj.weight\r\ndecoder.layers.1.encoder_attn.v_proj.bias\r\ndecoder.layers.1.encoder_attn.q_proj.weight\r\ndecoder.layers.1.encoder_attn.q_proj.bias\r\ndecoder.layers.1.encoder_attn.out_proj.weight\r\ndecoder.layers.1.encoder_attn.out_proj.bias\r\ndecoder.layers.1.encoder_attn_layer_norm.weight\r\ndecoder.layers.1.encoder_attn_layer_norm.bias\r\ndecoder.layers.1.fc1.weight\r\ndecoder.layers.1.fc1.bias\r\ndecoder.layers.1.fc2.weight\r\ndecoder.layers.1.fc2.bias\r\ndecoder.layers.1.final_layer_norm.weight\r\ndecoder.layers.1.final_layer_norm.bias\r\ndecoder.layers.2.self_attn.k_proj.weight\r\ndecoder.layers.2.self_attn.k_proj.bias\r\ndecoder.layers.2.self_attn.v_proj.weight\r\ndecoder.layers.2.self_attn.v_proj.bias\r\ndecoder.layers.2.
self_attn.q_proj.weight\r\ndecoder.layers.2.self_attn.q_proj.bias\r\ndecoder.layers.2.self_attn.out_proj.weight\r\ndecoder.layers.2.self_attn.out_proj.bias\r\ndecoder.layers.2.self_attn_layer_norm.weight\r\ndecoder.layers.2.self_attn_layer_norm.bias\r\ndecoder.layers.2.encoder_attn.k_proj.weight\r\ndecoder.layers.2.encoder_attn.k_proj.bias\r\ndecoder.layers.2.encoder_attn.v_proj.weight\r\ndecoder.layers.2.encoder_attn.v_proj.bias\r\ndecoder.layers.2.encoder_attn.q_proj.weight\r\ndecoder.layers.2.encoder_attn.q_proj.bias\r\ndecoder.layers.2.encoder_attn.out_proj.weight\r\ndecoder.layers.2.encoder_attn.out_proj.bias\r\ndecoder.layers.2.encoder_attn_layer_norm.weight\r\ndecoder.layers.2.encoder_attn_layer_norm.bias\r\ndecoder.layers.2.fc1.weight\r\ndecoder.layers.2.fc1.bias\r\ndecoder.layers.2.fc2.weight\r\ndecoder.layers.2.fc2.bias\r\ndecoder.layers.2.final_layer_norm.weight\r\ndecoder.layers.2.final_layer_norm.bias\r\ndecoder.layers.3.self_attn.k_proj.weight\r\ndecoder.layers.3.self_attn.k_proj.bias\r\ndecoder.layers.3.self_attn.v_proj.weight\r\ndecoder.layers.3.self_attn.v_proj.bias\r\ndecoder.layers.3.self_attn.q_proj.weight\r\ndecoder.layers.3.self_attn.q_proj.bias\r\ndecoder.layers.3.self_attn.out_proj.weight\r\ndecoder.layers.3.self_attn.out_proj.bias\r\ndecoder.layers.3.self_attn_layer_norm.weight\r\ndecoder.layers.3.self_attn_layer_norm.bias\r\ndecoder.layers.3.encoder_attn.k_proj.weight\r\ndecoder.layers.3.encoder_attn.k_proj.bias\r\ndecoder.layers.3.encoder_attn.v_proj.weight\r\ndecoder.layers.3.encoder_attn.v_proj.bias\r\ndecoder.layers.3.encoder_attn.q_proj.weight\r\ndecoder.layers.3.encoder_attn.q_proj.bias\r\ndecoder.layers.3.encoder_attn.out_proj.weight\r\ndecoder.layers.3.encoder_attn.out_proj.bias\r\ndecoder.layers.3.encoder_attn_layer_norm.weight\r\ndecoder.layers.3.encoder_attn_layer_norm.bias\r\ndecoder.layers.3.fc1.weight\r\ndecoder.layers.3.fc1.bias\r\ndecoder.layers.3.fc2.weight\r\ndecoder.layers.3.fc2.bias\r\ndecoder.layers.3.final_layer_norm.weight\r\ndecoder.layers.3.final_layer_norm.bias\r\ndecoder.layers.4.self_attn.k_proj.weight\r\ndecoder.layers.4.self_attn.k_proj.bias\r\ndecoder.layers.4.self_attn.v_proj.weight\r\ndecoder.layers.4.self_attn.v_proj.bias\r\ndecoder.layers.4.self_attn.q_proj.weight\r\ndecoder.layers.4.self_attn.q_proj.bias\r\ndecoder.layers.4.self_attn.out_proj.weight\r\ndecoder.layers.4.self_attn.out_proj.bias\r\ndecoder.layers.4.self_attn_layer_norm.weight\r\ndecoder.layers.4.self_attn_layer_norm.bias\r\ndecoder.layers.4.encoder_attn.k_proj.weight\r\ndecoder.layers.4.encoder_attn.k_proj.bias\r\ndecoder.layers.4.encoder_attn.v_proj.weight\r\ndecoder.layers.4.encoder_attn.v_proj.bias\r\ndecoder.layers.4.encoder_attn.q_proj.weight\r\ndecoder.layers.4.encoder_attn.q_proj.bias\r\ndecoder.layers.4.encoder_attn.out_proj.weight\r\ndecoder.layers.4.encoder_attn.out_proj.bias\r\ndecoder.layers.4.encoder_attn_layer_norm.weight\r\ndecoder.layers.4.encoder_attn_layer_norm.bias\r\ndecoder.layers.4.fc1.weight\r\ndecoder.layers.4.fc1.bias\r\ndecoder.layers.4.fc2.weight\r\ndecoder.layers.4.fc2.bias\r\ndecoder.layers.4.final_layer_norm.weight\r\ndecoder.layers.4.final_layer_norm.bias\r\ndecoder.layers.5.self_attn.k_proj.weight\r\ndecoder.layers.5.self_attn.k_proj.bias\r\ndecoder.layers.5.self_attn.v_proj.weight\r\ndecoder.layers.5.self_attn.v_proj.bias\r\ndecoder.layers.5.self_attn.q_proj.weight\r\ndecoder.layers.5.self_attn.q_proj.bias\r\ndecoder.layers.5.self_attn.out_proj.weight\r\ndecoder.layers.5.self_attn.out_proj.bias\r\ndecoder.layers.5.self_attn_layer_nor
m.weight\r\ndecoder.layers.5.self_attn_layer_norm.bias\r\ndecoder.layers.5.encoder_attn.k_proj.weight\r\ndecoder.layers.5.encoder_attn.k_proj.bias\r\ndecoder.layers.5.encoder_attn.v_proj.weight\r\ndecoder.layers.5.encoder_attn.v_proj.bias\r\ndecoder.layers.5.encoder_attn.q_proj.weight\r\ndecoder.layers.5.encoder_attn.q_proj.bias\r\ndecoder.layers.5.encoder_attn.out_proj.weight\r\ndecoder.layers.5.encoder_attn.out_proj.bias\r\ndecoder.layers.5.encoder_attn_layer_norm.weight\r\ndecoder.layers.5.encoder_attn_layer_norm.bias\r\ndecoder.layers.5.fc1.weight\r\ndecoder.layers.5.fc1.bias\r\ndecoder.layers.5.fc2.weight\r\ndecoder.layers.5.fc2.bias\r\ndecoder.layers.5.final_layer_norm.weight\r\ndecoder.layers.5.final_layer_norm.bias\r\n```", "Thank you for the model dump, so my guess was correct - it's missing `output_projection` and I gave you the solution at the end of my previous comment.\r\n\r\nI still don't know what the error you get, when and the backtrace, but perhaps my guessed solution is all you need.\r\n\r\nBut no, you don't need to re-train.\r\n\r\nif it works could you adapt the script to check if the checkpoint that is being loaded doesn't have this key and if so to copy it as I suggested?", "@stas00 \r\nRunning [convert_fsmt_original_pytorch_checkpoint_to_pytorch.py]( https://github.com/huggingface/transformers/blob/461e8cacf94d1f76367cc9ba2cfd5b9bd3641c81/src/transformers/models/fsmt/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py) is successful, but there is something wrong.\r\nIn comparing fairseq model provided by `torch.hub` and converted HF model, the translation result is matched.\r\n\r\n```python\r\nfrom transformers import FSMTForConditionalGeneration, FSMTTokenizer, TranslationPipeline\r\nimport torch\r\n\r\ninput_text = \"Machine learning is great!\"\r\n# fairseq\r\nen2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de', checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt',\r\n tokenizer='moses', bpe='fastbpe')\r\nfairseq_res = en2de.translate(input_text)\r\n# tranformers\r\nfsmt_path = \"./fairseq2hf/data/wmt19-en-de/\"\r\ntokenizer = FSMTTokenizer.from_pretrained(fsmt_path)\r\nmodel = FSMTForConditionalGeneration.from_pretrained(fsmt_path)\r\nnlp = TranslationPipeline(model=model, tokenizer=tokenizer)\r\nfsmt_res = nlp(input_text)[0][\"translation_text\"]\r\n\r\nprint(\"fairseq: {}\".format(fairseq_res))\r\nprint(\"transformer: {}\".format(fsmt_res))\r\nprint(\"match: {}\".format(fairseq_res == fsmt_res))\r\n\"\"\"\r\nfairseq: Maschinelles Lernen ist großartig!\r\ntransformer: Maschinelles Lernen ist großartig!\r\nmatch: True\r\n\"\"\"\r\n```\r\n\r\nHowever, my fairseq model and converted HF model have wrong result with same parameter (beam_size=5). 
Do you have any idea to debug why tranlation results are different?\r\n\r\n### fairseq result\r\n```\r\n# encoded token by hypo_token by fairseq-interactive\r\ntensor([[5269, 2069, 5, 1154, 9, 4, 1823, 3382, 5, 3128, 116, 167,\r\n 1582, 7, 2192, 914, 63, 6, 1823, 2807, 124, 1219, 1106, 8,\r\n 53, 2175, 2007, 483, 4, 660, 708, 5229, 33, 44, 4, 6049,\r\n 1430, 5, 1806, 2050, 2282, 1908, 4, 334, 3229, 4808, 6102, 5,\r\n 5031, 11, 5, 291, 4214, 6485, 10, 5784, 1908, 23, 1765, 4916,\r\n 6, 2]])\r\n\r\n# hypo_token by fairseq-interactive\r\ntensor([ 924, 4938, 6, 3056, 59, 503, 1497, 4, 5835, 847, 6, 592,\r\n 2], dtype=torch.int32)\r\n```\r\n\r\n### transformers result\r\n```python\r\nencoded_token = torch.tensor([[5269, 2069, 5, 1154, 9, 4, 1823, 3382, 5, 3128, 116, 167, 1582, 7, 2192, 914, 63, 6, 1823, 2807, 124, 1219, 1106, 8, 53, 2175, 2007, 483, 4, 660, 708, 5229, 33, 44, 4, 6049, 1430, 5, 1806, 2050, 2282, 1908, 4, 334, 3229, 4808, 6102, 5, 5031, 11, 5, 291, 4214, 6485, 10, 5784, 1908, 23, 1765, 4916, 6, 2]])\r\n\r\nfsmt = FSMTForConditionalGeneration.from_pretrained(\"./fairseq2HF/\")\r\nhypo = fsmt.generate(encoded_token, num_beams=5)\r\nprint(hypo)\r\n# tensor([[ 2, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 2]])\r\n```\r\n", "I'm a bit lost - we were discussing a missing state dict key, now we are discussing invalid translation.\r\n\r\nDid my suggestion help to resolve the problem of the missing key and now you're presenting the next issue?\r\n\r\nWrt to your transformers result with your model, do you get any better behavior if you encode the tokens via transformers and then feed it to generate? perhaps the dict has somehow changed? though a repeated 21 is suspiciously bad.\r\n\r\n", "@stas00\r\n> Did my suggestion help to resolve the problem of the missing key and now you're presenting the next issue?\r\n\r\nYes, thanks for the helpful comments. \r\nSorry, I should post it as another issue.\r\n\r\n\r\n> do you get any better behavior if you encode the tokens via transformers and then feed it to generate?\r\n\r\nI do not use transformers tokenizer because my fairseq model has a different vocab size, and it's impossible to encode/decode by a single tokenizer model. Converting token to id is used by fairseq's `Dictionary`.\r\nI'll post another issue if necessary after scrutinizing my code.\r\n\r\n\r\nThanks for the big help!", "Thank you for clarifying that your original issue has been resolved. Please feel free to close this issue when you feel it's working for you.\r\n\r\nBased on your comments, I'm concerned about 2 things:\r\n1. your different dictionaries - a model has to come with the exact dict it was trained on, after conversion too. So it sounds that something isn't right there. If you're not sure what's happening perhaps try to clarify how it came to be that your fairseq model has a different vocab size.\r\n2. perhaps that `output_projection` layer is getting in the way of your model if it was trained without it. You could try to hunt down the few lines where it's used in the code and and bypass it and test whether your translation works then. If you're comfortable editing the source code that is." ]
1,613
1,614
1,614
CONTRIBUTOR
null
Hi there, a question about fairseq NMT model ([FSMT](https://huggingface.co/transformers/model_doc/fsmt.html)) conversion. I tried to convert my own fairseq NMT model ([`transformer_wmt_en_de`](https://github.com/pytorch/fairseq/blob/master/fairseq/models/transformer.py#L1046)) based on [this conversion script](https://github.com/huggingface/transformers/blob/master/src/transformers/models/fsmt/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py). However, the `decoder.embed_out` weight is missing after converting the fairseq model to a transformers FSMT model. This parameter exists when not specifying `--share-all-embeddings` or `--share-decoder-input-output-embed`, while the official fairseq WMT models do not have a `decoder.embed_out` weight because they specify `--share-all-embeddings`. https://github.com/pytorch/fairseq/issues/2537 Are there any solutions or tips for converting one's own fairseq model?
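Before converting, a quick way to check which decoder output weight (if any) a given fairseq checkpoint actually contains; the path is a placeholder and the candidate key names cover the variants seen in this thread.

```python
import torch

chkpt = torch.load("checkpoint_best.pt", map_location="cpu")  # placeholder path
keys = chkpt["model"].keys()
for candidate in (
    "decoder.embed_out",                  # present when input/output embeddings are not shared
    "decoder.output_projection.weight",   # newer fairseq naming
    "decoder.embed_tokens.weight",
):
    print(f"{candidate}: {'present' if candidate in keys else 'missing'}")
```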
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10298/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10298/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10297
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10297/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10297/comments
https://api.github.com/repos/huggingface/transformers/issues/10297/events
https://github.com/huggingface/transformers/issues/10297
812,554,099
MDU6SXNzdWU4MTI1NTQwOTk=
10,297
AutoTokenizer from pretrained BERT throws TypeError when encoding certain input
{ "login": "sorenmulli", "id": 42035306, "node_id": "MDQ6VXNlcjQyMDM1MzA2", "avatar_url": "https://avatars.githubusercontent.com/u/42035306?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sorenmulli", "html_url": "https://github.com/sorenmulli", "followers_url": "https://api.github.com/users/sorenmulli/followers", "following_url": "https://api.github.com/users/sorenmulli/following{/other_user}", "gists_url": "https://api.github.com/users/sorenmulli/gists{/gist_id}", "starred_url": "https://api.github.com/users/sorenmulli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sorenmulli/subscriptions", "organizations_url": "https://api.github.com/users/sorenmulli/orgs", "repos_url": "https://api.github.com/users/sorenmulli/repos", "events_url": "https://api.github.com/users/sorenmulli/events{/privacy}", "received_events_url": "https://api.github.com/users/sorenmulli/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello! Thank you for opening an issue with a reproducible example, it helps a lot.\r\n\r\nThe issue here is that you're using the `encode` method to encode a batch, which it can't do. Encode only encodes single sequences, and can accept a \"batch\" of two because it processes them as two independent sequences that should be joined together, for example for text-classification where you would want to classify the relationship between two sequences (tasks like Next Sentence Prediction from BERT or Sentence Ordering Prediction ALBERT).\r\n\r\nThe method you're looking for is the `__call__` method of the tokenizer, which handles exactly all the use-cases you've mentioned, and is the recommended API for tokenizers:\r\n\r\n```py\r\nfrom transformers import AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\r\ntokenizer([\"hello\", \"world\"]) # <--- This works\r\ntokenizer([\"hello\"]) # <--- This works too :)\r\ntokenizer([\"dette\", \"er\", \"en\", \"sø\"]) # <--- This works as well!\r\n```\r\n\r\n[Here is the documentation](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.__call__) for that method, hope that helps!", "Thank you very much for this good explanation which clearly resolves my problem.\r\n\r\nDo you by any chance know whether this behaviour changed in the last years time?\r\nThe transformers-based repos [NERDA](https://github.com/ebanalyse/NERDA) and [danlp](https://github.com/alexandrainst/danlp) seem to rely on `tokenizer.encode` to be working as you show the call method does, and as such fail on the current version, but work on 3.5.1 (https://github.com/alexandrainst/danlp/issues/113)", "I believe the `encode` method never accepted batches as inputs. We introduced `encode_plus` and `batch_encode_plus` down the road, the latter being the first to handle batching.\r\n\r\nWhile these two methods are deprecated, they're still tested and working, and they're used under the hood when calling `__call__`.\r\n\r\nWhat is happening here is that v3.5.1 is treating your input as individual words (but by all means it shouldn't as the `is_split_into_words` argument is `False` by default), rather than as different batches, I was mistaken in my first analysis. Something did change between version v3.5.1 and v4.0.0, all the breaking changes are documented in the [migration guide](https://huggingface.co/transformers/migration.html).\r\n\r\nIf you want to get back to the previous behavior, you have two ways of handling it:\r\n\r\n- Specify that you don't want a fast tokenizer. The main change affecting you here is that the `AutoTokenizer` returns a fast tokenizer by default (in Rust) rather than the python-based tokenizer. You can change that behavior with the following:\r\n```py\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\", use_fast=False)\r\n```\r\n- The behavior you're relying on here is the `is_split_into_words` parameter: you're passing it a list of words, rather than a sequence of words. 
That it worked in previous versions seems like a bug to me, here's how you would handle it now (works with a fast tokenizer):\r\n```py\r\ntokenizer([\"hello\", \"world\"], is_split_into_words=True)\r\ntokenizer([\"hello\"], is_split_into_words=True)\r\ntokenizer([\"dette\", \"er\", \"en\", \"sø\"], is_split_into_words=True)\r\n```", "Thank you, the `is_split_into_words` clears up the confusion between batches and tokens clearly for me!", "Hi @LysandreJik \r\n\r\nI am excpericing the same error:\r\n`'TypeError: TextEncodeInput must be Union[TextInputSequence,Tuple[InputSequence, InputSequence]]'`\r\n\r\nwhile running below code:\r\n```python\r\nself.tokenizer.encode_plus(example[0],\r\n add_special_tokens=True,\r\n padding='max_length',\r\n max_length=max_length,\r\n return_attention_mask=True,\r\n return_tensors='pt')\r\n```\r\n\r\n`example[0]` is list of Int which I encoded:\r\n[49518, 111, 22560, 20, 1112, 128, 29, 568, 7, 7244, 10, 10905, 111, 12396, 3781, 111, 4878, 1087, 396, 10, 812, 111, 3077, 629, 847, 202, 3607, 490, 5, 3302, 9, 17890, 154, 10, 3077, 629, 4878, 42, 76, 479, 10130, 273, 363, 2156, 5, 1112, 2763, 8176, 111, 262, 7, 7244, 41, 2319, 68, 508, 4, 245, 325, 2450, 14, 56, 57, 12850, 9, 2213, 9, 7668, 14, 74, 33, 23398, 2156, 1195, 87, 20546, 2156, 5, 752, 1229, 3781, 479, 2589, 6040, 17811, 28455, 5, 1087, 7, 18720, 3633, 14, 24, 21, 33602, 19, 780, 111, 773, 629...\r\n\r\nNow i want to pad it and get the attention back.\r\nIn the docs it mentioned that i can send List[Int]\r\nwhat I am missing ?\r\n", "Hi @shon-otmazgin could you open a new issue with a reproducible code example + full stack trace so that we can take a look? Thanks!", "Taking a look at it, I believe the documentation is wrong here and the fast tokenizers handle strings as inputs. Have you tried using `prepare_for_model` for your use-case?", "I will take a look on `prepare_for_model ` this is new to me.\r\n`prepare_for_model ` accept list of input_ids, can pad and return attention mask? ", "So we dived into it with @n1t0 and actually the problem here is slightly complex. 
The slow & fast tokenizers have roughly the same API with a few excptions, and this is one of them: the fast tokenizers are great at handling strings and at being extremely efficient with a bunch of features (offsets is one example of a really powerful feature), but they're not made to handle lists of ints.\r\n\r\nIn this particular case, while I think it is theoretically possible with fast tokenizers methods by using some private methods, it seems you would be way better off to use a slow tokenizer to achieve what you're looking for.\r\n\r\nBut this begs the question: is there a way you could share your use-case so that we could study it and understand why you need to pass already processed lists of ints to the tokenizer, instead of tokenizing the text and relying on the information within the encoding?\r\n\r\nHere the fast tokenizers would probably be way more efficient at handling this use-case in a one step process, rather than the two step process we're trying to achieve here.", "Also, I'm seeing the following in the docs for `encode_plus`:\r\n![image](https://user-images.githubusercontent.com/30755778/113611903-25f58f00-961d-11eb-89f6-dbc5102f7dca.png)\r\n\r\nand for `batch_encode_plus`:\r\n![image](https://user-images.githubusercontent.com/30755778/113611980-3c9be600-961d-11eb-883c-bfc30dbb3dc5.png)\r\n\r\nIs there a docstring we've forgotten somewhere that tells this is also supported for fast tokenizers?\r\n\r\n", "I tell you what happened:\r\nI worked on version 3.3.1 which by default `use_fast=False` for `AutoTokenizer`.\r\nI upgraded to version 4.4.2 and that break. -> `use_fast `changed to `True `for `AutoTokenizer`.\r\n\r\n@LysandreJik thank you very much for your help. appreciate that :)" ]
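The thread above mentions `prepare_for_model` for the pre-tokenized-ids use case without showing it. A minimal sketch with a slow tokenizer (the fast backend expects strings, as discussed above); the padding parameters are only example choices.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)

# ids produced in an earlier preprocessing step, as in the use case described above
input_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("hello world"))

encoded = tokenizer.prepare_for_model(
    input_ids,
    add_special_tokens=True,
    padding="max_length",
    max_length=16,
    return_attention_mask=True,
    return_tensors="pt",
)
print(encoded["input_ids"].shape, encoded["attention_mask"].sum())
```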
1,613
1,617
1,614
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.2 - Platform: Arch Linux - Python version: 3.9.1 - PyTorch version (GPU?): 1.7.1, no - Tensorflow version (GPU?): Not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help Guess from git blame: @LysandreJik , @thomwolf @n1t0 ## Information Model I am using (Bert, XLNet ...): BERT The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce When I use a pretrained BERT tokenizer, it throws a TypeError on singleton input or input containing ø/æ/å. It was discovered when I used the pretrained `Maltehb/danish-bert-botxo` which would fail in the below way on any input containing Danish characters (ø/æ/å), but I also realized that it happens with the standard `bert-base-uncased` as shown below. Steps to reproduce the behavior: 1. Run these line ```py from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") tokenizer.encode(["hello", "world"]) # <--- This works tokenizer.encode(["hello"]) # <--- This throws the below shown stack trace tokenizer.encode(["dette", "er", "en", "sø"]) # <--- This throws the same error ``` Stack trace ```py --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-13-ef056deb5f59> in <module> ----> 1 tokenizer.encode(["hello"]) ~/.venv/lib/python3.9/site-packages/transformers/tokenization_utils_base.py in encode(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, return_tensors, **kwargs) 2102 ``convert_tokens_to_ids`` method). 
2103 """ -> 2104 encoded_inputs = self.encode_plus( 2105 text, 2106 text_pair=text_pair, ~/.venv/lib/python3.9/site-packages/transformers/tokenization_utils_base.py in encode_plus(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs) 2418 ) 2419 -> 2420 return self._encode_plus( 2421 text=text, 2422 text_pair=text_pair, ~/.venv/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py in _encode_plus(self, text, text_pair, add_special_tokens, padding_strategy, truncation_strategy, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs) 453 454 batched_input = [(text, text_pair)] if text_pair else [text] --> 455 batched_output = self._batch_encode_plus( 456 batched_input, 457 is_split_into_words=is_split_into_words, ~/.venv/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py in _batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding_strategy, truncation_strategy, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose) 380 ) 381 --> 382 encodings = self._tokenizer.encode_batch( 383 batch_text_or_text_pairs, 384 add_special_tokens=add_special_tokens, TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]] ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior I expect the model not to throw a type error when the types are the same. I also expected that the tokenization would produce id's. [This issue](https://github.com/alexandrainst/danlp/issues/113) is caused by the above I am grateful for the software and thank you in advance for the help!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10297/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10297/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10296
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10296/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10296/comments
https://api.github.com/repos/huggingface/transformers/issues/10296/events
https://github.com/huggingface/transformers/issues/10296
812,510,410
MDU6SXNzdWU4MTI1MTA0MTA=
10,296
[predict] AttributeError: 'Seq2SeqTrainer' object has no attribute 'metrics_format'
{ "login": "Tan1997", "id": 29818962, "node_id": "MDQ6VXNlcjI5ODE4OTYy", "avatar_url": "https://avatars.githubusercontent.com/u/29818962?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tan1997", "html_url": "https://github.com/Tan1997", "followers_url": "https://api.github.com/users/Tan1997/followers", "following_url": "https://api.github.com/users/Tan1997/following{/other_user}", "gists_url": "https://api.github.com/users/Tan1997/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tan1997/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tan1997/subscriptions", "organizations_url": "https://api.github.com/users/Tan1997/orgs", "repos_url": "https://api.github.com/users/Tan1997/repos", "events_url": "https://api.github.com/users/Tan1997/events{/privacy}", "received_events_url": "https://api.github.com/users/Tan1997/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "`metrics_format` was recently introduced on master, you should update the transformers version to master.", "Thanks! I'll try it again!" ]
1,613
1,614
1,614
NONE
null
Hi everybody, when using mBART for machine translation prediction, I got: Traceback (most recent call last): File "/Users/lishuqi/Desktop/WAT2021/transformers-master/examples/seq2seq/run_seq2seq.py", line 667, in <module> main() File "/Users/lishuqi/Desktop/WAT2021/transformers-master/examples/seq2seq/run_seq2seq.py", line 637, in main metrics_formatted = trainer.metrics_format(metrics) AttributeError: 'Seq2SeqTrainer' object has no attribute 'metrics_format' Am I doing something wrong with the translation? @patil-suraj
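The resolution in this thread is to update transformers to master, where `metrics_format` exists. As a rough, hypothetical stopgap (explicitly not the actual `Trainer.metrics_format` implementation), metrics could be pretty-printed manually like this:

```python
def metrics_format_fallback(metrics):
    """Hypothetical stand-in for Trainer.metrics_format: round floats for readable logs."""
    formatted = {}
    for key, value in metrics.items():
        formatted[key] = round(value, 4) if isinstance(value, float) else value
    return formatted

print(metrics_format_fallback({"predict_bleu": 27.123456, "predict_samples": 100}))
```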
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10296/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10295
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10295/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10295/comments
https://api.github.com/repos/huggingface/transformers/issues/10295/events
https://github.com/huggingface/transformers/pull/10295
812,507,839
MDExOlB1bGxSZXF1ZXN0NTc2ODYxMzQ4
10,295
[examples/seq2seq] defensive programming + expand/correct README
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,613
1,614
1,614
CONTRIBUTOR
null
This PR deals with the new s2s script and its usage - mostly documentation. This PR: `run_seq2seq.py`: * checks for invalid column names `README.md`: * largely expands the document explaining and exemplifying the supported formats * documents the nuances of t5 and mbart translation - I hope we fix this on the programmatical level in the future * fixes examples where scores were bad - all examples were verified to work and provide good scores, including the custom files, which were far from easy to figure out. Hopefully now it'll be easier. * makes the examples quick to complete by running only a short sample - this is important to notice breakages, e.g. in eval stage - nobody is going to wait for train to complete in hours. * adds cnn/daily mail dataset * recovers one preprocessed dataset from the last s2s incarnation recommendation: it is offered for high bleu scores (the other 3 are either identical or are just slightly worse than the preprocessed ones - full porting status: https://github.com/huggingface/transformers/issues/10044) @patil-suraj, @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10295/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10295", "html_url": "https://github.com/huggingface/transformers/pull/10295", "diff_url": "https://github.com/huggingface/transformers/pull/10295.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10295.patch", "merged_at": 1614020330000 }
https://api.github.com/repos/huggingface/transformers/issues/10294
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10294/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10294/comments
https://api.github.com/repos/huggingface/transformers/issues/10294/events
https://github.com/huggingface/transformers/issues/10294
812,484,281
MDU6SXNzdWU4MTI0ODQyODE=
10,294
Marian input decoding bug
{ "login": "Mehrad0711", "id": 28717374, "node_id": "MDQ6VXNlcjI4NzE3Mzc0", "avatar_url": "https://avatars.githubusercontent.com/u/28717374?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mehrad0711", "html_url": "https://github.com/Mehrad0711", "followers_url": "https://api.github.com/users/Mehrad0711/followers", "following_url": "https://api.github.com/users/Mehrad0711/following{/other_user}", "gists_url": "https://api.github.com/users/Mehrad0711/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mehrad0711/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehrad0711/subscriptions", "organizations_url": "https://api.github.com/users/Mehrad0711/orgs", "repos_url": "https://api.github.com/users/Mehrad0711/repos", "events_url": "https://api.github.com/users/Mehrad0711/events{/privacy}", "received_events_url": "https://api.github.com/users/Mehrad0711/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }, { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "Hey @Mehrad0711,\r\n\r\nThanks a lot for the very clean & easy to understand issue!\r\nI can reproduce the error and would be super happy about a PR to fix it! Your fix to let the context manager handle the `spm_target` sounds like the correct solution to me!", "Hi @patrickvonplaten!\r\nThank you for your feedback. I just submitted a PR fixing this issue.\r\nThanks ahead for reviewing." ]
1,613
1,615
1,615
CONTRIBUTOR
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Marian Language I am using the model on (English, Chinese ...): English, German The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce I'm examining Marian models to translate text. I noticed `convert_tokens_to_string` method uses `spm_target` which can be problematic if we want to decode source text. Here is my script: ``` from transformers import MarianTokenizer, MarianModel tokenizer = MarianTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-de') model = MarianModel.from_pretrained('Helsinki-NLP/opus-mt-en-de') input_text = "I was eating lunch when he saw me" target_text = "Ich aß gerade zu Mittag, als er mich sah" input_tokenized = tokenizer(input_text, return_tensors='pt') with tokenizer.as_target_tokenizer(): target_tokenized = tokenizer(target_text, return_tensors='pt') print(tokenizer.decode(input_tokenized.data['input_ids'][0])) with tokenizer.as_target_tokenizer(): print(tokenizer.decode(target_tokenized.data['input_ids'][0])) ``` stdout: ``` I was▁eating▁lunch▁when he▁saw me Ich aß gerade zu Mittag, als er mich sah ``` As you can see the input text is not decoded correctly since `spm_target` is used. A potential fix is to use `current_spm` and let `as_target_tokenizer` context manager decide which spm should be used (similar to text encoding): ``` def convert_tokens_to_string(self, tokens: List[str]) -> str: return self.current_spm.DecodePieces(tokens) ``` I can PR the fix if needed. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: master branch (f6e53e3c2bafb37c861db71a4b28c304403af92b) - Python version: 3.7.4 - PyTorch version (GPU?): 1.7.1 (False)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10294/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10294/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10293
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10293/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10293/comments
https://api.github.com/repos/huggingface/transformers/issues/10293/events
https://github.com/huggingface/transformers/issues/10293
812,481,435
MDU6SXNzdWU4MTI0ODE0MzU=
10,293
[pretrained] model classes aren't checking the arch of the pretrained model they load
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" } ]
closed
false
null
[]
[ "Ah indeed, that's a good request ! There's no reason, we could definitely raise a warning when loading the weights by checking the model type in the configuration against the arch's model type. Do you want to open a PR?", "Why a warning and not an assert? If the code throws a totally unrelated long backtrace how would a user know to search for an earlier warnings?\r\n\r\nDo you see a use-case where someone may need to load mismatching arch for the given model?", "After thinking about it, you're right that an error would be better. I can't think of use-cases where that would affect someone's workflow negatively.", "> Do you want to open a PR?\r\n\r\nI could, but realistically it might not happen soon. But since it's not a complicated task perhaps asking the community to help? I guess it'd be as simple as:\r\n\r\n1. read the config of the downloaded model as soon as the config got downloaded\r\n2. compare `config.arch` with model's arch\r\n3. assert if mismatch", "Hi @LysandreJik Does someone work on that ? I'd like to make my first contribution to the project", "Hi @ankh6, feel free to work on it! The issue is not reserved until a PR is opened with some progress made towards solving the issue.", "And when you solve it, one test can be:\r\n```\r\npython -c 'from transformers import PegasusForConditionalGeneration; PegasusForConditionalGeneration.from_pretrained(\"patrickvonplaten/t5-tiny-random\")'\r\n```\r\nbut this one doesn't crash, just spits a lot of warnings. \r\n\r\nThis one does crash:\r\n```\r\npython -c 'from transformers import BartForConditionalGeneration; BartForConditionalGeneration.from_pretrained(\"prajjwal1/bert-tiny\")'\r\n```\r\n\r\nSo it'd be a better candidate to go into the test suite. \r\n\r\nWe want a tiny model so that it runs the test fast.", "@LysandreJik If I understand correctly we should check that the input is in the PRETRAINED_VOCAB_FILES_MAP object (for this issue). Should the assertion occur when we call is_torch_available method, i.e. in src/transformers/models/gpt2/__init__.py, ? ", "As soon as you retrieved the config file and you know which model's class is used, so that you have the 2 things to compare.\r\n\r\nIt definitely shouldn't happen in the specific model files, but inside the common library.\r\n\r\nMost likely there should be one check inside one of the super-classes for model's `from_pretrained` and the same for the tokenizer. Since either may have this conflict.", "Hi, \r\nI've made some progress on this issue. Think I've fixed it for initiating models. \r\nTo show if my approach is fine shall I submit a PR? \r\n\r\nI've essentially added an assert statement in the `from_pretrained` method in the `PretrainedConfig` class. ", "That sounds about right, and yes PR please - thank you!", "Added a pull request #10586 " ]
1,613
1,616
1,616
CONTRIBUTOR
null
While comparing different models trained on xsum (most of which are Bart) I made a mistake and passed "google/pegasus-xsum" to `BartForConditionalGeneration` ``` BartForConditionalGeneration.from_pretrained("google/pegasus-xsum") ``` I got: ``` Some weights of the model checkpoint at google/pegasus-xsum were not used when initializing BartForConditionalGeneration: ['model.encoder.layer_norm.weight', 'model.encoder.layer_norm.bias', 'model.decoder.layer_norm.weight', 'model.decoder.layer_norm.bias'] - This IS expected if you are initializing BartForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BartForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of BartForConditionalGeneration were not initialized from the model checkpoint at google/pegasus-xsum and are newly initialized: ['model.encoder.embed_positions.weight', 'model.encoder.layernorm_embedding.weight', 'model.encoder.layernorm_embedding.bias', 'model.decoder.embed_positions.weight', 'model.decoder.layernorm_embedding.weight', 'model.decoder.layernorm_embedding.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Traceback (most recent call last): File "./bart-summarize2.py", line 8, in <module> tokenizer = BartTokenizer.from_pretrained(mname) File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 1788, in from_pretrained return cls._from_pretrained( File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 1860, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/roberta/tokenization_roberta.py", line 159, in __init__ super().__init__( File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/gpt2/tokenization_gpt2.py", line 179, in __init__ with open(vocab_file, encoding="utf-8") as vocab_handle: TypeError: expected str, bytes or os.PathLike object, not NoneType ``` Any reason why the model class doesn't check that it's being fed a wrong architecture? It could detect that and give a corresponding error message, rather than spitting random errors like above? I was pretty sure it was a bug in pegasus model until I noticed that pegasus != Bart. Thanks. @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10293/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10293/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10292
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10292/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10292/comments
https://api.github.com/repos/huggingface/transformers/issues/10292/events
https://github.com/huggingface/transformers/issues/10292
812,465,989
MDU6SXNzdWU4MTI0NjU5ODk=
10,292
[examples s2s] AttributeError: 'MBartTokenizerFast' object has no attribute 'tgt_lang'
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "#10287 contains the fix.", "Confirmed that it works, albeit the cl args changed so tested with:\r\n```\r\nPYTHONPATH=src python examples/seq2seq/run_translation.py --model_name_or_path facebook/mbart-large-en-ro --do_train --do_eval --dataset_name wmt16 --dataset_config_name ro-en --output_dir /tmp/tst-translation --per_device_train_batch_size=2 --per_device_eval_batch_size=2 --overwrite_output_dir --predict_with_generate --source_lang en_XX --target_lang ro_RO --max_val_samples 10 --max_train_samples 10\r\n```" ]
1,613
1,615
1,615
CONTRIBUTOR
null
After this PR https://github.com/huggingface/transformers/pull/10205 This is still broken for other models: ``` python examples/seq2seq/run_seq2seq.py --model_name_or_path facebook/mbart-large-en-ro --do_train --do_eval --task translation_en_to_ro --dataset_name wmt16 --dataset_config_name ro-en --source_prefix "translate English to Romanian: " --output_dir /tmp/tst-translation --per_device_train_batch_size=16 --per_device_eval_batch_size=16 --overwrite_output_dir --predict_with_generate --max_train_samples 500 --max_val_samples 500 ``` ``` Traceback (most recent call last): File "examples/seq2seq/run_seq2seq.py", line 668, in <module> main() File "examples/seq2seq/run_seq2seq.py", line 469, in main train_dataset = train_dataset.map( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_dataset.py", line 1120, in map update_data = does_function_return_dict(test_inputs, test_indices) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_dataset.py", line 1091, in does_function_return_dict function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) File "examples/seq2seq/run_seq2seq.py", line 450, in preprocess_function with tokenizer.as_target_tokenizer(): File "/home/stas/anaconda3/envs/main-38/lib/python3.8/contextlib.py", line 113, in __enter__ return next(self.gen) File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/mbart/tokenization_mbart_fast.py", line 193, in as_target_tokenizer self.set_tgt_lang_special_tokens(self.tgt_lang) AttributeError: 'MBartTokenizerFast' object has no attribute 'tgt_lang' ``` @patil-suraj, @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10292/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10292/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10291
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10291/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10291/comments
https://api.github.com/repos/huggingface/transformers/issues/10291/events
https://github.com/huggingface/transformers/pull/10291
812,402,882
MDExOlB1bGxSZXF1ZXN0NTc2Nzc0OTg0
10,291
Fix example links in the task summary
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,613
1,613
1,613
COLLABORATOR
null
# What does this PR do? This PR fixes (and adds or removes) the links shown in the task summary. Fixes #10288
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10291/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10291", "html_url": "https://github.com/huggingface/transformers/pull/10291", "diff_url": "https://github.com/huggingface/transformers/pull/10291.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10291.patch", "merged_at": 1613775855000 }
https://api.github.com/repos/huggingface/transformers/issues/10290
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10290/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10290/comments
https://api.github.com/repos/huggingface/transformers/issues/10290/events
https://github.com/huggingface/transformers/issues/10290
812,400,467
MDU6SXNzdWU4MTI0MDA0Njc=
10,290
Trainer train continues after resume_from_checkpoint on a checkpoint with early stop
{ "login": "tanmay17061", "id": 32801726, "node_id": "MDQ6VXNlcjMyODAxNzI2", "avatar_url": "https://avatars.githubusercontent.com/u/32801726?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tanmay17061", "html_url": "https://github.com/tanmay17061", "followers_url": "https://api.github.com/users/tanmay17061/followers", "following_url": "https://api.github.com/users/tanmay17061/following{/other_user}", "gists_url": "https://api.github.com/users/tanmay17061/gists{/gist_id}", "starred_url": "https://api.github.com/users/tanmay17061/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tanmay17061/subscriptions", "organizations_url": "https://api.github.com/users/tanmay17061/orgs", "repos_url": "https://api.github.com/users/tanmay17061/repos", "events_url": "https://api.github.com/users/tanmay17061/events{/privacy}", "received_events_url": "https://api.github.com/users/tanmay17061/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Indeed, I can see the problem. I'm not sure there is an easy fix however and I don't have time right now to build a proper callback checkpointing system. Will have to wait a little bit to be fixed!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "@sgugger Could you please reopen this issue? The issue persists despite being automatically closed. " ]
1,613
1,705
1,619
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> When continuing training from checkpoint, Trainer does not check if the checkpoint terminated with an `self.control.should_training_stop == True`. `self.control.should_training_stop == True` holds when: 1. `state.global_step >= state.max_steps` * training does not resume on `resume_from_checkpoint` due to recovering steps information (`state.global_step`) from checkpoint state 👍 2. Due to early stopping condition True * training resumes as no mechanism to find previous early stopping state 👎 * even `early_stopping_patience_counter` is restarted from 0 on `EarlyStoppingCallback` init, irrespective of `resume_from_checkpoint` 👎 ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger as issue in Trainer. ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Initialize `Trainer.train` with `resume_from_checkpoint` pointing to a checkpoint that stopped due to early stopping <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Training should not happen as the checkpoint loaded had stopped due to early stopping. <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10290/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10290/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10289
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10289/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10289/comments
https://api.github.com/repos/huggingface/transformers/issues/10289/events
https://github.com/huggingface/transformers/issues/10289
812,382,266
MDU6SXNzdWU4MTIzODIyNjY=
10,289
Masking issues with GPT2LMHeadModel.generate()
{ "login": "xxbidiao", "id": 1439638, "node_id": "MDQ6VXNlcjE0Mzk2Mzg=", "avatar_url": "https://avatars.githubusercontent.com/u/1439638?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xxbidiao", "html_url": "https://github.com/xxbidiao", "followers_url": "https://api.github.com/users/xxbidiao/followers", "following_url": "https://api.github.com/users/xxbidiao/following{/other_user}", "gists_url": "https://api.github.com/users/xxbidiao/gists{/gist_id}", "starred_url": "https://api.github.com/users/xxbidiao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xxbidiao/subscriptions", "organizations_url": "https://api.github.com/users/xxbidiao/orgs", "repos_url": "https://api.github.com/users/xxbidiao/repos", "events_url": "https://api.github.com/users/xxbidiao/events{/privacy}", "received_events_url": "https://api.github.com/users/xxbidiao/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Hey @xxbidiao, \r\n\r\nFor batched generation GPT2 has to be used in quite a special way... -> could you check out [this](https://discuss.huggingface.co/t/batch-generation-with-gpt2/1517/2) forum post to see whether this makes sense for you?", "```\r\nimport torch,transformers\r\ngpt2_model = transformers.GPT2LMHeadModel.from_pretrained(\"gpt2\")\r\nprint(gpt2_model.generate(torch.tensor([[100,200,300]]),do_sample=False))\r\nprint(gpt2_model.generate(torch.tensor([[100,200,300,50256]]),attention_mask=torch.tensor([[1,1,1,0]]),do_sample=False))\r\nprint(gpt2_model.generate(torch.tensor([[50256,100,200,300]]),attention_mask=torch.tensor([[0,1,1,1]]),do_sample=False))\r\n```\r\n\r\n```\r\nSetting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\r\nSetting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\r\ntensor([[100, 200, 300, 84, 12, 75, 84, 12, 75, 84, 12, 75, 84, 12,\r\n 75, 84, 12, 75, 84, 12]])\r\nSetting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\r\ntensor([[ 100, 200, 300, 50256, 198, 198, 7, 16, 8, 383,\r\n 3381, 366, 75, 1, 1724, 262, 4129, 286, 262, 4731]])\r\ntensor([[50256, 100, 200, 300, 84, 12, 75, 84, 12, 75,\r\n 84, 12, 75, 84, 12, 75, 84, 12, 75, 84]])\r\n```\r\n\r\nLooks like it works! Will double check. Thank you!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,613
1,619
1,619
CONTRIBUTOR
null
Is this intended behavior, that padding a sentence and attention_mask it will not give the exact same generation result comparing to the same sentence unpadded? Edit: [This notebook](https://colab.research.google.com/drive/1oyFRFigtSNUYwKO1EQPRHfEqke0-F6_N?usp=sharing) demonstrates this, with the newest version available on colab. I realized that I didn't turn sampling off with the example below but the colab one has sampling off. ``` >>> gpt2_model = transformers.GPT2LMHeadModel.from_pretrained("gpt2") >>> gpt2_model.generate(torch.tensor([[100,200,300]])) Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation. tensor([[100, 200, 300, 84, 12, 75, 84, 12, 75, 84, 12, 75, 84, 12, 75, 84, 12, 75, 84, 12]]) >>> gpt2_model.generate(torch.tensor([[100,200,300,50256]]),attention_mask=torch.tensor([[1,1,1,0]])) Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation. tensor([[ 100, 200, 300, 50256, 198, 198, 7, 16, 8, 383, 3381, 366, 75, 1, 1724, 262, 4129, 286, 262, 4731]]) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10289/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10289/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10288
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10288/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10288/comments
https://api.github.com/repos/huggingface/transformers/issues/10288/events
https://github.com/huggingface/transformers/issues/10288
812,377,810
MDU6SXNzdWU4MTIzNzc4MTA=
10,288
Minor documentation issue
{ "login": "gwc4github", "id": 3164663, "node_id": "MDQ6VXNlcjMxNjQ2NjM=", "avatar_url": "https://avatars.githubusercontent.com/u/3164663?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gwc4github", "html_url": "https://github.com/gwc4github", "followers_url": "https://api.github.com/users/gwc4github/followers", "following_url": "https://api.github.com/users/gwc4github/following{/other_user}", "gists_url": "https://api.github.com/users/gwc4github/gists{/gist_id}", "starred_url": "https://api.github.com/users/gwc4github/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gwc4github/subscriptions", "organizations_url": "https://api.github.com/users/gwc4github/orgs", "repos_url": "https://api.github.com/users/gwc4github/repos", "events_url": "https://api.github.com/users/gwc4github/events{/privacy}", "received_events_url": "https://api.github.com/users/gwc4github/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for flagging! Those have not been updated in a while so I made a pass over that file.", "I still see the bad links. Is the change getting pushed/merged later?", "It will only be seen in the [master documentation](https://huggingface.co/transformers/master/) for now. At the next release, it will become visible in the stable documentation.", "Will the new run_ner.py work with PyTorch and TF? PyTorch-Lightening too?\r\n" ]
1,613
1,613
1,613
NONE
null
## Minor issue in the Fine tuning docs In the "Named Entity Recognition" section of the "Summary of tasks" documentation page there are some bad links. Here is a link to the section: https://huggingface.co/transformers/task_summary.html#named-entity-recognition The text in question is: ##Named Entity Recognition Named Entity Recognition (NER) is the task of classifying tokens according to a class, for example, identifying a token as a person, an organisation or a location. An example of a named entity recognition dataset is the CoNLL-2003 dataset, which is entirely based on that task. If you would like to fine-tune a model on an NER task, you may leverage the run_ner.py (PyTorch), run_pl_ner.py (leveraging pytorch-lightning) or the run_tf_ner.py (TensorFlow) scripts. ## ISSUE: The links for the following give a 404 error: run_pl_ner.py, run_tf_ner.py
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10288/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10287
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10287/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10287/comments
https://api.github.com/repos/huggingface/transformers/issues/10287/events
https://github.com/huggingface/transformers/pull/10287
812,324,206
MDExOlB1bGxSZXF1ZXN0NTc2NzA5MDg3
10,287
Deprecate prepare_seq2seq_batch
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi all! Sorry, but this seems to be cleaner: (Some feature request: #14255)\r\n```python\r\nencoded_train_dataset = train_dataset.map(\r\n lambda batch: tokenizer.prepare_seq2seq_batch(\r\n batch['text'], batch['summary'], padding='max_length', truncation=True, max_length=256, max_target_length=64\r\n ),\r\n batched=True,\r\n remove_columns=train_dataset.column_names,\r\n)\r\n```" ]
1,613
1,635
1,614
COLLABORATOR
null
# What does this PR do? This PR officially deprecates `prepare_seq2seq_batch` to prepare for its removal in Transformers v5. As discussed before, the proper way to prepare data for sequence-to-sequence tasks is to: - call the tokenizer on the inputs - call the tokenizer on the targets inside the context manager `as_target_tokenizer` When only dealing with input texts without targets, just using the tokenizer call works perfectly well. For `mBART` and `mBART50` tokenizers the source and target language can be specified at init or changed at any time by setting the attributes `.src_lang` and `.tgt_lang`. Here is a full example showing how to port old code using `prepare_seq2seq_batch` to the new way in the case of an mBART tokenizer (remove the mentions of `src_lang` and `tgt_lang` for other tokenizers): ``` tokenizer = MBartTokenizer.from_pretrained('facebook/mbart-large-en-ro') batch = tokenizer.prepare_seq2seq_batch(src_texts, tgt_texts, padding=True, truncation=True, src_lang="en_XX", tgt_lang="ro_RO", return_tensors="pt") ``` becomes ``` tokenizer = MBartTokenizer.from_pretrained('facebook/mbart-large-en-ro', src_lang="en_XX", tgt_lang="ro_RO") batch = tokenizer(src_texts, padding=True, truncation=True, return_tensors="pt") with tokenizer.as_target_tokenizer(): targets = tokenizer(tgt_texts, padding=True, truncation=True, return_tensors="pt") batch["labels"] = targets["input_ids"] ``` The languages can be changed at any time with ``` tokenizer.src_lang = new_src_code tokenizer.tgt_lang = new_tgt_code ``` This PR fixes a few things in `MBartTokenizer` and `MBartTokenizerFast` for the new API to work completely and removes all mentions of `prepare_seq2seq_batch` from the documentation and tests (except the test of that method in the common tests). It was already no longer used in the seq2seq example `run_seq2seq`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10287/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10287/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10287", "html_url": "https://github.com/huggingface/transformers/pull/10287", "diff_url": "https://github.com/huggingface/transformers/pull/10287.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10287.patch", "merged_at": 1614015377000 }
https://api.github.com/repos/huggingface/transformers/issues/10286
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10286/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10286/comments
https://api.github.com/repos/huggingface/transformers/issues/10286/events
https://github.com/huggingface/transformers/pull/10286
812,315,025
MDExOlB1bGxSZXF1ZXN0NTc2NzAwNjY5
10,286
Introduce save_strategy training argument
{ "login": "tanmay17061", "id": 32801726, "node_id": "MDQ6VXNlcjMyODAxNzI2", "avatar_url": "https://avatars.githubusercontent.com/u/32801726?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tanmay17061", "html_url": "https://github.com/tanmay17061", "followers_url": "https://api.github.com/users/tanmay17061/followers", "following_url": "https://api.github.com/users/tanmay17061/following{/other_user}", "gists_url": "https://api.github.com/users/tanmay17061/gists{/gist_id}", "starred_url": "https://api.github.com/users/tanmay17061/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tanmay17061/subscriptions", "organizations_url": "https://api.github.com/users/tanmay17061/orgs", "repos_url": "https://api.github.com/users/tanmay17061/repos", "events_url": "https://api.github.com/users/tanmay17061/events{/privacy}", "received_events_url": "https://api.github.com/users/tanmay17061/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @sgugger, \r\nGot some time to raise the changes we talked about in [my previous PR](https://github.com/huggingface/transformers/pull/10267). \r\nDo let me know if I missed something. \r\nThanks!", "Hi @LysandreJik / @sgugger,\r\n\r\nThanks for your inputs! I can think of some better names, _but before that_: \r\nIs deprecating usage of `EvaluationStrategy` and keeping its definition along with the `TimeStrategy` (or whatever the name would be) in `trainer_utils.py` for the time being a good option? Can also throw a FutureWarning when `EvaluationStrategy` is used.", "Yes that would be the preferred option: not use it anymore but still keep it until v5, and each time someone uses it a `FutureWarning` indicating it's deprecated and will be removed in version 5 is thrown.\r\nLet me know if you have other questions!", "Hi @sgugger,\r\nSeems like the latest changes are failing a certain `make modified_only_fixup` test.\r\nI'm not entirely sure where this test is failing.\r\nGiven that this test is passing on my local machine, this could be due to erratic hard-updation of some test/doc?\r\n\r\n> 2021-02-26T19:45:22.6161202Z Checking/fixing src/transformers/__init__.py src/transformers/integrations.py src/transformers/models/__init__.py src/transformers/models/auto/configuration_auto.py src/transformers/models/auto/modeling_auto.py src/transformers/models/auto/modeling_tf_auto.py src/transformers/trainer_callback.py src/transformers/trainer_tf.py src/transformers/trainer_utils.py src/transformers/training_args.py src/transformers/training_args_tf.py src/transformers/utils/dummy_pt_objects.py src/transformers/utils/dummy_tf_objects.py src/transformers/utils/dummy_tokenizers_objects.py src/transformers/utils/notebook.py tests/test_trainer.py tests/test_trainer_callback.py utils/check_repo.py\r\n> 2021-02-26T19:45:24.8027639Z All done! ✨ 🍰 ✨\r\n> 2021-02-26T19:45:24.8028839Z 18 files left unchanged.\r\n> 2021-02-26T19:45:28.4905837Z tests/test_trainer.py:1060:37: F821 undefined name 'EvaluationStrategy'\r\n> 2021-02-26T19:45:28.5127548Z make: *** [modified_only_fixup] Error 1\r\n> 2021-02-26T19:45:28.5129201Z Makefile:7: recipe for target 'modified_only_fixup' failed", "Oh and for the failing test, you missed an `EvaluationStrategy` toward the end of `tests/test_trainer.py`, that's why you have the error.", "Thanks! Fixed it now.\r\nAlthough I see some approaches mentioned on other forums, I'm not entirely sure what would be the best approach to print a warning on usage of enum `EvaluationStrategy`.\r\nIf not too complex, you can point me towards how to do it.\r\nOtherwise, you can merge this PR :-).", "I'm not finding anything easy to do that, so I think we can merge for now and I'll keep looking." ]
1,613
1,614
1,614
CONTRIBUTOR
null
* Introduce save_strategy training argument * collapse EvaluationStrategy and LoggingStrategy into a single TimeStrategy enum * modify tests to use modified enum # What does this PR do? 1. Introduce new `save_strategy` argument to decide on interval between 2 model saves during training. 2. Introduce a unified enum `TimeStrategy` which is used across `evaluation_strategy`, `logging_strategy` and `save_strategy`. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. [Discussed during PR for logging_strategy.](https://github.com/huggingface/transformers/pull/10267) - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10286/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10286", "html_url": "https://github.com/huggingface/transformers/pull/10286", "diff_url": "https://github.com/huggingface/transformers/pull/10286.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10286.patch", "merged_at": 1614472462000 }
https://api.github.com/repos/huggingface/transformers/issues/10285
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10285/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10285/comments
https://api.github.com/repos/huggingface/transformers/issues/10285/events
https://github.com/huggingface/transformers/issues/10285
812,291,499
MDU6SXNzdWU4MTIyOTE0OTk=
10,285
Random Word Replacement Probability
{ "login": "BradSegal", "id": 13371536, "node_id": "MDQ6VXNlcjEzMzcxNTM2", "avatar_url": "https://avatars.githubusercontent.com/u/13371536?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BradSegal", "html_url": "https://github.com/BradSegal", "followers_url": "https://api.github.com/users/BradSegal/followers", "following_url": "https://api.github.com/users/BradSegal/following{/other_user}", "gists_url": "https://api.github.com/users/BradSegal/gists{/gist_id}", "starred_url": "https://api.github.com/users/BradSegal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BradSegal/subscriptions", "organizations_url": "https://api.github.com/users/BradSegal/orgs", "repos_url": "https://api.github.com/users/BradSegal/repos", "events_url": "https://api.github.com/users/BradSegal/events{/privacy}", "received_events_url": "https://api.github.com/users/BradSegal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Nevermind, didn't see the not for replaced indices so it's 50% of the remaining 20% after masking" ]
1,613
1,613
1,613
NONE
null
Hi, It appears that the token masking function replaces tokens with random words 50% of the time instead of the commented 10%. https://github.com/huggingface/transformers/blob/709c86b5a925f1efe650e24ee8b1f52bdc5a3acb/src/transformers/data/data_collator.py#L381
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10285/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10284
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10284/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10284/comments
https://api.github.com/repos/huggingface/transformers/issues/10284/events
https://github.com/huggingface/transformers/pull/10284
812,264,600
MDExOlB1bGxSZXF1ZXN0NTc2NjU3OTE0
10,284
Patch zero shot distillation script cuda issue
{ "login": "joeddav", "id": 9353833, "node_id": "MDQ6VXNlcjkzNTM4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeddav", "html_url": "https://github.com/joeddav", "followers_url": "https://api.github.com/users/joeddav/followers", "following_url": "https://api.github.com/users/joeddav/following{/other_user}", "gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}", "starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joeddav/subscriptions", "organizations_url": "https://api.github.com/users/joeddav/orgs", "repos_url": "https://api.github.com/users/joeddav/repos", "events_url": "https://api.github.com/users/joeddav/events{/privacy}", "received_events_url": "https://api.github.com/users/joeddav/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,613
1,613
1,613
CONTRIBUTOR
null
Quick patch to #10244 replacing an accidental deletion of `.cuda()` when using cuda. Causes error with multi-GPU.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10284/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10284/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10284", "html_url": "https://github.com/huggingface/transformers/pull/10284", "diff_url": "https://github.com/huggingface/transformers/pull/10284.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10284.patch", "merged_at": 1613761618000 }
https://api.github.com/repos/huggingface/transformers/issues/10283
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10283/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10283/comments
https://api.github.com/repos/huggingface/transformers/issues/10283/events
https://github.com/huggingface/transformers/pull/10283
812,244,065
MDExOlB1bGxSZXF1ZXN0NTc2NjQwODQ5
10,283
Clean TF BART and TF Seq2Seq template
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,613
1,614
1,614
CONTRIBUTOR
null
# What does this PR do? This PR aims to clean up TF BART and the TF Seq2Seq template by adding explicit keyword arguments and typing, and by updating the documentation in the model implementation to make it easier to understand and read.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10283/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10283", "html_url": "https://github.com/huggingface/transformers/pull/10283", "diff_url": "https://github.com/huggingface/transformers/pull/10283.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10283.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/10282
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10282/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10282/comments
https://api.github.com/repos/huggingface/transformers/issues/10282/events
https://github.com/huggingface/transformers/issues/10282
812,236,535
MDU6SXNzdWU4MTIyMzY1MzU=
10,282
[tests] tests/test_trainer_distributed.py intermittent failure
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1834088753, "node_id": "MDU6TGFiZWwxODM0MDg4NzUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Tests", "name": "Tests", "color": "a6fcca", "default": false, "description": "Related to tests" } ]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "One other solution - since this is a single node we could use a unique file rather than port for setting up the distributed process group.\r\n\r\nThat is `init_process_group()` with `init_method=\"file:///tmp/unique_file\"` - but the trainer currently hardcodes the `env://` method so we may need to make it more flexible around that.\r\n\r\nReference: https://pytorch.org/docs/master/distributed.html#torch.distributed.init_process_group", "since we are switching to docker runs, this becomes moot, as there will be no processes from previous runs." ]
1,613
1,616
1,616
CONTRIBUTOR
null
`tests/test_trainer_distributed.py` fails occasionally on multi-gpu github runner CI and as a result doesn't free up the 29500 default distributed port. This could be caused by an occasional deadlock discusses in testing_utils.py's `_stream_subprocess``. When debugging one such zombie it was stuck in `exec(eval(sys.stdin.readline()))` Note that other similar tests under `examples` don't exhibit the same behavior - perhaps it somehow has to do with this being a different script that it runs (this test runs its own file as the distributed script). The bt of the subsequent failures is long and confusing, as there are several mixed failures, but it's all really one failure: `Address already in use` since the previous distributed run of the same test didn't free up this port. A quick check should show which process is bound to it: ``` netstat -tulpn | grep :29500 ``` The full bt: ``` NCCL_DEBUG=INFO pytest -sv tests/test_trainer_distributed.py ================================================================ test session starts ================================================================ platform linux -- Python 3.7.4, pytest-6.2.2, py-1.10.0, pluggy-0.13.1 -- /home/github_actions/actions-runner/_work/transformers/transformers/.env/bin/python cachedir: .pytest_cache rootdir: /home/github_actions/actions-runner/_work/transformers/transformers plugins: xdist-2.2.1, forked-1.3.0 collected 1 item tests/test_trainer_distributed.py::TestTrainerDistributed::test_trainer Running: /home/github_actions/actions-runner/_work/transformers/transformers/.env/bin/python -m torch.distributed.launch --nproc_per_node=2 /home/github_actions/actions-runner/_work/transformers/transformers/tests/test_trainer_distributed.py --output_dir /tmp/tmp2k265qn5 stderr: Traceback (most recent call last): stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/tests/test_trainer_distributed.py", line 82, in <module> stderr: training_args = parser.parse_args_into_dataclasses()[0] stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/hf_argparser.py", line 180, in parse_args_into_dataclasses stderr: obj = dtype(**inputs) stderr: File "<string>", line 61, in __init__ stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/training_args.py", line 491, in __post_init__ stderr: if is_torch_available() and self.device.type != "cuda" and self.fp16: stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/file_utils.py", line 1369, in wrapper stderr: return func(*args, **kwargs) stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/training_args.py", line 620, in device stderr: return self._setup_devices stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/file_utils.py", line 1359, in __get__ stderr: cached = self.fget(obj) stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/file_utils.py", line 1369, in wrapper stderr: return func(*args, **kwargs) stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/training_args.py", line 605, in _setup_devices stderr: torch.distributed.init_process_group(backend="nccl") stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/.env/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 436, in init_process_group stderr: 
store, rank, world_size = next(rendezvous_iterator) stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/.env/lib/python3.7/site-packages/torch/distributed/rendezvous.py", line 179, in _env_rendezvous_handler stderr: store = TCPStore(master_addr, master_port, world_size, start_daemon, timeout) stderr: RuntimeError: Address already in use stderr: Traceback (most recent call last): stderr: File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main stderr: "__main__", mod_spec) stderr: File "/usr/lib/python3.7/runpy.py", line 85, in _run_code stderr: exec(code, run_globals) stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/.env/lib/python3.7/site-packages/torch/distributed/launch.py", line 260, in <module> stderr: main() stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/.env/lib/python3.7/site-packages/torch/distributed/launch.py", line 256, in main stderr: cmd=cmd) stderr: subprocess.CalledProcessError: Command '['/home/github_actions/actions-runner/_work/transformers/transformers/.env/bin/python', '-u', '/home/github_actions/actions-runner/_work/transformers/transformers/tests/test_trainer_distributed.py', '--local_rank=1', '--output_dir', '/tmp/tmp2k265qn5']' returned non-zero exit status 1. stdout: ***************************************** stdout: Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. stdout: ***************************************** stdout: multi-gpu-ci-runner:18062:18062 [1] NCCL INFO Bootstrap : Using [0]ens6:10.128.0.66<0> stdout: multi-gpu-ci-runner:18062:18062 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation stdout: stdout: multi-gpu-ci-runner:18062:18062 [1] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1] stdout: multi-gpu-ci-runner:18062:18062 [1] NCCL INFO NET/Socket : Using [0]ens6:10.128.0.66<0> stdout: multi-gpu-ci-runner:18062:18062 [1] NCCL INFO Using network Socket stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying stdout: 
multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO Call to connect returned Connection refused, retrying stdout: stdout: multi-gpu-ci-runner:18062:18089 [1] include/socket.h:403 NCCL WARN Connect to 10.128.0.66<52523> failed : Connection refused stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO bootstrap.cc:95 -> 2 stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO bootstrap.cc:309 -> 2 stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO init.cc:555 -> 2 stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO init.cc:840 -> 2 stdout: multi-gpu-ci-runner:18062:18089 [1] NCCL INFO group.cc:73 -> 2 [Async thread] stderr: Traceback (most recent call last): stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/tests/test_trainer_distributed.py", line 82, in <module> stderr: training_args = parser.parse_args_into_dataclasses()[0] stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/hf_argparser.py", line 180, in parse_args_into_dataclasses stderr: obj = dtype(**inputs) stderr: File "<string>", line 61, in __init__ stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/training_args.py", line 491, in __post_init__ stderr: if is_torch_available() and self.device.type != "cuda" and self.fp16: stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/file_utils.py", line 1369, in wrapper stderr: return func(*args, **kwargs) stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/training_args.py", line 620, in device stderr: return self._setup_devices stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/file_utils.py", line 1359, in __get__ stderr: cached = self.fget(obj) stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/file_utils.py", line 1369, in wrapper stderr: return func(*args, **kwargs) stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/training_args.py", line 605, in _setup_devices stderr: torch.distributed.init_process_group(backend="nccl") stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/.env/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 455, in init_process_group stderr: barrier() stderr: File "/home/github_actions/actions-runner/_work/transformers/transformers/.env/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1960, in barrier stderr: work = _default_pg.barrier() stderr: RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:784, unhandled system error, NCCL version 2.7.8 FAILED ===================================================================== FAILURES ====================================================================== ________________________________________________________ 
TestTrainerDistributed.test_trainer ________________________________________________________ self = <tests.test_trainer_distributed.TestTrainerDistributed testMethod=test_trainer> @require_torch_multi_gpu def test_trainer(self): distributed_args = f""" -m torch.distributed.launch --nproc_per_node={torch.cuda.device_count()} {self.test_file_dir}/test_trainer_distributed.py """.split() output_dir = self.get_auto_remove_tmp_dir() args = f"--output_dir {output_dir}".split() cmd = [sys.executable] + distributed_args + args > execute_subprocess_async(cmd, env=self.get_env()) tests/test_trainer_distributed.py:72: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ cmd = ['/home/github_actions/actions-runner/_work/transformers/transformers/.env/bin/python', '-m', 'torch.distributed.launc.../github_actions/actions-runner/_work/transformers/transformers/tests/test_trainer_distributed.py', '--output_dir', ...] env = {'HOME': '/home/github_actions', 'KMP_DUPLICATE_LIB_OK': 'True', 'KMP_INIT_AT_FORK': 'FALSE', 'LANG': 'C.UTF-8', ...}, stdin = None timeout = 180, quiet = False, echo = True def execute_subprocess_async(cmd, env=None, stdin=None, timeout=180, quiet=False, echo=True) -> _RunOutput: loop = asyncio.get_event_loop() result = loop.run_until_complete( _stream_subprocess(cmd, env=env, stdin=stdin, timeout=timeout, quiet=quiet, echo=echo) ) cmd_str = " ".join(cmd) if result.returncode > 0: stderr = "\n".join(result.stderr) raise RuntimeError( > f"'{cmd_str}' failed with returncode {result.returncode}\n\n" f"The combined stderr from workers follows:\n{stderr}" ) ``` A short term workaround could be to randomize the port, so this test won't trumple upon its previous zombie. ``` + from random import randint + master_port = 2950 + randint(1, 99) distributed_args = f""" -m torch.distributed.launch --nproc_per_node={torch.cuda.device_count()} + --master_port {master_port} {self.test_file_dir}/test_trainer_distributed.py """.split() ``` but this is a band-aid and a real solution is needed. It also will be an issue with any other distributed tests that rely on the same default port number. I will keep on monitoring the issue. Meanwhile this PR https://github.com/huggingface/transformers/pull/10281 should help with preventing the incremental number of zombies from scheduled runs. It's difficult to debug w/o being able to reproduce this problem at will.
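Not from the issue itself: a minimal sketch of one way the band-aid above could avoid colliding ports entirely, by asking the OS for a free port instead of adding a random offset to 29500. `find_free_port` is a hypothetical helper, and a short race window remains between releasing the socket and the launcher binding the port.

```python
import socket
import sys

import torch


def find_free_port() -> int:
    # Hypothetical helper: bind to port 0 so the OS picks an unused port, then release it.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("", 0))
        return s.getsockname()[1]


master_port = find_free_port()
distributed_args = f"""
    -m torch.distributed.launch
    --nproc_per_node={torch.cuda.device_count()}
    --master_port {master_port}
    tests/test_trainer_distributed.py
""".split()
# The output directory is only an example; the test uses a temporary directory instead.
cmd = [sys.executable] + distributed_args + ["--output_dir", "/tmp/test_trainer_distributed"]
# cmd can then be handed to execute_subprocess_async(...) exactly as in the test above.
```

The file-based rendezvous mentioned in the comments (`init_method="file:///..."`) would sidestep ports altogether, but as noted it would require the trainer to stop hard-coding the `env://` method.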
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10282/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10282/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10281
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10281/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10281/comments
https://api.github.com/repos/huggingface/transformers/issues/10281/events
https://github.com/huggingface/transformers/pull/10281
812,198,366
MDExOlB1bGxSZXF1ZXN0NTc2NjAyODYx
10,281
[CI] Kill any run-away pytest processes
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,613
1,613
1,613
CONTRIBUTOR
null
As discussed on Slack, this PR proposes to change the GitHub runner so that it kills any run-away pytest processes before starting a new job. @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10281/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10281/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10281", "html_url": "https://github.com/huggingface/transformers/pull/10281", "diff_url": "https://github.com/huggingface/transformers/pull/10281.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10281.patch", "merged_at": 1613759797000 }
https://api.github.com/repos/huggingface/transformers/issues/10280
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10280/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10280/comments
https://api.github.com/repos/huggingface/transformers/issues/10280/events
https://github.com/huggingface/transformers/issues/10280
812,134,417
MDU6SXNzdWU4MTIxMzQ0MTc=
10,280
Trainer.train argument resume_from_last_checkpoint
{ "login": "tanmay17061", "id": 32801726, "node_id": "MDQ6VXNlcjMyODAxNzI2", "avatar_url": "https://avatars.githubusercontent.com/u/32801726?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tanmay17061", "html_url": "https://github.com/tanmay17061", "followers_url": "https://api.github.com/users/tanmay17061/followers", "following_url": "https://api.github.com/users/tanmay17061/following{/other_user}", "gists_url": "https://api.github.com/users/tanmay17061/gists{/gist_id}", "starred_url": "https://api.github.com/users/tanmay17061/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tanmay17061/subscriptions", "organizations_url": "https://api.github.com/users/tanmay17061/orgs", "repos_url": "https://api.github.com/users/tanmay17061/repos", "events_url": "https://api.github.com/users/tanmay17061/events{/privacy}", "received_events_url": "https://api.github.com/users/tanmay17061/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false } ]
[ "Instead of adding a new argument, I would use the existing `resume_from_checkpoint` and change its type to bool or str/PathLike. If it's a bool and if it's `True`, we then use `get_last_checkpoint` to get the last checkpoint in `args.output_dir`. Does that sound good to you?", "Yes, SGTM. I have raised [a PR](https://github.com/huggingface/transformers/pull/10334) doing the same. Do let me know if there is any other change required as well!\r\n\r\n**PS**: Can you also review my [other PR](https://github.com/huggingface/transformers/pull/10286) introducing `save_strategy` in `TrainingArguments`? This PR is the last one to round-up the `save_strategy`, `evaluation_strategy` and `logging_strategy` enhancements. \r\n\r\nThanks!", "> # 🚀 Feature request\r\n> `Trainer.train` accepts `resume_from_checkpoint` argument, which requires the user to explicitly provide the checkpoint location to continue training from. `resume_from_last_checkpoint` can be useful to resume training by picking the latest checkpoint from `output_dir` of the `TrainingArguments` passed.\r\n> \r\n> ## Motivation\r\n> 1. The checkpoint directory is created by the library, so user needs to navigate to the directory to find the value to provide for `resume_from_checkpoint`\r\n> 2. User may just want to resume from the last valid checkpoint since their training got disrupted previously (a common scenario for someone to want to resume training). All they know is the `output_dir` they provided initially\r\n> \r\n> This motivates to provide a `resume_from_last_checkpoint=True` to the `Trainer.train(...)` call, which will pick the latest checkpoint from `args.output_dir`. FYI `get_last_checkpoint` function from `trainer_utils` can be used to do exactly the same.\r\n> \r\n> ## Your contribution\r\n> I can raise a PR if it is a useful feature to have!\r\nIs it possible to train while adding a new category to the dataset using this resume_from_checkpoint argument?\r\n" ]
1,613
1,665
1,614
CONTRIBUTOR
null
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> `Trainer.train` accepts a `resume_from_checkpoint` argument, which requires the user to explicitly provide the checkpoint location to continue training from. A `resume_from_last_checkpoint` option can be useful to resume training by picking the latest checkpoint from the `output_dir` of the `TrainingArguments` passed. ## Motivation 1. The checkpoint directory is created by the library, so the user needs to navigate to the directory to find the value to provide for `resume_from_checkpoint`. 2. The user may just want to resume from the last valid checkpoint since their training got disrupted previously (a common scenario for someone who wants to resume training). All they know is the `output_dir` they provided initially. This motivates providing a `resume_from_last_checkpoint=True` option to the `Trainer.train(...)` call, which will pick the latest checkpoint from `args.output_dir`. FYI, the `get_last_checkpoint` function from `trainer_utils` can be used to do exactly this. <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md --> I can raise a PR if it is a useful feature to have!
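Until such an option exists, the behaviour described above can be approximated with the function the request already points at. A minimal sketch, assuming `trainer` and `training_args` are built the usual way:

```python
from transformers import Trainer, TrainingArguments
from transformers.trainer_utils import get_last_checkpoint


def resume_training(trainer: Trainer, training_args: TrainingArguments):
    # Pick up the newest checkpoint-* folder in output_dir, or start fresh if there is none.
    last_checkpoint = get_last_checkpoint(training_args.output_dir)
    return trainer.train(resume_from_checkpoint=last_checkpoint)
```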
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10280/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10280/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10279
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10279/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10279/comments
https://api.github.com/repos/huggingface/transformers/issues/10279/events
https://github.com/huggingface/transformers/issues/10279
812,133,918
MDU6SXNzdWU4MTIxMzM5MTg=
10,279
Performance of mbart-large-50-many-to-many-mmt on de/fr/it
{ "login": "lvwerra", "id": 8264887, "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lvwerra", "html_url": "https://github.com/lvwerra", "followers_url": "https://api.github.com/users/lvwerra/followers", "following_url": "https://api.github.com/users/lvwerra/following{/other_user}", "gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}", "starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions", "organizations_url": "https://api.github.com/users/lvwerra/orgs", "repos_url": "https://api.github.com/users/lvwerra/repos", "events_url": "https://api.github.com/users/lvwerra/events{/privacy}", "received_events_url": "https://api.github.com/users/lvwerra/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Unstale", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,613
1,621
1,621
MEMBER
null
Hi everybody I am using ` mbart-large-50-many-to-many-mmt` and I am running into the following problem. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4.0.dev0 (installed from source) - Platform: Linux - Python version: 3.8.5 - PyTorch version (CPU): 1.7.1 ### Who can help @patrickvonplaten, @patil-suraj ## Information I am using the `mbart-large-50-many-to-many-mmt` model for translation and it works as expected when translating German to English but when translating to other languages such as French or Italian it seems broken. I am using the same code as highlighted in the model card. ```Python from transformers import MBartForConditionalGeneration, MBart50TokenizerFast model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") input_text = "Der Mars hat einen neuen Besucher, wenn auch einen robotischen: \ Nach einer mehr als 472 Millionen Kilometer langen Reise setzte am Donnerstagabend \ das amerikanische Roboterfahrzeug Perseverance sanft im Marsstaub auf. " tokenizer.src_lang = "de_DE" encoded = tokenizer(input_text, return_tensors="pt") generated_tokens = model.generate(**encoded, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"]) tokenizer.batch_decode(generated_tokens, skip_special_tokens=False)[0] #--> '</s>en_XX Mars has a new visitor, but also a robotic one: After a journey of more than 472 million kilometers, the American robotic vehicle Perseverance gently set off in Mars dust on Thursday evening.</s>' tokenizer.src_lang = "de_DE" encoded = tokenizer(input_text, return_tensors="pt") generated_tokens = model.generate(**encoded, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"]) tokenizer.batch_decode(generated_tokens, skip_special_tokens=False)[0] #--> '</s>fr_XX On Mars, on Mars, on Mars, on Mars, on Mars, on Mars, on Mars, on Mars, on Mars.</s>' tokenizer.src_lang = "de_DE" encoded = tokenizer(input_text, return_tensors="pt") generated_tokens = model.generate(**encoded, forced_bos_token_id=tokenizer.lang_code_to_id["it_IT"]) tokenizer.batch_decode(generated_tokens, skip_special_tokens=False)[0] #--> '</s>it_IT Mars has a new visitor, anche robotico: After a journey di più di 472 milioni di chilometri, Thursday evening, the American robot vehicle Perseverance si è calmato in the dust of Mars.</s>' ``` Am I doing something wrong with the translation or is the performance on these languages expected to be worse?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10279/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10278
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10278/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10278/comments
https://api.github.com/repos/huggingface/transformers/issues/10278/events
https://github.com/huggingface/transformers/issues/10278
812,102,166
MDU6SXNzdWU4MTIxMDIxNjY=
10,278
Improving training time for Marian MT model with the Trainer
{ "login": "ronaldvelzen", "id": 61542345, "node_id": "MDQ6VXNlcjYxNTQyMzQ1", "avatar_url": "https://avatars.githubusercontent.com/u/61542345?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ronaldvelzen", "html_url": "https://github.com/ronaldvelzen", "followers_url": "https://api.github.com/users/ronaldvelzen/followers", "following_url": "https://api.github.com/users/ronaldvelzen/following{/other_user}", "gists_url": "https://api.github.com/users/ronaldvelzen/gists{/gist_id}", "starred_url": "https://api.github.com/users/ronaldvelzen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ronaldvelzen/subscriptions", "organizations_url": "https://api.github.com/users/ronaldvelzen/orgs", "repos_url": "https://api.github.com/users/ronaldvelzen/repos", "events_url": "https://api.github.com/users/ronaldvelzen/events{/privacy}", "received_events_url": "https://api.github.com/users/ronaldvelzen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "+1 I found the same problem. The bottleneck seems to be huggingface/datasets. Hence I switched back to use the old customized dataset, which was way more faster.", "Hi ! There's currently an issue in huggingface/datasets that makes iterating through the dataset slow if your dataset is big.\r\nWe're working on a fix and we'll do a new release soon to address this :)\r\n\r\nI'll ping you when the fix is ready if you want to try it out !\r\nedit: https://github.com/huggingface/datasets/pull/2122 fixed it", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "@gyin-ai could you share your custom solution? I'm running into the same problem" ]
1,613
1,627
1,619
NONE
null
## Environment info - Platform: Linux-4.19.0-14-cloud-amd64-x86_64-with-debian-10.7 - Python version: 3.7.9 - PyTorch version (GPU): 1.7.1 - Using GPU in script?: Yes (2 Tesla V100 GPUs with 16160MiB memory) - CUDA Version: 11.0 - transformers: 4.3.2 - datasets: 1.3.0 ### Who can help @lhoestq @sgugger @sshleifer ## Information The model I am using is a [MarianMTModel](https://huggingface.co/transformers/model_doc/marian.html#marianmtmodel) to train a machine translation model for Italian to Dutch. In order to do so, I perform the following steps: - Split the data 1. The dataset originally consists of 2 parts, 1 text file containing Italian sentences and 1 text file containing the corresponding Dutch sentences (42,940,499 sentences). These text files are combined in a pandas dataframe with a source and target column and written to a csv file. Based on this [issue](https://github.com/huggingface/datasets/issues/610#issuecomment-691672919), which solved out-of-memory issues, I split the dataframe into 1000 csv chunks. 2. The csv files are loaded with the load_dataset method (which results in a 7.5G csv-train.arrow file) from the datasets library as follows: ```python train_files = glob.glob(data_folder + 'shards/data_chunk_train_*') # list of the individual csv files train_dataset = load_dataset('csv', data_files=train_files, split='train', cache_dir=cache_folder, keep_in_memory=True) ``` I use keep_in_memory=True here to hopefully make things faster during training. - Tokenization 1. At first, I batch tokenized all the sentences with the map function. However, this resulted in 16 * 32G cache-files and gave a training time of 2000 hours. So I changed this to use the set_transform method in the latest release of datasets as follows: ```python def encode(example): return tokenizer.prepare_seq2seq_batch(src_texts=example['source'], tgt_texts=example['target'], padding='max_length', max_length=512) train_dataset.set_transform(encode) ``` I use max_length here to tokenize every sentence to the same size. The tokenizer is a MarianTokenizer (where the spm files and vocab are trained with sentencepiece) and is defined as follows: ```python tokenizer = MarianTokenizer(vocab='tokenizer/vocab.json', source_spm='tokenizer/source.model', target_spm='tokenizer/target.model', source_lang='it', target_lang='nl', model_max_length=512) ``` - Model 1. A MarianMTModel is configured with the following MarianConfig (same config as the pretrained MarianMT models): ```python configuration = MarianConfig(decoder_layers=6, encoder_layers=6, d_model=512, decoder_attention_heads=8, decoder_ffn_dim=2048, decoder_layerdrop=0.0, encoder_attention_heads=8, encoder_ffn_dim=2048, encoder_layerdrop=0.0, max_position_embeddings=512) model = MarianMTModel(configuration) ``` - Training 1. After I loaded the data, created the MarianTokenizer and configured the MarianMTModel, I started training with the Trainer. I used the following TrainingArguments: ```python training_args = TrainingArguments(num_train_epochs=3, per_device_train_batch_size=12, per_device_eval_batch_size=12, warmup_steps=100, weight_decay=0.01, logging_dir='./logs', logging_steps=5000, save_steps=10000, disable_tqdm=False, logging_first_step=True, fp16=True, remove_unused_columns=False) ``` I was not able to increase the batch size, as this gave out of memory errors on the GPU.
And finally started training: ```python trainer = Trainer( model=model, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=dev_dataset # contains 50,000 samples ) trainer.train() ``` ## Expected behavior I expect the model to train within a reasonable amount of time (i.e. a couple of days). However, the training process is going to take about 500 hours: 0 %| | 219/5367564 [01:18<531:01:44, 2.81it/s] I was wondering whether it is expected behaviour for training to take that long. Could you please give me any suggestions on how to modify this to make it faster?
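Not part of the report: the comments on this issue mention that a plain PyTorch dataset was faster than iterating the `datasets` object at the time. A rough sketch of that approach for this source/target setup follows; `src_lines` and `tgt_lines` are assumed to be the raw sentence lists loaded separately, and whether this actually helps depends on the `datasets` version in use.

```python
from torch.utils.data import Dataset


class OnTheFlyTranslationDataset(Dataset):
    """Tokenizes one source/target pair per __getitem__ instead of going through datasets.map."""

    def __init__(self, src_lines, tgt_lines, tokenizer, max_length=512):
        assert len(src_lines) == len(tgt_lines)
        self.src_lines = src_lines
        self.tgt_lines = tgt_lines
        self.tokenizer = tokenizer
        self.max_length = max_length

    def __len__(self):
        return len(self.src_lines)

    def __getitem__(self, idx):
        batch = self.tokenizer.prepare_seq2seq_batch(
            src_texts=[self.src_lines[idx]],
            tgt_texts=[self.tgt_lines[idx]],
            padding="max_length",
            max_length=self.max_length,
            return_tensors="pt",
        )
        # prepare_seq2seq_batch returns a batch of size 1, so drop the batch dimension.
        return {key: value.squeeze(0) for key, value in batch.items()}
```

An instance built from the Italian and Dutch sentence lists could then be passed to the `Trainer` in place of the `datasets` object; dropping `padding='max_length'` in favour of dynamic padding with a data collator would likely speed things up further.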
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10278/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10278/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10277
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10277/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10277/comments
https://api.github.com/repos/huggingface/transformers/issues/10277/events
https://github.com/huggingface/transformers/issues/10277
812,082,080
MDU6SXNzdWU4MTIwODIwODA=
10,277
ImportError: cannot import name 'pipeline' from 'transformers' (unknown location)
{ "login": "yipy0005", "id": 8023685, "node_id": "MDQ6VXNlcjgwMjM2ODU=", "avatar_url": "https://avatars.githubusercontent.com/u/8023685?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yipy0005", "html_url": "https://github.com/yipy0005", "followers_url": "https://api.github.com/users/yipy0005/followers", "following_url": "https://api.github.com/users/yipy0005/following{/other_user}", "gists_url": "https://api.github.com/users/yipy0005/gists{/gist_id}", "starred_url": "https://api.github.com/users/yipy0005/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yipy0005/subscriptions", "organizations_url": "https://api.github.com/users/yipy0005/orgs", "repos_url": "https://api.github.com/users/yipy0005/repos", "events_url": "https://api.github.com/users/yipy0005/events{/privacy}", "received_events_url": "https://api.github.com/users/yipy0005/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm on MacOS btw.", "Possibly duplicate of https://github.com/huggingface/transformers/issues/9939", "> Possibly duplicate of https://github.com/huggingface/transformers/issues/9939\n\nI have installed TF 2.0 right at the start. Is there a version to update to resolve this error?", "So after install TF 2.0 with conda, I performed pip install --upgrade tensorflow to v2.4.1 and it works now.", "The issue happens again with latest version of tensorflow and transformers.\r\n\r\n`>>> import transformers`\r\n`>>> from transformers import pipeline`\r\n`Traceback (most recent call last):`\r\n ` File \"<stdin>\", line 1, in <module>`\r\n`ImportError: cannot import name 'pipeline' from 'transformers' (unknown location)`\r\n`>>> tensorflow.__version__`\r\n`'2.5.0'`\r\n`>>> transformers.__version__`\r\n`'4.7.0'`", "I had the same problem but used the false transformer:\r\n**Initial**\r\n`conda update -c conda-forge transformers`\r\n\r\n**Before**\r\n> tensorflow.__version__ # '2.3.0'\r\n> transformers.__version__ # '2.1.1'\r\n\r\n**Solution**\r\n`conda install -c huggingface transformers `\r\n\r\n**After**\r\n> tensorflow.__version__ # '2.3.0'\r\n> transformers.__version__ # '4.11.3'\r\n> torch.__version__ # '1.10.0'\r\n\r\n", "It is related to the solution. I have to solve it by explicit stating the version. Otherwise, it keep installing the conda-forge version.\r\n\r\n`conda install -c huggingface transformers =4.11.3`", "What is your python files name? Mine was _tokenizers_ and when I changed it to _use_tokenizers_ it works. Maybe using names like \"tokenizers\", \"pipeline\" for files not the best idea.", "> What is your python files name? Mine was _tokenizers_ and when I changed it to _use_tokenizers_ it works. Maybe using names like \"tokenizers\", \"pipeline\" for files not the best idea.\r\n\r\nI made the mistake of calling mine tokenize.py - once changed the code worked. " ]
1,613
1,674
1,613
NONE
null
Hi, I created an env with conda, installed TF, then installed PyTorch, then "pip install git+https://github.com/huggingface/transformers", but when I ran 'python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I hate you'))"', it gave me the ImportError. How can I resolve this?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10277/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10277/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10276
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10276/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10276/comments
https://api.github.com/repos/huggingface/transformers/issues/10276/events
https://github.com/huggingface/transformers/pull/10276
812,024,747
MDExOlB1bGxSZXF1ZXN0NTc2NDU5MjQ3
10,276
Move the TF NER example
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Nice! What means \"same way of training\", same way than what?", "Same way as the current `run_ner` script." ]
1,613
1,614
1,613
CONTRIBUTOR
null
# What does this PR do? This PR moves the `run_tf_ner.py` example into the legacy folder because it uses the "legacy" way to train a model with the `utils_ner.py` file.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10276/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10276/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10276", "html_url": "https://github.com/huggingface/transformers/pull/10276", "diff_url": "https://github.com/huggingface/transformers/pull/10276.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10276.patch", "merged_at": 1613768773000 }
https://api.github.com/repos/huggingface/transformers/issues/10275
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10275/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10275/comments
https://api.github.com/repos/huggingface/transformers/issues/10275/events
https://github.com/huggingface/transformers/pull/10275
812,003,724
MDExOlB1bGxSZXF1ZXN0NTc2NDQxNTQx
10,275
Fix squad processor for TF
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,613
1,686
1,619
CONTRIBUTOR
null
# What does this PR do? This PR fixes the SQuAD processor that prepares and creates a `tf.data.Dataset` so that it can be used in the `TFTrainer` through the `run_tf_squad.py` example script. There were two issues: 1. `token_type_ids` was forced to be `True` in the tokenizer output even if this argument was not part of the tokenizer's `model_input_names` property. Now it always belongs to the created dataset. 2. The `input_processing` method that parses the inputs of a TF model doesn't allow arguments that are not part of the signature, so the extra features have been removed from the created dataset. # Fixes issue #10246
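For reference, the `model_input_names` check that the first point relies on can be reproduced outside the processor roughly as follows; this is a sketch rather than code from the PR, and the checkpoint name is only an example of a model without `token_type_ids`:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# DistilBERT does not list token_type_ids in model_input_names, so none should be produced.
wants_token_type_ids = "token_type_ids" in tokenizer.model_input_names
encoded = tokenizer(
    "What does the processor build?",
    "It builds a tf.data.Dataset for the TFTrainer.",
    return_token_type_ids=wants_token_type_ids,
    truncation=True,
)
print(sorted(encoded.keys()))  # no token_type_ids key for DistilBERT
```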
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10275/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10275/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10275", "html_url": "https://github.com/huggingface/transformers/pull/10275", "diff_url": "https://github.com/huggingface/transformers/pull/10275.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10275.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/10274
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10274/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10274/comments
https://api.github.com/repos/huggingface/transformers/issues/10274/events
https://github.com/huggingface/transformers/pull/10274
811,915,965
MDExOlB1bGxSZXF1ZXN0NTc2MzY4MTA1
10,274
Rework the AMP for TF XLNet
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes, `bfloat16` is only for TPU. Hence, we cannot really test it elsewhere than inside a TPU context. I have added the `bfloat16` condition only if XLNet is run on TPU because we were handling a specific case when the model is run under AMP." ]
1,613
1,614
1,614
CONTRIBUTOR
null
# What does this PR do? This PR reworks the AMP handling of XLNet to remove some useless casts for better and less confusing AMP compliance.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10274/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10274/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10274", "html_url": "https://github.com/huggingface/transformers/pull/10274", "diff_url": "https://github.com/huggingface/transformers/pull/10274.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10274.patch", "merged_at": 1614173909000 }
https://api.github.com/repos/huggingface/transformers/issues/10273
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10273/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10273/comments
https://api.github.com/repos/huggingface/transformers/issues/10273/events
https://github.com/huggingface/transformers/issues/10273
811,721,478
MDU6SXNzdWU4MTE3MjE0Nzg=
10,273
ElectraForQuestionAnswering with SQuADHead
{ "login": "bkiat1123", "id": 49676590, "node_id": "MDQ6VXNlcjQ5Njc2NTkw", "avatar_url": "https://avatars.githubusercontent.com/u/49676590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bkiat1123", "html_url": "https://github.com/bkiat1123", "followers_url": "https://api.github.com/users/bkiat1123/followers", "following_url": "https://api.github.com/users/bkiat1123/following{/other_user}", "gists_url": "https://api.github.com/users/bkiat1123/gists{/gist_id}", "starred_url": "https://api.github.com/users/bkiat1123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bkiat1123/subscriptions", "organizations_url": "https://api.github.com/users/bkiat1123/orgs", "repos_url": "https://api.github.com/users/bkiat1123/repos", "events_url": "https://api.github.com/users/bkiat1123/events{/privacy}", "received_events_url": "https://api.github.com/users/bkiat1123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello! We would welcome a PR that offers this. Maybe instead of renaming the current QA model to `Simple` (which would break backwards-compatibility), we could add a new model called `ElectraForQuestionAnsweringBeamSearch`? What do you think?", "Agreed. We should use a new name for the model for backward-compatibility. I will submit a PR soon.", "This sounds great, thanks @bkiat1123!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,613
1,619
1,619
NONE
null
# 🚀 Feature request <!-- --> Implement ElectraForQuestionAnswering as described in the paper. https://arxiv.org/abs/2003.10555 ## Motivation <!-- --> In the original implementation, the authors use the question answering module from XLNet rather than a simple linear layer. There is a huge performance gap between these 2 question answering modules, especially on SQuAD 2.0-like tasks. I suggest following the original implementation and renaming the one with the simple linear layer to ElectraForQuestionAnsweringSimple. ## Your contribution <!-- --> My team and I have implemented it using SQuADHead from modeling utils. I can submit a PR and make other necessary changes. Code example: ```python from transformers import ElectraModel, ElectraPreTrainedModel from transformers.modeling_utils import SQuADHead class ElectraForQuestionAnswering(ElectraPreTrainedModel): def __init__(self, config): super().__init__(config) self.start_n_top = config.start_n_top self.end_n_top = config.end_n_top self.electra = ElectraModel(config) self.squad_head = SQuADHead(config) self.init_weights() def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, head_mask=None, start_positions=None, end_positions=None, is_impossible=None, cls_index=None, p_mask=None, return_dict=None): return_dict = return_dict if return_dict is not None else self.config.use_return_dict transformer_outputs = self.electra( input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, head_mask=head_mask, return_dict=return_dict, ) hidden_states = transformer_outputs[0] return self.squad_head(hidden_states=hidden_states, start_positions=start_positions, end_positions=end_positions, cls_index=cls_index, is_impossible=is_impossible, p_mask=p_mask, return_dict=return_dict, ) ```
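A rough usage sketch for the proposed class, not part of the request itself; the checkpoint name is real but the `start_n_top`/`end_n_top` values and the exact output fields are assumptions based on the generic `SQuADHead`:

```python
import torch

from transformers import ElectraConfig, ElectraTokenizerFast

config = ElectraConfig.from_pretrained("google/electra-base-discriminator")
config.start_n_top = 5  # beam sizes for the SQuAD head; values are assumptions
config.end_n_top = 5

tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-base-discriminator")
model = ElectraForQuestionAnswering(config)  # the class sketched above, untrained head

inputs = tokenizer(
    "Who proposed ELECTRA?",
    "ELECTRA was proposed by Clark et al. in 2020.",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs, return_dict=True)

# Without start/end positions the SQuAD head runs its beam search and returns
# the top start/end log probabilities and indices plus an answerability logit.
print(outputs.start_top_index, outputs.end_top_index, outputs.cls_logits)
```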
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10273/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10273/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10272
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10272/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10272/comments
https://api.github.com/repos/huggingface/transformers/issues/10272/events
https://github.com/huggingface/transformers/issues/10272
811,715,503
MDU6SXNzdWU4MTE3MTU1MDM=
10,272
Summarization of long text with T5 seems to output random memory content
{ "login": "db1981", "id": 64541288, "node_id": "MDQ6VXNlcjY0NTQxMjg4", "avatar_url": "https://avatars.githubusercontent.com/u/64541288?v=4", "gravatar_id": "", "url": "https://api.github.com/users/db1981", "html_url": "https://github.com/db1981", "followers_url": "https://api.github.com/users/db1981/followers", "following_url": "https://api.github.com/users/db1981/following{/other_user}", "gists_url": "https://api.github.com/users/db1981/gists{/gist_id}", "starred_url": "https://api.github.com/users/db1981/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/db1981/subscriptions", "organizations_url": "https://api.github.com/users/db1981/orgs", "repos_url": "https://api.github.com/users/db1981/repos", "events_url": "https://api.github.com/users/db1981/events{/privacy}", "received_events_url": "https://api.github.com/users/db1981/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }, { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "Hey @db1981,\r\n\r\ncould you please post a fully reproducible code snippet in the following format:\r\n\r\n```python\r\nfrom transformers import T5ForConditionalGeneration\r\n\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"...\")\r\n\r\n...\r\n```\r\n\r\nso that we can help you better?", "Hi @patrickvonplaten thanks for you reply! Below the full code!\r\n\r\nfrom transformers import T5ForConditionalGeneration, T5Tokenizer\r\n\r\n```python\r\nmodel_str = \"t5-base\"\r\n\r\nmodel = T5ForConditionalGeneration.from_pretrained(model_str)\r\ntokenizer = T5Tokenizer.from_pretrained(model_str)\r\n\r\nfull_path = \"./brief.txt\"\r\n\r\nwith open(full_path) as file: # Use file to refer to the file object\r\n text = file.read()\r\n\r\n proc_text = text.strip().replace(\"\\n\",\"\")\r\n len_text = len(proc_text.split())\r\n\r\n inputs = tokenizer.encode(\"summarize: \" + proc_text, return_tensors=\"pt\", max_length= (len_text + 1), truncation=True)\r\n outputs = model.generate(\r\n inputs, \r\n max_length=round(len_text/3), \r\n min_length=round(len_text/5), \r\n no_repeat_ngram_size=2,\r\n length_penalty=2.0, \r\n num_beams=4, \r\n early_stopping=False)\r\n\r\n final = tokenizer.decode(outputs[0], skip_special_tokens=True)\r\n\r\n print(\"%d words: %s\" % (len(final.split()), final))\r\n", "Yeah actually your input text is too long here I think -> T5 was only trained to handle up to 512 tokens (which corresponds to less than 512 words), so T5 will definitely not perform well for > 1500 words", "@patrickvonplaten correct, T5 was originally trained to handle up to 512 tokens. But recently the Transformers library was updated to handle longer texts, using the Longformer apparoach. I thought that behind the scenes a longer text would have triggered automagically Longformer...I'll now give. it a shot with that class directly.\r\n\r\nThanks! ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,613
1,619
1,619
NONE
null
Hello everyone, I'm trying to summarizing a long text (~1800 words) using the T5 model. I set the max-length and min_length parameters as well, and when I do so, the output seems to contain random memory content... Here my code: #len_text=1755 inputs = tokenizer.encode("summarize: " + proc_text, return_tensors="pt", max_length= (len_text + 1), truncation=True) outputs = model.generate( inputs, max_length=round(len_text/3), #~590 words min_length=round(len_text/5), #~350 words no_repeat_ngram_size=2, length_penalty=2.0, num_beams=4, early_stopping=False) And here the output (~150 words only): the key is to deploy predictive maintenance on assets where it makes sense. a combination of machine learning and data driven analytics can be used to plan, analyze, plan and expand across an enterprise to gain real savings and improvements. to achieve high operational efficiency and availability, ensuring that all assets are performing at peak performance with high availability and the lowest possible maintenance costs, companies are provided with some compelling options to manage their assets. the cost of adopting the wrong strategy can actually introduce failures in themselves. many companies in the process industry and energy sectors are still- - ­­.­ s­-­­» - n­r­h­n hh gra­­,­[­&­...­_­/­*­—­s... '­–­ [­“­?­;­e­ and­(­**­”­ (­---­]­" & – / _ »­­:­ "­ *­',--..,.-...[[__[*-_-"-&-s-–&&_-(-//-n'-,n-d-'rs/d&m––re»â«_â­â-ââ–â? â€[“?.”123467891012—:’...;‘’’–‘‘'&’-«­‘­’&–——‘–-—–==– ‘‘-’—-e, ‘’­l’ — e–’_—_–_&—&/–... ‘­o ’ ‘–/— ‘—’.– “­ ‘& ‘ re­re i–“– (–,–e’,’ (‘—e&,—...—/& (&.&...’/ ‘...– =–./’=- ‘- «– „– «-»–»- (— (-)-‘&'–«–*–: ‘e-a­ii­a–n–o–s–en­y­en–i—=­= ‘=&=?&*&# ;&e : r– and eng­t­d&‘ ‘ ‘ee,,&n&y-r&o-in–y–d’; ‘n’e ‘s& I attached the full text I'm attempting to summarize as well. Thanks! [brief.txt](https://github.com/huggingface/transformers/files/6007898/brief.txt)
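Editor's note: a minimal, hedged sketch of the mitigation suggested in the comments above (keep the encoder input within T5's 512-token pre-training context rather than scaling limits off the word count). The `max_length`/`min_length` generation values below are illustrative, not taken from the issue.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model = T5ForConditionalGeneration.from_pretrained("t5-base")
tokenizer = T5Tokenizer.from_pretrained("t5-base")

# "brief.txt" is the document attached to the issue (an assumption about the local path).
text = open("brief.txt").read().strip().replace("\n", " ")

# Cap the encoder input at T5's pre-training context (512 tokens) instead of the word count.
inputs = tokenizer.encode("summarize: " + text, return_tensors="pt", max_length=512, truncation=True)
outputs = model.generate(
    inputs,
    max_length=200,          # illustrative summary-length budget
    min_length=60,
    num_beams=4,
    no_repeat_ngram_size=2,
    length_penalty=2.0,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```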
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10272/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10272/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10271
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10271/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10271/comments
https://api.github.com/repos/huggingface/transformers/issues/10271/events
https://github.com/huggingface/transformers/pull/10271
811,551,271
MDExOlB1bGxSZXF1ZXN0NTc2MDYzNzI3
10,271
[test] fix func signature
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,613
1,613
1,613
CONTRIBUTOR
null
This PR makes a small fix: the func argument cannot be `None`, since it is used without first checking whether it is `None`. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10271/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10271/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10271", "html_url": "https://github.com/huggingface/transformers/pull/10271", "diff_url": "https://github.com/huggingface/transformers/pull/10271.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10271.patch", "merged_at": 1613695482000 }
https://api.github.com/repos/huggingface/transformers/issues/10270
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10270/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10270/comments
https://api.github.com/repos/huggingface/transformers/issues/10270/events
https://github.com/huggingface/transformers/pull/10270
811,545,519
MDExOlB1bGxSZXF1ZXN0NTc2MDU4OTA1
10,270
[ISSUES.md] propose using google colab to reproduce problems
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,613
1,613
1,613
CONTRIBUTOR
null
It makes the reproduction process much faster if a user supplies a Google Colab notebook where we can see the problem. This PR adds this suggestion to the existing how-to list. @sgugger, @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10270/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10270", "html_url": "https://github.com/huggingface/transformers/pull/10270", "diff_url": "https://github.com/huggingface/transformers/pull/10270.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10270.patch", "merged_at": 1613697352000 }
https://api.github.com/repos/huggingface/transformers/issues/10269
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10269/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10269/comments
https://api.github.com/repos/huggingface/transformers/issues/10269/events
https://github.com/huggingface/transformers/issues/10269
811,501,187
MDU6SXNzdWU4MTE1MDExODc=
10,269
Language Modeling Task (GPT2 / CLM) Does Not Generate Line Breaks?
{ "login": "ColinConwell", "id": 23064382, "node_id": "MDQ6VXNlcjIzMDY0Mzgy", "avatar_url": "https://avatars.githubusercontent.com/u/23064382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ColinConwell", "html_url": "https://github.com/ColinConwell", "followers_url": "https://api.github.com/users/ColinConwell/followers", "following_url": "https://api.github.com/users/ColinConwell/following{/other_user}", "gists_url": "https://api.github.com/users/ColinConwell/gists{/gist_id}", "starred_url": "https://api.github.com/users/ColinConwell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ColinConwell/subscriptions", "organizations_url": "https://api.github.com/users/ColinConwell/orgs", "repos_url": "https://api.github.com/users/ColinConwell/repos", "events_url": "https://api.github.com/users/ColinConwell/events{/privacy}", "received_events_url": "https://api.github.com/users/ColinConwell/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi Colin. I ran into this same issue when I switched over to using the datasets library to load my poetry corpus, where line breaks are super important. \r\n\r\nI ended up making a slightly modified version of the built-in [text](https://github.com/huggingface/datasets/blob/master/src/datasets/packaged_modules/text/text.py) loader called text_with_linebreaks, changing line 62 to `batch = batch.splitlines(True)` to keep the newlines. \r\n", "@jncasey Thanks for the rapid reply! I figured the culprit here might be the switch over to huggingface/datasets. How did you end up incorporating this into your workflow? Did you modify other scripts to reference text_with_linebreaks?", "Yes, my training script is a sloppily modified version of the run_clm.py example. I added a new training arg for whether to keep the line breaks, and check for that arg in the section where the script determines which loader to use based on the file extension of the data files. ", "Cc @lhoestq to see how we could surface that functionality more easily.", "Maybe let's add a `keep_linebreaks` parameter to the text loader ? What do you think ?\r\nThis is already a feature request: https://github.com/huggingface/datasets/issues/870", "Thanks for the rapid replies, and relevant updates. would there be interest then in surfacing this new functionality an extra level to the run_[c]lm.py script? or should we just modify the relevant load_dataset call in that script?", "We will do that as soon as there is a new release of datasets to pin in the requirements! For now changing the `load_dataset` in the script if you have a source install is the best way.", "That seems a fine enough solution to me. Thanks again for the assistance. I'll close the issue for now." ]
1,613
1,614
1,614
NONE
null
The legacy run_language_modeling.py script produced output that respected line breaks in the train_data_file. The updated run_clm.py script does not. I imagine this is likely due to how the dataset is processed in the new script, but if it is, how do I intervene and fix it? ## Environment info - general environment: Google Colab - `transformers` version: 4.4.0.dev0 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.7.0+cu101 (True) - Tensorflow version (GPU?): 2.4.1 (True) - Using GPU in script?: No - Using distributed or parallel set-up in script?: <fill in> ### Who can help Models: - gpt2: @patrickvonplaten, @LysandreJik Library: - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj ## Information Model I am using (Bert, XLNet ...): GPT2 The problem arises when using: * [x] the official example scripts: run_clm.py | run_language_modeling.py * [x] my own modified scripts: colab notebooks that use these scripts The tasks I am working on is: * [x] my own task or dataset: Tiny Shakespeare (from text file) ## To reproduce Steps to reproduce the behavior: 1. Download https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt 2. python run_clm.py with --train_file set to input.txt 3. Instantiate finetuned GPT2 model and use model.generate to create new sequence Colab notebooks may be found below: Original (with legacy run_language_modeling.py): https://colab.research.google.com/drive/1ieS4TuaFNJhuunaAM9wVmyp-n8Yx9_la?usp=sharing Updated (with updated run_clm.py): https://colab.research.google.com/drive/1dqIzv7WLk7sDOmFhLdMDhyKCIEcvw3lB?usp=sharing ## Expected behavior When using the legacy run_language_modeling.py script, the output is as expected, with the correct line breaks: <img width="951" alt="Screen Shot 2021-02-18 at 4 54 21 PM" src="https://user-images.githubusercontent.com/23064382/108426683-0986ca00-720a-11eb-9a3b-ae45fbcd7ce7.png"> When running the updated run_clm.py script, line breaks are conspicuously missing: <img width="1027" alt="Screen Shot 2021-02-18 at 4 54 37 PM" src="https://user-images.githubusercontent.com/23064382/108426696-10154180-720a-11eb-9792-23b88e71c911.png"> Is there a straightforward way to remedy this? My thanks as always for this wonderful repo, all your hard work, and any assistance you might be able to provide.
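Editor's note: a hedged sketch of the workaround discussed in the comments above. It assumes a `datasets` release in which the packaged "text" loader exposes a `keep_linebreaks` flag (this flag is an assumption about newer releases of the library; with older versions the loader script itself has to be patched, as described in the first comment).

```python
from datasets import load_dataset

# Assumption: the installed `datasets` version supports `keep_linebreaks` on the
# "text" loader; otherwise, patch the loader's splitlines() call as discussed above.
raw_datasets = load_dataset(
    "text",
    data_files={"train": "input.txt"},  # e.g. the Tiny Shakespeare file from the issue
    keep_linebreaks=True,
)
print(raw_datasets["train"][0]["text"])
```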
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10269/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10269/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10268
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10268/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10268/comments
https://api.github.com/repos/huggingface/transformers/issues/10268/events
https://github.com/huggingface/transformers/pull/10268
811,465,551
MDExOlB1bGxSZXF1ZXN0NTc1OTg5MDQ5
10,268
[trainer] implement support for full fp16 in evaluation/predict
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,613
1,613
1,613
CONTRIBUTOR
null
This PR allows users to use `model.half()` in evaluation/predict, which may or may not deliver results identical to fp32 eval. The outcome depends on how the model was trained and the application. e.g. if I use `--label_smoothing` with t5-small I get `eval loss = nan`, but bleu scores are exactly the same as with fp32. ### Need Besides users asking for it in the past, the real need that prompted me to implement this is based on this Issue: https://github.com/huggingface/transformers/issues/10161. To explain - DeepSpeed trains in fp16, while keeping a master copy of fp32 weights on cpu, which allows fitting a model like t5-11b (45GB in params) onto a 40GB gpu (only 22.5GB in fp16). But then the user wants to eval and deepspeed is of no use here at the moment. So we need to give users a way to run full fp16 in eval, which is what this PR proposes. This PR: * [x] adds `is_in_train` public Trainer attribute which helps to tell whether `evaluation` is running on its own, or called from `train` * [x] adds `--fp16_full_eval` to enable the full fp16 mode under eval/predict (while `full-fp16` would read better, I picked the name starting with `--fp16_` to align/group well with the other 3 `--fp16_*` args). * [x] adds the first test that measures gpu mem deltas - let's hope it proves to work across different gpus The logic is a bit tricky since we must not `model.to(device)` before `model.half()` or otherwise the model loading will OOM, but I hope I was able to keep it simple and not error-prone. Perhaps instead of replaying `place_on_device` logic at the end of `train` in the deepspeed clean up section - it'd be better to re-play the full logic in the `predict_loop`? So that each stage can decide at its beginning how and when to put the model on device. A few small related fixes: * [x] fixes `_wrap_model` to do nothing under deepspeed * [x] fixes `--fp16` help to remove the apex-only comment, as it's outdated. Questions: * [ ] Should I add a log saying that half is used at `model.half()` activation * [ ] I put it inside `prediction_step` which seems to be the right place; it won't run if it's a re-entrant eval-inside-train * [ ] as the `inputs` are `ints` I don't think we need to switch them to `half()` as well. @sgugger
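Editor's note: a minimal, hedged sketch of what full-fp16 evaluation amounts to outside of the Trainer. This is illustrative only, not the Trainer code added by the PR; the checkpoint and the dummy batch are made up. Note the ordering mentioned in the PR description: halve the weights *before* moving them to the GPU.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Halve first, then move to the device, so the fp32 copy never needs to fit on the GPU.
model = T5ForConditionalGeneration.from_pretrained("t5-small").half().to("cuda")
tokenizer = T5Tokenizer.from_pretrained("t5-small")

batch = tokenizer(["translate English to German: Hello"], return_tensors="pt").to("cuda")
with torch.no_grad():  # evaluation only, no gradients or optimizer states
    generated = model.generate(**batch)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```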
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10268/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10268/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10268", "html_url": "https://github.com/huggingface/transformers/pull/10268", "diff_url": "https://github.com/huggingface/transformers/pull/10268.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10268.patch", "merged_at": 1613696555000 }
https://api.github.com/repos/huggingface/transformers/issues/10267
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10267/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10267/comments
https://api.github.com/repos/huggingface/transformers/issues/10267/events
https://github.com/huggingface/transformers/pull/10267
811,426,635
MDExOlB1bGxSZXF1ZXN0NTc1OTU2NDM4
10,267
Introduce logging_strategy training argument
{ "login": "tanmay17061", "id": 32801726, "node_id": "MDQ6VXNlcjMyODAxNzI2", "avatar_url": "https://avatars.githubusercontent.com/u/32801726?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tanmay17061", "html_url": "https://github.com/tanmay17061", "followers_url": "https://api.github.com/users/tanmay17061/followers", "following_url": "https://api.github.com/users/tanmay17061/following{/other_user}", "gists_url": "https://api.github.com/users/tanmay17061/gists{/gist_id}", "starred_url": "https://api.github.com/users/tanmay17061/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tanmay17061/subscriptions", "organizations_url": "https://api.github.com/users/tanmay17061/orgs", "repos_url": "https://api.github.com/users/tanmay17061/repos", "events_url": "https://api.github.com/users/tanmay17061/events{/privacy}", "received_events_url": "https://api.github.com/users/tanmay17061/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Currently WIP. \r\nThanks!", "Thanks, yes that worked out! \r\nIMO, defaulting `eval_steps` to `logging_steps` is not a good decision any longer. \r\nWith `logging_strategy` introduced, it seems more intuitive to decouple both. In case user chooses `logging_strategy=\"epoch\"`, `logging_steps` is no longer a valid quantity. \r\nWhat is your take on this?", "> In case user chooses logging_strategy=\"epoch\", logging_steps is no longer a valid quantity.\r\n\r\nIt will just be ignored in that case, so there is no weird behavior for the user.", "Sure! That makes sense. \r\nYou can review the changes and let me know if any changes required. \r\nThanks", "Yes, something like `TimeInterval` (or `TimeStrategy`) will be a good generic enum. \r\nI can work on this generic enum and `saving_strategy` earlier next week most probably. Will raise a PR soon enough. \r\nFor now I've made the amends to introduce `LoggingStrategy.NO`.", "Great! We can merge this in the meantime.\r\nLooking forward to your next PR!" ]
1,613
1,613
1,613
CONTRIBUTOR
null
Introduce logging_strategy training argument in TrainingArguments and TFTrainingArguments. (#9838) # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> 1. Introduce a `logging_strategy` argument in TrainingArguments. 2. Define a LoggingStrategy enumeration. This is similar to `EvalStrategy`. <!-- Remove if not applicable --> Fixes #9838 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. [Link to issue](https://github.com/huggingface/transformers/issues/9838). - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> As changes in trainer: @sgugger
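Editor's note: a short, hedged usage sketch of the argument this PR introduces. The value names ("no", "steps", "epoch") follow the PR discussion; the exact enum and defaults may differ in the merged release, and this requires a transformers version that includes the change.

```python
from transformers import TrainingArguments

# Log once per epoch instead of every `logging_steps` steps.
args = TrainingArguments(
    output_dir="out",
    logging_strategy="epoch",  # one of "no", "steps", "epoch" per the discussion above
)
```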
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10267/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10267", "html_url": "https://github.com/huggingface/transformers/pull/10267", "diff_url": "https://github.com/huggingface/transformers/pull/10267.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10267.patch", "merged_at": 1613753362000 }
https://api.github.com/repos/huggingface/transformers/issues/10266
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10266/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10266/comments
https://api.github.com/repos/huggingface/transformers/issues/10266/events
https://github.com/huggingface/transformers/pull/10266
811,410,806
MDExOlB1bGxSZXF1ZXN0NTc1OTQzMjQ3
10,266
[trainer] add Trainer methods for metrics logging and saving
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sgugger, are you ok if we merge this and I will ask Second Good Issue to help with this - I'm not sure I will have time to do this and test all the scripts in the coming days, and since you guys discuss changing this script again, we should probably merge this first.", "I'm fine with that!", "Started an issue here: https://github.com/huggingface/transformers/issues/10337 - This is an easy task so I think First Good Issue might work. Let me know if I should bump it to Second.\r\n" ]
1,613
1,614
1,614
CONTRIBUTOR
null
This PR introduces: * [x] `trainer.log_metrics` - to perform consistent formatting for logged metrics * [x] `trainer.save_metrics` - to save the metrics This removes a lot of pointless noise from the example scripts and makes them much easier to read and understand. It doesn't take away from a user understanding the example, since these helper methods are just removing formatting and file saving. If accepted it should be easy to replicate to other example scripts so that they all produce a consistent output and are all easier to read. @sgugger
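Editor's note: a hedged sketch of how the two helpers described above would be called from an example script. `trainer` is assumed to be an already-constructed `transformers.Trainer`; the exact file names written by `save_metrics` are not confirmed here.

```python
# `trainer` is an existing transformers.Trainer instance (assumption for brevity).
train_result = trainer.train()
metrics = train_result.metrics

trainer.log_metrics("train", metrics)   # consistent console formatting of the metrics
trainer.save_metrics("train", metrics)  # persists the metrics as JSON in the output dir
```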
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10266/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10266/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10266", "html_url": "https://github.com/huggingface/transformers/pull/10266", "diff_url": "https://github.com/huggingface/transformers/pull/10266.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10266.patch", "merged_at": 1614027773000 }
https://api.github.com/repos/huggingface/transformers/issues/10265
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10265/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10265/comments
https://api.github.com/repos/huggingface/transformers/issues/10265/events
https://github.com/huggingface/transformers/issues/10265
811,280,768
MDU6SXNzdWU4MTEyODA3Njg=
10,265
Tapas Tokenizer makes DataFrame iterrows() iterator crazy ...
{ "login": "jeromemassot", "id": 20254310, "node_id": "MDQ6VXNlcjIwMjU0MzEw", "avatar_url": "https://avatars.githubusercontent.com/u/20254310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jeromemassot", "html_url": "https://github.com/jeromemassot", "followers_url": "https://api.github.com/users/jeromemassot/followers", "following_url": "https://api.github.com/users/jeromemassot/following{/other_user}", "gists_url": "https://api.github.com/users/jeromemassot/gists{/gist_id}", "starred_url": "https://api.github.com/users/jeromemassot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jeromemassot/subscriptions", "organizations_url": "https://api.github.com/users/jeromemassot/orgs", "repos_url": "https://api.github.com/users/jeromemassot/repos", "events_url": "https://api.github.com/users/jeromemassot/events{/privacy}", "received_events_url": "https://api.github.com/users/jeromemassot/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nCan you provide the table on which you tried this?\r\n\r\nTo add numeric value information to the table, each cell in the table is replaced by a `Cell` object. A `Cell` object has 2 attributes: `text` (the original string corresponding to the cell value) and an optional `numeric_value` (which can be a `float_value` or a `date`). \r\n\r\nDid you apply `.astype(str)` on your Pandas dataframe before providing it to `TapasTokenizer`? Since this is required before encoding the table.", "Hi Niels,\r\nIndeed, the code changes the DataFrame content to Cell format... but the `iterrows() `returns sometimes a correct row format which is transformed into Cell format... but sometimes a Cell object which is transformed into a Cell (of a Cell) object with text attribute initialized to the original Cell object !! :( \r\n\r\nI have applied the `.astype(str)` yeap, before and after the sample() call... just to be sure :)\r\n\r\nHere is the table with ; as separator : \r\n\r\nWater injected volume (% P.V.);Oil recovery (% I.O.I.P.);Watercut (%)\r\n214.11;61;98\r\n215.23;61;98\r\n216.36;61.1;99\r\n217.49;61.1;99\r\n218.62;61.2;98\r\n219.75;61.2;99\r\n220.88;61.2;98\r\n222.02;61.3;98\r\n223.15;61.3;99\r\n224.28;61.4;98\r\n225.41;61.4;99\r\n226.55;61.4;98\r\n227.67;61.4;99\r\n228.8;61.5;99\r\n229.94;61.5;99\r\n231.07;61.6;98\r\n232.2;61.6;99\r\n233.33;61.6;98\r\n234.47;61.7;99\r\n235.6;61.7;98\r\n236.73;61.8;99\r\n237.86;61.8;99\r\n239;61.8;99\r\n240.11;61.9;99\r\n241.24;61.9;99\r\n242.39;61.9;99\r\n243.51;62;99\r\n244.64;62;99\r\n245.77;62;99\r\n246.9;62.1;99\r\n248.03;62.1;99\r\n\r\nHow to repeat : \r\n\r\ntable = pd.read_csv(os.path.join(DATA_PATH, \"Table_01.csv\"), sep=\";\").astype(str)\r\ntable = table.sample(frac=1.0, random_state=42, replace=False).astype(str)\r\n\r\ninputs = tokenizer(table=table, queries=questions, padding='max_length', return_tensors=\"pt\")\r\n\r\nGood luck :)", "I was able to reproduce it, however when I reset the indices after sampling, it works:\r\n\r\n`table = table.sample(frac=1.0, random_state=42, replace=False).reset_index(drop=True).astype(str)`\r\n\r\nWill look into why it can't handle without resetting the row indices.\r\n", "Hi Niels,\r\nI have also tried the reset index... and in my side it was crashing the same. But it was withtout the drop=True.\r\nThis behavior is very strange. I have check the Pandas documentation and normally sample() should returned a DataFrame object... nothing fancy here. \r\nAnd it does because the iterrows() outside the Tapas Tokenizer works fine :)\r\nSo Tapas Tokenizer is doing something on the DataFrame modified by the sample() function :)", "Here's a notebook illustrating the issue, and fixing it:\r\n\r\nhttps://colab.research.google.com/drive/10MbZiMKyEWUGk2Y1fvIj0Y0_lB42NO38?usp=sharing\r\n\r\nThe reason why you're getting the error is because in the part where each cell of the table is replaced by a Cell object:\r\n\r\nhttps://github.com/huggingface/transformers/blob/97e688bc220514cd5ea072f06b186401c9cfbbd0/src/transformers/models/tapas/tokenization_tapas.py#L2742-L2745, the row indices are used. \r\n\r\nThis can be fixed by replacing `table` with `table.reset_index(drop=True)` in the first line (or resetting the index of the table before providing it to the tokenizer). Another solution is to replace the final line by `table.iloc[row_index, col_index] = Cell(text=table.iloc[row_index, col_index])`. 
Will make a small PR to add this.\r\n\r\nThank you for spotting the error!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,613
1,619
1,619
NONE
null
## Environment info - `transformers` version: 4.3.2 - Platform: Colab Pro - Python version: Python 3.6.9 - PyTorch version (GPU?): 1.7.0+cu101 torch-scatter 2.0.5 - Tensorflow version (GPU?): 2.4.1 - Using GPU in script?: Tesla P100 - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. @n1t0, @LysandreJik --> ## Information Model I am using : Tapas Something very strange happens (at least for me, a Computer Science newbie) in this function when the ingested table has been resampled with the pd.DataFrame.sample() method. In the following block of code, the rows iterator returns corrupted rows with my table. I have checked the iterrows() outside the Tapas Tokenizer and the rows returned are correct. But inside the Tokenizer, the rows are sometimes OK but sometimes Cell objects, and they correspond to the wrong rows!! ``` # Second, replace cell values by Cell objects for row_index, row in table.iterrows(): for col_index, cell in enumerate(row): table.iloc[row_index, col_index] = Cell(text=cell) ``` The direct result in my case is a crash in the normalize_for_match() method : AttributeError: 'Cell' object has no attribute 'lower' which is normal since several rows in the table now are of Cell type and not str. I cannot see why the rows iterator suddenly returns corrupted data, for both type and values. Thanks Best regards Jerome The problem arises when using: * [ X] my own modified scripts: I am using the Tapas Tokenizer with shuffled Pandas DataFrame for table. The tasks I am working on is: * [ X] my own task or dataset: Total R&D ## To reproduce Steps to reproduce the behavior: 1. Use a standard Pandas DataFrame read from csv 2. Shuffle this DataFrame by using sample with frac=1 3. Tokenize the DataFrame as table using the Tapas Tokenizer # Second, replace cell values by Cell objects ``` for row_index, row in table.iterrows(): for col_index, cell in enumerate(row): table.iloc[row_index, col_index] = Cell(text=cell) ``` ## Expected behavior The iterrows() is returning inconsistent row information for both type and content. <!-- A clear and concise description of what you would expect to happen. --> The iterrows() should return consistent row values.
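Editor's note: a minimal sketch of the workaround that surfaced in the comments above — resetting the row index after sampling so the tokenizer's positional `iloc` writes stay aligned with what `iterrows()` yields. The checkpoint name and the example query are illustrative assumptions; the CSV and sampling call come from the issue.

```python
import pandas as pd
from transformers import TapasTokenizer

tokenizer = TapasTokenizer.from_pretrained("google/tapas-base-finetuned-wtq")  # illustrative checkpoint

# Reset the index after shuffling so row positions and row labels agree again.
table = pd.read_csv("Table_01.csv", sep=";").astype(str)  # the file attached to the issue
table = table.sample(frac=1.0, random_state=42, replace=False).reset_index(drop=True).astype(str)

questions = ["What is the oil recovery when 248.03% P.V. has been injected?"]  # made-up query
inputs = tokenizer(table=table, queries=questions, padding="max_length", return_tensors="pt")
```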
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10265/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10265/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10264
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10264/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10264/comments
https://api.github.com/repos/huggingface/transformers/issues/10264/events
https://github.com/huggingface/transformers/pull/10264
811,237,394
MDExOlB1bGxSZXF1ZXN0NTc1Nzk4Mzcw
10,264
Making TF TransfoXL model compliant with AMP
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,613
1,613
1,613
CONTRIBUTOR
null
# What does this PR do? This PR makes the TF TransfoXL model compliant with AMP. All the slow tests are passing as well for these models. These two models cannot be XLA compliant for now, as it seems that tf.where cannot be used in XLA if the x and y parameters are None. See the _get_global_attn_indices method which has this case. I have opened [an issue](https://github.com/tensorflow/tensorflow/issues/47211) on the TF repo in order to ask if it is an expected behavior or a bug.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10264/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10264/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10264", "html_url": "https://github.com/huggingface/transformers/pull/10264", "diff_url": "https://github.com/huggingface/transformers/pull/10264.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10264.patch", "merged_at": 1613735888000 }
https://api.github.com/repos/huggingface/transformers/issues/10263
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10263/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10263/comments
https://api.github.com/repos/huggingface/transformers/issues/10263/events
https://github.com/huggingface/transformers/issues/10263
811,193,366
MDU6SXNzdWU4MTExOTMzNjY=
10,263
NER label re-alignment always expects B labelled first sub-words
{ "login": "joshdevins", "id": 181622, "node_id": "MDQ6VXNlcjE4MTYyMg==", "avatar_url": "https://avatars.githubusercontent.com/u/181622?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshdevins", "html_url": "https://github.com/joshdevins", "followers_url": "https://api.github.com/users/joshdevins/followers", "following_url": "https://api.github.com/users/joshdevins/following{/other_user}", "gists_url": "https://api.github.com/users/joshdevins/gists{/gist_id}", "starred_url": "https://api.github.com/users/joshdevins/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joshdevins/subscriptions", "organizations_url": "https://api.github.com/users/joshdevins/orgs", "repos_url": "https://api.github.com/users/joshdevins/repos", "events_url": "https://api.github.com/users/joshdevins/events{/privacy}", "received_events_url": "https://api.github.com/users/joshdevins/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" }, { "id": 2392046359, "node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue", "name": "Good Second Issue", "color": "dd935a", "default": false, "description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!" } ]
closed
false
null
[]
[ "Hello @joshdevins! Indeed, this is a valid issue. The current pipeline outputs tokens that were attributed a class, but ignores the following tokens. For models that were trained with labels on all subwords this works, but using a padded sub-word label like you've done yields unsatisfactory results.\r\n\r\nI think we could do better here when specifying `grouped_entities=True` to the NER pipeline, by looking ahead and checking if the tokens following a classified token are subwords tokens, in which case they can be grouped alongside the start of word token. I think this could be achievable by using offsets in fast tokenizers, as fast tokenizers are necessary for grouped entities anyway.\r\n\r\nWe can open a Good First Issue for this, or would you like to try your hand at it?", "I think there's a few strategies that can be used to realign labels in the pipeline (I can enumerate these later). However, if we put these strategies in the pipeline only, the [evaluation used in fine-tuning NER with the script](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py#L341-L349) will differ/be more limited since the evaluation currently has just two choices: use the label of the first sub-word only (ignore the other sub-words), or use each of labels on sub-words. It would be best to have the same realignment strategies available in both places.\r\n\r\nIn addition, the strategy used at training time for evaluation should really be the one that is used in the pipeline (or at least the default). So we might also consider storing the strategy in the config file that the pipeline can later read.\r\n\r\nHappy to hear your thoughts. I'm trying to write down all the realignment strategies that make sense so I will update the thread later once I can wrap my head around the options 😆", "Strategies that I can think of for how to label at inference time (+for evaluation):\r\n\r\n- If training with padded sub-words/label for first sub-word only, e.g. `Max Mustermann` → `Max` `Must` `##erman` `##n` → `B-PER` `I-PER` `X` `X`\r\n - Use the label from the first sub-word (default)\r\n- If training with the same label for each sub-word, e.g. `Max Mustermann` → `Max` `Must` `##erman` `##n` → `B-PER` `I-PER` `I-PER` `I-PER`\r\n - \"First\": (See above) Use the label from the first sub-word\r\n - \"Max\": Use the label with the maximum score across all sub-words\r\n - \"Average\": Average the score of each label across each sub-word and take the label with the maximum score (default)\r\n\r\nThis is a nice example of the latter two, see [Step 4: Evaluation](https://blog.codecentric.de/en/2020/12/ner-with-little-data-transformers-to-the-rescue/)\r\n\r\n![subword_voting](https://user-images.githubusercontent.com/181622/108532374-1376fe80-72d8-11eb-848c-475fbca9d7df.png)\r\n\r\nAs a general principle, I would argue that if `grouped_entities=True`, we should never be returning sub-words alone. Either they're part of a word that has a label, or they're not. I honestly still don't understand what the flag `ignore_subwords` is supposed to control 🤷 \r\n\r\nI would propose two flags:\r\n - `grouped_entities` (boolean) -- note that this implies subword grouping/label realignment (see below)\r\n - `True` will group all words into larger entities, e.g. Max Mustermann -> B-PER I-PER -> \"Max Musterman\" (PER)\r\n - `False` will leave words separated, , e.g. 
Max Mustermann -> B-PER I-PER -> \"Max Musterman\" (PER)\r\n - `subword_label_realignment` (boolean or strategy name)\r\n - `True` will use the default for the way the NER fine-tuning was performed, see default suggestions above\r\n - `False` will leave sub-words alone -- note that this implies that `grouped_entities` should be ignores\r\n - strategy name -- based on the above strategies", "> As a general principle, I would argue that if grouped_entities=True, we should never be returning sub-words alone. Either they're part of a word that has a label, or they're not. I honestly still don't understand what the flag ignore_subwords is supposed to control :shrug: \r\n\r\nI definitely agree with that statement, and it seems like the most straightforward way to improve that pipeline. I agree with the two flags you propose. Having finer control over these would be of great utility.\r\n\r\n> In addition, the strategy used at training time for evaluation should really be the one that is used in the pipeline (or at least the default). So we might also consider storing the strategy in the config file that the pipeline can later read.\r\n\r\nYes, definitely. These are definitely model-specific as they're reliant on the training, so adding them to the configuration would make things simpler.", "@LysandreJik Sounds good. Unfortunately I don't have time myself to work on this right now but hopefully in the future if someone else doesn't pick this one up.", "I'll put this up as a good first issue to see if a member of the community feels like working on it. Thank you for the discussion and for writing all of this up!", "I like to work on this. @LysandreJik besides @joshdevins's solution is there anything that I should consider? Do you have any suggestions? \r\nI'm thinking to add these two flags [here](https://github.com/huggingface/transformers/blob/39f70a405838bec8a8446150d1d8741688a737a2/src/transformers/pipelines/token_classification.py#L76) and probably change `group_sub_entities` and `group_entities ` functions too.", "Wonderful @elk-cloner! I think it's good to take it step by step, and @joshdevins' proposal already offers a very complete approach to re-alignment.\r\n\r\nYes, adding those two flags to the `__init__` makes sense! An important part of the development of that feature will be to develop tests to ensure that the behavior is the expected one. Please ping both @Narsil and I on the PR so that we can review!", "Thanks @elk-cloner for having a look! Happy to contribute by reviewing PRs, etc." ]
1,613
1,621
1,621
CONTRIBUTOR
null
## Environment info - `transformers` version: 4.3.1 - Platform: Darwin-19.6.0-x86_64-i386-64bit - Python version: 3.7.7 - PyTorch version (GPU?): 1.7.1 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help - bert, tokenizers, pipelines: @LysandreJik - trainer, maintained examples: @sgugger ## Information Model I am using (Bert, XLNet ...): [DistilBERT fine-tuned for conll03](https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english) The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Fine-tune a BERT model for NER/conll03 using the `run_ner.py` example script, all default values 2. Correct the label alignments, see [config.json](https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english/blob/main/config.json) 3. Infer using entities that have not been seen at training time, and are composed of multiple word-parts as defined by WordPiece (my assumption as to the cause). 4. Sub-words are labelled but pipeline re-grouping/label alignment relies on perfect sub-word labelling: E.g. Accenture → A ##cc ##ent ##ure → B-ORG O O O → A (ORG) E.g. Max Mustermann → Max Must ##erman ##n → B-PER I-PER I-PER O → Max Musterman (PER) E.g. Elasticsearch → El ##astic ##sea #rch → O O I-MISC O → ##sea (MISC) ## Expected behavior I would expect that the realignment takes the label from the first word part or the best scoring sub-word part and propagates that label to the entire word, never returning sub-words. The default in `run_ner.py` is to use a padded sub-word label at training as per the BERT paper, but I've not tried setting that to `False` yet as that's not the typical/standard practice. E.g. Accenture → A ##cc ##ent ##ure → B-ORG O O O → Accenture (ORG) E.g. Max Mustermann → Max Must ##erman ##n → B-PER I-PER I-PER O → Max Mustermann (PER) E.g. Elasticsearch → El ##astic ##sea #rch → O O I-MISC O → Elasticsearch (MISC) I'll add that it seems odd that this business logic is in the `pipeline`. When evaluating on conll03, I assume we are using the sub-words/first word, but this realignment should be considered during evaluation. As-is, I suspect the recall is lower than it should be.
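Editor's note: a small, hedged sketch of the sub-word label realignment strategies discussed in this thread ("first", "max", "average"). This is pure NumPy and illustrative only — it is not the pipeline's implementation, and the label set and per-sub-word scores are made up.

```python
import numpy as np

labels = ["O", "B-ORG", "I-ORG"]

# Made-up per-sub-word label scores for "A ##cc ##ent ##ure" (rows = sub-words, cols = labels).
subword_scores = np.array([
    [0.10, 0.80, 0.10],
    [0.70, 0.10, 0.20],
    [0.60, 0.10, 0.30],
    [0.55, 0.15, 0.30],
])

first = labels[int(subword_scores[0].argmax())]                       # "first": label of the first sub-word
max_strategy = labels[int(np.unravel_index(subword_scores.argmax(), subword_scores.shape)[1])]  # "max"
average = labels[int(subword_scores.mean(axis=0).argmax())]           # "average": mean score per label

# The strategies can disagree, which is exactly the Accenture case above:
# "first" and "max" give B-ORG, while "average" falls back to O.
print(first, max_strategy, average)
```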
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10263/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/10263/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10262
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10262/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10262/comments
https://api.github.com/repos/huggingface/transformers/issues/10262/events
https://github.com/huggingface/transformers/pull/10262
811,189,257
MDExOlB1bGxSZXF1ZXN0NTc1NzU3NDY2
10,262
Making TF T5 model compliant with AMP and XLA
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,613
1,613
1,613
CONTRIBUTOR
null
# What does this PR do? This PR makes the TF T5 model compliant with AMP and XLA. All the slow tests are passing as well for the model.
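For readers outside the CI context, "AMP" and "XLA" here mean TensorFlow mixed precision and XLA graph compilation. The snippet below is an illustrative user-side check, not part of the PR itself; it assumes TF 2.4-era APIs (`mixed_float16` policy and `experimental_compile`, which newer TF spells `jit_compile`), and the seq2seq call is intentionally trivial.

```python
# Illustrative only: exercising AMP (mixed precision) and XLA with TF T5.
import tensorflow as tf
from transformers import T5Tokenizer, TFT5ForConditionalGeneration

tf.keras.mixed_precision.set_global_policy("mixed_float16")   # AMP

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = TFT5ForConditionalGeneration.from_pretrained("t5-small")
batch = tokenizer("translate English to German: Hello", return_tensors="tf")

@tf.function(experimental_compile=True)                        # XLA compilation
def forward(input_ids, attention_mask):
    outputs = model(input_ids=input_ids,
                    attention_mask=attention_mask,
                    decoder_input_ids=input_ids)                # trivial decoder input
    return outputs[0]                                           # lm logits

logits = forward(batch["input_ids"], batch["attention_mask"])
print(logits.shape)
```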
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10262/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10262/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10262", "html_url": "https://github.com/huggingface/transformers/pull/10262", "diff_url": "https://github.com/huggingface/transformers/pull/10262.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10262.patch", "merged_at": 1613735836000 }
https://api.github.com/repos/huggingface/transformers/issues/10261
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10261/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10261/comments
https://api.github.com/repos/huggingface/transformers/issues/10261/events
https://github.com/huggingface/transformers/pull/10261
811,137,294
MDExOlB1bGxSZXF1ZXN0NTc1NzE0MTYy
10,261
Making TF OpenAI GPT model compliant with AMP and XLA
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,613
1,613
1,613
CONTRIBUTOR
null
# What does this PR do? This PR makes the TF OpenAI GPT model compliant with AMP and XLA. All the slow tests are passing as well for the model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10261/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10261/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10261", "html_url": "https://github.com/huggingface/transformers/pull/10261", "diff_url": "https://github.com/huggingface/transformers/pull/10261.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10261.patch", "merged_at": 1613745205000 }
https://api.github.com/repos/huggingface/transformers/issues/10260
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10260/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10260/comments
https://api.github.com/repos/huggingface/transformers/issues/10260/events
https://github.com/huggingface/transformers/pull/10260
811,101,145
MDExOlB1bGxSZXF1ZXN0NTc1NjgzODAz
10,260
Making TF MPNet model compliant with XLA
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,613
1,613
1,613
CONTRIBUTOR
null
# What does this PR do? This PR makes the TF MPNet model compliant with XLA. All the slow tests are passing as well for the model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10260/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10260/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10260", "html_url": "https://github.com/huggingface/transformers/pull/10260", "diff_url": "https://github.com/huggingface/transformers/pull/10260.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10260.patch", "merged_at": 1613735801000 }
https://api.github.com/repos/huggingface/transformers/issues/10259
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10259/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10259/comments
https://api.github.com/repos/huggingface/transformers/issues/10259/events
https://github.com/huggingface/transformers/pull/10259
811,051,925
MDExOlB1bGxSZXF1ZXN0NTc1NjQxODYx
10,259
Making TF MobileBert model compliant with AMP
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,613
1,613
1,613
CONTRIBUTOR
null
# What does this PR do? This PR makes the TF MobileBert model compliant with AMP. All the slow tests are passing as well for the model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10259/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10259/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10259", "html_url": "https://github.com/huggingface/transformers/pull/10259", "diff_url": "https://github.com/huggingface/transformers/pull/10259.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10259.patch", "merged_at": 1613735725000 }
https://api.github.com/repos/huggingface/transformers/issues/10258
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10258/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10258/comments
https://api.github.com/repos/huggingface/transformers/issues/10258/events
https://github.com/huggingface/transformers/issues/10258
811,046,305
MDU6SXNzdWU4MTEwNDYzMDU=
10,258
Deberta Tokenizer convert_ids_to_tokens() is not giving expected results
{ "login": "bhadreshpsavani", "id": 26653468, "node_id": "MDQ6VXNlcjI2NjUzNDY4", "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhadreshpsavani", "html_url": "https://github.com/bhadreshpsavani", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It seems expected behavior but something is still not right with this tokenizer", "Seems like they have not implemented a decoder for the tokenizer. I will have a look at it.", "It might be expected behaviour because it is based on GPT2 tokenizer and it is also having similar results\r\n", "That is not true:\r\n```\r\nfrom transformers import GPT2Tokenizer\r\nt = GPT2Tokenizer.from_pretrained('gpt2')\r\nencoded = t(\"Hi I am Bhadresh. I found an issue in Deberta Tokenizer\")\r\nt.convert_ids_to_tokens(encoded['input_ids'])\r\n```\r\n['Hi',\r\n 'ĠI',\r\n 'Ġam',\r\n 'ĠBh',\r\n 'ad',\r\n 'resh',\r\n '.',\r\n 'ĠI',\r\n 'Ġfound',\r\n 'Ġan',\r\n 'Ġissue',\r\n 'Ġin',\r\n 'ĠDe',\r\n 'bert',\r\n 'a',\r\n 'ĠToken',\r\n 'izer']", "Ya, you are right!, Something is missing in the implementation I can't figure out what!\r\n\r\nI try to convert the SQUAD2 dataset in to feature using the SQAUD.py file in data preprocessing. \r\nAfter Conversion when I decode the input id it is returning context like this\r\n\r\n`Who are you?IamBhadresh`\r\n\r\nI mean in context space is not considered!\r\n\r\nThe ConvertExampletoFeature uses `convert_ids_to_tokens` internally I suspect that was creating issue", "You can convert them back with the following code:\r\n```\r\nfrom transformers import DebertaTokenizer\r\nt = DebertaTokenizer.from_pretrained('microsoft/deberta-base')\r\nexample = \"Hi I am Bhadresh. I found an issue in Deberta Tokenizer\"\r\n\r\nencoded_example = t.encode(example)\r\n\r\n[t.gpt2_tokenizer.decode([t.gpt2_tokenizer.sym(id)]) if t.gpt2_tokenizer.sym(id) not in t.all_special_tokens else t.gpt2_tokenizer.sym(id) for id in encoded_example]\r\n```\r\nOutput:\r\n```\r\n['[CLS]',\r\n 'Hi',\r\n ' I',\r\n ' am',\r\n ' Bh',\r\n 'ad',\r\n 'resh',\r\n '.',\r\n ' I',\r\n ' found',\r\n ' an',\r\n ' issue',\r\n ' in',\r\n ' De',\r\n 'bert',\r\n 'a',\r\n ' Token',\r\n 'izer',\r\n '[SEP]']\r\n```\r\n\r\nAfter some digging into the code, I am actually not sure if I should create a patch for it or not. I think with a patch we can **probably** also remove the method [download_asset](https://github.com/huggingface/transformers/blob/cdd31b4de4b446ccff9428d14fbeb45c4d96c608/src/transformers/models/deberta/tokenization_deberta.py#L224) and refactor the [load_vocab](https://github.com/huggingface/transformers/blob/cdd31b4de4b446ccff9428d14fbeb45c4d96c608/src/transformers/models/deberta/tokenization_deberta.py#L270) method. \r\n\r\nI am not sure if this was discussed before but when we create the required files from the `bpe_encoder.bin`, we could probably get rid of the [GPT2Tokenizer](https://github.com/huggingface/transformers/blob/cdd31b4de4b446ccff9428d14fbeb45c4d96c608/src/transformers/models/deberta/tokenization_deberta.py#L301) class in tokenization_deberta.py and the DebertaTokenizer could inherit directly from the GPT2Tokenizer (like the RobertaTokenizer).\r\n\r\nI will leave it to @LysandreJik and @BigBird01 to decide what to do with it. ", "@BigBird01, would you be open to having the `DebertaTokenizer` inheriting directly from the GPT-2 tokenizer as @cronoik proposes? It would prevent such cases like the one mentioned in this issue from happening.", "Yes. 
Let’s do it this way.\n\nGet Outlook for iOS<https://aka.ms/o0ukef>\n________________________________\nFrom: Lysandre Debut <[email protected]>\nSent: Monday, February 22, 2021 5:57:14 AM\nTo: huggingface/transformers <[email protected]>\nCc: Pengcheng He <[email protected]>; Mention <[email protected]>\nSubject: Re: [huggingface/transformers] Deberta Tokenizer convert_ids_to_tokens() is not giving expected results (#10258)\n\n\n@BigBird01<https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2FBigBird01&data=04%7C01%7CPengcheng.H%40microsoft.com%7C25586e57796247f3a7f108d8d739c427%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637495990401189739%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=2WajMgFu%2BFniyQviBr3x6%2BcK0dp4q4k2mXPctOA5Ldo%3D&reserved=0>, would you be open to having the DebertaTokenizer inheriting directly from the GPT-2 tokenizer as @cronoik<https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fcronoik&data=04%7C01%7CPengcheng.H%40microsoft.com%7C25586e57796247f3a7f108d8d739c427%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637495990401189739%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=HE78YNCso1ULWJoBDEOahmmpmgdE7%2Bq7jrSBwpq1%2FiU%3D&reserved=0> proposes? It would prevent such cases like the one mentioned in this issue from happening.\n\n—\nYou are receiving this because you were mentioned.\nReply to this email directly, view it on GitHub<https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fhuggingface%2Ftransformers%2Fissues%2F10258%23issuecomment-783393615&data=04%7C01%7CPengcheng.H%40microsoft.com%7C25586e57796247f3a7f108d8d739c427%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637495990401199701%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=4j7yOLhgDl83UAGBcxomF%2F3iBjpQScMRqFh%2BneP64ro%3D&reserved=0>, or unsubscribe<https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fnotifications%2Funsubscribe-auth%2FAJDNDRU4H6NOWYGMYNP7BMTTAJPDVANCNFSM4X2GBZKQ&data=04%7C01%7CPengcheng.H%40microsoft.com%7C25586e57796247f3a7f108d8d739c427%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637495990401199701%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=GhMxfK7jJhigF9AloayWdcB9Z3lA6nxfxki%2F4Gef4TE%3D&reserved=0>.\n", "@cronoik do you want to take a stab at it?", "Yes.\r\n@LysandreJik " ]
1,613
1,617
1,617
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.0 - Platform: Colab - Python version: 3.9 - PyTorch version (GPU?): No - Tensorflow version (GPU?): No - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ## Information I am using the Deberta Tokenizer. `convert_ids_to_tokens()` of the tokenizer is not working as expected. The problem arises when using: * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset ## To reproduce Steps to reproduce the behavior: 1. Get the Deberta Tokenizer ```python from transformers import DebertaTokenizer deberta_tokenizer = DebertaTokenizer.from_pretrained('microsoft/deberta-base') ``` 2. Encode some example using the tokenizer ```python example = "Hi I am Bhadresh. I found an issue in Deberta Tokenizer" encoded_example = deberta_tokenizer.encode(example) ``` 3. Convert ids to tokens: ```python deberta_tokenizer.convert_ids_to_tokens(encoded_example) """ Output: ['[CLS]', '17250', '314', '716', '16581', '324', '3447', '13', '314', '1043', '281', '2071', '287', '1024', '4835', '64', '29130', '7509', '[SEP]'] """ ``` [Colab Link For Reproducing](https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/DebertaTokenizerIssue.ipynb) ## Expected behavior It should return readable tokens like this ``` ['[CLS]', 'hi', 'i', 'am', 'b', '##had', '##resh', '.', 'i', 'found', 'an', 'issue', 'in', 'de', '##bert', '##a', 'token', '##izer', '[SEP]'] ``` not just each integer id converted to a string, as in the current behavior. #### Tagging SMEs for help: @n1t0, @LysandreJik
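A quick way to check the behavior the reporter expected is a generic round trip through the standard tokenizer API. The sketch below is a hedged illustration of that check, not a fix; on v4.3 the DeBERTa tokenizer returns stringified ids (and, per the comments recorded above, had no working decoder), so the readable output only appears on versions where the DeBERTa tokenizer behaves like other GPT-2 based tokenizers.

```python
# Round-trip check sketch; assumes a transformers version where the DeBERTa
# tokenizer exposes the usual PreTrainedTokenizer behavior.
from transformers import DebertaTokenizer

tok = DebertaTokenizer.from_pretrained("microsoft/deberta-base")
text = "Hi I am Bhadresh. I found an issue in Deberta Tokenizer"

ids = tok.encode(text)
tokens = tok.convert_ids_to_tokens(ids)
print(tokens)                                  # expected: readable sub-word pieces
print(tok.convert_tokens_to_string(tokens))    # expected: text close to the input
print(tok.decode(ids, skip_special_tokens=True))
```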
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10258/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10258/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10257
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10257/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10257/comments
https://api.github.com/repos/huggingface/transformers/issues/10257/events
https://github.com/huggingface/transformers/pull/10257
811,039,610
MDExOlB1bGxSZXF1ZXN0NTc1NjMxMzE1
10,257
Making TF Lxmert model compliant with AMP
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,613
1,613
1,613
CONTRIBUTOR
null
# What does this PR do? This PR makes the TF Lxmert model compliant with AMP. All the slow tests are passing as well for the model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10257/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10257/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10257", "html_url": "https://github.com/huggingface/transformers/pull/10257", "diff_url": "https://github.com/huggingface/transformers/pull/10257.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10257.patch", "merged_at": 1613735655000 }
https://api.github.com/repos/huggingface/transformers/issues/10256
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10256/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10256/comments
https://api.github.com/repos/huggingface/transformers/issues/10256/events
https://github.com/huggingface/transformers/issues/10256
810,994,180
MDU6SXNzdWU4MTA5OTQxODA=
10,256
[Question]: Register new Tokenizer
{ "login": "aleSuglia", "id": 1479733, "node_id": "MDQ6VXNlcjE0Nzk3MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aleSuglia", "html_url": "https://github.com/aleSuglia", "followers_url": "https://api.github.com/users/aleSuglia/followers", "following_url": "https://api.github.com/users/aleSuglia/following{/other_user}", "gists_url": "https://api.github.com/users/aleSuglia/gists{/gist_id}", "starred_url": "https://api.github.com/users/aleSuglia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aleSuglia/subscriptions", "organizations_url": "https://api.github.com/users/aleSuglia/orgs", "repos_url": "https://api.github.com/users/aleSuglia/repos", "events_url": "https://api.github.com/users/aleSuglia/events{/privacy}", "received_events_url": "https://api.github.com/users/aleSuglia/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! `AutoTokenizer` is only used to redirect to the correct tokenizer implementation under the hood, and not to resolve to any tokenizer object. The procedure here would be to create your tokenizer like you want it to be, either by using the `tokenizers` library, by tweaking an existing one or by creating yours from scratch.\r\n\r\nThen, you can open a PR on the repo and have your tokenizer/model be added to the available architectures, and available in the `Auto*` classes so that others may leverage your checkpoints easily.", "So I take you're not planning to have an automatic module discovery. I see. Anyway, I feel like an equally nice way to solve this is to have a folder on your current path called `heriot-watt/my_model_name`. In it, I have my config files and tokenizer files that belong to the `Tokenizer` I'm inheriting from. Then, In my package `__init__.py` I had to add the following:\r\n```python\r\nMODEL_MAPPING.update({\r\n MyModelConfig: MyModel\r\n})\r\n\r\nCONFIG_MAPPING.update({\r\n \"my_model\": MyModelConfig\r\n})\r\n\r\nTOKENIZER_MAPPING.update({\r\n MyModelConfig: (MyModelTokenizer, MyModelTokenizerFast)\r\n})\r\n\r\nMODEL_NAMES_MAPPING.update({\r\n \"my_model_name\": \"MyModel\"\r\n})\r\n```\r\nIn this way, I'm able to use the `Auto*` API just fine :) ", "Thanks for showing us how you do it! That's a very interesting usage of the AutoModels, and definitely something we would be interested in adding. For example via a `transformers.register_auto_model(xxx)` or something along those lines.", "I think the AllenNLP registrable is a very good starting point for this: https://github.com/allenai/allennlp/blob/main/allennlp/common/registrable.py", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Maybe this is reckless, but I could see value in at least partially inverting this relationship. If my `.save_pretrained()` implementation could drop a hint about what module an implementation resides in, Auto Classes could have the ability to try a dynamic import without needing any registration api, and the `Auto*.from_pretrained()` caller would be relieved of the burden of making sure implementation classes are loaded ahead of time.\r\n\r\nI honestly went looking for where this happened in the code multiple times and assumed I just hadn't figured out how it worked yet.", "This is sloppy and hardly thought through, but\r\n```diff\r\ndiff --git a/src/transformers/models/auto/tokenization_auto.py b/src/transformers/models/auto/tokenization_auto.py\r\nindex f07e366c7..3ad9d1e22 100644\r\n--- a/src/transformers/models/auto/tokenization_auto.py\r\n+++ b/src/transformers/models/auto/tokenization_auto.py\r\n@@ -14,6 +14,7 @@\r\n # limitations under the License.\r\n \"\"\" Auto Tokenizer class. 
\"\"\"\r\n\r\n+import importlib\r\n import json\r\n import os\r\n from collections import OrderedDict\r\n@@ -538,6 +539,10 @@ class AutoTokenizer:\r\n if tokenizer_class is None:\r\n tokenizer_class_candidate = config_tokenizer_class\r\n tokenizer_class = tokenizer_class_from_name(tokenizer_class_candidate)\r\n+ if tokenizer_class is None:\r\n+ tokenizer_module = tokenizer_config.get(\"tokenizer_module\")\r\n+ tokenizer_module = importlib.import_module(tokenizer_module)\r\n+ tokenizer_class = getattr(tokenizer_module, config_tokenizer_class)\r\n\r\n if tokenizer_class is None:\r\n raise ValueError(\r\n```\r\nfor example, would allow subclasses that were not officially included with `transformers` to use\r\n`super().__init__(..., tokenizer_module=self.__module__, ...)` in their constructor. That seems to be enough for the setting to save in the tokenizer_config.json file. Then the caller would no longer have to be aware of what imports are necessary for a `.from_pretrained()` call to succeed.", "After 9870093f7b31bf774fe6bdfeed5e08f0d4649b07 I am unsure how to use a third party tokenizer class because `transformers.models.auto.tokenization_auto.tokenizer_class_from_name()` is using\r\n```python\r\nmodule = importlib.import_module(f\".{module_name}\", \"transformers.models\")\r\n```\r\nand trying to load and trying to use anything outside of transformers raises\r\n```\r\nValueError: attempted relative import beyond top-level package\r\n```\r\nThe workaround I have at the moment is adding \r\n```python\r\ntransformers.models.auto.tokenization_auto.TOKENIZER_MAPPING_NAMES.update((\r\n (\"MyModel\", ('MyModelTokenizer', 'MyModelTokenizerFast')),\r\n))\r\nsys.modules['transformers.models.MyModel'] = sys.modules[__name__]\r\n```\r\nto replace the `TOKENIZER_MAPPING` patch used in previous versions. But dynamically patching in additional modules seems far more aggressive than updating data structures.\r\n\r\nIt would have been very convenient here if the module names in TOKENIZER_MAPPING_NAMES had included the \".\" rather than it being added by `tokenizer_class_from_name()`." ]
1,613
1,631
1,619
CONTRIBUTOR
null
Hi there, I'm in the process of creating a new Transformer model. I have my own codebase and I'm using Transformers as an external library. If I implement a new Tokenizer that inherits from an existing one (say the BERT one), is there any way to "register" my new tokenizer so that Huggingface automatically instantiates it? I would like to support the `AutoTokenizer` API: ```python tokenizer = AutoTokenizer.from_pretrained("heriot-watt/my_model_name") ``` And I would like `AutoTokenizer` to look in my PYTHONPATH and automatically resolve the `Tokenizer` class with the name `my_model_name`. I've seen that currently, Transformers uses a hardcoded resolution strategy defined in `configuration_auto.py` or `tokenization_auto.py`. For instance, AllenNLP uses a nice register annotation to automatically resolve models, dataset readers and so on. What would be the best solution here? Thanks for your answer, Alessandro
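The mapping-update workaround shown in the comments recorded above is what worked at the time of this issue; later transformers releases added an explicit registration API for exactly this use case. The sketch below uses that later API (roughly v4.12 onward), so it is an assumption about the reader's version rather than something available in v4.3, and `MyModelConfig`/`MyModel`/`MyModelTokenizer` are the hypothetical classes from the question, not real library classes.

```python
# Sketch of the registration API added in later transformers releases.
from transformers import AutoConfig, AutoModel, AutoTokenizer

# Hypothetical custom classes living in the user's own package.
from my_package.configuration_my_model import MyModelConfig
from my_package.modeling_my_model import MyModel
from my_package.tokenization_my_model import MyModelTokenizer

AutoConfig.register("my_model", MyModelConfig)
AutoModel.register(MyModelConfig, MyModel)
AutoTokenizer.register(MyModelConfig, slow_tokenizer_class=MyModelTokenizer)

# After registration the Auto* API resolves the custom checkpoint directly:
tokenizer = AutoTokenizer.from_pretrained("heriot-watt/my_model_name")
model = AutoModel.from_pretrained("heriot-watt/my_model_name")
```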
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10256/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10256/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10255
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10255/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10255/comments
https://api.github.com/repos/huggingface/transformers/issues/10255/events
https://github.com/huggingface/transformers/pull/10255
810,951,630
MDExOlB1bGxSZXF1ZXN0NTc1NTU2OTAx
10,255
Addition of on-the-fly loading for MLM training and fix for default pad_to_max_length value for TPU
{ "login": "DarshanDeshpande", "id": 39432636, "node_id": "MDQ6VXNlcjM5NDMyNjM2", "avatar_url": "https://avatars.githubusercontent.com/u/39432636?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DarshanDeshpande", "html_url": "https://github.com/DarshanDeshpande", "followers_url": "https://api.github.com/users/DarshanDeshpande/followers", "following_url": "https://api.github.com/users/DarshanDeshpande/following{/other_user}", "gists_url": "https://api.github.com/users/DarshanDeshpande/gists{/gist_id}", "starred_url": "https://api.github.com/users/DarshanDeshpande/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DarshanDeshpande/subscriptions", "organizations_url": "https://api.github.com/users/DarshanDeshpande/orgs", "repos_url": "https://api.github.com/users/DarshanDeshpande/repos", "events_url": "https://api.github.com/users/DarshanDeshpande/events{/privacy}", "received_events_url": "https://api.github.com/users/DarshanDeshpande/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for your PR! We don't want to switch the examples to use on-the-fly tokenization however as in most cases it's actually faster to do it once and for all. Having to do it on-the-fly for a training with huge data is more of a specific use-case. Your PR can be referenced as an example of how to do it in practice but I don't think we will merge it." ]
1,613
1,613
1,613
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #10204, #10024 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger @lhoestq @patil-suraj <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10255/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10255/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10255", "html_url": "https://github.com/huggingface/transformers/pull/10255", "diff_url": "https://github.com/huggingface/transformers/pull/10255.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10255.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/10254
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10254/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10254/comments
https://api.github.com/repos/huggingface/transformers/issues/10254/events
https://github.com/huggingface/transformers/issues/10254
810,856,096
MDU6SXNzdWU4MTA4NTYwOTY=
10,254
ImportError: cannot import name 'MBart50TokenizerFast' from 'transformers' (unknown location)
{ "login": "loretoparisi", "id": 163333, "node_id": "MDQ6VXNlcjE2MzMzMw==", "avatar_url": "https://avatars.githubusercontent.com/u/163333?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loretoparisi", "html_url": "https://github.com/loretoparisi", "followers_url": "https://api.github.com/users/loretoparisi/followers", "following_url": "https://api.github.com/users/loretoparisi/following{/other_user}", "gists_url": "https://api.github.com/users/loretoparisi/gists{/gist_id}", "starred_url": "https://api.github.com/users/loretoparisi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loretoparisi/subscriptions", "organizations_url": "https://api.github.com/users/loretoparisi/orgs", "repos_url": "https://api.github.com/users/loretoparisi/repos", "events_url": "https://api.github.com/users/loretoparisi/events{/privacy}", "received_events_url": "https://api.github.com/users/loretoparisi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @loretoparisi \r\n\r\nDid you install sentencepiece ? The tokenizer needs sentencepiece", "@patil-suraj thanks I did right now\r\n\r\n```\r\nroot@d2f0e8a5ec76:/app# pip install sentencepiece\r\nCollecting sentencepiece\r\n Downloading https://files.pythonhosted.org/packages/f5/99/e0808cb947ba10f575839c43e8fafc9cc44e4a7a2c8f79c60db48220a577/sentencepiece-0.1.95-cp37-cp37m-manylinux2014_x86_64.whl (1.2MB)\r\n |████████████████████████████████| 1.2MB 507kB/s \r\nInstalling collected packages: sentencepiece\r\nSuccessfully installed sentencepiece-0.1.95\r\nWARNING: You are using pip version 19.3; however, version 21.0.1 is available.\r\nYou should consider upgrading via the 'pip install --upgrade pip' command.\r\nroot@d2f0e8a5ec76:/app# python src/translation/run.py \r\nTraceback (most recent call last):\r\n File \"src/translation/run.py\", line 7, in <module>\r\n from transformers import MBartForConditionalGeneration, MBart50TokenizerFast\r\nImportError: cannot import name 'MBart50TokenizerFast' from 'transformers' (unknown location)\r\n```\r\n\r\nCodebase is here: https://github.com/loretoparisi/hf-experiments/blob/master/src/translation/run.py", "Hi @loretoparisi! Could you show the results of `pip list` so we can investigate? Maybe `tokenizers` is missing, that's what's required for the fast tokenizer. Thanks!", "@LysandreJik of course!\r\n\r\n```\r\nroot@d2f0e8a5ec76:/app# pip list\r\nPackage Version \r\n---------------------- ------------\r\nabsl-py 0.11.0 \r\nappdirs 1.4.4 \r\nastunparse 1.6.3 \r\naudioread 2.1.9 \r\ncached-property 1.5.2 \r\ncachetools 4.2.1 \r\ncertifi 2020.12.5 \r\ncffi 1.14.5 \r\nchardet 4.0.0 \r\nclick 7.1.2 \r\ncycler 0.10.0 \r\ndecorator 4.4.2 \r\ndocopt 0.6.2 \r\nfilelock 3.0.12 \r\nflatbuffers 1.12 \r\ngast 0.3.3 \r\ngoogle-auth 1.26.1 \r\ngoogle-auth-oauthlib 0.4.2 \r\ngoogle-pasta 0.2.0 \r\ngrpcio 1.32.0 \r\nh5py 2.10.0 \r\nidna 2.10 \r\nimageio 2.9.0 \r\nimportlib-metadata 3.4.0 \r\njoblib 1.0.1 \r\nKeras 2.4.3 \r\nKeras-Preprocessing 1.1.2 \r\nkiwisolver 1.3.1 \r\nlibrosa 0.8.0 \r\nllvmlite 0.35.0 \r\nMarkdown 3.3.3 \r\nmatplotlib 3.3.4 \r\nmunkres 1.1.4 \r\nnetworkx 2.5 \r\nnumba 0.52.0 \r\nnumpy 1.19.5 \r\noauthlib 3.1.0 \r\nopt-einsum 3.3.0 \r\npackaging 20.9 \r\npandas 1.2.2 \r\nPillow 8.1.0 \r\npip 19.3 \r\npooch 1.3.0 \r\nprotobuf 3.14.0 \r\npyannote.algorithms 0.8 \r\npyannote.core 4.1 \r\npyannote.parser 0.8 \r\npyasn1 0.4.8 \r\npyasn1-modules 0.2.8 \r\npycparser 2.20 \r\npyparsing 2.4.7 \r\npython-dateutil 2.8.1 \r\npytz 2021.1 \r\nPyWavelets 1.1.1 \r\nPyYAML 5.4.1 \r\nregex 2020.11.13 \r\nrequests 2.25.1 \r\nrequests-oauthlib 1.3.0 \r\nresampy 0.2.2 \r\nrsa 4.7.1 \r\nsacremoses 0.0.43 \r\nscikit-image 0.18.1 \r\nscikit-learn 0.24.1 \r\nscipy 1.6.0 \r\nsentencepiece 0.1.95 \r\nsetuptools 41.4.0 \r\nSIDEKIT 1.3.8.5.2 \r\nsimplejson 3.17.2 \r\nsix 1.15.0 \r\nsortedcollections 2.1.0 \r\nsortedcontainers 2.3.0 \r\nSoundFile 0.10.3.post1\r\ntensorboard 2.4.1 \r\ntensorboard-plugin-wit 1.8.0 \r\ntensorflow 2.4.1 \r\ntensorflow-estimator 2.4.0 \r\ntermcolor 1.1.0 \r\nthreadpoolctl 2.1.0 \r\ntifffile 2021.2.1 \r\ntokenizers 0.10.1 \r\ntorch 1.7.1 \r\ntorchvision 0.8.2 \r\ntqdm 4.56.2 \r\ntransformers 4.3.2 \r\ntyping-extensions 3.7.4.3 \r\nurllib3 1.26.3 \r\nWerkzeug 1.0.1 \r\nwheel 0.36.2 \r\nwrapt 1.12.1 \r\nxarray 0.16.2 \r\nzipp 3.4.0 \r\n```\r\n\r\nHere I can see `tokenizers 0.10.1 `, not sure if that's the right version though.", "Ah, I think I have found the culprit! 
MBart-50 was only just released on the `master` branch and you seem to be using version v4.3.2, which does not have it yet. Could you install from source and let me know if you still have the issue?", "I installed transformer 4.3.2\r\nCould any let me know how to install it from the source?\r\n![image](https://user-images.githubusercontent.com/58412261/108588225-090f4000-737e-11eb-80ef-e565672891ac.png)\r\n", "To install from source clone the repo and run `pip install .` from the root of the repo or run\r\n\r\n`pip install git+https://github.com/huggingface/transformers.git`, which will install the master branch.\r\n", "Confirmed it works with master branch install!\r\n```\r\n['联合国首脑说,叙利亚没有军事解决办法']\r\n```" ]
1,613
1,614
1,614
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.2 - Platform: Linux-4.19.121-linuxkit-x86_64-with-debian-10.1 - Python version: 3.7.4 - PyTorch version (GPU?): 1.7.1 (False) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: <NO> - Using distributed or parallel set-up in script?: <NO> ### Who can help Model: https://huggingface.co/facebook/mbart-large-50-one-to-many-mmt @patrickvonplaten, @patil-suraj <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [X] an official GLUE/SQUaD task: (translation) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ```python import os from transformers import MBartForConditionalGeneration, MBart50TokenizerFast article_en = "The head of the United Nations says there is no military solution in Syria" model = MBartForConditionalGeneration.from_pretrained( "facebook/mbart-large-50-one-to-many-mmt", cache_dir=os.getenv("cache_dir", "model")) tokenizer = MBart50TokenizerFast.from_pretrained( "facebook/mbart-large-50-one-to-many-mmt", src_lang="en_XX") model_inputs = tokenizer(article_en, return_tensors="pt") # translate from English to Hindi generated_tokens = model.generate( **model_inputs, forced_bos_token_id=tokenizer.lang_code_to_id["hi_IN"] ) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # => 'संयुक्त राष्ट्र के नेता कहते हैं कि सीरिया में कोई सैन्य समाधान नहीं है' # translate from English to Chinese generated_tokens = model.generate( **model_inputs, forced_bos_token_id=tokenizer.lang_code_to_id["zh_CN"] ) decoded = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # => '联合国首脑说,叙利亚没有军事解决办法' print(decoded) ```` ERROR: ``` Traceback (most recent call last): File "src/translation/run.py", line 7, in <module> from transformers import MBartForConditionalGeneration, MBart50TokenizerFast ImportError: cannot import name 'MBart50TokenizerFast' from 'transformers' (unknown location) ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! 
Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior no error <!-- A clear and concise description of what you would expect to happen. -->
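As the comments recorded above conclude, `MBart50TokenizerFast` is simply absent from the 4.3.x releases, so the import fails. A small guard makes that failure mode explicit before the import; the 4.4.0 threshold below is an assumption based on when MBart-50 support first shipped, and `packaging` is already a transformers dependency.

```python
# Hedged version guard: MBart-50 classes are assumed to ship from v4.4.0 on.
from packaging import version
import transformers

if version.parse(transformers.__version__) < version.parse("4.4.0"):
    raise ImportError(
        f"transformers {transformers.__version__} predates MBart-50 support; "
        "upgrade with `pip install -U transformers` or install from source."
    )

from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
```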
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10254/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10254/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10253
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10253/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10253/comments
https://api.github.com/repos/huggingface/transformers/issues/10253/events
https://github.com/huggingface/transformers/issues/10253
810,691,443
MDU6SXNzdWU4MTA2OTE0NDM=
10,253
Load custom models
{ "login": "ayiyoh", "id": 6640801, "node_id": "MDQ6VXNlcjY2NDA4MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/6640801?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ayiyoh", "html_url": "https://github.com/ayiyoh", "followers_url": "https://api.github.com/users/ayiyoh/followers", "following_url": "https://api.github.com/users/ayiyoh/following{/other_user}", "gists_url": "https://api.github.com/users/ayiyoh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ayiyoh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayiyoh/subscriptions", "organizations_url": "https://api.github.com/users/ayiyoh/orgs", "repos_url": "https://api.github.com/users/ayiyoh/repos", "events_url": "https://api.github.com/users/ayiyoh/events{/privacy}", "received_events_url": "https://api.github.com/users/ayiyoh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello! Could you provide a reproducible code example, for example the extended custom model you created, so that we can take a look?\r\n\r\nAlso, can you let us know what's in the `./SqueezeBert/results/best_checkpoint/` directory? It's trying to look for a configuration file there but it doesn't find it.", "Thank you @LysandreJik for getting back! I have prepared a Google colab and it just ran fine: https://colab.research.google.com/drive/1SKx0DXHrgVUMFK7sk6jU05o_SnFKrc6k#scrollTo=MOijCowF3dqk\r\n\r\nThere must be something else in my code (which I can't share). Closing this now." ]
1,613
1,613
1,613
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.2 - Platform: RHEL 7 - Python version: 3.7 - PyTorch version (GPU?): 1.7.0 (GPU) - Tensorflow version (GPU?): N/A - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @sgugger @LysandreJik <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): A custom model The problem arises when using: * [ ] the official example scripts: (give details below) * [* ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [*] my own task or dataset: (give details below) I created a custom model by extending the SqueezeBertPreTrainedModel and added another classification head for multi-task learning. Trained with Trainer and TrainingArguments successfully, and saved the model by calling trainer.save_model(TRAINED_MODEL_PATH). Everything worked fine. However, when I tried to load the model by calling MyCustomModelClass.from_pretrained(TRAINED_MODEL_PATH, local_files_only=True), an error was thrown: ``` Traceback (most recent call last): File "/home/ec2-user/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/configuration_utils.py", line 424, in get_config_dict use_auth_token=use_auth_token, File "/home/ec2-user/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/file_utils.py", line 1086, in cached_path local_files_only=local_files_only, File "/home/ec2-user/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/file_utils.py", line 1259, in get_from_cache "Cannot find the requested files in the cached path and outgoing traffic has been" FileNotFoundError: Cannot find the requested files in the cached path and outgoing traffic has been disabled. To enable model look-ups and downloads online, set 'local_files_only' to False. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/ec2-user/workspaces/compressed_transformers/src/Compressed_transformers/compression/evaluate.py", line 135, in <module> local_files_only=True, File "/home/ec2-user/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/modeling_utils.py", line 962, in from_pretrained **kwargs, File "/home/ec2-user/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/configuration_utils.py", line 376, in from_pretrained config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/home/ec2-user/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/configuration_utils.py", line 436, in get_config_dict raise EnvironmentError(msg) OSError: Can't load config for './SqueezeBert/results/best_checkpoint/config.json'. Make sure that: - './SqueezeBert/results/best_checkpoint/config.json' is a correct model identifier listed on 'https://huggingface.co/models' - or './SqueezeBert/results/best_checkpoint/config.json' is the correct path to a directory containing a config.json file ``` ## To reproduce Steps to reproduce the behavior: 1. Extend the SqueezeBertPreTrainedModel (maybe other PreTrainedModel classes as well) class and create a model with a dataset 2. Train the model with the dataset and save the model using trainer.save_model(MODEL_DIR) 3. Load the model by calling MyCustomModelClass.from_pretrained(MODEL_DIR) <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior It shouldn't look for models from the internet or model classes available in the library when AutoModel or AutoConfig is not used. When MyCustomModelClass.from_pretrained(MODEL_DIR) is called, it should be able to look up config.json and load the checkpoint correctly. <!-- A clear and concise description of what you would expect to happen. -->
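The thread was closed after the reporter could not reproduce the failure in a clean Colab (see the comments above), but the intended round trip is easy to sketch: a subclass of a `*PreTrainedModel` saved with `save_pretrained` should load back from that directory with `from_pretrained`. In the sketch below, `MyCustomModel`, its extra head, and `./my_checkpoint` are hypothetical; only the `transformers` and `torch` calls are real API.

```python
# Minimal round-trip sketch for a custom SqueezeBERT subclass.
import torch
from transformers import SqueezeBertConfig, SqueezeBertModel, SqueezeBertPreTrainedModel


class MyCustomModel(SqueezeBertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.transformer = SqueezeBertModel(config)
        self.extra_head = torch.nn.Linear(config.hidden_size, 2)  # second task head
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None):
        hidden = self.transformer(input_ids=input_ids, attention_mask=attention_mask)[0]
        return self.extra_head(hidden[:, 0])                      # [CLS]-position logits


config = SqueezeBertConfig()
model = MyCustomModel(config)
model.save_pretrained("./my_checkpoint")      # writes config.json + pytorch_model.bin
reloaded = MyCustomModel.from_pretrained("./my_checkpoint", local_files_only=True)
```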
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10253/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10253/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10252
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10252/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10252/comments
https://api.github.com/repos/huggingface/transformers/issues/10252/events
https://github.com/huggingface/transformers/issues/10252
810,610,888
MDU6SXNzdWU4MTA2MTA4ODg=
10,252
microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract not available for tensorflow
{ "login": "abhijithneilabraham", "id": 35420019, "node_id": "MDQ6VXNlcjM1NDIwMDE5", "avatar_url": "https://avatars.githubusercontent.com/u/35420019?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhijithneilabraham", "html_url": "https://github.com/abhijithneilabraham", "followers_url": "https://api.github.com/users/abhijithneilabraham/followers", "following_url": "https://api.github.com/users/abhijithneilabraham/following{/other_user}", "gists_url": "https://api.github.com/users/abhijithneilabraham/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhijithneilabraham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhijithneilabraham/subscriptions", "organizations_url": "https://api.github.com/users/abhijithneilabraham/orgs", "repos_url": "https://api.github.com/users/abhijithneilabraham/repos", "events_url": "https://api.github.com/users/abhijithneilabraham/events{/privacy}", "received_events_url": "https://api.github.com/users/abhijithneilabraham/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello!\r\n\r\nYou can load PyTorch weights into Tensorflow with `TFBertModel.from_pretrained(\"microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract\", from_pt=True)`", "That works, thanks!" ]
1,613
1,613
1,613
NONE
null
@jplu The model mentioned above is available for PyTorch but not for TensorFlow. How can a PyTorch checkpoint be converted to TensorFlow for this one? Would it be possible to contribute the converted checkpoint?
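The comment recorded above gives the key flag (`from_pt=True`); a slightly fuller sketch of converting the PyTorch checkpoint and re-saving it as TensorFlow weights, so that later loads no longer need the flag, could look like this. The output directory name is hypothetical, and the conversion step requires PyTorch to be installed.

```python
# Load the PyTorch checkpoint into TF, then save TF weights locally.
from transformers import AutoTokenizer, TFAutoModel

name = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract"
tokenizer = AutoTokenizer.from_pretrained(name)
model = TFAutoModel.from_pretrained(name, from_pt=True)   # converts on the fly

model.save_pretrained("./pubmedbert-tf")       # writes tf_model.h5 + config.json
tokenizer.save_pretrained("./pubmedbert-tf")

# Later loads can use the local TF weights directly:
tf_model = TFAutoModel.from_pretrained("./pubmedbert-tf")
```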
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10252/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10252/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10251
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10251/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10251/comments
https://api.github.com/repos/huggingface/transformers/issues/10251/events
https://github.com/huggingface/transformers/pull/10251
810,605,926
MDExOlB1bGxSZXF1ZXN0NTc1MjY4OTY1
10,251
[ci] scheduled job test
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "well, the job never finished, something or something aborted the workflow - so the test wasn't complete.", "The test was probably too long (>6 hours) and was stopped by CircleCI. This PR that was just merged should help in that regard: https://github.com/huggingface/transformers/pull/10152", "That PR won't help, since in this test I removed both tf jobs - it was just one pt set of jobs per runner. Needed to do it since tf jobs were getting to run first.\r\n\r\nBut otherwise this is an awesome improvement for TF jobs!\r\n" ]
1,613
1,613
1,613
CONTRIBUTOR
null
please ignore; this time it's a branch on huggingface and not a fork
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10251/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10251/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10251", "html_url": "https://github.com/huggingface/transformers/pull/10251", "diff_url": "https://github.com/huggingface/transformers/pull/10251.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10251.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/10250
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10250/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10250/comments
https://api.github.com/repos/huggingface/transformers/issues/10250/events
https://github.com/huggingface/transformers/pull/10250
810,602,577
MDExOlB1bGxSZXF1ZXN0NTc1MjY2MjEx
10,250
[CI] force scheduled action hub re-run
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,613
1,613
1,613
CONTRIBUTOR
null
please ignore; testing only the SLOW pt job
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10250/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10250/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10250", "html_url": "https://github.com/huggingface/transformers/pull/10250", "diff_url": "https://github.com/huggingface/transformers/pull/10250.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10250.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/10249
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10249/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10249/comments
https://api.github.com/repos/huggingface/transformers/issues/10249/events
https://github.com/huggingface/transformers/pull/10249
810,588,946
MDExOlB1bGxSZXF1ZXN0NTc1MjU0OTI5
10,249
[CI] force scheduled action hub re-run
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,613
1,613
1,613
CONTRIBUTOR
null
please ignore
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10249/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10249/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10249", "html_url": "https://github.com/huggingface/transformers/pull/10249", "diff_url": "https://github.com/huggingface/transformers/pull/10249.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10249.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/10248
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10248/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10248/comments
https://api.github.com/repos/huggingface/transformers/issues/10248/events
https://github.com/huggingface/transformers/pull/10248
810,575,800
MDExOlB1bGxSZXF1ZXN0NTc1MjQzOTE0
10,248
[CI] 2 fixes
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,613
1,613
1,613
CONTRIBUTOR
null
This PR: - fixes invalid port - adds missing requirements install which lead to multiple test failures @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10248/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10248/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10248", "html_url": "https://github.com/huggingface/transformers/pull/10248", "diff_url": "https://github.com/huggingface/transformers/pull/10248.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10248.patch", "merged_at": 1613599960000 }
https://api.github.com/repos/huggingface/transformers/issues/10247
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10247/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10247/comments
https://api.github.com/repos/huggingface/transformers/issues/10247/events
https://github.com/huggingface/transformers/issues/10247
810,559,530
MDU6SXNzdWU4MTA1NTk1MzA=
10,247
[BUG] [Ray-Tune] ValueError: checkpoint not in list
{ "login": "neel04", "id": 11617870, "node_id": "MDQ6VXNlcjExNjE3ODcw", "avatar_url": "https://avatars.githubusercontent.com/u/11617870?v=4", "gravatar_id": "", "url": "https://api.github.com/users/neel04", "html_url": "https://github.com/neel04", "followers_url": "https://api.github.com/users/neel04/followers", "following_url": "https://api.github.com/users/neel04/following{/other_user}", "gists_url": "https://api.github.com/users/neel04/gists{/gist_id}", "starred_url": "https://api.github.com/users/neel04/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neel04/subscriptions", "organizations_url": "https://api.github.com/users/neel04/orgs", "repos_url": "https://api.github.com/users/neel04/repos", "events_url": "https://api.github.com/users/neel04/events{/privacy}", "received_events_url": "https://api.github.com/users/neel04/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@neel04 can you try a few things:\r\n- What version of Ray are you using? Can you try with the latest Ray (1.2).\r\n- When using the PBT scheduler, it's actually not compatible with Tune search algorithms (see the compatibility matrix here https://docs.ray.io/en/master/tune/api_docs/schedulers.html#summary). Can you remove either HyperOptSearch or PopulationBasedTraining and try again.\r\n- Can you pass in an absolute path to the `output_dir` in `TrainingArguments` instead of a relative one. I think right now it's being set to `./results`.\r\n- And just make sure to run all cells in the notebook from scratch just in case any state is being saved from previous runs.\r\n\r\nThe error you posted is coming Huggingface checkpointing, so a person from HF might be better suited to help out here.\r\n\r\nAlso, if none of the above works for you, it would help if you could post a small, reproducible example, perhaps with dummy data, that can be run. Thanks!", "- I am using the Latest Ray version: `1.2.0`\r\n- Trying `PBT` alone without HyperOPT yields the same error. I always scrap the kernel after most runs due to this reason only, but it yields the same error on all configurations (12GB RAM or 25, P100 or V100).\r\n- Set all paths to absolute, made no difference.\r\n\r\nI have shared a [gist](https://colab.research.google.com/drive/1uuhCac9hTw1dcDnqpHuvZ63rMDTGNWWM?usp=sharing) that successfully reproduces the error with a dummy dataset. You can download the checkpoint `zip` [here in Google Drive](https://drive.google.com/drive/folders/1z8OgKtOxPlEDQV9905CyqyuaUEBx5FxK?usp=sharing).\r\n\r\nIt seems strange to me why `hyperparameter_search` in general is so buggy and complex :( as compared to solutions using native libraries. ", "@neel04 thanks a lot for the posting this issue and the easy to run code. There is a bug with the checkpoint directories that is causing this error. This branch has a fix: https://github.com/amogkam/transformers/tree/tune-fix. Can you try it out and let me know if it is working for you. Thanks again!", "@amogkam Thanx a lot for the branch. I am trying to use it but I am facing 2 problems:-\r\n\r\n1. When constructing the `training_args` variable, It is not allowing me to use `adafactor` optimizer. Is this an expected limitation of this branch?\r\n2. 
I am getting this error still :( that prevents all trials to run except the first one:-\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/trial_runner.py\", line 586, in _process_trial\r\n results = self.trial_executor.fetch_result(trial)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/ray_trial_executor.py\", line 609, in fetch_result\r\n result = ray.get(trial_future[0], timeout=DEFAULT_GET_TIMEOUT)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/_private/client_mode_hook.py\", line 47, in wrapper\r\n return func(*args, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/worker.py\", line 1456, in get\r\n raise value.as_instanceof_cause()\r\nray.exceptions.RayTaskError(TuneError): ray::ImplicitFunc.train_buffered() (pid=1384, ip=172.28.0.2)\r\n File \"python/ray/_raylet.pyx\", line 480, in ray._raylet.execute_task\r\n File \"python/ray/_raylet.pyx\", line 432, in ray._raylet.execute_task.function_executor\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/trainable.py\", line 167, in train_buffered\r\n result = self.train()\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/trainable.py\", line 226, in train\r\n result = self.step()\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 366, in step\r\n self._report_thread_runner_error(block=True)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 513, in _report_thread_runner_error\r\n (\"Trial raised an exception. Traceback:\\n{}\".format(err_tb_str)\r\nray.tune.error.TuneError: Trial raised an exception. Traceback:\r\nray::ImplicitFunc.train_buffered() (pid=1384, ip=172.28.0.2)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 248, in run\r\n self._entrypoint()\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 316, in entrypoint\r\n self._status_reporter.get_checkpoint())\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 576, in _trainable_func\r\n output = fn()\r\n File \"/content/transformers/src/transformers/integrations.py\", line 164, in _objective\r\n trainer.train(model_path=model_path, trial=trial)\r\n File \"/content/transformers/src/transformers/trainer.py\", line 757, in train\r\n for step, inputs in enumerate(epoch_iterator):\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py\", line 435, in __next__\r\n data = self._next_data()\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py\", line 475, in _next_data\r\n data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py\", line 44, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py\", line 44, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"<ipython-input-12-cd510628f360>\", line 10, in __getitem__\r\nTypeError: new(): invalid data type 'str'\r\n```\r\nPretty huge tracebacks, but at least the previous error is gone which is an improvement. \r\nI will try to see if I can reproduce in the gist\r\n\r\n**EDIT:-** scrap all that above. 
Right now, I am just focusing on the [gist](https://colab.research.google.com/drive/1uuhCac9hTw1dcDnqpHuvZ63rMDTGNWWM?usp=sharing) where I am still getting the Valueerror with a list. can you confirm that you can reproduce the error again?", "@neel04 ah when installing from my fork you also have to specify the branch: `pip install git+https://github.com/amogkam/transformers.git@tune-fix`. The transformers version should be `4.4.0.dev0`. The gist works for me, and I'm not seeing the other 2 issues that you posted. Can you confirm that this branch works for you?", "Yep, that fixes it :blush: Thanks for your help! one last thing - is it normal to see a single trial take a _very_ long time while having all the other trials paused?", "@amogkam is this bug still present in 1.4.1? I am running into same problem when I set number of trails to more than 10." ]
1,613
1,625
1,613
NONE
null
## Environment info - `transformers` version: 4.4.0.dev0 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.7.0+cu101 (True) - Tensorflow version (GPU?): 2.4.1 (True) - Using GPU in script?: Yes ### Who can help Models: - tensorflow: @jplu Library: - ray/raytune: @richardliaw, @amogkam --> ## Information Model I am using (Bert, XLNet ...): RoBERTa The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce I want to do this as a text classification task where I have a sequence and I want to classify it into either one of the 20 labels (all of them numeric). Whenever I start tuning/Hyperparameter search, it starts running the first trial and logs a bit, then goes to the second trial showing the first one to be "running". This is how the logs look like:- ``` You are using PopulationBasedTraining but you haven't enabled checkpointing. This means your trials will train from scratch everytime they are exploiting new configurations. Consider enabling checkpointing by passing `keep_checkpoints_num=1` as an additional argument to `Trainer.hyperparameter_search`. == Status == Memory usage on this node: 4.3/25.5 GiB PopulationBasedTraining: 0 checkpoints, 0 perturbs Resources requested: 3/4 CPUs, 1/1 GPUs, 0.0/14.99 GiB heap, 0.0/5.18 GiB objects (0/1.0 accelerator_type:P100) Result logdir: /root/ray_results/tune_transformer_pbt Number of trials: 1/100 (1 RUNNING) +-----------------+----------+-------+-----------+-------+----------------+--------------+ | Trial name | status | loc | w_decay | lr | train_bs/gpu | num_epochs | |-----------------+----------+-------+-----------+-------+----------------+--------------| | _inner_0755e982 | RUNNING | | 0.366291 | 4e-05 | 8 | 15 | +-----------------+----------+-------+-----------+-------+----------------+--------------+ Result for _inner_0755e982: date: 2021-02-17_15-52-04 done: false eval_accuracy: 0.14948453608247422 eval_f1: 0.14948453608247422 eval_loss: 8.17737865447998 eval_precision: 0.14948453608247422 eval_recall: 0.14948453608247422 eval_runtime: 6.8976 eval_samples_per_second: 56.252 experiment_id: 5d2db84f7e9745a997bfcadedbd7d440 hostname: 0df2f30fd76b iterations_since_restore: 1 node_ip: 172.28.0.2 objective: 0.14948453608247422 pid: 39957 time_since_restore: 21.89615297317505 time_this_iter_s: 21.89615297317505 time_total_s: 21.89615297317505 timestamp: 1613577124 timesteps_since_restore: 0 training_iteration: 1 trial_id: 0755e982 == Status == Memory usage on this node: 6.3/25.5 GiB PopulationBasedTraining: 0 checkpoints, 0 perturbs Resources requested: 3/4 CPUs, 1/1 GPUs, 0.0/14.99 GiB heap, 0.0/5.18 GiB objects (0/1.0 accelerator_type:P100) Result logdir: /root/ray_results/tune_transformer_pbt Number of trials: 2/100 (1 PENDING, 1 RUNNING) +-----------------+----------+------------------+-----------+-------+----------------+--------------+-------------+----------------------+ | Trial name | status | loc | w_decay | lr | train_bs/gpu | num_epochs | eval_loss | training_iteration | |-----------------+----------+------------------+-----------+-------+----------------+--------------+-------------+----------------------| | _inner_0755e982 | RUNNING | 172.28.0.2:39957 | 0.366291 | 4e-05 | 8 | 15 | 8.17738 | 1 | | _inner_07580d98 | PENDING | | 0.376876 | 6e-05 | 8 
| 10 | | | +-----------------+----------+------------------+-----------+-------+----------------+--------------+-------------+----------------------+ Result for _inner_0755e982: date: 2021-02-17_15-52-04 done: false eval_accuracy: 0.14948453608247422 eval_f1: 0.14948453608247422 eval_loss: 8.17737865447998 eval_precision: 0.14948453608247422 eval_recall: 0.14948453608247422 eval_runtime: 6.8976 eval_samples_per_second: 56.252 experiment_id: 5d2db84f7e9745a997bfcadedbd7d440 experiment_tag: 1_num_train_epochs=15,per_device_eval_batch_size=16,per_device_train_batch_size=8 hostname: 0df2f30fd76b iterations_since_restore: 1 node_ip: 172.28.0.2 objective: 0.14948453608247422 pid: 39957 time_since_restore: 21.89615297317505 time_this_iter_s: 21.89615297317505 time_total_s: 21.89615297317505 timestamp: 1613577124 timesteps_since_restore: 0 training_iteration: 1 trial_id: 0755e982 Result for _inner_07580d98: date: 2021-02-17_15-52-29 done: false eval_accuracy: 0.14948453608247422 eval_f1: 0.14948453608247422 eval_loss: 8.1666898727417 eval_precision: 0.14948453608247422 eval_recall: 0.14948453608247422 eval_runtime: 6.8883 eval_samples_per_second: 56.327 experiment_id: e5cb4d5b00524454b7f673f971318b30 hostname: 0df2f30fd76b iterations_since_restore: 1 node_ip: 172.28.0.2 objective: 0.14948453608247422 pid: 39986 time_since_restore: 21.889320135116577 time_this_iter_s: 21.889320135116577 time_total_s: 21.889320135116577 timestamp: 1613577149 timesteps_since_restore: 0 training_iteration: 1 trial_id: 07580d98 == Status == Memory usage on this node: 6.3/25.5 GiB PopulationBasedTraining: 0 checkpoints, 0 perturbs Resources requested: 3/4 CPUs, 1/1 GPUs, 0.0/14.99 GiB heap, 0.0/5.18 GiB objects (0/1.0 accelerator_type:P100) Result logdir: /root/ray_results/tune_transformer_pbt Number of trials: 3/100 (1 ERROR, 1 PENDING, 1 RUNNING) +-----------------+----------+------------------+-----------+-------+----------------+--------------+-------------+----------------------+ | Trial name | status | loc | w_decay | lr | train_bs/gpu | num_epochs | eval_loss | training_iteration | |-----------------+----------+------------------+-----------+-------+----------------+--------------+-------------+----------------------| | _inner_07580d98 | RUNNING | 172.28.0.2:39986 | 0.376876 | 6e-05 | 8 | 10 | 8.16669 | 1 | | _inner_15e144f6 | PENDING | | 0.196785 | 2e-08 | 8 | 10 | | | | _inner_0755e982 | ERROR | | 0.366291 | 4e-05 | 8 | 15 | 8.17738 | 1 | +-----------------+----------+------------------+-----------+-------+----------------+--------------+-------------+----------------------+ Number of errored trials: 1 +-----------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Trial name | # failures | error file | |-----------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | _inner_0755e982 | 1 | /root/ray_results/tune_transformer_pbt/_inner_0755e982_1_num_train_epochs=15,per_device_eval_batch_size=16,per_device_train_batch_size=8_2021-02-17_15-51-42/error.txt | +-----------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ Result for _inner_07580d98: date: 2021-02-17_15-52-29 done: false eval_accuracy: 
0.14948453608247422 eval_f1: 0.14948453608247422 eval_loss: 8.1666898727417 eval_precision: 0.14948453608247422 eval_recall: 0.14948453608247422 eval_runtime: 6.8883 eval_samples_per_second: 56.327 experiment_id: e5cb4d5b00524454b7f673f971318b30 experiment_tag: 2_num_train_epochs=10,per_device_eval_batch_size=16,per_device_train_batch_size=8 hostname: 0df2f30fd76b iterations_since_restore: 1 node_ip: 172.28.0.2 objective: 0.14948453608247422 pid: 39986 time_since_restore: 21.889320135116577 time_this_iter_s: 21.889320135116577 time_total_s: 21.889320135116577 timestamp: 1613577149 timesteps_since_restore: 0 training_iteration: 1 trial_id: 07580d98 Result for _inner_15e144f6: date: 2021-02-17_15-52-53 done: false eval_accuracy: 0.14948453608247422 eval_f1: 0.14948453608247422 eval_loss: 8.2146635055542 eval_precision: 0.14948453608247422 eval_recall: 0.14948453608247422 eval_runtime: 6.917 eval_samples_per_second: 56.094 experiment_id: 2dc02378bb5d4a20a7c6d0228ad81076 hostname: 0df2f30fd76b iterations_since_restore: 1 node_ip: 172.28.0.2 objective: 0.14948453608247422 pid: 40016 time_since_restore: 21.961735486984253 time_this_iter_s: 21.961735486984253 time_total_s: 21.961735486984253 timestamp: 1613577173 timesteps_since_restore: 0 training_iteration: 1 trial_id: 15e144f6 == Status == Memory usage on this node: 6.2/25.5 GiB PopulationBasedTraining: 0 checkpoints, 0 perturbs Resources requested: 3/4 CPUs, 1/1 GPUs, 0.0/14.99 GiB heap, 0.0/5.18 GiB objects (0/1.0 accelerator_type:P100) Result logdir: /root/ray_results/tune_transformer_pbt Number of trials: 4/100 (2 ERROR, 1 PENDING, 1 RUNNING) +-----------------+----------+------------------+-----------+-------+----------------+--------------+-------------+----------------------+ | Trial name | status | loc | w_decay | lr | train_bs/gpu | num_epochs | eval_loss | training_iteration | |-----------------+----------+------------------+-----------+-------+----------------+--------------+-------------+----------------------| | _inner_15e144f6 | RUNNING | 172.28.0.2:40016 | 0.196785 | 2e-08 | 8 | 10 | 8.21466 | 1 | | _inner_247daa04 | PENDING | | 0.49907 | 5e-05 | 8 | 15 | | | | _inner_0755e982 | ERROR | | 0.366291 | 4e-05 | 8 | 15 | 8.17738 | 1 | | _inner_07580d98 | ERROR | | 0.376876 | 6e-05 | 8 | 10 | 8.16669 | 1 | +-----------------+----------+------------------+-----------+-------+----------------+--------------+-------------+----------------------+ Number of errored trials: 2 +-----------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Trial name | # failures | error file | |-----------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | _inner_0755e982 | 1 | /root/ray_results/tune_transformer_pbt/_inner_0755e982_1_num_train_epochs=15,per_device_eval_batch_size=16,per_device_train_batch_size=8_2021-02-17_15-51-42/error.txt | | _inner_07580d98 | 1 | /root/ray_results/tune_transformer_pbt/_inner_07580d98_2_num_train_epochs=10,per_device_eval_batch_size=16,per_device_train_batch_size=8_2021-02-17_15-52-06/error.txt | +-----------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ Result for _inner_15e144f6: date: 
2021-02-17_15-52-53 done: false eval_accuracy: 0.14948453608247422 eval_f1: 0.14948453608247422 eval_loss: 8.2146635055542 eval_precision: 0.14948453608247422 eval_recall: 0.14948453608247422 eval_runtime: 6.917 eval_samples_per_second: 56.094 experiment_id: 2dc02378bb5d4a20a7c6d0228ad81076 experiment_tag: 3_num_train_epochs=10,per_device_eval_batch_size=16,per_device_train_batch_size=8 hostname: 0df2f30fd76b iterations_since_restore: 1 node_ip: 172.28.0.2 objective: 0.14948453608247422 pid: 40016 time_since_restore: 21.961735486984253 time_this_iter_s: 21.961735486984253 time_total_s: 21.961735486984253 timestamp: 1613577173 timesteps_since_restore: 0 training_iteration: 1 trial_id: 15e144f6 ``` And this is the error that usually pops up:- ``` 2021-02-17 16:22:17,040 ERROR trial_runner.py:616 -- Trial _inner_34e77498: Error processing event. Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/ray/tune/trial_runner.py", line 586, in _process_trial results = self.trial_executor.fetch_result(trial) File "/usr/local/lib/python3.6/dist-packages/ray/tune/ray_trial_executor.py", line 609, in fetch_result result = ray.get(trial_future[0], timeout=DEFAULT_GET_TIMEOUT) File "/usr/local/lib/python3.6/dist-packages/ray/_private/client_mode_hook.py", line 47, in wrapper return func(*args, **kwargs) File "/usr/local/lib/python3.6/dist-packages/ray/worker.py", line 1456, in get raise value.as_instanceof_cause() ray.exceptions.RayTaskError(TuneError): ray::ImplicitFunc.train_buffered() (pid=473, ip=172.28.0.2) File "python/ray/_raylet.pyx", line 480, in ray._raylet.execute_task File "python/ray/_raylet.pyx", line 432, in ray._raylet.execute_task.function_executor File "/usr/local/lib/python3.6/dist-packages/ray/tune/trainable.py", line 167, in train_buffered result = self.train() File "/usr/local/lib/python3.6/dist-packages/ray/tune/trainable.py", line 226, in train result = self.step() File "/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py", line 366, in step self._report_thread_runner_error(block=True) File "/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py", line 513, in _report_thread_runner_error ("Trial raised an exception. Traceback:\n{}".format(err_tb_str) ray.tune.error.TuneError: Trial raised an exception. 
Traceback: ray::ImplicitFunc.train_buffered() (pid=473, ip=172.28.0.2) File "/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py", line 248, in run self._entrypoint() File "/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py", line 316, in entrypoint self._status_reporter.get_checkpoint()) File "/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py", line 576, in _trainable_func output = fn() File "/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py", line 651, in _inner inner(config, checkpoint_dir=None) File "/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py", line 645, in inner fn(config, **fn_kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/integrations.py", line 160, in _objective local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial) File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 983, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch) File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1062, in _maybe_log_save_evaluate self._save_checkpoint(model, trial, metrics=metrics) File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1130, in _save_checkpoint self._rotate_checkpoints(use_mtime=True) File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1460, in _rotate_checkpoints checkpoints_sorted = self._sorted_checkpoints(use_mtime=use_mtime) File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1448, in _sorted_checkpoints best_model_index = checkpoints_sorted.index(str(Path(self.state.best_model_checkpoint))) ValueError: 'results/run-34e77498/checkpoint-10' is not in list ``` Seems like it is trying to retrieve the best model but there is some sort of bug there (note that I am not checkpointing due to lack of storage capacity). 
This is a part of the code that may have a clue as to the reason for this bug:- ``` from ray.tune.suggest.hyperopt import HyperOptSearch from ray.tune.schedulers import PopulationBasedTraining from ray.tune import CLIReporter, JupyterNotebookReporter from ray import tune import random pbt = PopulationBasedTraining( time_attr="training_iteration", metric="eval_accuracy", mode="max", perturbation_interval=10, # every 10 `time_attr` units # (training_iterations in this case) hyperparam_mutations={ "weight_decay": tune.uniform(1, 0.0001), "seed": tune.uniform(1,20000), "learning_rate": tune.choice([1e-5, 2e-5, 3e-5, 4e-5, 5e-5, 6e-5, 2e-7, 1e-7, 3e-7, 2e-8]), "adafactor": tune.choice(['True','False']), "adam_beta1": tune.uniform(1.0, 0.0), "adam_beta2": tune.uniform(1.0, 0), "adam_epsilon": tune.choice([1e-8, 2e-8, 3e-8, 1e-9, 2e-9, 3e-10]), "max_grad_norm": tune.uniform(1.0, 0), }) reporter = JupyterNotebookReporter( overwrite = True, metric = 'eval_accuracy', parameter_columns={ "weight_decay": "w_decay", "learning_rate": "lr", "per_device_train_batch_size": "train_bs/gpu", "num_train_epochs": "num_epochs"}, metric_columns=["eval_acc", "eval_loss", "epoch", "training_iteration"]) tune_config = { "per_device_train_batch_size": 8, "per_device_eval_batch_size": 16, "num_train_epochs": tune.choice([10,15]) } def compute_objective(metrics): return metrics["eval_accuracy"] best = trainer.hyperparameter_search(hp_space = lambda _: tune_config, n_trials=100, compute_objective=compute_objective, direction="maximize", backend='ray', #search_alg=HyperOptSearch(metric='accuracy', mode='max', use_early_stopped_trials=True) scheduler=pbt, resources_per_trial={"cpu": 3, "gpu": 1}, keep_checkpoints_num=1, name = "tune_transformer_pbt", progress_reporter=reporter, search_alg=HyperOptSearch(metric='eval_accuracy', mode='max', use_early_stopped_trials=True), reuse_actors=True, checkpoint_at_end=True) ``` I can supply more code if requested, but it is more or less tweaked from official examples. So basically, the first trial keeps running, the second is put to pending while for the continuation of the rest of the trials, it results in an error (strangely not with the first one). I tried waiting till most trials have errored out to see whether it would continue to train the 1st trial but it just terminated giving the list of all trials that couldn't be completed. > I am not sure where this bug is to be filed - rayproject or Huggingface, so I apologize in advance if I have posted in the wrong place.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10247/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10247/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10246
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10246/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10246/comments
https://api.github.com/repos/huggingface/transformers/issues/10246/events
https://github.com/huggingface/transformers/issues/10246
810,521,918
MDU6SXNzdWU4MTA1MjE5MTg=
10,246
TensorFlow Question-Answering example fails to run (cardinality error)
{ "login": "sfvaroglu", "id": 22965499, "node_id": "MDQ6VXNlcjIyOTY1NDk5", "avatar_url": "https://avatars.githubusercontent.com/u/22965499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sfvaroglu", "html_url": "https://github.com/sfvaroglu", "followers_url": "https://api.github.com/users/sfvaroglu/followers", "following_url": "https://api.github.com/users/sfvaroglu/following{/other_user}", "gists_url": "https://api.github.com/users/sfvaroglu/gists{/gist_id}", "starred_url": "https://api.github.com/users/sfvaroglu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sfvaroglu/subscriptions", "organizations_url": "https://api.github.com/users/sfvaroglu/orgs", "repos_url": "https://api.github.com/users/sfvaroglu/repos", "events_url": "https://api.github.com/users/sfvaroglu/events{/privacy}", "received_events_url": "https://api.github.com/users/sfvaroglu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @jplu since it seems to come from the `TFTrainer`.", "Hello!\r\n\r\nSince transformers 4.2.0 you need to have TensorFlow 2.3 at least.", "@jplu With TensorFlow 2.3 and transformers 4.4.0.dev0, I'm getting the error below:\r\n\r\n[INFO|trainer_tf.py:522] 2021-02-18 19:18:25,103 >> ***** Running training *****\r\n[INFO|trainer_tf.py:523] 2021-02-18 19:18:25,103 >> Num examples = 87599\r\n[INFO|trainer_tf.py:525] 2021-02-18 19:18:25,104 >> Num Epochs = 2\r\n[INFO|trainer_tf.py:526] 2021-02-18 19:18:25,104 >> Instantaneous batch size per device = 8\r\n[INFO|trainer_tf.py:528] 2021-02-18 19:18:25,105 >> Total train batch size (w. parallel, distributed & accumulation) = 8\r\n[INFO|trainer_tf.py:530] 2021-02-18 19:18:25,105 >> Gradient Accumulation steps = 1\r\n[INFO|trainer_tf.py:531] 2021-02-18 19:18:25,105 >> Steps per epoch = 10950\r\n[INFO|trainer_tf.py:532] 2021-02-18 19:18:25,105 >> Total optimization steps = 21900\r\n2021-02-18 19:18:25.182464: W tensorflow/core/framework/op_kernel.cc:1755] Invalid argument: TypeError: `generator` yielded an element that did not match the expected structure. The expected structure was ({'input_ids': tf.int32, 'attention_mask': tf.int32, 'feature_index': tf.int64, 'qas_id': tf.string}, {'\r\nstart_positions': tf.int64, 'end_positions': tf.int64, 'cls_index': tf.int64, 'p_mask': tf.int32, 'is_impossible': tf.int32}), but the yielded element was ({'input_ids': [0, 6179, 793, 21, 2708, 77, 79, 21, 39504, 8358, 25, 10, 33799, 116, 2, 2, 767, 7, 5, 6256, 45756, 34527, 24292, 9, 957, 6, 2708, 21, 5, 1\r\n354, 9, 6130, 3889, 1488, 757, 8, 6130, 7896, 4, 3224, 2708, 18, 33694, 6, 7896, 56, 57, 36175, 8, 21, 444, 3319, 11, 107, 4, 2708, 21, 576, 7, 544, 25, 10, 39504, 8358, 33799, 11, 5, 9660, 11, 7007, 77, 79, 21, 130, 107, 793, 6, 203, 101, 11029, 362, 9581, 7, 5, 12765, 3281, 24618, 25, 2673, 11, 5, 3470, 36\r\n209, 4, 993, 6256, 45756, 34527, 2349, 194, 14, 23, 5, 86, 9, 69, 5673, 1001, 27534, 7, 3351, 6, 2708, 21, 316, 2383, 1570, 107, 793, 6, 8, 37, 21, 16984, 107, 793, 6, 53, 215, 2349, 32, 31298, 4, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1\r\n, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1\r\n, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1\r\n, 1, 1, 1, 1, 1, 1, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0,\r\n0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'feature_index': 0, 'qas_id': '570c2b046b8089140040fba5'}, {'start_positions': 73, 'end_positions': 76,\r\n'cls_index': 0, 'p_mask': [0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'is_impossible': False}).\r\nTraceback (most recent call last):\r\n\r\n File \"/home/svaroglu/.hf_venv/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py\", line 833, in generator_py_func\r\n flattened_values = nest.flatten_up_to(output_types, values)\r\n\r\n File \"/home/svaroglu/.hf_venv/lib/python3.6/site-packages/tensorflow/python/data/util/nest.py\", line 396, in flatten_up_to\r\n assert_shallow_structure(shallow_tree, input_tree)\r\n\r\n File \"/home/svaroglu/.hf_venv/lib/python3.6/site-packages/tensorflow/python/data/util/nest.py\", line 324, in 
assert_shallow_structure\r\n check_types=check_types)\r\n\r\n File \"/home/svaroglu/.hf_venv/lib/python3.6/site-packages/tensorflow/python/data/util/nest.py\", line 311, in assert_shallow_structure\r\n % (len(input_tree), len(shallow_tree)))\r\n\r\nValueError: The two structures don't have the same sequence length. Input structure has length 5, while shallow structure has length 4.\r\n", "This is because there is an issue in https://github.com/huggingface/transformers/blob/master/src/transformers/data/processors/squad.py .\r\n\r\nWe will look into it asap and will let you know here once done. Sorry for the inconvenience.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Hello, I am getting the same error. \r\n\r\nIt says \"AttributeError: 'Dataset' object has no attribute 'cardinality'\" when I train it. Does anyone know how I should address this issue? ", "> Hello, I am getting the same error.\r\n> \r\n> It says \"AttributeError: 'Dataset' object has no attribute 'cardinality'\" when I train it. Does anyone know how I should address this issue?\r\n\r\n@jeehunkang Could you find a solution to this?", " \"AttributeError: 'Dataset' object has no attribute 'cardinality'\" when I train with TFTrainer. my tf version is 2.14.\r\nFile Name: trainer_tf.py\r\nLine #479 train_ds = self.get_train_tfdataset()\r\nLine #159 self.num_train_examples = self.train_dataset.cardinality().numpy() in \r\n\r\nMy Code snippet\r\ntraining_args = TFTrainingArguments(\r\n output_dir=output_dir,\r\n learning_rate=1e-5,\r\n num_train_epochs=1,\r\n weight_decay=0.01,\r\n logging_steps=1,\r\n max_steps=1\r\n)\r\n\r\ntrainer = TFTrainer(\r\n model=original_model,\r\n args=training_args,\r\n train_dataset=tokenized_datasets['train'],\r\n eval_dataset=tokenized_datasets['validation']\r\n)\r\n\r\ntrainer.train()", "Please note that TFTrainer is no longer actively maintained cc @Rocketknight1 ", "Yes, this is quite an old issue that dates back to a time before I wrote proper Keras support for our TF classes! If people are still getting errors running the modern `run_qa.py` example script or the question answering notebook, please open a new issue and paste your code so I can reproduce and diagnose the error!" ]
1,613
1,698
1,619
NONE
null
## Environment info - `transformers` version: 4.4.0.dev0 - Platform: Linux-4.15.0-111-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.8 - PyTorch version (GPU?): 1.7.1 (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ## Information Model I am using (Bert, XLNet ...): bert-base-uncased or roberta-base The problem arises when using: * [ ] the official example scripts: question-answering (run_tf_squad.py) Error message: Instructions for updating: back_prop=False is deprecated. Consider using tf.stop_gradient instead. Instead of: results = tf.map_fn(fn, elems, back_prop=False) Use: results = tf.nest.map_structure(tf.stop_gradient, tf.map_fn(fn, elems)) 87599it [01:00, 1437.03it/s] 10570it [00:11, 958.83it/s] convert squad examples to features: 2%|_ | 1697/87599 [00:13<10:43, 133.40it/s][WARNING|squad.py:118] 2021-02-17 22:20:03,736 >> Could not find answer: 'municipal building and' vs. 'a municipal building' convert squad examples to features: 50%|_____ | 43393/87599 [05:04<05:04, 145.24it/s][WARNING|squad.py:118] 2021-02-17 22:24:55,103 >> Could not find answer: 'message stick,' vs. 'a message stick' convert squad examples to features: 100%|__________| 87599/87599 [10:10<00:00, 143.59it/s] add example index and unique id: 100%|__________| 87599/87599 [00:00<00:00, 784165.53it/s] convert squad examples to features: 100%|__________| 10570/10570 [01:14<00:00, 140.99it/s] add example index and unique id: 100%|__________| 10570/10570 [00:00<00:00, 510000.04it/s] [WARNING|integrations.py:60] 2021-02-17 22:31:16,214 >> Using the `WAND_DISABLED` environment variable is deprecated and will be removed in v5. Use the --report_to flag to control the integrations used for logging result (for instance --report_to none). [INFO|trainer_tf.py:125] 2021-02-17 22:31:16,214 >> To use comet_ml logging, run `pip/conda install comet_ml` see https://www.comet.ml/docs/python-sdk/huggingface/ Traceback (most recent call last): File "run_tf_squad.py", line 256, in <module> main() File "run_tf_squad.py", line 250, in main trainer.train() File "/home/transformers/src/transformers/trainer_tf.py", line 457, in train train_ds = self.get_train_tfdataset() File "/home/transformers/src/transformers/trainer_tf.py", line 141, in get_train_tfdataset self.num_train_examples = self.train_dataset.cardinality().numpy() AttributeError: '_AssertCardinalityDataset' object has no attribute 'cardinality' The tasks I am working on is: * [ ] an official GLUE/SQUaD task: SQUaD v1 ## To reproduce 1. Use the latest master from huggingface/transformers 2. Go to examples/question-answering 3. Run WANDB_DISABLED=true python run_tf_squad.py --model_name_or_path roberta-base --output_dir model --max_seq_length 384 --num_train_epochs 2 --per_device_train_batch_size 8 --per_device_eval_batch_size 16 --do_train --do_eval --logging_dir logs --logging_steps 10 --learning_rate 3e-5 --no_cuda=True --doc_stride 128 Could you take a look @sgugger?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10246/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10246/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10245
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10245/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10245/comments
https://api.github.com/repos/huggingface/transformers/issues/10245/events
https://github.com/huggingface/transformers/issues/10245
810,507,032
MDU6SXNzdWU4MTA1MDcwMzI=
10,245
`compute_metrics` show better results than `generate` because target data leaks
{ "login": "pperle", "id": 17356275, "node_id": "MDQ6VXNlcjE3MzU2Mjc1", "avatar_url": "https://avatars.githubusercontent.com/u/17356275?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pperle", "html_url": "https://github.com/pperle", "followers_url": "https://api.github.com/users/pperle/followers", "following_url": "https://api.github.com/users/pperle/following{/other_user}", "gists_url": "https://api.github.com/users/pperle/gists{/gist_id}", "starred_url": "https://api.github.com/users/pperle/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pperle/subscriptions", "organizations_url": "https://api.github.com/users/pperle/orgs", "repos_url": "https://api.github.com/users/pperle/repos", "events_url": "https://api.github.com/users/pperle/events{/privacy}", "received_events_url": "https://api.github.com/users/pperle/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Pinging @patrickvonplaten and @sgugger ", "Did you use the flag `--predict_with_generate`? It's there just for this: predicting using the `generate` method and the labels are then not passed (except to compute the loss).", "Thank you for the hint. I followed this tutorial [this example](https://huggingface.co/transformers/custom_datasets.html#fine-tuning-with-trainer) and used [`TrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments), which to not have the `predict_with_generate` option, instead of [`Seq2SeqTrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.Seq2SeqTrainingArguments).\r\n\r\nMaybe it's just me, but I think the `predict_with_generate` option should be described more visibly. I found it only in [`Seq2SeqTrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.Seq2SeqTrainingArguments) and in none of the examples. Also, none of the examples in the documentation use `Seq2SeqTrainingArguments`.\r\n\r\nIf you don't think the documentation should be updated you can close this issue since my confusion has been resolved. Thank you.\r\n", "Yes the documentation is missing a seq2seq example. This is because we have been working on the design recently. The most up-to-date example you should use as reference is the [run_seq2seq script](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,613
1,619
1,619
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.2 - Platform: Linux-5.8.18-050818-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using: - t5 The problem arises when using: - my own modified scripts: (give details below) The tasks I am working on is: - my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. train model with `compute_metrics` function to monitor metrics 2. use `generate` to predict text with trained model <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior I expect the metrics of `compute_metrics` to be equal to my generated text. <!-- A clear and concise description of what you would expect to happen. --> ## More information While training, I used `compute_metrics` to calculate the metric on my validation set every X steps. I was surprised to see that after training my model did not perform as expected using the `generate` function provided by huggingface. After some digging through the code I think I understand what the problem is. 
[`compute_metrics`](https://github.com/huggingface/transformers/blob/cd48078ce59a195473729759c76d88ae612b0f7a/src/transformers/trainer.py#L1665) takes as input `preds`, which is [a collection of `logits`](https://github.com/huggingface/transformers/blob/cd48078ce59a195473729759c76d88ae612b0f7a/src/transformers/trainer.py#L1635) from [`prediction_step`](https://github.com/huggingface/transformers/blob/cd48078ce59a195473729759c76d88ae612b0f7a/src/transformers/trainer.py#L1630) which internally [calls `model` with the inputs and targets of the model](https://github.com/huggingface/transformers/blob/cd48078ce59a195473729759c76d88ae612b0f7a/src/transformers/trainer.py#L1733). This means that the target text leaks into `preds.predictions` because `mode.forward` used the targets as [input for the decoder](https://github.com/huggingface/transformers/blob/1cd16512dc8060aa8c2419664f9cb83813ade4d5/src/transformers/models/t5/modeling_t5.py#L1331). This makes the metrics of `compute_metrics` seem much better than they really are. In my opinion the target data should not be used to create `preds.predictions`. Maybe the `generate` function is a better fit.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10245/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10245/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10244
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10244/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10244/comments
https://api.github.com/repos/huggingface/transformers/issues/10244/events
https://github.com/huggingface/transformers/pull/10244
810,503,529
MDExOlB1bGxSZXF1ZXN0NTc1MTg0MjA5
10,244
Script for distilling zero-shot classifier to more efficient student
{ "login": "joeddav", "id": 9353833, "node_id": "MDQ6VXNlcjkzNTM4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeddav", "html_url": "https://github.com/joeddav", "followers_url": "https://api.github.com/users/joeddav/followers", "following_url": "https://api.github.com/users/joeddav/following{/other_user}", "gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}", "starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joeddav/subscriptions", "organizations_url": "https://api.github.com/users/joeddav/orgs", "repos_url": "https://api.github.com/users/joeddav/repos", "events_url": "https://api.github.com/users/joeddav/events{/privacy}", "received_events_url": "https://api.github.com/users/joeddav/received_events", "type": "User", "site_admin": false }
[ { "id": 1838876023, "node_id": "MDU6TGFiZWwxODM4ODc2MDIz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Distillation", "name": "Distillation", "color": "d4c5f9", "default": false, "description": "Related to model distillation" }, { "id": 1936351150, "node_id": "MDU6TGFiZWwxOTM2MzUxMTUw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Examples", "name": "Examples", "color": "d4c5f9", "default": false, "description": "Which is related to examples in general" } ]
closed
false
null
[]
[ "@LysandreJik cool thanks for the feedback. \r\n\r\n@sgugger Thanks, I added `fp16` for the teacher predictions. It will also now throw an error if someone tries to run it w/ distributed or TPUs and I added a note in the readme about that as well. It _can_ do multi-gpu though and will do so automatically if multiple GPUs are available on the machine, it just can't do multi-node.", "Yes I meant distributed multi-GPU. I did see it will use all GPUs available on the machine however :-)" ]
1,613
1,613
1,613
CONTRIBUTOR
null
This PR introduces a script that provides a way to improve the speed and memory performance of a zero-shot classifier by training a more efficient student model from the zero-shot teacher's predictions over an unlabeled dataset. For a given sequence, the zero-shot classification pipeline requires each possible label to be fed through the large NLI model separately. This requirement slows results considerably, particularly for tasks with a large number of classes `K`. Given (1) an unlabeled corpus and (2) a set of candidate class names, this script allows a user to train a standard classification head with `K` output dimensions. The script generates a softmax distribution for the provided data & class names, and a student classifier is then fine-tuned on these proxy labels. The resulting student model can be used for classifying novel text instances over these `K` classes with an order-of-magnitude boost in inference speed in addition to decreased memory usage. A teacher NLI model can be distilled to a student model by running `distill_classifier.py` like so: ``` python distill_classifier.py \ --data_file unlabeled_data.txt \ --class_names_file class_names.txt \ --output_dir ./distilled_model ``` A number of other args are provided as well, such as `--teacher_name_or_path` and `--student_name_or_path` for specifying the pre-trained student & teacher models to be used (by default `roberta-large-mnli` and `distillbert-base-uncased`) and `--hypothesis_template` for customizing the [hypothesis template](https://huggingface.co/transformers/main_classes/pipelines.html#transformers.ZeroShotClassificationPipeline.__call__) used by the teacher zero-shot model. The training is implemented via `Trainer`, so any [`TrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments) can be specified as well. The resulting model can then be used trivially in a text classification pipeline or in any other way: ```python model = AutoModelForSequenceClassification.from_pretrained("./distilled_model") tokenizer = AutoTokenizer.from_pretrained("./distilled_model") distilled_classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer) ``` See the included [README.md](https://github.com/joeddav/transformers/blob/zero-shot-distillation/examples/research_projects/zero-shot-distillation/README.md) for more details and examples. Soon I'll introduce a similar script for self-training an NLI model, boosting the model's performance after training on only unlabeled data, which model can then be subsequently distilled with this script like any NLI model. **Update**: I also just added a link to a working [colab notebook demo](https://colab.research.google.com/drive/1mjBjd0cR8G57ZpsnFCS3ngGyo5nCa9ya?usp=sharing).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10244/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10244/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10244", "html_url": "https://github.com/huggingface/transformers/pull/10244", "diff_url": "https://github.com/huggingface/transformers/pull/10244.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10244.patch", "merged_at": 1613686125000 }
https://api.github.com/repos/huggingface/transformers/issues/10243
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10243/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10243/comments
https://api.github.com/repos/huggingface/transformers/issues/10243/events
https://github.com/huggingface/transformers/pull/10243
810,490,627
MDExOlB1bGxSZXF1ZXN0NTc1MTczNjAz
10,243
[trainer] refactor place_model_on_device logic, add deepspeed
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,613
1,613
1,613
CONTRIBUTOR
null
This PR: * refactors the `place_model_on_device` logic found in 3 places into one public attribute with the same name as the `TrainingArguments.place_model_on_device` attribute * adds deepspeed to that logic (it was missing in 2 places) @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10243/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10243/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10243", "html_url": "https://github.com/huggingface/transformers/pull/10243", "diff_url": "https://github.com/huggingface/transformers/pull/10243.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10243.patch", "merged_at": 1613605956000 }
https://api.github.com/repos/huggingface/transformers/issues/10242
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10242/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10242/comments
https://api.github.com/repos/huggingface/transformers/issues/10242/events
https://github.com/huggingface/transformers/issues/10242
810,429,935
MDU6SXNzdWU4MTA0Mjk5MzU=
10,242
Upgrade transformers from 3.5.0 to 4.3.2 instance error
{ "login": "brunopistone", "id": 10196125, "node_id": "MDQ6VXNlcjEwMTk2MTI1", "avatar_url": "https://avatars.githubusercontent.com/u/10196125?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brunopistone", "html_url": "https://github.com/brunopistone", "followers_url": "https://api.github.com/users/brunopistone/followers", "following_url": "https://api.github.com/users/brunopistone/following{/other_user}", "gists_url": "https://api.github.com/users/brunopistone/gists{/gist_id}", "starred_url": "https://api.github.com/users/brunopistone/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brunopistone/subscriptions", "organizations_url": "https://api.github.com/users/brunopistone/orgs", "repos_url": "https://api.github.com/users/brunopistone/repos", "events_url": "https://api.github.com/users/brunopistone/events{/privacy}", "received_events_url": "https://api.github.com/users/brunopistone/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, thanks for opening an issue. The breaking changes from version v3 to v4 are heavily documented: https://huggingface.co/transformers/migration.html#migrating-from-transformers-v3-x-to-v4-x\r\n\r\nYour particular issue is [bullet number 4](https://huggingface.co/transformers/migration.html#switching-the-return-dict-argument-to-true-by-default).", "For our reference, when trying to look for breaking changes between v3 and v4, where did you look? If we can improve visibility for those, it would be great. Maybe \"Migration\" isn't the best identifier? Are there some specific keywords you used?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,613
1,619
1,619
NONE
null
Hi guys. I tried to update the transformers module from version 3.5.0 to version 4.3.2. After this upgrade, the code that was previously working now has some problems. This is my code: ``` from transformers import AutoConfig, AutoTokenizer, TFAutoModel config = AutoConfig.from_pretrained("amazon/bort") model = TFAutoModel.from_pretrained("amazon/bort", config=config) bert_main_layer = model.bert encoder, pooler = bert_main_layer(input_ids_in, attention_mask=input_masks_in) X = tf.keras.layers.Dropout(config.hidden_dropout_prob)(pooler) X = tf.keras.layers.Dense( constants.CLASSES, kernel_initializer=tf.keras.initializers.TruncatedNormal(stddev=config.initializer_range), activation="softmax" )(X) model = tf.keras.Model( inputs=[input_ids_in, input_masks_in], outputs=[X] ) ``` In particular, when I call `bert_main_layer(input_ids_in, attention_mask=input_masks_in)`, with version 3.5.0 it returns two tensors: ``` <tf.Tensor 'bert/encoder/layer_._3/output/LayerNorm/batchnorm/add_1:0' shape=(None, 333, 1024) dtype=float32> <tf.Tensor 'bert/pooler/dense/Tanh:0' shape=(None, 1024) dtype=float32> ``` With version 4.3.2 it returns two strings: ``` last_hidden_state pooler_output ``` The consequence is that now I get this exception on the Dense layer: ``` ValueError: Input 0 of layer dense is incompatible with the layer: : expected min_ndim=2, found ndim=0. Full shape received: [] ``` Since I haven't found any documentation or guidance, can you please help me? What am I doing wrong? Thank you
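For reference, a minimal sketch of the two usual ways to adapt such code to transformers v4.x, where layer/model calls return a dict-like `ModelOutput` by default (so tuple-unpacking yields the field names as strings); the variable names are taken from the snippet above:

```python
# Option 1: ask for the old tuple behaviour explicitly
encoder, pooler = bert_main_layer(
    input_ids_in, attention_mask=input_masks_in, return_dict=False
)[:2]

# Option 2: keep the default ModelOutput and read the named fields
outputs = bert_main_layer(input_ids_in, attention_mask=input_masks_in)
encoder, pooler = outputs.last_hidden_state, outputs.pooler_output
```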
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10242/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10242/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10241
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10241/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10241/comments
https://api.github.com/repos/huggingface/transformers/issues/10241/events
https://github.com/huggingface/transformers/pull/10241
810,418,452
MDExOlB1bGxSZXF1ZXN0NTc1MTEzNDMy
10,241
[Trainer] doc update
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,613
1,613
1,613
CONTRIBUTOR
null
Trainer doc update: * [x] port the instructions to use the new `run_seq2seq.py` script * [x] add clarifications to using DeepSpeed in the notebook @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10241/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10241/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10241", "html_url": "https://github.com/huggingface/transformers/pull/10241", "diff_url": "https://github.com/huggingface/transformers/pull/10241.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10241.patch", "merged_at": 1613606288000 }
https://api.github.com/repos/huggingface/transformers/issues/10240
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10240/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10240/comments
https://api.github.com/repos/huggingface/transformers/issues/10240/events
https://github.com/huggingface/transformers/issues/10240
810,401,284
MDU6SXNzdWU4MTA0MDEyODQ=
10,240
CUDA memory error on increasing the number of generations
{ "login": "ShivamSharma1997", "id": 25745051, "node_id": "MDQ6VXNlcjI1NzQ1MDUx", "avatar_url": "https://avatars.githubusercontent.com/u/25745051?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ShivamSharma1997", "html_url": "https://github.com/ShivamSharma1997", "followers_url": "https://api.github.com/users/ShivamSharma1997/followers", "following_url": "https://api.github.com/users/ShivamSharma1997/following{/other_user}", "gists_url": "https://api.github.com/users/ShivamSharma1997/gists{/gist_id}", "starred_url": "https://api.github.com/users/ShivamSharma1997/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ShivamSharma1997/subscriptions", "organizations_url": "https://api.github.com/users/ShivamSharma1997/orgs", "repos_url": "https://api.github.com/users/ShivamSharma1997/repos", "events_url": "https://api.github.com/users/ShivamSharma1997/events{/privacy}", "received_events_url": "https://api.github.com/users/ShivamSharma1997/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "By setting `num_return_sequences` you're creating bigger batches, so it is expected to have an OOM if you ask for too much returned sequences.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,613
1,619
1,619
NONE
null
I am getting a CUDA out-of-memory error when generating text with `num_return_sequences` set to more than 100 for a self-trained gpt2 model. This is not expected, since after every generation there should be nothing left on the GPU.
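A hedged sketch of the usual workaround (variable names and argument values are placeholders, not taken from the issue): request the sequences in smaller chunks and keep each chunk's results on the CPU so the next chunk can reuse the same GPU memory, instead of asking for 100+ return sequences in a single call:

```python
outputs = []
for _ in range(10):                      # 10 chunks of 10 instead of num_return_sequences=100
    generated = model.generate(
        input_ids,
        do_sample=True,
        num_return_sequences=10,
        max_length=128,
    )
    outputs.append(generated.cpu())      # move each chunk off the GPU before generating the next
```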
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10240/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10240/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10239
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10239/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10239/comments
https://api.github.com/repos/huggingface/transformers/issues/10239/events
https://github.com/huggingface/transformers/issues/10239
810,348,948
MDU6SXNzdWU4MTAzNDg5NDg=
10,239
Question about (no_decay = ['bias', 'LayerNorm.weight']) in BERT(Transformer-based)
{ "login": "fightnyy", "id": 55227984, "node_id": "MDQ6VXNlcjU1MjI3OTg0", "avatar_url": "https://avatars.githubusercontent.com/u/55227984?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fightnyy", "html_url": "https://github.com/fightnyy", "followers_url": "https://api.github.com/users/fightnyy/followers", "following_url": "https://api.github.com/users/fightnyy/following{/other_user}", "gists_url": "https://api.github.com/users/fightnyy/gists{/gist_id}", "starred_url": "https://api.github.com/users/fightnyy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fightnyy/subscriptions", "organizations_url": "https://api.github.com/users/fightnyy/orgs", "repos_url": "https://api.github.com/users/fightnyy/repos", "events_url": "https://api.github.com/users/fightnyy/events{/privacy}", "received_events_url": "https://api.github.com/users/fightnyy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!" ]
1,613
1,613
1,613
NONE
null
Hi, I have a question about the BERT model code. I saw `no_decay = ['bias', 'LayerNorm.weight']` in the BERT code (specifically, in the optimizer setup). It seemed reasonable; however, has this been shown to give better performance, or is it just there to speed up calculations?
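For context, the pattern the question refers to is the grouped-parameter optimizer setup used in the example scripts; a minimal sketch, with placeholder hyperparameter values:

```python
from torch.optim import AdamW

no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
    {   # all weights except biases and LayerNorm weights receive weight decay
        "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
        "weight_decay": 0.01,
    },
    {   # biases and LayerNorm weights are excluded from weight decay
        "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
        "weight_decay": 0.0,
    },
]
optimizer = AdamW(optimizer_grouped_parameters, lr=5e-5)
```

The usual rationale is modeling quality rather than speed: biases and LayerNorm scales are one-dimensional parameters that decaying toward zero rarely helps, so they are conventionally left out of the weight-decay group.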
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10239/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10239/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10238
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10238/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10238/comments
https://api.github.com/repos/huggingface/transformers/issues/10238/events
https://github.com/huggingface/transformers/issues/10238
810,329,858
MDU6SXNzdWU4MTAzMjk4NTg=
10,238
ConvBert not compatible with torch v1.6
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Testing all versions from torch v1.3.0+ is indeed on the roadmap, I expect ~1 month out alongside all the other tests improvements.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,613
1,619
1,619
MEMBER
null
ConvBERT uses statements like `torch.multiply`, which does not exist in PyTorch v1.6 => ConvBERT is not compatible with v1.6 (cc @abhishekkrthakur). This can easily be checked by running: `pytest tests/test_modeling_convbert.py` @LysandreJik, @sgugger - It would be great to test all the different PyTorch versions in a slow test, I think.
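As a small illustration (placeholder tensors, not the actual ConvBERT code): `torch.multiply` is only an alias of `torch.mul` introduced in torch 1.7, so a backward-compatible fix is to call `torch.mul` or use the `*` operator:

```python
import torch

a = torch.randn(2, 3)   # placeholder tensors
b = torch.randn(2, 3)

# equivalent to torch.multiply(a, b), but also works on torch 1.6
assert torch.equal(torch.mul(a, b), a * b)
```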
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10238/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10238/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10237
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10237/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10237/comments
https://api.github.com/repos/huggingface/transformers/issues/10237/events
https://github.com/huggingface/transformers/pull/10237
810,309,933
MDExOlB1bGxSZXF1ZXN0NTc1MDIyNjA1
10,237
TransCoder
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,613
1,619
1,619
MEMBER
null
# What does this PR do? Adds TransCoder https://github.com/facebookresearch/TransCoder
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10237/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10237/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10237", "html_url": "https://github.com/huggingface/transformers/pull/10237", "diff_url": "https://github.com/huggingface/transformers/pull/10237.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10237.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/10236
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10236/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10236/comments
https://api.github.com/repos/huggingface/transformers/issues/10236/events
https://github.com/huggingface/transformers/pull/10236
810,272,523
MDExOlB1bGxSZXF1ZXN0NTc0OTkxMTg1
10,236
Add m2m100
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "id": 2669577093, "node_id": "MDU6TGFiZWwyNjY5NTc3MDkz", "url": "https://api.github.com/repos/huggingface/transformers/labels/PR%20for%20Model%20Addition", "name": "PR for Model Addition", "color": "5319e7", "default": false, "description": "" } ]
closed
false
null
[]
[ "Sure, Patrick !", "I’ve addressed all the review comments, and all the slow/fast tests are now passing.\r\n\r\nI didn’t add fast tokenizer because `M2M100` is `sentencepiece` based tokenizer, but it uses `sentencepiece` for just tokenizing and then uses a vocab file to convert the tokens to ids and ids to tokens. So our current `SpmConverter` doesn’t work for this. I’ll try to add fast tokenizer in a follow-up PR.\r\n\r\nMerging!", "> I’ve addressed all the review comments, and all the slow/fast tests are now passing.\r\n> \r\n> I didn’t add fast tokenizer because `M2M100` is `sentencepiece` based tokenizer, but it uses `sentencepiece` for just tokenizing and then uses a vocab file to convert the tokens to ids and ids to tokens. So our current `SpmConverter` doesn’t work for this. I’ll try to add fast tokenizer in a follow-up PR.\r\n> \r\n> Merging!\r\n\r\nHey, I was wondering if there's any progress on a Fast Tokenizer for M2M or if any help can be needed?\r\nThanks :)" ]
1,613
1,631
1,615
MEMBER
null
# What does this PR do? Adds the M2M100 model https://github.com/pytorch/fairseq/tree/master/examples/m2m_100 Fixes #8054
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10236/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10236/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10236", "html_url": "https://github.com/huggingface/transformers/pull/10236", "diff_url": "https://github.com/huggingface/transformers/pull/10236.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10236.patch", "merged_at": 1615049056000 }
https://api.github.com/repos/huggingface/transformers/issues/10235
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10235/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10235/comments
https://api.github.com/repos/huggingface/transformers/issues/10235/events
https://github.com/huggingface/transformers/pull/10235
810,267,710
MDExOlB1bGxSZXF1ZXN0NTc0OTg3MTM1
10,235
[file_utils] do not gobble certain kinds of requests.ConnectionError
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sgugger Definitely on my radar at some point.\r\n\r\nFor now though it's useful for me to have a more experimental codebase where the API can change/break :)", "> LGTM but I don't have sufficient `requests` knowledge to be sure this catches all exceptions that we want to catch\r\n\r\nwe're in the same boat, sailor", "In order to validate your strategy, I took the list of every exception in the public API of requests: https://2.python-requests.org/en/master/api/#exceptions\r\n\r\nI built the inheritance tree between exceptions: https://requests.readthedocs.io/en/master/_modules/requests/exceptions/\r\n\r\nHere's a readable version:\r\n\r\n```\r\nIOError\r\n+-- RequestException \r\n +-- HTTPError\r\n +-- ConnectionError\r\n | +-- ProxyError\r\n | +-- SSLError\r\n | +-- ConnectTimeout (also inherits Timeout)\r\n +-- Timeout\r\n | +-- ConnectTimeout (also inherits ConnectionError)\r\n | +-- ReadTimeout\r\n +-- URLRequired\r\n +-- TooManyRedirects\r\n +-- MissingSchema (also inherits ValueError)\r\n +-- InvalidSchema (also inherits ValueError)\r\n +-- InvalidURL (also inherits ValueError)\r\n | +-- InvalidProxyURL\r\n +-- InvalidHeader (also inherits ValueError)\r\n +-- ChunkedEncodingError\r\n +-- ContentDecodingError (also inherits urllib3.exceptions.HTTPError)\r\n +-- StreamConsumedError (also inherits TypeError)\r\n +-- RetryError\r\n +-- UnrewindableBodyError\r\n```\r\n\r\nMultiple inheritance is most likely for backwards-compatibility when changing exception tyes. For example, if you want to raise the more accurate `ContentDecodingError` instead of a generic `urllib3.exceptions.HTTPError`, making `ContentDecodingError` inherit `urllib3.exceptions.HTTPError` ensures you don't break the code of users who used to catch `urllib3.exceptions.HTTPError`.\r\n\r\nI assume you want to tell apart situations where it's worth retrying from situations it isn't worth retrying, because there's a configuration issue that won't solve itself by retrying.\r\n\r\nBased on the above, on the documentation, and on a quick search in the source code of requests, what you're doing looks correct to me, modulo my suggestion on the style.", "Applied most of your comments @aaugustin, merging after internal review from @julien-c", "👍 " ]
1,613
1,616
1,616
MEMBER
null
Backport from https://github.com/huggingface/huggingface_hub/pull/14/commits/34b7b70d07ab1c9fc2f7da603d47cb344e256af6 might close (or at the very least provide more transparency into) #8690, #10067, and others
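A rough sketch of the distinction discussed in the comments above (the URL is a placeholder): configuration-type failures such as `SSLError` and `ProxyError` subclass `ConnectionError`, so they have to be re-raised before the generic connection errors are swallowed by a cache fallback:

```python
import requests

url = "https://huggingface.co"          # placeholder endpoint

try:
    response = requests.get(url, timeout=10)
    response.raise_for_status()
except (requests.exceptions.SSLError, requests.exceptions.ProxyError):
    raise                               # configuration problems: retrying or caching won't help
except requests.exceptions.ConnectionError:
    response = None                     # transient network failure: fall back to the local cache
```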
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10235/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10235/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10235", "html_url": "https://github.com/huggingface/transformers/pull/10235", "diff_url": "https://github.com/huggingface/transformers/pull/10235.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10235.patch", "merged_at": 1616085465000 }
https://api.github.com/repos/huggingface/transformers/issues/10234
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10234/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10234/comments
https://api.github.com/repos/huggingface/transformers/issues/10234/events
https://github.com/huggingface/transformers/issues/10234
810,258,652
MDU6SXNzdWU4MTAyNTg2NTI=
10,234
Request to add Switch Transformer
{ "login": "coderpotter", "id": 33569809, "node_id": "MDQ6VXNlcjMzNTY5ODA5", "avatar_url": "https://avatars.githubusercontent.com/u/33569809?v=4", "gravatar_id": "", "url": "https://api.github.com/users/coderpotter", "html_url": "https://github.com/coderpotter", "followers_url": "https://api.github.com/users/coderpotter/followers", "following_url": "https://api.github.com/users/coderpotter/following{/other_user}", "gists_url": "https://api.github.com/users/coderpotter/gists{/gist_id}", "starred_url": "https://api.github.com/users/coderpotter/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/coderpotter/subscriptions", "organizations_url": "https://api.github.com/users/coderpotter/orgs", "repos_url": "https://api.github.com/users/coderpotter/repos", "events_url": "https://api.github.com/users/coderpotter/events{/privacy}", "received_events_url": "https://api.github.com/users/coderpotter/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ " Google released the source code for transformer-based mixture-of-experts (the switch architecture): https://github.com/tensorflow/mesh/blob/master/mesh_tensorflow/transformer/moe.py\r\n\r\nAccording to https://www.infoq.com/news/2021/02/google-trillion-parameter-ai/ the model weights are not available yet." ]
1,613
1,613
null
NONE
null
Google has come up with yet another transformer: https://arxiv.org/pdf/2101.03961.pdf
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10234/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10234/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/10233
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10233/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10233/comments
https://api.github.com/repos/huggingface/transformers/issues/10233/events
https://github.com/huggingface/transformers/pull/10233
810,238,952
MDExOlB1bGxSZXF1ZXN0NTc0OTYzMDA2
10,233
Making TF Longformer-like models compliant with AMP
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,613
1,614
1,614
CONTRIBUTOR
null
# What does this PR do? This PR makes the TF Longformer-like models compliant with AMP. All the slow tests are passing as well for these models. These two models cannot be XLA compliant for now, as it seems that `tf.where` cannot be used in XLA if the `x` and `y` parameters are `None`. See the `_get_global_attn_indices` method which has this case. I have opened [an issue](https://github.com/tensorflow/tensorflow/issues/47211) on the TF repo in order to ask if it is an expected behavior or a bug.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10233/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10233/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/10233", "html_url": "https://github.com/huggingface/transformers/pull/10233", "diff_url": "https://github.com/huggingface/transformers/pull/10233.diff", "patch_url": "https://github.com/huggingface/transformers/pull/10233.patch", "merged_at": 1614004917000 }
https://api.github.com/repos/huggingface/transformers/issues/10232
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10232/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10232/comments
https://api.github.com/repos/huggingface/transformers/issues/10232/events
https://github.com/huggingface/transformers/issues/10232
810,165,137
MDU6SXNzdWU4MTAxNjUxMzc=
10,232
Multilabel Sequence Classification in trainer
{ "login": "GuillemGSubies", "id": 37592763, "node_id": "MDQ6VXNlcjM3NTkyNzYz", "avatar_url": "https://avatars.githubusercontent.com/u/37592763?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GuillemGSubies", "html_url": "https://github.com/GuillemGSubies", "followers_url": "https://api.github.com/users/GuillemGSubies/followers", "following_url": "https://api.github.com/users/GuillemGSubies/following{/other_user}", "gists_url": "https://api.github.com/users/GuillemGSubies/gists{/gist_id}", "starred_url": "https://api.github.com/users/GuillemGSubies/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GuillemGSubies/subscriptions", "organizations_url": "https://api.github.com/users/GuillemGSubies/orgs", "repos_url": "https://api.github.com/users/GuillemGSubies/repos", "events_url": "https://api.github.com/users/GuillemGSubies/events{/privacy}", "received_events_url": "https://api.github.com/users/GuillemGSubies/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "Hi @LysandreJik, \r\nIf no one is working on this can I start on this feature? \r\nI imagine this will not be that difficult and should be possible by using the sigmoid instead of the softmax, where we're calculating a probability between 0-1 for each class which will be encapsulated in a class similar to [ModelName]ForSequenceClassification. ", "Hi @vimarshc, I believe @abhishekkrthakur is working on this in https://github.com/huggingface/transformers/pull/11012", "Thanks for the update! \r\nShall try to make myself useful for some other issue. Haha. ", "#11012 is merged and 4.6.0 is released, is this feature already there?", "I believe there is now multi-label classification within the models, and changing a model configuration argument (`config.problem_type = \"multi_label_classification\"`) should enable that out of the box. Have you tried it out?", "I did but I didn't have enough time to try it in depth, I had some problems with the labels format. However I did not find any documentation or examples in the docs.", "This seems to work but I found a weird problem that might be interesting to solve. \r\n\r\nWhen you binarize the labels, if you are training with pytorch it will throw an error because `BCEWithLogitsLoss` expects floats and not ints. This is counterintuitive for me, I had to cast my labels to floats and then it worked.\r\n\r\nAlso, there is not a lot of information about the anything of this in the documentation and the notebook for multi-label sequence classification still uses the old training loops, instead of the trainer.", "Indeed, it would be nice to put this in the documentation, it's sparse on this subject right now.", "Sorry for the flood but, is there a way to instance a multilabel model on a pipeline? I'm trying it but I don't think it is working, the prediction probabilities always sum up to 1.", "I believe this is being taken care of in https://github.com/huggingface/transformers/pull/8328", "Indeed, it looks really good. Thank you", "Is the multilabel pipeline implemented in the last release `4.9.2` @LysandreJik ? If yes, are there examples on how to use it?", "Ssee the docs for the text classification pipeline [here](https://huggingface.co/transformers/master/main_classes/pipelines.html#transformers.TextClassificationPipeline) (See `function_to_apply` argument)\r\n\r\nIt is currently on the `master` branch and will be in the next release (v4.10.0)", "Great thanks!\r\nHowever, I don't see a documentation which is specific to multilabel classification (I understand that choosing `function_to_apply='sigmoid'` makes it work though). With the results of the `pipeline` be prettified accordingly?" ]
1,613
1,628
null
CONTRIBUTOR
null
# 🚀 Feature request We need to be able to use the trainer for multilabel classification problems. ## Motivation Right now we create our models in the old-fashioned way, with a sigmoid layer at the end so we can do multilabel classification. However, if we could use the trainer directly, we wouldn't need to maintain different training scripts. Are there any plans for adding this to the trainer? ## Your contribution I could try to help, but I don't even know where to start. Thank you very much for reading
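A minimal sketch of the approach discussed in the comments above, assuming transformers v4.6+ (where `problem_type` support is available); the model name, label count, and texts are placeholders:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=5,
    problem_type="multi_label_classification",   # switches the loss to BCEWithLogitsLoss
)

enc = tokenizer("an example sentence", return_tensors="pt")
# multi-hot labels must be floats: BCEWithLogitsLoss rejects integer tensors
labels = torch.tensor([[1.0, 0.0, 1.0, 0.0, 0.0]])
loss = model(**enc, labels=labels).loss
```

The same model and float-valued labels can then be fed to `Trainer` like any single-label classification setup.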
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10232/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10232/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/10231
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10231/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10231/comments
https://api.github.com/repos/huggingface/transformers/issues/10231/events
https://github.com/huggingface/transformers/issues/10231
810,150,822
MDU6SXNzdWU4MTAxNTA4MjI=
10,231
Wav2Vec2 finetune
{ "login": "idanmoradarthas", "id": 14873156, "node_id": "MDQ6VXNlcjE0ODczMTU2", "avatar_url": "https://avatars.githubusercontent.com/u/14873156?v=4", "gravatar_id": "", "url": "https://api.github.com/users/idanmoradarthas", "html_url": "https://github.com/idanmoradarthas", "followers_url": "https://api.github.com/users/idanmoradarthas/followers", "following_url": "https://api.github.com/users/idanmoradarthas/following{/other_user}", "gists_url": "https://api.github.com/users/idanmoradarthas/gists{/gist_id}", "starred_url": "https://api.github.com/users/idanmoradarthas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/idanmoradarthas/subscriptions", "organizations_url": "https://api.github.com/users/idanmoradarthas/orgs", "repos_url": "https://api.github.com/users/idanmoradarthas/repos", "events_url": "https://api.github.com/users/idanmoradarthas/events{/privacy}", "received_events_url": "https://api.github.com/users/idanmoradarthas/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Patrick is working on it, see #10145 " ]
1,613
1,614
1,614
NONE
null
# 🚀 Feature request Can you please share code showing how to fine-tune `transformers.Wav2Vec2ForCTC`, or explain how to pass labels to the model in order to get a loss?
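A rough sketch of how labels can be passed once CTC fine-tuning support lands (see the PR linked in the comment above); this assumes a later transformers version where `Wav2Vec2ForCTC` accepts `labels`, and the audio array and transcription are placeholders:

```python
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

speech = torch.randn(16_000)             # placeholder: 1 second of 16 kHz audio
inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt", padding=True)
# character-level label ids come from the processor's tokenizer
labels = processor.tokenizer("A PLACEHOLDER TRANSCRIPTION", return_tensors="pt").input_ids

loss = model(inputs.input_values, labels=labels).loss
loss.backward()
```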
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10231/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10231/timeline
completed
null
null