url stringlengths 62–66 | repository_url stringclasses 1 value | labels_url stringlengths 76–80 | comments_url stringlengths 71–75 | events_url stringlengths 69–73 | html_url stringlengths 50–56 | id int64 377M–2.15B | node_id stringlengths 18–32 | number int64 1–29.2k | title stringlengths 1–487 | user dict | labels list | state stringclasses 2 values | locked bool 2 classes | assignee dict | assignees list | comments sequence | created_at int64 1.54k–1.71k | updated_at int64 1.54k–1.71k | closed_at int64 1.54k–1.71k ⌀ | author_association stringclasses 4 values | active_lock_reason stringclasses 2 values | body stringlengths 0–234k ⌀ | reactions dict | timeline_url stringlengths 71–75 | state_reason stringclasses 3 values | draft bool 2 classes | pull_request dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/10230 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10230/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10230/comments | https://api.github.com/repos/huggingface/transformers/issues/10230/events | https://github.com/huggingface/transformers/pull/10230 | 810,036,646 | MDExOlB1bGxSZXF1ZXN0NTc0NzkyNjcx | 10,230 | Making TF GPT2 compliant with XLA and AMP | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,613 | 1,613 | 1,613 | CONTRIBUTOR | null | # What does this PR do?
This PR makes the TF GPT2 model compliant with XLA and AMP. All the slow tests are passing as well. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10230/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10230/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10230",
"html_url": "https://github.com/huggingface/transformers/pull/10230",
"diff_url": "https://github.com/huggingface/transformers/pull/10230.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10230.patch",
"merged_at": 1613637362000
} |
https://api.github.com/repos/huggingface/transformers/issues/10229 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10229/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10229/comments | https://api.github.com/repos/huggingface/transformers/issues/10229/events | https://github.com/huggingface/transformers/pull/10229 | 810,020,839 | MDExOlB1bGxSZXF1ZXN0NTc0Nzc5NTUy | 10,229 | Introduce warmup_ratio training argument | {
"login": "tanmay17061",
"id": 32801726,
"node_id": "MDQ6VXNlcjMyODAxNzI2",
"avatar_url": "https://avatars.githubusercontent.com/u/32801726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tanmay17061",
"html_url": "https://github.com/tanmay17061",
"followers_url": "https://api.github.com/users/tanmay17061/followers",
"following_url": "https://api.github.com/users/tanmay17061/following{/other_user}",
"gists_url": "https://api.github.com/users/tanmay17061/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tanmay17061/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanmay17061/subscriptions",
"organizations_url": "https://api.github.com/users/tanmay17061/orgs",
"repos_url": "https://api.github.com/users/tanmay17061/repos",
"events_url": "https://api.github.com/users/tanmay17061/events{/privacy}",
"received_events_url": "https://api.github.com/users/tanmay17061/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"As per current implementation, any non-zero value given for `warmup_steps` will override any effects of `warmup_ratio`. It made sense for me to give higher precedence to `warmup_steps` as it seems to be the more inconvenient argument of the 2 to provide from user perspective. Please let me know if this default behaviour is to be changed.\r\n\r\nPS: this is my first PR, so feel free to correct me. I will be happy to accommodate 😄 ",
"Thanks for the review! I've incorporated the comments. \r\nDo let me know if anything else needs to be addressed.",
"Thanks for your comments! \r\nAgreed, the code looks much more readable now. \r\nDo let me know if there can be any more improvement. \r\n\r\nThanks.",
"It would indeed be better in `TrainingArguments.__post_init__`: the rational for that is that when instantiating an object with wrong values, we want the error to be raised as soon as possible and as close as possible to the source for easy debugging.\r\n\r\nIn this case, the problem should appear at the line that parses the `TrainingArguments` or when they are created.",
"Thanks! Taken care of it.",
"Thanks for adding this functionality!"
] | 1,613 | 1,613 | 1,613 | CONTRIBUTOR | null | Introduce warmup_ratio training argument in both
TrainingArguments and TFTrainingArguments classes (#6673)
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR will add a new argument `warmup_ratio` to both `TrainingArguments` and `TFTrainingArguments` classes. This can be used to specify the ratio of total training steps for which linear warmup will happen.
This is especially convenient when the user wants to play around with the `num_train_epochs` or `max_steps` arguments while keeping the ratio of warmup steps a constant.
Fixes #6673
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. [Link to the issue raised](https://github.com/huggingface/transformers/issues/6673).
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
Since modifications in trainer: @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10229/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10229/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10229",
"html_url": "https://github.com/huggingface/transformers/pull/10229",
"diff_url": "https://github.com/huggingface/transformers/pull/10229.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10229.patch",
"merged_at": 1613669013000
} |
https://api.github.com/repos/huggingface/transformers/issues/10228 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10228/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10228/comments | https://api.github.com/repos/huggingface/transformers/issues/10228/events | https://github.com/huggingface/transformers/issues/10228 | 810,013,350 | MDU6SXNzdWU4MTAwMTMzNTA= | 10,228 | Converting original T5 to be used in Transformers | {
"login": "marton-avrios",
"id": 59836119,
"node_id": "MDQ6VXNlcjU5ODM2MTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/59836119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marton-avrios",
"html_url": "https://github.com/marton-avrios",
"followers_url": "https://api.github.com/users/marton-avrios/followers",
"following_url": "https://api.github.com/users/marton-avrios/following{/other_user}",
"gists_url": "https://api.github.com/users/marton-avrios/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marton-avrios/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marton-avrios/subscriptions",
"organizations_url": "https://api.github.com/users/marton-avrios/orgs",
"repos_url": "https://api.github.com/users/marton-avrios/repos",
"events_url": "https://api.github.com/users/marton-avrios/events{/privacy}",
"received_events_url": "https://api.github.com/users/marton-avrios/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\nthis file exists, it can be found here: https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/convert_t5_original_tf_checkpoint_to_pytorch.py\r\n\r\n",
"Thank you! I tried the script and it misses a `config.json` file. Where can I find this?",
"The config.json should be part of the original T5 files, which can be found [here](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints).\r\n\r\nHowever, I wonder why you want to convert the original checkpoints yourself, because this has already been done by the authors of HuggingFace. You can find all T5 checkpoints on the [hub](https://huggingface.co/models?search=google/t5). ",
"Because I finetuned them on TPU which is not possible in Transformers yet (at least not in TF) and I want to use Transformers for prediction.",
"...I think you linked this issue as location for original T5 files",
"Apologies, updated the URL. The `config.json` file should look something like [this](https://huggingface.co/google/t5-large-ssm-nq/blob/main/config.json), containg all the hyperparameter values. When you fine-tuned T5 on TPUs, do you have a configuration available?",
"Thanks! (you are a lifesaver by the way with these response times :)). I finetuned using the original repo which uses Mesh Tensorflow and it exports checkpoints in the same format as the original published checkpoints. And there is no `config.json` file, not even in the original published checkpoints you linked. For future reference: you can look at the files by going to this url: https://console.cloud.google.com/storage/browser/t5-data/pretrained_models/small if you have a google cloud account.",
"I see that they store configurations in .gin files, like this one: https://console.cloud.google.com/storage/browser/_details/t5-data/pretrained_models/small/operative_config.gin\r\n\r\nWhen opening this on my laptop in Notepad, this looks like this:\r\n\r\n```\r\nimport t5.models.mesh_transformer\r\nimport t5.data.sentencepiece_vocabulary\r\nimport mesh_tensorflow.optimize\r\nimport mesh_tensorflow.transformer.dataset\r\nimport mesh_tensorflow.transformer.learning_rate_schedules\r\nimport mesh_tensorflow.transformer.t2t_vocabulary\r\nimport mesh_tensorflow.transformer.transformer_layers\r\nimport mesh_tensorflow.transformer.utils\r\n\r\n# Macros:\r\n# ==============================================================================\r\nd_ff = 2048\r\nd_kv = 64\r\nd_model = 512\r\ndropout_rate = 0.1\r\ninputs_length = 512\r\nmean_noise_span_length = 3.0\r\nMIXTURE_NAME = 'all_mix'\r\nnoise_density = 0.15\r\nnum_heads = 8\r\nnum_layers = 6\r\ntargets_length = 512\r\ninit_checkpoint = \"gs://t5-data/pretrained_models/small/model.ckpt-1000000\"\r\ntokens_per_batch = 1048576\r\n\r\n# Parameters for AdafactorOptimizer:\r\n# ==============================================================================\r\nAdafactorOptimizer.beta1 = 0.0\r\nAdafactorOptimizer.clipping_threshold = 1.0\r\nAdafactorOptimizer.decay_rate = None\r\nAdafactorOptimizer.epsilon1 = 1e-30\r\nAdafactorOptimizer.epsilon2 = 0.001\r\nAdafactorOptimizer.factored = True\r\nAdafactorOptimizer.min_dim_size_to_factor = 128\r\nAdafactorOptimizer.multiply_by_parameter_scale = True\r\n\r\n# Parameters for Bitransformer:\r\n# ==============================================================================\r\nBitransformer.shared_embedding = True\r\n\r\n# Parameters for denoise:\r\n# ==============================================================================\r\ndenoise.inputs_fn = @preprocessors.noise_span_to_unique_sentinel\r\ndenoise.noise_density = %noise_density\r\ndenoise.noise_mask_fn = @preprocessors.random_spans_noise_mask\r\ndenoise.targets_fn = @preprocessors.nonnoise_span_to_unique_sentinel\r\n\r\n# Parameters for decoder/DenseReluDense:\r\n# ==============================================================================\r\ndecoder/DenseReluDense.dropout_rate = %dropout_rate\r\ndecoder/DenseReluDense.hidden_size = %d_ff\r\n\r\n# Parameters for encoder/DenseReluDense:\r\n# ==============================================================================\r\nencoder/DenseReluDense.dropout_rate = %dropout_rate\r\nencoder/DenseReluDense.hidden_size = %d_ff\r\n\r\n# Parameters for decoder/EncDecAttention:\r\n# ==============================================================================\r\n# None.\r\n\r\n# Parameters for get_sentencepiece_model_path:\r\n# ==============================================================================\r\nget_sentencepiece_model_path.mixture_or_task_name = %MIXTURE_NAME\r\n\r\n# Parameters for get_variable_dtype:\r\n# ==============================================================================\r\nget_variable_dtype.activation_dtype = 'bfloat16'\r\n\r\n# Parameters for decoder/LayerStack:\r\n# ==============================================================================\r\ndecoder/LayerStack.dropout_rate = %dropout_rate\r\ndecoder/LayerStack.norm_epsilon = 1e-06\r\n\r\n# Parameters for encoder/LayerStack:\r\n# ==============================================================================\r\nencoder/LayerStack.dropout_rate = %dropout_rate\r\nencoder/LayerStack.norm_epsilon = 1e-06\r\n\r\n# Parameters for 
learning_rate_schedule_noam:\r\n# ==============================================================================\r\nlearning_rate_schedule_noam.linear_decay_fraction = 0.1\r\nlearning_rate_schedule_noam.multiplier = 1.0\r\nlearning_rate_schedule_noam.offset = 0\r\nlearning_rate_schedule_noam.warmup_steps = 10000\r\n\r\n# Parameters for make_bitransformer:\r\n# ==============================================================================\r\nmake_bitransformer.decoder_name = 'decoder'\r\nmake_bitransformer.encoder_name = 'encoder'\r\n\r\n# Parameters for decoder/make_layer_stack:\r\n# ==============================================================================\r\ndecoder/make_layer_stack.block_scope = True\r\ndecoder/make_layer_stack.layers = \\\r\n [@mesh_tensorflow.transformer.transformer_layers.SelfAttention,\r\n @mesh_tensorflow.transformer.transformer_layers.EncDecAttention,\r\n @mesh_tensorflow.transformer.transformer_layers.DenseReluDense]\r\ndecoder/make_layer_stack.num_layers = %num_layers\r\n\r\n# Parameters for encoder/make_layer_stack:\r\n# ==============================================================================\r\nencoder/make_layer_stack.block_scope = True\r\nencoder/make_layer_stack.layers = \\\r\n [@mesh_tensorflow.transformer.transformer_layers.SelfAttention,\r\n @mesh_tensorflow.transformer.transformer_layers.DenseReluDense]\r\nencoder/make_layer_stack.num_layers = %num_layers\r\n\r\n# Parameters for mesh_train_dataset_fn:\r\n# ==============================================================================\r\nmesh_train_dataset_fn.mixture_or_task_name = %MIXTURE_NAME\r\n\r\n\r\n# Parameters for noise_span_to_unique_sentinel:\r\n# ==============================================================================\r\n# None.\r\n\r\n# Parameters for nonnoise_span_to_unique_sentinel:\r\n# ==============================================================================\r\n# None.\r\n\r\n# Parameters for pack_dataset:\r\n# ==============================================================================\r\n\r\n\r\n# Parameters for pack_or_pad:\r\n# ==============================================================================\r\n# None.\r\n\r\n# Parameters for random_spans_helper:\r\n# ==============================================================================\r\nrandom_spans_helper.extra_tokens_per_span_inputs = 1\r\nrandom_spans_helper.extra_tokens_per_span_targets = 1\r\nrandom_spans_helper.inputs_length = %inputs_length\r\nrandom_spans_helper.mean_noise_span_length = %mean_noise_span_length\r\nrandom_spans_helper.noise_density = %noise_density\r\n\r\n# Parameters for targets_length/random_spans_helper:\r\n# ==============================================================================\r\ntargets_length/random_spans_helper.extra_tokens_per_span_inputs = 1\r\ntargets_length/random_spans_helper.extra_tokens_per_span_targets = 1\r\ntargets_length/random_spans_helper.inputs_length = %inputs_length\r\ntargets_length/random_spans_helper.mean_noise_span_length = %mean_noise_span_length\r\ntargets_length/random_spans_helper.noise_density = %noise_density\r\n\r\n# Parameters for random_spans_noise_mask:\r\n# ==============================================================================\r\nrandom_spans_noise_mask.mean_noise_span_length = %mean_noise_span_length\r\n\r\n# Parameters for targets_length/random_spans_targets_length:\r\n# ==============================================================================\r\n# None.\r\n\r\n# Parameters for random_spans_tokens_length:\r\n# 
==============================================================================\r\n# None.\r\n\r\n# Parameters for rate_num_examples:\r\n# ==============================================================================\r\nrate_num_examples.maximum = 1000000.0\r\nrate_num_examples.scale = 1.0\r\nrate_num_examples.temperature = 1.0\r\n\r\n# Parameters for rate_unsupervised:\r\n# ==============================================================================\r\nrate_unsupervised.value = 710000.0\r\n\r\n# Parameters for reduce_concat_tokens:\r\n# ==============================================================================\r\nreduce_concat_tokens.batch_size = 128\r\nreduce_concat_tokens.feature_key = 'targets'\r\n\r\n# Parameters for run:\r\n# ==============================================================================\r\nrun.autostack = True\r\nrun.batch_size = ('tokens_per_batch', %tokens_per_batch)\r\nrun.dataset_split = 'train'\r\nrun.ensemble_inputs = None\r\nrun.eval_checkpoint_step = None\r\nrun.eval_dataset_fn = None\r\nrun.eval_summary_dir = None\r\nrun.export_path = ''\r\nrun.iterations_per_loop = 100\r\nrun.keep_checkpoint_max = None\r\nrun.layout_rules = \\\r\n 'ensemble:ensemble,batch:batch,d_ff:model,heads:model,vocab:model,experts:batch'\r\nrun.learning_rate_schedule = @learning_rate_schedules.learning_rate_schedule_noam\r\nrun.mesh_shape = @mesh_tensorflow.transformer.utils.tpu_mesh_shape()\r\nrun.mode = 'train' \r\nrun.init_checkpoint = %init_checkpoint\r\nrun.model_type = 'bitransformer'\r\nrun.optimizer = @optimize.AdafactorOptimizer\r\nrun.perplexity_eval_steps = 10\r\nrun.predict_fn = None\r\nrun.save_checkpoints_steps = 2400\r\nrun.sequence_length = {'inputs': %inputs_length, 'targets': %targets_length}\r\nrun.train_dataset_fn = \\\r\n @t5.models.mesh_transformer.mesh_train_dataset_fn\r\nrun.train_steps = 1000000000\r\nrun.variable_filter = None\r\nrun.vocabulary = \\\r\n @t5.data.sentencepiece_vocabulary.SentencePieceVocabulary()\r\n\r\n# Parameters for select_random_chunk:\r\n# ==============================================================================\r\nselect_random_chunk.feature_key = 'targets'\r\nselect_random_chunk.max_length = 65536\r\n\r\n# Parameters for decoder/SelfAttention:\r\n# ==============================================================================\r\ndecoder/SelfAttention.attention_kwargs = None\r\ndecoder/SelfAttention.dropout_rate = %dropout_rate\r\ndecoder/SelfAttention.key_value_size = %d_kv\r\ndecoder/SelfAttention.num_heads = %num_heads\r\ndecoder/SelfAttention.num_memory_heads = 0\r\ndecoder/SelfAttention.relative_attention_num_buckets = 32\r\ndecoder/SelfAttention.relative_attention_type = 'bias_shared'\r\ndecoder/SelfAttention.shared_kv = False\r\n\r\n# Parameters for encoder/SelfAttention:\r\n# ==============================================================================\r\nencoder/SelfAttention.attention_kwargs = None\r\nencoder/SelfAttention.dropout_rate = %dropout_rate\r\nencoder/SelfAttention.key_value_size = %d_kv\r\nencoder/SelfAttention.num_heads = %num_heads\r\nencoder/SelfAttention.num_memory_heads = 0\r\nencoder/SelfAttention.relative_attention_num_buckets = 32\r\nencoder/SelfAttention.relative_attention_type = 'bias_shared'\r\nencoder/SelfAttention.shared_kv = False\r\n\r\n# Parameters for SentencePieceVocabulary:\r\n# ==============================================================================\r\nSentencePieceVocabulary.extra_ids = 100\r\nSentencePieceVocabulary.sentencepiece_model_file = \\\r\n 
@t5.models.mesh_transformer.get_sentencepiece_model_path()\r\n\r\n# Parameters for serialize_num_microbatches:\r\n# ==============================================================================\r\nserialize_num_microbatches.tokens_per_microbatch_per_replica = 8192\r\n\r\n# Parameters for split_tokens:\r\n# ==============================================================================\r\nsplit_tokens.feature_key = 'targets'\r\nsplit_tokens.max_tokens_per_segment = @preprocessors.random_spans_tokens_length()\r\nsplit_tokens.min_tokens_per_segment = None\r\n\r\n# Parameters for tpu_estimator_model_fn:\r\n# ==============================================================================\r\ntpu_estimator_model_fn.init_checkpoint = %init_checkpoint\r\ntpu_estimator_model_fn.outer_batch_size = 1\r\ntpu_estimator_model_fn.tpu_summaries = False\r\n\r\n# Parameters for tpu_mesh_shape:\r\n# ==============================================================================\r\ntpu_mesh_shape.ensemble_parallelism = None\r\ntpu_mesh_shape.model_parallelism = 1\r\ntpu_mesh_shape.tpu_topology = '8x8'\r\n\r\n# Parameters for decoder/Unitransformer:\r\n# ==============================================================================\r\ndecoder/Unitransformer.d_model = %d_model\r\ndecoder/Unitransformer.ensemble = None\r\ndecoder/Unitransformer.input_full_attention = False\r\ndecoder/Unitransformer.label_smoothing = 0.0\r\ndecoder/Unitransformer.loss_denominator = None\r\ndecoder/Unitransformer.loss_fn = None\r\ndecoder/Unitransformer.loss_on_targets_only = False\r\ndecoder/Unitransformer.max_length = 512\r\ndecoder/Unitransformer.positional_embedding = False\r\ndecoder/Unitransformer.shared_embedding_and_softmax_weights = True\r\ndecoder/Unitransformer.vocab_divisor = 128\r\ndecoder/Unitransformer.z_loss = 0.0001\r\ndecoder/Unitransformer.loss_denominator = 233472\r\n\r\n# Parameters for encoder/Unitransformer:\r\n# ==============================================================================\r\nencoder/Unitransformer.d_model = %d_model\r\nencoder/Unitransformer.ensemble = None\r\nencoder/Unitransformer.input_full_attention = False\r\nencoder/Unitransformer.label_smoothing = 0.0\r\nencoder/Unitransformer.loss_denominator = None\r\nencoder/Unitransformer.loss_fn = None\r\nencoder/Unitransformer.loss_on_targets_only = False\r\nencoder/Unitransformer.max_length = 512\r\nencoder/Unitransformer.positional_embedding = False\r\nencoder/Unitransformer.shared_embedding_and_softmax_weights = True\r\nencoder/Unitransformer.vocab_divisor = 128\r\nencoder/Unitransformer.z_loss = 0.0001\r\n\r\n# Parameters for unsupervised:\r\n# ==============================================================================\r\nunsupervised.preprocessors = \\\r\n [@preprocessors.select_random_chunk,\r\n @preprocessors.reduce_concat_tokens,\r\n @preprocessors.split_tokens,\r\n @preprocessors.denoise]\r\n```\r\n\r\n=> the relevant part here seems to be only the model hyperparameters:\r\n```\r\nd_ff = 2048\r\nd_kv = 64\r\nd_model = 512\r\ndropout_rate = 0.1\r\ninputs_length = 512\r\nmean_noise_span_length = 3.0\r\nMIXTURE_NAME = 'all_mix'\r\nnoise_density = 0.15\r\nnum_heads = 8\r\nnum_layers = 6\r\ntargets_length = 512\r\ninit_checkpoint = \"gs://t5-data/pretrained_models/small/model.ckpt-1000000\"\r\ntokens_per_batch = 1048576\r\n```\r\nSo maybe you can create a config.json based on those?\r\n\r\n\r\n\r\n> Thanks! (you are a lifesaver by the way with these response times :)).\r\n\r\nAnd happy to hear this :) you're welcome",
"...actually the link you sent for the example config file proved to be extremely useful! Starting from there I've found all related files. Here is everything (including the config file) for T5 Small: https://huggingface.co/t5-small. Also an example workflow for future reference:\r\n```\r\nmkdir t5\r\ngsutil -m cp -r gs://t5-data/pretrained_models/small t5\r\npython ~/transformers/src/transformers/models/t5/convert_t5_original_tf_checkpoint_to_pytorch.py --tf_checkpoint_path t5/small/model.ckpt-1000000 --pytorch_dump_path t5-small-pt --config_file t5/small_config.json\r\n```"
] | 1,613 | 1,613 | 1,613 | NONE | null | I want to use an original T5 checkpoint in the Transformers library. I found multiple answers referring to `convert_t5_original_tf_checkpoint_to_pytorch.py`, which does not seem to exist. Any other way? Or where can I find a (currently working) version of that file? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10228/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10228/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10227 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10227/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10227/comments | https://api.github.com/repos/huggingface/transformers/issues/10227/events | https://github.com/huggingface/transformers/issues/10227 | 809,961,223 | MDU6SXNzdWU4MDk5NjEyMjM= | 10,227 | Showing individual token and corresponding score during beam search | {
"login": "monmanuela",
"id": 35655790,
"node_id": "MDQ6VXNlcjM1NjU1Nzkw",
"avatar_url": "https://avatars.githubusercontent.com/u/35655790?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monmanuela",
"html_url": "https://github.com/monmanuela",
"followers_url": "https://api.github.com/users/monmanuela/followers",
"following_url": "https://api.github.com/users/monmanuela/following{/other_user}",
"gists_url": "https://api.github.com/users/monmanuela/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monmanuela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monmanuela/subscriptions",
"organizations_url": "https://api.github.com/users/monmanuela/orgs",
"repos_url": "https://api.github.com/users/monmanuela/repos",
"events_url": "https://api.github.com/users/monmanuela/events{/privacy}",
"received_events_url": "https://api.github.com/users/monmanuela/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @monmanuela,\r\n\r\nThanks for checking out the post! We try to keep the repository for github issues and kindly ask you to post these kinds of questions on the [forum](https://discuss.huggingface.co/). Feel free to tag me there (@patrickvonplaten) :-)",
"@patrickvonplaten thanks for your quick reply! Posted a topic on the forum, closing this issue."
] | 1,613 | 1,613 | 1,613 | NONE | null | ## Who can help
@patrickvonplaten
## Information
Hello,
I am using beam search with a pre-trained T5 model for summarization. I would like to visualize the beam search process by showing the tokens with the highest scores, and eventually the chosen beam like this diagram:

(Taken from https://huggingface.co/blog/how-to-generate)
**I am unsure how I can show the tokens and their corresponding scores.**
I followed the discussion https://discuss.huggingface.co/t/announcement-generationoutputs-scores-attentions-and-hidden-states-now-available-as-outputs-to-generate/3094 and https://github.com/huggingface/transformers/pull/9150.
Following the docs, upon calling `generate`, I have set `return_dict_in_generate=True`, `output_scores=True`
```
generated_outputs = model_t5summary.generate(
input_ids=input_ids.to(device),
attention_mask=features['attention_mask'].to(device),
max_length=input_ids.shape[-1] + 2,
return_dict_in_generate=True,
output_scores=True,
output_hidden_states=True,
output_attentions=True,
no_repeat_ngram_size=2,
early_stopping=True,
num_return_sequences=3,
num_beams=5,
)
```
Now I have an instance of `BeamSearchEncoderDecoderOutput`.
If I understand the docs (https://huggingface.co/transformers/master/internal/generation_utils.html#generate-outputs) correctly, `scores` will provide me with what I want, but I am unsure how to use the `scores`.
Any help/pointers from the community would be greatly appreciated, thank you 🙏 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10227/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10227/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10226 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10226/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10226/comments | https://api.github.com/repos/huggingface/transformers/issues/10226/events | https://github.com/huggingface/transformers/issues/10226 | 809,955,336 | MDU6SXNzdWU4MDk5NTUzMzY= | 10,226 | Trainer.train() is stuck | {
"login": "saichandrapandraju",
"id": 41769919,
"node_id": "MDQ6VXNlcjQxNzY5OTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/41769919?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saichandrapandraju",
"html_url": "https://github.com/saichandrapandraju",
"followers_url": "https://api.github.com/users/saichandrapandraju/followers",
"following_url": "https://api.github.com/users/saichandrapandraju/following{/other_user}",
"gists_url": "https://api.github.com/users/saichandrapandraju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saichandrapandraju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saichandrapandraju/subscriptions",
"organizations_url": "https://api.github.com/users/saichandrapandraju/orgs",
"repos_url": "https://api.github.com/users/saichandrapandraju/repos",
"events_url": "https://api.github.com/users/saichandrapandraju/events{/privacy}",
"received_events_url": "https://api.github.com/users/saichandrapandraju/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\nFor training-related issues, it might be better to ask your question on the [forum](https://discuss.huggingface.co/).\r\n\r\nThe authors of HuggingFace (and community members) are happy to help you there!\r\n"
] | 1,613 | 1,613 | 1,613 | NONE | null | Hi,
I'm training roberta-base using the HF Trainer, but it's stuck right at the start. Here's my code:
```
train_dataset[0]
{'input_ids': tensor([ 0, 100, 657, ..., 1, 1, 1]),
'attention_mask': tensor([1, 1, 1, ..., 0, 0, 0]),
'labels': tensor(0)}
val_dataset[0]
{'input_ids': tensor([ 0, 11094, 14, ..., 1, 1, 1]),
'attention_mask': tensor([1, 1, 1, ..., 0, 0, 0]),
'labels': tensor(0)}
## simple test
model(train_dataset[:2]['input_ids'], attention_mask = train_dataset[:2]['attention_mask'], labels=train_dataset[:2]['labels'])
SequenceClassifierOutput(loss=tensor(0.6995, grad_fn=<NllLossBackward>), logits=tensor([[ 0.0438, -0.1893],
[ 0.0530, -0.1786]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
train_args = transformers.TrainingArguments(
output_dir='test_1',
overwrite_output_dir=True,
evaluation_strategy="epoch",
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=3e-5,
weight_decay=0.01,
num_train_epochs=2,
load_best_model_at_end=True,
)
trainer = transformers.Trainer(
model=model,
args=train_args,
train_dataset=train_dataset,
eval_dataset=val_dataset,
tokenizer=tok,
)
trainer.train()
```
I checked the memory consumption and it is stuck at:
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... On | 00000000:62:00.0 Off | 0 |
| N/A 49C P0 60W / 300W | 1756MiB / 32510MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 Tesla V100-SXM2... On | 00000000:8A:00.0 Off | 0 |
| N/A 50C P0 61W / 300W | 1376MiB / 32510MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
```
Please let me know how to proceed further. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10226/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10226/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10225 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10225/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10225/comments | https://api.github.com/repos/huggingface/transformers/issues/10225/events | https://github.com/huggingface/transformers/pull/10225 | 809,777,966 | MDExOlB1bGxSZXF1ZXN0NTc0NTgyMTE0 | 10,225 | [Trainer] memory tracker metrics | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Thanks for adding this functionality! One general comment I have is on the type of the `stage` argument. Since it has only four possible values from what I can see, it would be better to create an enum for those (to avoid typos and have auto-complete in an IDE).\r\n\r\nOh, let me make it absolutely automatic with `inspect` so it won't need a `stage` argument at all. \r\n\r\nAnd I will collapse the two calls into one in all but `__init__`, so it'll be less noisy.\r\n\r\n",
"So, the API has been simplified to remove the need for naming the stages in the caller, tests added. \r\n\r\nI'm sure we will think of further improvements down the road, please let me know if this is good for the first iteration.\r\n\r\nI'm not sure if anybody else wants to review before we merge this. "
] | 1,613 | 1,613 | 1,613 | CONTRIBUTOR | null | This PR introduced memory usage metrics in Trainer:
* [x] adds `TrainerMemoryTracker` (pytorch only, no-op for tf), which records deltas of the first gpu and cpu of the main process - and records them for `init|train|eval|test` stages - if there is no gpu it reports cpu only.
* [x] adds `--skip_memory_metrics` to disable this new behavior - i.e. by default it'll print the memory metrics
* [x] adds `trainer.metrics_format` which will intelligently reformat the metrics to do the right thing - this is only for logger - moves manual rounding from the scripts into that helper method.
* [x] formats GFlops as GF number, so ` 2285698228224.0`, which is very unreadable and now it will be a nice `2128GF` (similar to `100MB`)
* [x] as a sample changes `run_seq2seq.py` to use `trainer.metrics_format` - can replicate to other scripts in another PR.
* [x] changes the metrics logger in `run_seq2seq.py` to align data, so that it's easy to read the relative numbers e.g. allocated plus peak memory should be in the same column to make a quick read of the situation.
* [x] adds a new file_utils helper function `is_torch_cuda_available` to detect no gpu setups in one call.
* [x] adds a test
* [x] consistently use the strange `train/eval/test` trio - it's very confusing - but at least it's consistent - I proposed to fix this `examples`-wide in https://github.com/huggingface/transformers/issues/10165
Request: I beg you to allow me to restore the original refactored metrics dump logic in `run_seq2seq.py` - the current repetition doesn't help readability and it's just dumping a dict - nothing ML/NLP specific here, there is nothing to understand there IMHO, and then it'd be easy to replicate this to other examples. Thanks. This is the original (and will need to add to it a few formatting entries I added in this PR):
https://github.com/huggingface/transformers/blob/e94d63f6cbf5efe288e41d9840a96a5857090617/examples/legacy/seq2seq/finetune_trainer.py#L132-L145
A picture is worth a thousand words:
```
export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python examples/seq2seq/run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --do_predict --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 --max_train_samples 100 --max_val_samples 100 --max_test_samples 100 --dataset_name wmt16-en-ro-pre-processed --source_prefix "translate English to Romanian: "
```
gives:
```
02/16/2021 17:06:39 - INFO - __main__ - ***** train metrics *****
02/16/2021 17:06:39 - INFO - __main__ - epoch = 1.0
02/16/2021 17:06:39 - INFO - __main__ - init_mem_cpu_alloc_delta = 2MB
02/16/2021 17:06:39 - INFO - __main__ - init_mem_cpu_peaked_delta = 0MB
02/16/2021 17:06:39 - INFO - __main__ - init_mem_gpu_alloc_delta = 230MB
02/16/2021 17:06:39 - INFO - __main__ - init_mem_gpu_peaked_delta = 0MB
02/16/2021 17:06:39 - INFO - __main__ - total_flos = 2128GF
02/16/2021 17:06:39 - INFO - __main__ - train_mem_cpu_alloc_delta = 55MB
02/16/2021 17:06:39 - INFO - __main__ - train_mem_cpu_peaked_delta = 0MB
02/16/2021 17:06:39 - INFO - __main__ - train_mem_gpu_alloc_delta = 692MB
02/16/2021 17:06:39 - INFO - __main__ - train_mem_gpu_peaked_delta = 661MB
02/16/2021 17:06:39 - INFO - __main__ - train_runtime = 2.3114
02/16/2021 17:06:39 - INFO - __main__ - train_samples = 100
02/16/2021 17:06:39 - INFO - __main__ - train_samples_per_second = 3.028
02/16/2021 17:06:43 - INFO - __main__ - ***** val metrics *****
02/16/2021 17:13:05 - INFO - __main__ - epoch = 1.0
02/16/2021 17:13:05 - INFO - __main__ - eval_bleu = 24.6502
02/16/2021 17:13:05 - INFO - __main__ - eval_gen_len = 32.9
02/16/2021 17:13:05 - INFO - __main__ - eval_loss = 3.7533
02/16/2021 17:13:05 - INFO - __main__ - eval_mem_cpu_alloc_delta = 0MB
02/16/2021 17:13:05 - INFO - __main__ - eval_mem_cpu_peaked_delta = 0MB
02/16/2021 17:13:05 - INFO - __main__ - eval_mem_gpu_alloc_delta = 0MB
02/16/2021 17:13:05 - INFO - __main__ - eval_mem_gpu_peaked_delta = 510MB
02/16/2021 17:13:05 - INFO - __main__ - eval_runtime = 3.9266
02/16/2021 17:13:05 - INFO - __main__ - eval_samples = 100
02/16/2021 17:13:05 - INFO - __main__ - eval_samples_per_second = 25.467
02/16/2021 17:06:48 - INFO - __main__ - ***** test metrics *****
02/16/2021 17:06:48 - INFO - __main__ - test_bleu = 27.146
02/16/2021 17:06:48 - INFO - __main__ - test_gen_len = 41.37
02/16/2021 17:06:48 - INFO - __main__ - test_loss = 3.6682
02/16/2021 17:06:48 - INFO - __main__ - test_mem_cpu_alloc_delta = 0MB
02/16/2021 17:06:48 - INFO - __main__ - test_mem_cpu_peaked_delta = 0MB
02/16/2021 17:06:48 - INFO - __main__ - test_mem_gpu_alloc_delta = 0MB
02/16/2021 17:06:48 - INFO - __main__ - test_mem_gpu_peaked_delta = 645MB
02/16/2021 17:06:48 - INFO - __main__ - test_runtime = 5.1136
02/16/2021 17:06:48 - INFO - __main__ - test_samples = 100
02/16/2021 17:06:48 - INFO - __main__ - test_samples_per_second = 19.556
```
To understand the memory reports:
- `alloc_delta` - is the difference in the used/allocated memory counter between the end and the start of the stage - it can be negative if a function released more memory than it allocated
- `peaked_delta` - is any extra memory that was consumed and then freed - relative to the current allocated memory counter - it is never negative - this is the mysterious cause of OOM, since normally it doesn't register when everything fits into the memory.
- so when you look at the metrics of any stage you add up `alloc_delta` + `peaked_delta` and you know how much memory was needed to complete that stage. But the two numbers need to be separate.
We can change the names if you'd like, but if we do, let's make sure that allocated/used shows up before peaked when alphabetically sorted - as they should be read in that order.
Also it would be useful to have them of the same length so it's less noisy vertically. I was thinking perhaps to add `m` to `alloc`? Then it becomes perfect:
```
test_mem_cpu_malloc_delta = 0MB
test_mem_cpu_peaked_delta = 0MB
```
Logic behind `init`:
- since Trainer's `__init__` can consume a lot of memory, it's important that we trace it too, but since any of the stages can be skipped, I basically push it into the metrics of whichever stage gets to update metrics first, so it gets tacked on to that group of metrics. In the above example it happens to be `train`.
Logic behind nested calls:
- since eval calls may be intermixed with train calls, we can't handle nested invocations because `torch.cuda.max_memory_allocated` is a single counter, so if it gets reset by a nested eval call, train will report incorrect info. One day pytorch will fix this issue: https://github.com/pytorch/pytorch/issues/16266 and then it will be possible to be re-entrant, for now we will only track the outer level `train` / `evaluation` / `predict` functions.
After this addition we can already profile/detect regressions for specific training stages. But this doesn't give us the full picture as there are other allocations outside of the trainer - i.e. in the user's code. It's a start.
Down the road I may code a different version, based on pynvml, which gives somewhat different numbers, and has its own complications. But it gives you the exact gpu memory usage, so you know exactly how much memory is used or left. PyTorch only reports its internal allocations on the other hand.
@patrickvonplaten, this feature should give us already a partial way to track memory regression. So this could be the low hanging fruit you and I were discussing.
It also should be possible to extend the tracker to use TF, but I don't know anything about TF.
@sgugger, @patil-suraj, @LysandreJik, @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10225/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10225/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10225",
"html_url": "https://github.com/huggingface/transformers/pull/10225",
"diff_url": "https://github.com/huggingface/transformers/pull/10225.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10225.patch",
"merged_at": 1613669252000
} |
https://api.github.com/repos/huggingface/transformers/issues/10224 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10224/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10224/comments | https://api.github.com/repos/huggingface/transformers/issues/10224/events | https://github.com/huggingface/transformers/issues/10224 | 809,726,134 | MDU6SXNzdWU4MDk3MjYxMzQ= | 10,224 | No module named 'tasks' | {
"login": "daniel-z-kaplan",
"id": 48258016,
"node_id": "MDQ6VXNlcjQ4MjU4MDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/48258016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daniel-z-kaplan",
"html_url": "https://github.com/daniel-z-kaplan",
"followers_url": "https://api.github.com/users/daniel-z-kaplan/followers",
"following_url": "https://api.github.com/users/daniel-z-kaplan/following{/other_user}",
"gists_url": "https://api.github.com/users/daniel-z-kaplan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daniel-z-kaplan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daniel-z-kaplan/subscriptions",
"organizations_url": "https://api.github.com/users/daniel-z-kaplan/orgs",
"repos_url": "https://api.github.com/users/daniel-z-kaplan/repos",
"events_url": "https://api.github.com/users/daniel-z-kaplan/events{/privacy}",
"received_events_url": "https://api.github.com/users/daniel-z-kaplan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think you did not clone the repository properly or are not running the command from the folder `examples/legacy/token-classification`, since that folder does have a task.py file.",
"Ah, I was searching for \"tasks.py\" instead. Just user error, thanks for the fast reply."
] | 1,613 | 1,613 | 1,613 | NONE | null | ## Environment info
- `transformers` version: 4.3.2
- Platform: 5.10.8-200.fc33.x86_64
- Python version: 3.8.3
- PyTorch version (GPU?): 1.6.0 (GPU)
- Tensorflow version (GPU?): 2.4.1 (GPU)
- Using GPU in script?: No.
- Using distributed or parallel set-up in script?: No.
@sgugger, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): allenai/scibert_scivocab_uncased
The problem arises when using:
* [X ] the official example scripts: (give details below)
I'm using the old NER script, since the model I'm using doesn't support Fast Tokenizers.
https://github.com/huggingface/transformers/blob/master/examples/legacy/token-classification/run_ner.py
However, I get an error when trying to import "tasks.py". I did find a previous GitHub issue for this, which recommended downloading said file... but it no longer exists.
The tasks I am working on is:
* [X ] my own task or dataset: (give details below)
I'm using the bc2gm-corpus dataset.
## To reproduce
Steps to reproduce the behaviour:
1. Download the script from the provided github link.
2. Run it with any real model name, a real directory for data (doesn't need to include data), and an output directory.
I used the following command: python3 run_ner.py --model_name_or_path allenai/scibert_scivocab_uncased --data_dir bc2gm-corpus/conll --output_dir ./output
```
 File "run_ner.py", line 323, in <module>
   main()
 File "run_ner.py", line 122, in main
   module = import_module("tasks")
 File "/home/dkaplan/miniconda3/lib/python3.8/importlib/__init__.py", line 127, in import_module
   return _bootstrap._gcd_import(name[level:], package, level)
 File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
 File "<frozen importlib._bootstrap>", line 991, in _find_and_load
 File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'tasks'
```
## Expected behavior
I expect the NER script to run, finetuning a model on the provided dataset.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10224/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10224/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10223 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10223/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10223/comments | https://api.github.com/repos/huggingface/transformers/issues/10223/events | https://github.com/huggingface/transformers/issues/10223 | 809,714,942 | MDU6SXNzdWU4MDk3MTQ5NDI= | 10,223 | Slow Multi-GPU DDP training with run_clm.py and GPT2 | {
"login": "ktrapeznikov",
"id": 4052002,
"node_id": "MDQ6VXNlcjQwNTIwMDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4052002?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ktrapeznikov",
"html_url": "https://github.com/ktrapeznikov",
"followers_url": "https://api.github.com/users/ktrapeznikov/followers",
"following_url": "https://api.github.com/users/ktrapeznikov/following{/other_user}",
"gists_url": "https://api.github.com/users/ktrapeznikov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ktrapeznikov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ktrapeznikov/subscriptions",
"organizations_url": "https://api.github.com/users/ktrapeznikov/orgs",
"repos_url": "https://api.github.com/users/ktrapeznikov/repos",
"events_url": "https://api.github.com/users/ktrapeznikov/events{/privacy}",
"received_events_url": "https://api.github.com/users/ktrapeznikov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The answers to those two points is not necessarily yes.\r\n- DDP training is only faster if you have NVLinks between your GPUs, otherwise the slow communication between them can slow down training.\r\n- DDP training takes more space on GPU then a single-process training since there is some gradients caching.\r\n\r\nBoth issues come from PyTorch and not us, the only thing we can check on our side is if there is something in our script that would introduce a CPU-bottleneck, but I doubt this is the reason here (all tokenization happens before the training so there is nothing I could think of there).\r\n\r\nYou should also try regular `DataParalell` which does not have the memory problem IIRC, but I don't remember the comparison in terms of speed. I think @stas00 may have more insight there.",
"Thanks. I’ll give DP a try.",
"What @sgugger said, \r\n\r\nplease see this excellent benchmark: https://github.com/huggingface/transformers/issues/9371#issuecomment-768656711 You can see that a single GPU beats DDP over 2 gpus there if it's not NVLink-connected. I haven't tried it on 3 gpus though. Surely it should be somewhat faster at least. \r\n\r\nDid you check you're feeding the gpus fast enough? - i.e. check their utilization %, it they are under 90% then you probably have an issue with loading - add more dataloader workers.\r\n\r\nAlso please consider using DeepSpeed ZeRO-DP, which should be even faster. https://huggingface.co/blog/zero-deepspeed-fairscale",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,613 | 1,619 | 1,619 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.0
- Platform: Linux-3.10.0-1160.11.1.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes (using official `run_clm.py`)
- Using distributed or parallel set-up in script?: using DDP with `run_clm.py`
### Who can help
- gpt2: @patrickvonplaten, @LysandreJik
- trainer, maintained examples: @sgugger
## Information
Model I am using (Bert, XLNet ...): gpt2-medium
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Generate some dummy text data (3000 examples) and save a csv:
```python
import pandas as pd
text = " ".join(100*["Here is some very long text."])
text = 3000*[text]
pd.Series(text).to_frame("text").to_csv("data_temp.csv",index=False)
```
2. Run official [`run_clm.py`](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py) from examples using 3 gpus and DDP:
```shell
CUDA_VISIBLE_DEVICES="1,2,3" python -m torch.distributed.launch --nproc_per_node 3 run_clm.py \
--model_name_or_path gpt2-medium \
--do_train \
--output_dir /proj/semafor/kirill/tmp \
--per_device_train_batch_size 4 \
--block_size 128 \
--train_file data_temp.csv \
--fp16
```
3. Using 3 GeForce RTX 2080 Ti GPUs with 11GB each, tqdm says it should take approximately 1 hour: `48/4101 [00:38<53:51, 1.25it/s`. The memory on each GPU is maxed out: `10782MiB / 11019MiB`
4. Now, if I just run the same script on a single GPU:
```shell
CUDA_VISIBLE_DEVICES="3" python run_clm.py \
--model_name_or_path gpt2-medium \
--do_train \
--output_dir /proj/semafor/kirill/tmp \
--per_device_train_batch_size 4 \
--block_size 128 \
--train_file data_temp.csv \
--fp16
```
It's actually a little faster: `260/12303 [00:57<44:02, 4.56it/s` and the GPU memory is not maxed out: `9448MiB / 11019MiB`
I can actually double the `--per_device_train_batch_size` from `4 -> 8` and get it down to under 30 mins per epoch: `138/6153 [00:36<26:30, 3.78it/s`
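For reference, the tqdm numbers above translate into the following rough samples/second figures (the DDP run processes 3 × 4 = 12 sequences per step, the single-GPU runs 4 or 8 per step):
```python
# iterations/s taken from the tqdm readouts quoted in steps 3 and 4
ddp_3gpu = 1.25 * (4 * 3)      # ≈ 15.0 sequences/s (3 GPUs, per-device batch 4)
single_bs4 = 4.56 * (4 * 1)    # ≈ 18.2 sequences/s (1 GPU, batch 4)
single_bs8 = 3.78 * (8 * 1)    # ≈ 30.2 sequences/s (1 GPU, batch 8)
print(ddp_3gpu, single_bs4, single_bs8)
```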
## Expected behavior
So I expected that:
- DDP training on 3 GPUs to be faster than a single GPU (It's actually a little slower).
- If I can load a batch of 8 on a device in a single GPU mode then it should work in multi-GPU mode as well (it doesn't, I get OOM error). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10223/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10222 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10222/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10222/comments | https://api.github.com/repos/huggingface/transformers/issues/10222/events | https://github.com/huggingface/transformers/pull/10222 | 809,682,524 | MDExOlB1bGxSZXF1ZXN0NTc0NTAzODA3 | 10,222 | the change from single mask to multi mask support for pytorch | {
"login": "naveenjafer",
"id": 7025448,
"node_id": "MDQ6VXNlcjcwMjU0NDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7025448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/naveenjafer",
"html_url": "https://github.com/naveenjafer",
"followers_url": "https://api.github.com/users/naveenjafer/followers",
"following_url": "https://api.github.com/users/naveenjafer/following{/other_user}",
"gists_url": "https://api.github.com/users/naveenjafer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/naveenjafer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/naveenjafer/subscriptions",
"organizations_url": "https://api.github.com/users/naveenjafer/orgs",
"repos_url": "https://api.github.com/users/naveenjafer/repos",
"events_url": "https://api.github.com/users/naveenjafer/events{/privacy}",
"received_events_url": "https://api.github.com/users/naveenjafer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | [
"@Narsil @LysandreJik How do you suggest we go about with the targets param? At the moment, targets can either be a list of strings or a string. In case of multiple masks, there are 2 ways to go about with it.\r\n\r\n1. Provide a way for the user to define targets for each mask. \r\n2. One single target list that can be uniformly applied across all the positions. \r\n\r\nThe first method would be best implemented by expecting a dict as argument in the keyword param. Something like\r\n{ \"0\" : \"str or list of strings\" , \"2\" : \"str or list of strings\" ... } \r\n\r\nThis way the user can decide to skip explicitly defining candidate keywords in some of the mask positions if needed ( skipped mask 1 in the example above). \r\n\r\n",
"Tough question indeed regarding targets! Switching to a dict sounds a bit non intuitive to me, but I don't see any other choice. I guess eventually the API would be the following:\r\n\r\nGiven a single input string, with a single mask:\r\n- A candidate as a string returns the candidate score for the mask\r\n- A candidate list of strings returns the candidate scores for the mask\r\n\r\nGiven a single input string, with multiple masks:\r\n- A candidate as a string returns the candidate scores for all masks\r\n- A candidate list of strings returns the candidate scores for all masks, on all candidates\r\n- A candidate dict of strings returns the candidate scores for the masks which are concerned by the dictionary keys. Their candidates is the dictionary value linked to that dictionary key.\r\n- A candidate dict of list of strings returns the candidate scores for the masks which are concerned by the dictionary keys. Their candidates are the dictionary values linked to that dictionary key.\r\n\r\nThen there are also lists of input strings, with single masks, and lists of input strings, with multiple masks. This results in a very large amount of possibilities, with different returns, which sounds overwhelming. I'm not too sure that's the best way to handle the issue, I'll give it a bit more thought.",
"@LysandreJik I had a question. From what I can understand, one can only define a single set of targets at the moment irrespective of how many input texts are given right? For both the case of a single input text and multiple input texts for even the base case of a single mask, we can only define a single target or a list of targets that applies across them all right? Essentially, it is a many to one relation for the input texts to the target. If that is the case, targets functionality is currently not designed in a useful manner right?",
"Hi, sorry for getting back to you so late on this. I agree with you that we can improve the `targets`. I'm pinging @joeddav as he's the author of the PR that added them.\r\n\r\n@joeddav your input on this PR would be more than welcome! Thank you.",
"Personally, I think the simplest solution would be best: only support `targets` in the single-mask case. If `targets` is passed and there are multiple mask tokens, raise a `ValueError`. It's a pretty narrow use case to need to pass a string with multiple masked tokens while also needing to evaluate possible target tokens for each. In my opinion, that's a complicated and rare use case and we don't need to muddle pipelines code by attempting to support it. It can always be accomplished by using the core modules instead of a pipeline.",
"@joeddav That does make sense to me! The objective of a pipeline should only be to accommodate for some quick use test cases. Making it cumbersome misses the point altogether. @LysandreJik What do you think? ",
"Yes, I agree with @joeddav as well!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"> [...] If you think this still needs to be addressed please comment on this thread.\r\n\r\nThis feature would have many applications and would enable comparison of MLMs in gloze tests beyond the restricted setting of targeting words in the intersection of the vocabularies of the models to be compared. There are some open questions how `top_k` predictions should be made, see issue #3609, so I think it would be good to wait a few more weeks to give everybody time to read the linked paper and discuss ideas.",
"@jowagner Just to clarify it for others who might be following, the paper you are referring to is this one https://arxiv.org/abs/2002.03079 right?",
"> @jowagner Just to clarify it for others who might be following, the paper you are referring to is this one https://arxiv.org/abs/2002.03079 right?\r\n\r\nYes. I hope to read it soon and get a more clear picture what is needed here. I tend to think that producing `top_k` predictions for multiple masked tokens is outside the scope of the BERT model and really needs an extra model on top of it, e.g. a model that predicts a ranked list of best crystallisation points and can then be used to perform a beam search, fixing on subword unit at a time and producing a k-best list of best crystallisation processes.\r\n\r\n",
"@jowagner I have a doubt in that case coming back to the basics of BERT. when some of the words are masked and a prediction is to be made on multiple masks during pre-training step in BERT, does BERT not face the same issue? Or are the masks predicted one mask at a time in each training sentence fed to BERT?",
"Looking at Devlin et al 2018 again, I don't see the pre-training objective stated but certainly they try to push as much probability mass as possible to the one completion attested in the training data. BERT is trained to get the top prediction right. Good secondary predictions for individual tokens are only a by-product. Nothing pushes the model to make the k-th predictions consistent across multiple masked subword units for k > 1.\r\n\r\nYes, making predictions can be expected to be harder when there are multiple masked subword units but that also happens in pre-training and BERT therefore learns to do this. Maybe BERT does this in steps, crystallising only a few decisions in each layer. A way to find out would be to fix the BERT layers, add MLM heads to each layer, tune these heads and then see how the predictions (and probabilities) change from layer to layer. (This would make a nice paper, or maybe somebody has done this already.)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Do we have a final verdict yet on the approach to be followed? @mitramir55 had suggested a code proposal I believe in #3609 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"@LysandreJik Shall i replace this with the implementation suggested earlier in #3609 and raise a PR? Though I dont quite think we have discussed on what scoring would be ideal for the beam search used to sort the predictions.\r\n\r\n",
"@Narsil had good insights about your previous implementation - @Narsil could you let us know what you think of the solution proposed here https://github.com/huggingface/transformers/issues/3609#issuecomment-854005760?",
"The design in https://github.com/huggingface/transformers/issues/3609#issuecomment-854005760 seems very interesting !\r\n\r\nMain comments: \r\n- I would be curious to see (and probably it would need to become a test) to prove that doing `n` inference instead of 1 will produce better results (because it should be close to the real joint probabilities) that's the main interest of this proposed approach.\r\n\r\n- I think it should output the same tokens as fill-mask pipeline in the degenerate case (when there's only 1 mask).\r\n I don't think it's correct right now (see below what I tried)\r\n\r\n- Because we iteratively do `topk` for each `mask` it's a bit of an exponential if I understand correctly. I would probably add some kind of cleanup to limit the number of \"beams\" to topk (I may have overlooked but it seems to be currently missing)\r\n\r\n- the proposed code could probably be refactored a bit for clarity and avoid integer indexing and deep nesting.\r\n\r\n\r\n```python\r\nimport torch \r\nfrom transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline \r\nimport random \r\n \r\n \r\ndef predict_seqs_dict(sequence, model, tokenizer, top_k=5, order=\"right-to-left\"): \r\n \r\n ids_main = tokenizer.encode(sequence, return_tensors=\"pt\", add_special_tokens=False) \r\n \r\n ids_ = ids_main.detach().clone() \r\n position = torch.where(ids_main == tokenizer.mask_token_id) \r\n \r\n positions_list = position[1].numpy().tolist() \r\n \r\n if order == \"left-to-right\": \r\n positions_list.reverse() \r\n \r\n elif order == \"random\": \r\n random.shuffle(positions_list) \r\n \r\n # print(positions_list) \r\n predictions_ids = {} \r\n predictions_detokenized_sents = {} \r\n \r\n for i in range(len(positions_list)): \r\n predictions_ids[i] = [] \r\n predictions_detokenized_sents[i] = [] \r\n \r\n # if it was the first prediction, \r\n # just go on and predict the first predictions \r\n \r\n if i == 0: \r\n model_logits = model(ids_main)[\"logits\"][0][positions_list[0]] \r\n top_k_tokens = torch.topk(model_logits, top_k, dim=0).indices.tolist() \r\n \r\n for j in range(len(top_k_tokens)): \r\n # print(j) \r\n ids_t_ = ids_.detach().clone() \r\n ids_t_[0][positions_list[0]] = top_k_tokens[j] \r\n predictions_ids[i].append(ids_t_) \r\n \r\n pred = tokenizer.decode(ids_t_[0]) \r\n predictions_detokenized_sents[i].append(pred) \r\n \r\n # append the sentences and ids of this masked token \r\n \r\n # if we already have some predictions, go on and fill the rest of the masks \r\n # by continuing the previous predictions \r\n if i != 0: \r\n for pred_ids in predictions_ids[i - 1]: \r\n \r\n # get the logits \r\n model_logits = model(pred_ids)[\"logits\"][0][positions_list[i]] \r\n # get the top 5 of this prediction and masked token \r\n top_k_tokens = torch.topk(model_logits, top_k, dim=0).indices.tolist() \r\n \r\n for top_id in top_k_tokens: \r\n \r\n ids_t_i = pred_ids.detach().clone() \r\n ids_t_i[0][positions_list[i]] = top_id \r\n \r\n pred = tokenizer.decode(ids_t_i[0]) \r\n \r\n # append the sentences and ids of this masked token \r\n \r\n predictions_ids[i].append(ids_t_i) \r\n predictions_detokenized_sents[i].append(pred) \r\n \r\n return predictions_detokenized_sents \r\n \r\n \r\nsequence = \"This is some super neat [MASK] !\" \r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\") \r\nmodel = AutoModelForMaskedLM.from_pretrained(\"bert-base-uncased\") \r\n \r\npipe = pipeline(task=\"fill-mask\", tokenizer=tokenizer, model=model) 
\r\nprint(predict_seqs_dict(sequence, model, tokenizer)) \r\nprint(pipe(sequence)) \r\n```",
"> This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\r\n\r\nYes, we need more time or help from somebody with time to review the discussion and make recommendations.\r\n\r\nMy thoughts re-reading a few comments, including some of my own:\r\n\r\n- Producing k-best predictions for multiple masked tokens requires choices, i.e. a model, separate from the underlying transformer model. This is where the PR is stalled. A quick way forward would be to support only `k=1` when there are multiple masked tokens for the time being. For `k=1`, it is undisputed that the prediction should be the transformer's top prediction for each token.\r\n- This PR/feature does not directly allow comparison of cloze test predictions of models with different vocabularies. Users would have to probe with continuous sequences of masked tokens of varying length and somehow decide between the candidate predictions.",
"After reading this thread and skimming through #14716, I must confess I still a little unsure how the scores for multi-masked prompts are computed. Based on my understanding, for a prompt with k-masks, it seems like you want to do a beam search over over the Cartesian product `mask_1_targets x mask_2_targets x ... x mask_k_targets` and return the top-n most likely tuples maximizing `P(mask_1=token_i_k, mask_2=token_i_2, ... m_k=token_i_k)`, i.e.:\r\n\r\n```\r\n{\r\n T_1=[(token_1_1, ..., token_1_k), score_t_1],\r\n T_2=[(token_2_1, ..., token_2_k), score_t_2],\r\n ...\r\n T_n=[(token_n_1, ..., token_n_k), score_t_n]\r\n}\r\n```\r\n\r\nIs this accurate? Perhaps you could try to clarify the design intent and limitations of the current API in the documentation somewhere. If you intend to eventually support computing the joint probability, I think would be beneficial to provide a way for consumers to supply a set of per-mask targets and configure the beam search parameters, e.g. beam width. Thanks!",
"> After reading this thread and skimming through #14716, I must confess I still a little unsure how the scores for multi-masked prompts are computed. Based on my understanding, for a prompt with k-masks, it seems like you want to do a beam search over over the Cartesian product `mask_1_targets x mask_2_targets x ... x mask_k_targets` and return the top-n most likely tuples maximizing `P(mask_1=token_i_k, mask_2=token_i_2, ... m_k=token_i_k)`, i.e.:\r\n> \r\n> ```\r\n> {\r\n> T_1=[(token_1_1, ..., token_1_k), score_t_1],\r\n> T_2=[(token_2_1, ..., token_2_k), score_t_2],\r\n> ...\r\n> T_n=[(token_n_1, ..., token_n_k), score_t_n]\r\n> }\r\n> ```\r\n> \r\n> Is this accurate? \r\n\r\nActually no, this was the intent of *this* PR which never got merged. Instead of trying to make educated guess about mask combinations, https://github.com/huggingface/transformers/pull/14716 added what seems the most appropriate, which is what the models really answers, which is various tokens at mask locations, without ANY information about correlations.\r\n\r\nThis is how the model is built, and as such, we return it raw.\r\n\r\n```python\r\nfrom transformers import pipeline\r\n\r\n\r\npipe = pipeline(model=\"bert-base-uncased\")\r\n\r\nprint(pipe(\"This is a [MASK] and a [MASK]\", top_k=3))\r\n```\r\n```\r\n[[{'score': 0.5048776268959045,\r\n 'sequence': '[CLS] this is a. and a [MASK] [SEP]',\r\n 'token': 1012,\r\n 'token_str': '.'},\r\n {'score': 0.07435218244791031,\r\n 'sequence': '[CLS] this is a ; and a [MASK] [SEP]',\r\n 'token': 1025,\r\n 'token_str': ';'},\r\n {'score': 0.05109349265694618,\r\n 'sequence': '[CLS] this is a, and a [MASK] [SEP]',\r\n 'token': 1010,\r\n 'token_str': ','}],\r\n [{'score': 0.8665121793746948,\r\n 'sequence': '[CLS] this is a [MASK] and a. [SEP]',\r\n 'token': 1012,\r\n 'token_str': '.'},\r\n {'score': 0.05160374939441681,\r\n 'sequence': '[CLS] this is a [MASK] and a | [SEP]',\r\n 'token': 1064,\r\n 'token_str': '|'},\r\n {'score': 0.046446096152067184,\r\n 'sequence': '[CLS] this is a [MASK] and a ; [SEP]',\r\n 'token': 1025,\r\n 'token_str': ';'}]]\r\n```\r\n\r\nYou are then free to do all the complex attempts to make the suggestions combined. But we don't attempt to hide it since, the model really doesn't model that.\r\n\r\n",
"I appreciate this implementation for the support of multiple [MASK] tokens in the input.\r\nHowever, I cannot figure out why the pipeline output is kept nested only in those cases. It forces me to do some additional coding to make it unnested.\r\nIs there any specific reason for this?\r\n\r\nhttps://github.com/huggingface/transformers/blob/4975002df50c472cbb6f8ac3580e475f570606ab/src/transformers/pipelines/fill_mask.py#L142-L144",
"> Is there any specific reason for this?\r\n\r\nBackward compatibility, the first pipeline wasn't built with that option in mind making it harder to support multi mask seamlessly like you would expect. The removal of such quirks might happen in 5.0 though. We know it's not convenient as it is, but breaking user code is even less convenient.\r\n"
] | 1,613 | 1,649 | null | NONE | null | # What does this PR do?
A draft PR for the feature request to change from single-mask to multi-mask support in the fill-mask pipeline.
As discussed, this is a draft PR for working out the changes that need to be made to the output format to jointly support multiple masks and a single mask in one pipeline call. The PR implements the change for PyTorch; the code for when the keyword argument is passed has not been pushed yet.
The pipeline tests are expected to fail since the output format changed.
#10158
Example code that tests this feature is below.
```
import json
from transformers import pipeline
unmasker = pipeline('fill-mask', model='bert-base-uncased')
t = unmasker("hi [MASK] morning I'm a [MASK] model.")
print(json.dumps(t, indent=4))
```
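Continuing from the snippet above, a rough sketch of how the nested output could be consumed — the exact shape of the returned structure (assumed here to be one list of candidate dicts per `[MASK]` position) is precisely what is up for discussion in this PR:
```python
for position, candidates in enumerate(t):
    best = candidates[0]
    print(position, best["token_str"], round(best["score"], 3))
```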
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10222/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10222/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10222",
"html_url": "https://github.com/huggingface/transformers/pull/10222",
"diff_url": "https://github.com/huggingface/transformers/pull/10222.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10222.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10221 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10221/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10221/comments | https://api.github.com/repos/huggingface/transformers/issues/10221/events | https://github.com/huggingface/transformers/issues/10221 | 809,640,318 | MDU6SXNzdWU4MDk2NDAzMTg= | 10,221 | T5 relative attention bias: Discrepancy to original implementation | {
"login": "maurice-g",
"id": 2892585,
"node_id": "MDQ6VXNlcjI4OTI1ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2892585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maurice-g",
"html_url": "https://github.com/maurice-g",
"followers_url": "https://api.github.com/users/maurice-g/followers",
"following_url": "https://api.github.com/users/maurice-g/following{/other_user}",
"gists_url": "https://api.github.com/users/maurice-g/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maurice-g/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maurice-g/subscriptions",
"organizations_url": "https://api.github.com/users/maurice-g/orgs",
"repos_url": "https://api.github.com/users/maurice-g/repos",
"events_url": "https://api.github.com/users/maurice-g/events{/privacy}",
"received_events_url": "https://api.github.com/users/maurice-g/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The implementation of T5 is based on the original implementation which can be found [here](https://github.com/google-research/text-to-text-transfer-transformer).\r\n\r\nThe implementation you are referring to is a general Transformer implementation from Tensorflow Mesh. It seems like this repo does not implement T5, but other models like the Funnel Transformer and the Evolved Transformer.\r\n\r\n",
"But the T5 repo actually calls the Tensorflow Mesh implementation internally. AFAIK there is no standalone t5 implementation (apart from the HF one).",
"Ok I see. In that case I'll leave it on to Patrick to help you. ",
"Hey @maurice-g, we compute it ones and then forward it to all the follow-up layers since the result will be the same for all layers. 2 month ago, I made sure that our implementation is exactly the same as the original implementation - see those tests: https://github.com/huggingface/transformers/blob/e94d63f6cbf5efe288e41d9840a96a5857090617/tests/test_modeling_t5.py#L734 and https://github.com/huggingface/transformers/blob/e94d63f6cbf5efe288e41d9840a96a5857090617/tests/test_modeling_tf_t5.py#L492 \r\nSo I'm confident that the model behaves as expected, but in case you have a reproducible code snippet showcasing differences between the 2 implementations, I'm more than happy to take a look :-)",
"You're right, missed that they were returned and forwarded to the other layers. Sorry for that.",
"No worries! Thanks for checking in-detail. I think it's always a very good practice to check things in-detail or more often than not you will find subtle bugs in Transformers that will help us improve the code :-)"
] | 1,613 | 1,613 | 1,613 | CONTRIBUTOR | null | ### Who can help
@patrickvonplaten
## Information
Model I am using: T5
In the huggingface TF T5 implementation, the relative attention bias only seems to be applied to the first layer of the stack. If I understand the original implementation correctly, though, it is applied to all layers there.
HF: https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_tf_t5.py#L570
Mesh: https://github.com/tensorflow/mesh/blob/master/mesh_tensorflow/transformer/transformer_layers.py#L263 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10221/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10220 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10220/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10220/comments | https://api.github.com/repos/huggingface/transformers/issues/10220/events | https://github.com/huggingface/transformers/pull/10220 | 809,615,232 | MDExOlB1bGxSZXF1ZXN0NTc0NDQ3NjU1 | 10,220 | fix deprecated reference `tokenizer.max_len` in glue.py | {
"login": "poedator",
"id": 24738311,
"node_id": "MDQ6VXNlcjI0NzM4MzEx",
"avatar_url": "https://avatars.githubusercontent.com/u/24738311?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/poedator",
"html_url": "https://github.com/poedator",
"followers_url": "https://api.github.com/users/poedator/followers",
"following_url": "https://api.github.com/users/poedator/following{/other_user}",
"gists_url": "https://api.github.com/users/poedator/gists{/gist_id}",
"starred_url": "https://api.github.com/users/poedator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/poedator/subscriptions",
"organizations_url": "https://api.github.com/users/poedator/orgs",
"repos_url": "https://api.github.com/users/poedator/repos",
"events_url": "https://api.github.com/users/poedator/events{/privacy}",
"received_events_url": "https://api.github.com/users/poedator/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,613 | 1,614 | 1,614 | CONTRIBUTOR | null | This fixes the deprecated reference to `tokenizer.max_len` by replacing it with `tokenizer.model_max_length` - similar to [issue 8739](https://github.com/huggingface/transformers/issues/8739) and [PR 8604](https://github.com/huggingface/transformers/pull/8604).
See an example of the error [in Colab here](https://colab.research.google.com/gist/poedator/f8776349e5c625ce287fc6fcd312fa1e/tokenizer-max_len-error-in-transformers_glue.ipynb). It causes `AttributeError: 'BertTokenizer' object has no attribute 'max_len'`.
The error happens when `glue_convert_examples_to_features()` is called without the `max_length` parameter. In that case, [line 119](https://github.com/huggingface/transformers/blob/master/src/transformers/data/processors/glue.py#L119), which holds the outdated reference, gets called. This simple fix should address it.
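For reference, a minimal sketch of the intended change (the surrounding code in `glue.py` is paraphrased here, not quoted):
```python
# inside glue_convert_examples_to_features(), when no max_length is passed:
if max_length is None:
    # max_length = tokenizer.max_len          # old attribute, removed in recent tokenizers
    max_length = tokenizer.model_max_length   # current attribute
```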
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10220/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10220/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10220",
"html_url": "https://github.com/huggingface/transformers/pull/10220",
"diff_url": "https://github.com/huggingface/transformers/pull/10220.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10220.patch",
"merged_at": 1614175289000
} |
https://api.github.com/repos/huggingface/transformers/issues/10219 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10219/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10219/comments | https://api.github.com/repos/huggingface/transformers/issues/10219/events | https://github.com/huggingface/transformers/pull/10219 | 809,571,923 | MDExOlB1bGxSZXF1ZXN0NTc0NDExOTY5 | 10,219 | [trainer] fix ignored columns logger | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,613 | 1,613 | 1,613 | CONTRIBUTOR | null | This PR fixes a confusing log entry that says:
```
The following columns in the evaluation set don't have a corresponding argument in `T5ForConditionalGeneration.forward` and have been ignored: .
```
when everything is in order.
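A rough sketch of the kind of guard this implies (the names are illustrative, not the actual `Trainer` code):
```python
ignored_columns = [col for col in dataset.column_names if col not in signature_columns]
if len(ignored_columns) > 0:  # only log when something is actually being dropped
    logger.info(
        f"The following columns in the evaluation set don't have a corresponding argument in "
        f"`{model.__class__.__name__}.forward` and have been ignored: {', '.join(ignored_columns)}."
    )
```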
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10219/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10219/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10219",
"html_url": "https://github.com/huggingface/transformers/pull/10219",
"diff_url": "https://github.com/huggingface/transformers/pull/10219.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10219.patch",
"merged_at": 1613511339000
} |
https://api.github.com/repos/huggingface/transformers/issues/10218 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10218/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10218/comments | https://api.github.com/repos/huggingface/transformers/issues/10218/events | https://github.com/huggingface/transformers/issues/10218 | 809,555,024 | MDU6SXNzdWU4MDk1NTUwMjQ= | 10,218 | discrepancy between the Huggingface T5Tokenizer and the original T5tokenizer | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, \r\n\r\nT5 is an encoder-decoder Transformer. The `run_mlm.py` script can only be used for encoder-only models, such as BERT, RoBERTa, DeBERTa, etc. \r\n\r\nBesides this, T5 does not use the regular [MASK] token as BERT. Rather than masked language modeling, T5 is pre-trained on \"unsupervised denoising training\". This is explained [here](https://huggingface.co/transformers/model_doc/t5.html#training).",
"Hi @NielsRogge thanks, but I have checked the paper of T5 and this seems to be a unique token:\r\n\r\n\"We consider two strategies to achieve this: First, instead of replacing each corrupted token with a mask token, we replace\r\nthe entirety of each consecutive span of corrupted tokens with a unique mask token.\"\r\n\r\nAs for using run_mlm.py script, I do not think T5 model can be an issue as if we could add T5ForConditionalGeneration it could work to me in run_mlm.py out of the box.\r\n\r\nIs there any place I could look see how to create datasets the way you mentioned to do T5 pretraining with huggingface codes? thanks\r\n",
"Hi \r\n@patrickvonplaten, @patil-suraj could you give me some advice how to do T5 pretraining with denoising objective? thanks ",
"Here is an old issue on this subject: https://github.com/huggingface/transformers/issues/5079\r\n\r\nAlso @NielsRogge is correct - T5 replaces each span of tokens with a unique mask token -> the so-called sentinel tokens.\r\nCurrently, there is sadly no script showcasing pertaining for T5. Maybe you have some luck when asking this question on the [forum](https://discuss.huggingface.co/)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"> Hi @NielsRogge thanks, but I have checked the paper of T5 and this seems to be a unique token:\r\n> \r\n> \"We consider two strategies to achieve this: First, instead of replacing each corrupted token with a mask token, we replace\r\n> the entirety of each consecutive span of corrupted tokens with a unique mask token.\"\r\n> \r\n> As for using run_mlm.py script, I do not think T5 model can be an issue as if we could add T5ForConditionalGeneration it could work to me in run_mlm.py out of the box.\r\n> \r\n> Is there any place I could look see how to create datasets the way you mentioned to do T5 pretraining with huggingface codes? thanks\r\n\r\nHI, Did you successfully run the Huggingface T5 pretraining? Can you give me some advice?\r\n"
] | 1,613 | 1,624 | 1,619 | NONE | null | ## Environment info
- `Transformers` version: 4.3.2
- Platform: Linux
- Python version: 3.7
- PyTorch version (GPU?): yes
- Tensorflow version (GPU?): -
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
- tokenizers: @n1t0, @LysandreJik
## Information
Model I am using: T5Tokenizer. I adapted the code of run_mlm.py [1] to use it with the T5 tokenizer; when I run the code I get:
```
This tokenizer does not have a mask token which is necessary for masked language modeling. "
ValueError: This tokenizer does not have a mask token which is necessary for masked language modeling. You should pass `mlm=False` to train on causal language modeling instead.
```
I checked the error, and it occurs because tokenizer.mask_token is None for T5Tokenizer. Checking the T5 paper, they use masked language modeling with their seq2seq objective for pretraining, so they must have trained some kind of mask token, as their paper says. Could you give me some insight into why a mask token does not exist in the Hugging Face implementation of T5Tokenizer, and how I can correct this to be able to run the run_mlm code? Thank you.
[1] https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py
## To reproduce
```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
print(tokenizer.mask_token)  # => None
```
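For completeness, the tokenizer does ship sentinel tokens (`<extra_id_0>` … `<extra_id_99>`) that T5 uses to mask whole spans during its denoising pretraining, even though `mask_token` itself is None. A quick way to check (the commented values are what I would expect, not verified output):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
print(tokenizer.mask_token)                             # None
print(tokenizer.additional_special_tokens[:3])          # sentinel tokens such as '<extra_id_0>', ...
print(tokenizer.convert_tokens_to_ids("<extra_id_0>"))  # a valid vocabulary id
```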
## Expected behavior
As per the T5 paper, a mask token should exist in T5Tokenizer. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10218/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10218/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10217 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10217/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10217/comments | https://api.github.com/repos/huggingface/transformers/issues/10217/events | https://github.com/huggingface/transformers/pull/10217 | 809,495,804 | MDExOlB1bGxSZXF1ZXN0NTc0MzQ5MTI1 | 10,217 | Fix add_token_positions in custom datasets tutorial | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,613 | 1,613 | 1,613 | CONTRIBUTOR | null | Discussed in #10210. The example `add_token_positions` function incorrectly converts `answers[i]['answer_end']` to its corresponding tokenized index rather than `answers[i]['answer_end'] - 1`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10217/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10217/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10217",
"html_url": "https://github.com/huggingface/transformers/pull/10217",
"diff_url": "https://github.com/huggingface/transformers/pull/10217.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10217.patch",
"merged_at": 1613502005000
} |
https://api.github.com/repos/huggingface/transformers/issues/10216 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10216/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10216/comments | https://api.github.com/repos/huggingface/transformers/issues/10216/events | https://github.com/huggingface/transformers/pull/10216 | 809,483,522 | MDExOlB1bGxSZXF1ZXN0NTc0MzM4NTQ3 | 10,216 | Making TF Funnel compliant with AMP | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,613 | 1,613 | 1,613 | CONTRIBUTOR | null | # What does this PR do?
This PR makes the TF Funnel model compliant with AMP. All the slow tests are passing as well.
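As a usage note, a minimal sketch of running the model under Keras mixed precision (AMP); the exact policy API differs slightly across TF versions, so treat this as an assumption rather than a tested recipe:
```python
import tensorflow as tf
from transformers import TFFunnelModel

# TF >= 2.4 would use: tf.keras.mixed_precision.set_global_policy("mixed_float16")
tf.keras.mixed_precision.experimental.set_policy("mixed_float16")

model = TFFunnelModel.from_pretrained("funnel-transformer/small")
```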
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10216/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10216/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10216",
"html_url": "https://github.com/huggingface/transformers/pull/10216",
"diff_url": "https://github.com/huggingface/transformers/pull/10216.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10216.patch",
"merged_at": 1613647783000
} |
https://api.github.com/repos/huggingface/transformers/issues/10215 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10215/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10215/comments | https://api.github.com/repos/huggingface/transformers/issues/10215/events | https://github.com/huggingface/transformers/pull/10215 | 809,463,604 | MDExOlB1bGxSZXF1ZXN0NTc0MzIxNjMw | 10,215 | Factor out methods | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger @patrickvonplaten verified this fixed the issue in #10214"
] | 1,613 | 1,613 | 1,613 | MEMBER | null | With PyTorch's DataParallel, it is not possible to simply iterate over parameters in order to find the `nn.Module`'s dtype or device.
Some efforts were made to catch the error (`StopIteration`) in most cases, but some were forgotten. This PR factors the try/except into a method, which is applied everywhere instead.
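A rough sketch of the factored-out idea (illustrative only — the actual helper in `modeling_utils.py` may be named and implemented differently):
```python
import torch
from torch import nn

def _first_parameter_dtype(module: nn.Module) -> torch.dtype:
    try:
        return next(module.parameters()).dtype
    except StopIteration:
        # nn.DataParallel replicas may not expose parameters() directly;
        # fall back to scanning tensor attributes of the submodules
        for submodule in module.modules():
            for value in submodule.__dict__.values():
                if torch.is_tensor(value):
                    return value.dtype
        raise
```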
Closes #10214 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10215/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10215/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10215",
"html_url": "https://github.com/huggingface/transformers/pull/10215",
"diff_url": "https://github.com/huggingface/transformers/pull/10215.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10215.patch",
"merged_at": 1613573624000
} |
https://api.github.com/repos/huggingface/transformers/issues/10214 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10214/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10214/comments | https://api.github.com/repos/huggingface/transformers/issues/10214/events | https://github.com/huggingface/transformers/issues/10214 | 809,443,349 | MDU6SXNzdWU4MDk0NDMzNDk= | 10,214 | StopIteration Error when running beam search for squad 2.0 | {
"login": "neufang",
"id": 689203,
"node_id": "MDQ6VXNlcjY4OTIwMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/689203?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neufang",
"html_url": "https://github.com/neufang",
"followers_url": "https://api.github.com/users/neufang/followers",
"following_url": "https://api.github.com/users/neufang/following{/other_user}",
"gists_url": "https://api.github.com/users/neufang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neufang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neufang/subscriptions",
"organizations_url": "https://api.github.com/users/neufang/orgs",
"repos_url": "https://api.github.com/users/neufang/repos",
"events_url": "https://api.github.com/users/neufang/events{/privacy}",
"received_events_url": "https://api.github.com/users/neufang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed! I can reproduce, will fix.",
"Can you tell me if https://github.com/huggingface/transformers/pull/10215 fixes it? You can try by installing the following:\r\n\r\n```\r\npip install git+https://github.com/huggingface/transformers@parameter-device-dtype\r\n```",
"Thanks a lot for the quick fix. I'm running it right now. I will post whether the training is done in ca. 7 hours.",
"> Can you tell me if #10215 fixes it? You can try by installing the following:\r\n> \r\n> ```\r\n> pip install git+https://github.com/huggingface/transformers@parameter-device-dtype\r\n> ```\r\n\r\nHi, the issue is fixed. Thanks a lot. "
] | 1,613 | 1,613 | 1,613 | NONE | null | I'm using `huggingface/transformers-pytorch-gpu:4.3.0` on Ubuntu DGX1 server with 8 V100 GPUs.
`NVIDIA-SMI 418.126.02 Driver Version: 418.126.02 CUDA Version: 10.1`
When running the step in `examples/question_answering/README.md` for beam search on SQuAD 2.0:
```
python run_qa_beam_search.py \
--model_name_or_path xlnet-large-cased \
--dataset_name squad_v2 \
--do_train \
--do_eval \
--version_2_with_negative \
--learning_rate 3e-5 \
--num_train_epochs 4 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ./wwm_cased_finetuned_squad/ \
--per_device_eval_batch_size=2 \
--per_device_train_batch_size=2 \
--save_steps 5000
```
Error
```
StopIteration: Caught StopIteration in replica 0 on device 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/xlnet/modeling_xlnet.py", line 1978, in forward
start_logits = self.start_logits(hidden_states, p_mask=p_mask)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py", line 1241, in forward
if next(self.parameters()).dtype == torch.float16:
StopIteration
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10214/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10213 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10213/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10213/comments | https://api.github.com/repos/huggingface/transformers/issues/10213/events | https://github.com/huggingface/transformers/pull/10213 | 809,431,838 | MDExOlB1bGxSZXF1ZXN0NTc0Mjk1MTIz | 10,213 | Store FLOS as floats to avoid overflow. | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,613 | 1,613 | 1,613 | COLLABORATOR | null | # What does this PR do?
As pointed out in #10212, storing the `total_flos` as ints can result in overflow errors: as Python ints there is no risk, but in distributed training we use torch.int64 to gather all FLOS across processes, which can trigger that error.
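A rough illustration of the failure mode (the numbers are invented for the example):
```python
import torch

total_flos = 10 ** 19  # a large run can exceed the int64 maximum (2**63 - 1 ≈ 9.2e18)

try:
    torch.tensor(total_flos, dtype=torch.int64)
except RuntimeError as e:
    print(e)  # "Overflow when unpacking long"

print(torch.tensor(float(total_flos)))  # fine once FLOS are kept and gathered as floats
```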
Fixes #10212 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10213/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10213/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10213",
"html_url": "https://github.com/huggingface/transformers/pull/10213",
"diff_url": "https://github.com/huggingface/transformers/pull/10213.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10213.patch",
"merged_at": 1613492115000
} |
https://api.github.com/repos/huggingface/transformers/issues/10212 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10212/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10212/comments | https://api.github.com/repos/huggingface/transformers/issues/10212/events | https://github.com/huggingface/transformers/issues/10212 | 809,350,471 | MDU6SXNzdWU4MDkzNTA0NzE= | 10,212 | RuntimeError: Overflow when unpacking long | {
"login": "manchandasahil",
"id": 32937046,
"node_id": "MDQ6VXNlcjMyOTM3MDQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/32937046?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manchandasahil",
"html_url": "https://github.com/manchandasahil",
"followers_url": "https://api.github.com/users/manchandasahil/followers",
"following_url": "https://api.github.com/users/manchandasahil/following{/other_user}",
"gists_url": "https://api.github.com/users/manchandasahil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manchandasahil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manchandasahil/subscriptions",
"organizations_url": "https://api.github.com/users/manchandasahil/orgs",
"repos_url": "https://api.github.com/users/manchandasahil/repos",
"events_url": "https://api.github.com/users/manchandasahil/events{/privacy}",
"received_events_url": "https://api.github.com/users/manchandasahil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This comes from the `_total_flos` being stored as long and overflowing in a big training. Will fix this by storing them as floats (hoping for a PR by the end of today)."
] | 1,613 | 1,613 | 1,613 | NONE | null | ## Environment info
- `transformers` version: 4.4.0.dev0
- Platform: linux
- Python version: 3.6.10
- PyTorch version (GPU?): 1.7.0a0. (gpu)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: distributed, 4 nodes with 4 GPUs each
Models:
- albert, bert, xlm: @LysandreJik
running language modelling on a large dataset of 335 million token sequences
Library:
- trainer: @sgugger
- Fairscale
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ x ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. I am running run_mlm with just small changes as my datasets are already tokenized
2. Getting an error while saving checkpoint
Traceback (most recent call last):
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 661, in <module>
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
train_result = trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 924, in train
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
train_result = trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 924, in train
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
main()
File "/data/atc_tenant/bert_data/smancha5/run_mlm.py", line 465, in main
train_result = trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 924, in train
train_result = trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 924, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
train_result = trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 924, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
train_result = trainer.train(model_path=model_path)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 924, in train
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1003, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1026, in _save_checkpoint
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self.state.total_flos = distributed_broadcast_scalars([self._total_flos]).sum().item()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 140, in distributed_broadcast_scalars
self.state.total_flos = distributed_broadcast_scalars([self._total_flos]).sum().item()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 140, in distributed_broadcast_scalars
self.state.total_flos = distributed_broadcast_scalars([self._total_flos]).sum().item()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 140, in distributed_broadcast_scalars
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self.state.total_flos = distributed_broadcast_scalars([self._total_flos]).sum().item()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 140, in distributed_broadcast_scalars
self.state.total_flos = distributed_broadcast_scalars([self._total_flos]).sum().item()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 140, in distributed_broadcast_scalars
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self.state.total_flos = distributed_broadcast_scalars([self._total_flos]).sum().item()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 140, in distributed_broadcast_scalars
self.state.total_flos = distributed_broadcast_scalars([self._total_flos]).sum().item()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 140, in distributed_broadcast_scalars
self.store_flos()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1361, in store_flos
self.state.total_flos = distributed_broadcast_scalars([self._total_flos]).sum().item()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 140, in distributed_broadcast_scalars
self.state.total_flos = distributed_broadcast_scalars([self._total_flos]).sum().item()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 140, in distributed_broadcast_scalars
self.state.total_flos = distributed_broadcast_scalars([self._total_flos]).sum().item()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 140, in distributed_broadcast_scalars
self.state.total_flos = distributed_broadcast_scalars([self._total_flos]).sum().item()
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 140, in distributed_broadcast_scalars
tensorized_scalar = torch.tensor(scalars).cuda()
tensorized_scalar = torch.tensor(scalars).cuda()
tensorized_scalar = torch.tensor(scalars).cuda()
tensorized_scalar = torch.tensor(scalars).cuda()
RuntimeError: Overflow when unpacking long
RuntimeError: Overflow when unpacking long
RuntimeError: Overflow when unpacking long
RuntimeError: Overflow when unpacking long
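For what it's worth, the overflow itself is easy to reproduce in isolation (a rough sketch, independent of the training run): `torch.tensor` cannot pack a Python int outside the int64 range, and the accumulated `_total_flos` of a large run can grow past that range.
```python
import torch

total_flos = 2 ** 63  # one past the int64 maximum; a big training run can reach this
try:
    torch.tensor([total_flos])  # same call as in distributed_broadcast_scalars (minus the .cuda())
except RuntimeError as e:
    print(e)  # Overflow when unpacking long

# storing the count as a float avoids the overflow (at some loss of precision)
torch.tensor([float(total_flos)])
```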
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Saving the checkpoint normally. It occurs only at some checkpoints randomly!
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10212/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10212/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10211 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10211/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10211/comments | https://api.github.com/repos/huggingface/transformers/issues/10211/events | https://github.com/huggingface/transformers/pull/10211 | 809,330,723 | MDExOlB1bGxSZXF1ZXN0NTc0MjExMzIx | 10,211 | Making TF XLM-like models XLA and AMP compliant | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,613 | 1,613 | 1,613 | CONTRIBUTOR | null | # What does this PR do?
This PR makes the TF XLM-like models compliant with XLA and AMP. All the slow tests are passing as well for these models. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10211/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10211",
"html_url": "https://github.com/huggingface/transformers/pull/10211",
"diff_url": "https://github.com/huggingface/transformers/pull/10211.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10211.patch",
"merged_at": 1613581368000
} |
https://api.github.com/repos/huggingface/transformers/issues/10210 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10210/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10210/comments | https://api.github.com/repos/huggingface/transformers/issues/10210/events | https://github.com/huggingface/transformers/issues/10210 | 809,300,209 | MDU6SXNzdWU4MDkzMDAyMDk= | 10,210 | QA Documentation: I got error just copy and pasting documentation | {
"login": "andreabac3",
"id": 36055796,
"node_id": "MDQ6VXNlcjM2MDU1Nzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/36055796?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andreabac3",
"html_url": "https://github.com/andreabac3",
"followers_url": "https://api.github.com/users/andreabac3/followers",
"following_url": "https://api.github.com/users/andreabac3/following{/other_user}",
"gists_url": "https://api.github.com/users/andreabac3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andreabac3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andreabac3/subscriptions",
"organizations_url": "https://api.github.com/users/andreabac3/orgs",
"repos_url": "https://api.github.com/users/andreabac3/repos",
"events_url": "https://api.github.com/users/andreabac3/events{/privacy}",
"received_events_url": "https://api.github.com/users/andreabac3/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Pinging @joeddav on this one, since he wrote this tutorial :-)",
"Thank you @sgugger for the reply.\r\nOk I can wait for the answer from @joeddav.\r\n\r\nHave a nice day. ",
"Figured it out. `answer_end` is the character position immediately _after_ the answer, so end_position should be derived from `answer_end - 1`. I'm not sure why I was able to run it without this error previously (perhaps a resolved tokenizer bug?), but this should be correct.\r\n\r\n```python\r\ndef add_token_positions(encodings, answers):\r\n start_positions = []\r\n end_positions = []\r\n for i in range(len(answers)):\r\n start_positions.append(encodings.char_to_token(i, answers[i]['answer_start']))\r\n end_positions.append(encodings.char_to_token(i, answers[i]['answer_end'] - 1))\r\n \r\n # if start position is None, the answer passage has been truncated\r\n if start_positions[-1] is None:\r\n start_positions[-1] = tokenizer.model_max_length\r\n end_positions[-1] = tokenizer.model_max_length\r\n\r\n encodings.update({'start_positions': start_positions, 'end_positions': end_positions})\r\n```",
"Closed by #10217 ",
"Thank you @joeddav the posted code works perfectly. \r\n",
"Sorry for bothering you @joeddav again, I have a question related to the code posted by you here. \r\nI am still getting None with the dataset built by myself using this code. My dataset works perfectly with the run_squad original script.\r\nIn this snipped posted by you I encounter None in the vector of end_positions and I don't know how fix it. I saw the condition in which there's a None the start_positions but what I have to do in the case the None is only in the end_positions vector?\r\n\r\nKind regards,\r\nAndrea"
] | 1,613 | 1,614 | 1,613 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.1
- Platform: Manjaro Linux
- Python version: 1.5.1
- PyTorch version (GPU?): Yes
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger
## Information
I am trying to train a QA model following the Hugging Face documentation. I just copied and pasted the code on my machine (and in Colab), but I was not able to proceed to the training phase because I got a None value.
## To reproduce
Steps to reproduce the behavior:
1. Go to the documentation: https://huggingface.co/transformers/custom_datasets.html, SQuAD training section
2. Copy and paste the code, as you can see from my pastebin: https://pastebin.com/hZvq7Zs7
3. You get the following error:
```
File "/home/andrea/PycharmProjects/qa-srl/test.py", line 78, in __getitem__
    return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
RuntimeError: Could not infer dtype of NoneType
```
4. My naive solution was to modify the `__getitem__` method of the `SquadDataset` class so that it does not serve entries where `val[idx]` is `None` (roughly the sketch below).
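This only hides the `None` values rather than fixing why they appear, but for completeness, something along these lines (a sketch only, assuming the tutorial's `SquadDataset`):
```python
import torch

class SquadDataset(torch.utils.data.Dataset):
    def __init__(self, encodings):
        self.encodings = encodings

    def __getitem__(self, idx):
        # naive workaround: skip keys whose value is None instead of tensorizing them
        return {
            key: torch.tensor(val[idx])
            for key, val in self.encodings.items()
            if val[idx] is not None
        }

    def __len__(self):
        return len(self.encodings.input_ids)
```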
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10210/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10210/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10209 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10209/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10209/comments | https://api.github.com/repos/huggingface/transformers/issues/10209/events | https://github.com/huggingface/transformers/pull/10209 | 809,247,683 | MDExOlB1bGxSZXF1ZXN0NTc0MTQxNjA5 | 10,209 | Make TF CTRL compliant with XLA and AMP | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,613 | 1,613 | 1,613 | CONTRIBUTOR | null | # What does this PR do?
This PR makes the TF CTRL model compliant with XLA and AMP. All the slow tests are passing as well.
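For context, XLA compliance here means a compiled forward pass should run end to end. A rough sketch of that kind of check (not the actual test code; the tiny config below is hypothetical, just to keep the example light):
```python
import tensorflow as tf
from transformers import CTRLConfig, TFCTRLLMHeadModel

# hypothetical tiny config so the sketch stays cheap to run
config = CTRLConfig(vocab_size=100, n_embd=32, dff=64, n_layer=2, n_head=2)
model = TFCTRLLMHeadModel(config)

@tf.function(experimental_compile=True)  # force XLA compilation of the forward pass
def xla_forward(input_ids):
    return model(input_ids)[0]  # logits

print(xla_forward(tf.constant([[1, 2, 3]])).shape)  # expect (1, 3, 100)
```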
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10209/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10209/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10209",
"html_url": "https://github.com/huggingface/transformers/pull/10209",
"diff_url": "https://github.com/huggingface/transformers/pull/10209.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10209.patch",
"merged_at": 1613584455000
} |
https://api.github.com/repos/huggingface/transformers/issues/10208 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10208/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10208/comments | https://api.github.com/repos/huggingface/transformers/issues/10208/events | https://github.com/huggingface/transformers/issues/10208 | 809,236,920 | MDU6SXNzdWU4MDkyMzY5MjA= | 10,208 | different behavior for get_input_embeddings() between 4.2.x and 4.3.x in Tensorflow | {
"login": "AndyTheFactory",
"id": 863810,
"node_id": "MDQ6VXNlcjg2MzgxMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/863810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AndyTheFactory",
"html_url": "https://github.com/AndyTheFactory",
"followers_url": "https://api.github.com/users/AndyTheFactory/followers",
"following_url": "https://api.github.com/users/AndyTheFactory/following{/other_user}",
"gists_url": "https://api.github.com/users/AndyTheFactory/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AndyTheFactory/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndyTheFactory/subscriptions",
"organizations_url": "https://api.github.com/users/AndyTheFactory/orgs",
"repos_url": "https://api.github.com/users/AndyTheFactory/repos",
"events_url": "https://api.github.com/users/AndyTheFactory/events{/privacy}",
"received_events_url": "https://api.github.com/users/AndyTheFactory/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello!\r\n\r\nThis is because in the 4.3 version the implementation of the embedding have changed and `get_input_embeddings()` returns only the word embeddings layer, hence only `input_ids` can be passed.\r\n\r\nThis was an unexpected behavior and will be fixed for the next release (the fix is already in master if you want). Sorry for the inconvenience.",
"thank you for the quick reply!\r\n\r\nOk, i was unaware it was already fixed in master\r\n\r\nThank you!"
] | 1,613 | 1,613 | 1,613 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.x vs 4.3.x
- Platform: Colab
- Python version: 3.6
- Tensorflow version (GPU?): 2.4.1
@jplu
## Information
Model I am using (Bert, XLNet ...): Bert
## To reproduce
Steps to reproduce the behavior:
In version 4.3.x the following code
```
model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
input = tokenizer.batch_encode_plus(['this is a test'])
embeddings = model.get_input_embeddings()
embeddings(input_ids=np.array(input['input_ids']), token_type_ids=np.array(input['token_type_ids']))
```
Throws an error:
> TypeError: call() got an unexpected keyword argument 'token_type_ids'
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
In version 4.2.x the code returned a tensor:
> <tf.Tensor: shape=(1, 6, 768), dtype=float32, [...]
Test also as [colab](https://colab.research.google.com/drive/1z5rboqdz8y8IM90FX8fDzixez9K_Efdy?usp=sharing)
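A stop-gap that matches the 4.3.x behaviour (a sketch, not an official recommendation): since the returned layer is now only the word-embedding table, call it with `input_ids` alone, reusing `model` and `input` from the snippet above.
```python
word_embeddings = model.get_input_embeddings()
# 4.3.x: word-embedding lookup only, no token-type or position embeddings are added
word_embeddings(input_ids=np.array(input['input_ids']))  # should give a (1, 6, 768) tensor
```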
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10208/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10208/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10207 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10207/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10207/comments | https://api.github.com/repos/huggingface/transformers/issues/10207/events | https://github.com/huggingface/transformers/pull/10207 | 809,214,852 | MDExOlB1bGxSZXF1ZXN0NTc0MTEzOTA2 | 10,207 | Unlock XLA test for TF ConvBert | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,613 | 1,613 | 1,613 | CONTRIBUTOR | null | # What does this PR do?
This PR allows the XLA test for TF ConvBert.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10207/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10207/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10207",
"html_url": "https://github.com/huggingface/transformers/pull/10207",
"diff_url": "https://github.com/huggingface/transformers/pull/10207.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10207.patch",
"merged_at": 1613480381000
} |
https://api.github.com/repos/huggingface/transformers/issues/10206 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10206/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10206/comments | https://api.github.com/repos/huggingface/transformers/issues/10206/events | https://github.com/huggingface/transformers/issues/10206 | 809,028,481 | MDU6SXNzdWU4MDkwMjg0ODE= | 10,206 | Tokenizer is working different from expected functionality. | {
"login": "pyturn",
"id": 25935364,
"node_id": "MDQ6VXNlcjI1OTM1MzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/25935364?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pyturn",
"html_url": "https://github.com/pyturn",
"followers_url": "https://api.github.com/users/pyturn/followers",
"following_url": "https://api.github.com/users/pyturn/following{/other_user}",
"gists_url": "https://api.github.com/users/pyturn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pyturn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pyturn/subscriptions",
"organizations_url": "https://api.github.com/users/pyturn/orgs",
"repos_url": "https://api.github.com/users/pyturn/repos",
"events_url": "https://api.github.com/users/pyturn/events{/privacy}",
"received_events_url": "https://api.github.com/users/pyturn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi! Could you provide a reproducible example? I don't understand what's your `TOKENIZER` here. Thanks",
"Hi @LysandreJik , \r\n\r\nThanks for your response , I have done deep-dive and modified code a bit to reproduce the same issue. I am extending the vocabulary of tokenizer and using some automated logic to add the new words. I don't not have complete control on the words which are adding (but, I am making sure those are not the noise.)\r\n\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModelForMaskedLM\r\nmodel_checkpoint_tok = \"microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint_tok)\r\nenc_ids = tokenizer.encode('vancomycin')\r\ntokenizer.decode(enc_ids)\r\n```\r\n```out\r\n[CLS] vancomycin [SEP]\r\n```\r\n\r\nAdding the new vocabulary through some automated logic, I added thousands of words, To reproduce the same issue, I am manually adding one of the word in next line\r\n\r\n```python\r\ntokenizer.add_tokens(['vanco'])\r\nenc_ids = tokenizer.encode('vancomycin')\r\ntokenizer.decode(enc_ids)\r\n```\r\n```out\r\n`Output : '[CLS] vanco mycin [SEP]'\r\n```\r\n\r\nThe word 'vancomycin' is present in vocabulary, The word 'vanco' got added in vocabulary (by some automated logic), Now when I am again tokenizing \"vancomycin\" , It is splitting it into token id's of 'vanco' and 'mycin' (Mycin is not exactly present, So it got splitted into subwords). \r\n\r\nMy question is - \r\n\r\n- Is it the expected functionality of tokenizer ( I know that its a Subword tokenization technique) , I am not sure but what I guess is there should be some word boundary detection technique and if the exact word in word boundary is not present, then only sub word tokenization should happen? \r\n\r\n\r\n \r\nPlease suggest how I can handle these scenarios ?\r\n\r\nPlease Find Below Screenshot for the same - \r\n\r\n\r\n",
"Hi @LysandreJik , Are there any updates on this ? ",
"Hi! That isn't a bug of itself, but expected behavior (which could be better documented). The tokens added to the tokenizers don't get added to the \"vocabulary\" that resulted from the tokenizer training.\r\n\r\nInstead, they get added to the \"added tokens\", and these tokens take priority over the vocabulary tokens. It is, unfortunately, complex or near impossible to add tokens to the vocabulary itself in a way that all tokenizers could benefit. This is particularly complex in the case of subwords tokenizers, which is the case of the BERT tokenizer here.",
"You can check this thread https://github.com/huggingface/tokenizers/issues/370 for a similar issue.",
"Hi @LysandreJik , So, if we add the tokens and then continually pretrain the Bert Model (By Fine Tuning on Masked Language Modeling) on specific corpus , Then , Does the model learn embeddings for these added tokens ?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,613 | 1,619 | 1,619 | NONE | null | Hi,
I updated the vocabulary of a pre-trained tokenizer. The pretrained tokenizer was taken from this model - https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract
When I use the updated tokenizer, it splits a word into sub-tokens even though the full word is present in the vocabulary dictionary. Those sub-tokens also do not start with '##', which is confusing.
```python
input_ids = TOKENIZER.encode('vancomycin')
TOKENIZER.decode(input_ids)
TOKENIZER.decode([16100])  # checked through manual rules that the same word is present in the vocabulary
```
**Snapshot of the Code**
*(screenshot not available)*
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- Platform: Windows
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 , Yes, Cuda 10.1
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10206/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10206/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10205 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10205/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10205/comments | https://api.github.com/repos/huggingface/transformers/issues/10205/events | https://github.com/huggingface/transformers/pull/10205 | 809,019,053 | MDExOlB1bGxSZXF1ZXN0NTczOTU0MTUw | 10,205 | set tgt_lang of MBart Tokenizer for summarization | {
"login": "HeroadZ",
"id": 17962682,
"node_id": "MDQ6VXNlcjE3OTYyNjgy",
"avatar_url": "https://avatars.githubusercontent.com/u/17962682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HeroadZ",
"html_url": "https://github.com/HeroadZ",
"followers_url": "https://api.github.com/users/HeroadZ/followers",
"following_url": "https://api.github.com/users/HeroadZ/following{/other_user}",
"gists_url": "https://api.github.com/users/HeroadZ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HeroadZ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HeroadZ/subscriptions",
"organizations_url": "https://api.github.com/users/HeroadZ/orgs",
"repos_url": "https://api.github.com/users/HeroadZ/repos",
"events_url": "https://api.github.com/users/HeroadZ/events{/privacy}",
"received_events_url": "https://api.github.com/users/HeroadZ/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks a lot for fixing this!",
"Other models/tasks have this issue as well https://github.com/huggingface/transformers/issues/10292\r\n\r\nThese features require tests. Without tests this is an endless work-work-work - otherwise we keep on breaking what was working before.\r\n"
] | 1,613 | 1,613 | 1,613 | CONTRIBUTOR | null | # What does this PR do?
This PR sets the `tgt_lang` of the MBart tokenizer for summarization.
Otherwise, the error `AttributeError: 'MBartTokenizerFast' object has no attribute 'tgt_lang'` occurs.
I have read your discussion and know that you will modify the MBart part later, so this PR will become obsolete at that point.
But at least it will be useful now :)
Sorry that I didn't add any tests, but it works well on my machine for summarization using MBart.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@stas00 @patil-suraj @patrickvonplaten @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10205/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10205/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10205",
"html_url": "https://github.com/huggingface/transformers/pull/10205",
"diff_url": "https://github.com/huggingface/transformers/pull/10205.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10205.patch",
"merged_at": 1613486377000
} |
https://api.github.com/repos/huggingface/transformers/issues/10204 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10204/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10204/comments | https://api.github.com/repos/huggingface/transformers/issues/10204/events | https://github.com/huggingface/transformers/issues/10204 | 809,007,443 | MDU6SXNzdWU4MDkwMDc0NDM= | 10,204 | 1.3GB dataset creates over 107GB of cache file! | {
"login": "DarshanDeshpande",
"id": 39432636,
"node_id": "MDQ6VXNlcjM5NDMyNjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/39432636?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DarshanDeshpande",
"html_url": "https://github.com/DarshanDeshpande",
"followers_url": "https://api.github.com/users/DarshanDeshpande/followers",
"following_url": "https://api.github.com/users/DarshanDeshpande/following{/other_user}",
"gists_url": "https://api.github.com/users/DarshanDeshpande/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DarshanDeshpande/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DarshanDeshpande/subscriptions",
"organizations_url": "https://api.github.com/users/DarshanDeshpande/orgs",
"repos_url": "https://api.github.com/users/DarshanDeshpande/repos",
"events_url": "https://api.github.com/users/DarshanDeshpande/events{/privacy}",
"received_events_url": "https://api.github.com/users/DarshanDeshpande/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @lhoestq ",
"Related to https://github.com/huggingface/datasets/issues/861\r\nMaybe on-the-fly tokenization can help.\r\nOr if we stick to having the tokenization in the preprocessing, at least reduce the precision of the integers stored on disk and maybe do the padding on the fly.",
"@lhoestq Are there any minor changes that could fix this temporarily? Will changing the map function to set transform as mentioned [here](https://github.com/huggingface/datasets/issues/1825) help?",
"Currently the Trainer doesn't handle `set_transform` but this will be supported soon.\r\n\r\nAnother think you could try is specify `features=` in the parameters of the map function to specify the precision of the integers that are written on disk. For example\r\n```python\r\nfrom datasets import Features, Sequence, Value\r\n\r\nfeatures = Features({\r\n \"input_ids\": Sequence(Value(\"int32\")),\r\n \"token_type_ids\": Sequence(Value(\"bool\")),\r\n \"attention_mask\": Sequence(Value(\"bool\")),\r\n \"special_tokens_mask\": Sequence(Value(\"bool\")),\r\n})\r\ntokenized_datasets = datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=data_args.preprocessing_num_workers,\r\n remove_columns=[text_column_name],\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n features=features,\r\n)\r\n```\r\nThe tokenization will still be done during the preprocessing and store the tokenized texts on disk, but this time it will take much less space since you'll store int32 and booleans instead of int64 by default.",
"@lhoestq Nope. I get a casting error as attached below\r\n```\r\nException in device=TPU:4: Could not convert 1 with type int: tried to convert to boolean\r\nTraceback (most recent call last):\r\n File \"/opt/conda/lib/python3.7/site-packages/torch_xla/distributed/xla_multiprocessing.py\", line 330, in _mp_start_fn\r\n _start_fn(index, pf_cfg, fn, args)\r\n File \"/opt/conda/lib/python3.7/site-packages/torch_xla/distributed/xla_multiprocessing.py\", line 324, in _start_fn\r\n fn(gindex, *args)\r\n File \"/kaggle/working/run_mlm_custom.py\", line 461, in _mp_fn\r\n main()\r\n File \"/kaggle/working/run_mlm_custom.py\", line 355, in main\r\n features=features\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py\", line 386, in map\r\n for k, dataset in self.items()\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py\", line 386, in <dictcomp>\r\n for k, dataset in self.items()\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py\", line 1140, in map\r\n update_data=update_data,\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py\", line 167, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py\", line 312, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py\", line 1411, in _map_single\r\n writer.write_batch(batch)\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/arrow_writer.py\", line 279, in write_batch\r\n pa_table = pa.Table.from_pydict(typed_sequence_examples)\r\n File \"pyarrow/table.pxi\", line 1474, in pyarrow.lib.Table.from_pydict\r\n File \"pyarrow/array.pxi\", line 322, in pyarrow.lib.asarray\r\n File \"pyarrow/array.pxi\", line 222, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/arrow_writer.py\", line 100, in __arrow_array__\r\n out = pa.array(self.data, type=type)\r\npyarrow.lib.ArrowInvalid: Could not convert 1 with type int: tried to convert to boolean\r\nTraceback (most recent call last):\r\n File \"transformers/examples/xla_spawn.py\", line 85, in <module>\r\n main()\r\n File \"transformers/examples/xla_spawn.py\", line 81, in main\r\n xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)\r\n File \"/opt/conda/lib/python3.7/site-packages/torch_xla/distributed/xla_multiprocessing.py\", line 395, in spawn\r\n start_method=start_method)\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py\", line 157, in start_processes\r\n while not context.join():\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py\", line 112, in join\r\n (error_index, exitcode)\r\nException: process 5 terminated with exit code 17\r\n```\r\n\r\nFixed this using \r\nfeatures = Features({\r\n \"input_ids\": Sequence(Value(\"int32\")),\r\n \"attention_mask\": Sequence(Value(\"int32\")),\r\n \"special_tokens_mask\": Sequence(Value(\"int32\")),\r\n })\r\nbut it still ends up taking a lot of space. Runs out of space again on kaggle notebook with 19GB free",
"Looks like PyArrow doesn't know how to convert 1 to True ^^'\r\nI created an issue on Apache Arrow's JIRA [here](https://issues.apache.org/jira/browse/ARROW-11646) to track the issue.\r\nAlso I think we can add support for uint16 in the pytorch integration of `datasets` to reduce the size even more. Feel free to open an issue on the `datasets` repo about this if you want.\r\n\r\nIn the end I think the best solution is probably to not do padding during preprocessing and do it on the fly in the Trainer. This can use `dataset.set_transform` or a data collator. As long as padding is done before sending the data to the TPU it should be good.",
"@lhoestq \r\nYou said in the comment before that ```set-transform``` support hasn't been added to the trainer yet. How exactly do I do it on the fly then?",
"Yes you're right, we first need to make the Trainer support `set_transform`, sorry if it was confusing.",
"`Trainer` in master completely supports `set_transform`. If there are some columns removed that should not be, you just have to set the training arguments `remove_unused_columns` to `False` for the time being.",
"Nice ! so one could try to replace the .map with .set_transform(tokenize_function)",
"> `Trainer` in master completely supports `set_transform`. If there are some columns removed that should not be, you just have to set the training arguments `remove_unused_columns` to `False` for the time being.\r\n\r\nI tried changing remove_unused_columns to False but it gives me an error during the Trainer call. The set_transform function returns a NoneType object and so the Trainer complains of getting a None instead of training data. The run_mlm_custom.py file below is the same run_mlm file, just with set_transform instead of map (you can have a look at it [here](https://drive.google.com/file/d/1--ijV3UK-Rq9TnzkcWFAjkk2A-0j_XWK/view?usp=sharing))\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"run_mlm_custom.py\", line 452, in <module>\r\n main()\r\n File \"run_mlm_custom.py\", line 397, in main\r\n train_dataset=tokenized_datasets[\"train\"] if training_args.do_train else None, # print(tokenized_datasets) gives None\r\nTypeError: 'NoneType' object is not subscriptable\r\n```\r\n\r\nDo you have any template code for passing the data to the trainer?",
"Indeed `set_transform` is in-place. For example you can do\r\n```python\r\ndataset.set_transform(tokenize) # return None, but sets the transform of the current dataset\r\n```\r\n\r\nIf you want to use a non in-place function like what was doing map, you can do\r\n```python\r\ndataset = dataset.with_transform(tokenize) # return a new dataset object with the specified transform\r\n```\r\n\r\nAlso I'm not a big fan of having two functions that does the same thing (except one is in-place) so we might deprecate one or the other in the future. I guess the second one is more convenient and is more aligned with the other Dataset functions. Let me know what you think",
"Returning the dataset is more intuitive I feel. Anyway, this is some really good news. I will try to modify the script and make it work. If it does then maybe, if you want, I can clean the code and create a pull request for the same.",
"@DarshanDeshpande did it work for you after you made the changes? \r\nI have the same issue, trying to train a roberta mlm on 1.3G of text data on cloud TPU and got the no space on device error (the code works with 300M of data though). I changed the run_mlm.py code based on your PR to do tokenization on the fly, but now I get this error:\r\n\r\n```\r\n[INFO|trainer.py:946] 2021-03-16 17:51:34,455 >> ***** Running training *****\r\n[INFO|trainer.py:947] 2021-03-16 17:51:34,456 >> Num examples = 14602056\r\n[INFO|trainer.py:948] 2021-03-16 17:51:34,456 >> Num Epochs = 2\r\n[INFO|trainer.py:949] 2021-03-16 17:51:34,456 >> Instantaneous batch size per device = 8\r\n[INFO|trainer.py:950] 2021-03-16 17:51:34,456 >> Total train batch size (w. parallel, distributed & accumulation) = 256\r\n[INFO|trainer.py:951] 2021-03-16 17:51:34,456 >> Gradient Accumulation steps = 4\r\n[INFO|trainer.py:952] 2021-03-16 17:51:34,456 >> Total optimization steps = 114078\r\n 0%| | 0/114078 [00:00<?, ?it/s]Exception in thread Thread-2:\r\nTraceback (most recent call last):\r\n File \"/anaconda3/envs/torch-xla-1.7/lib/python3.6/threading.py\", line 916, in _bootstrap_inner\r\n self.run()\r\n File \"/anaconda3/envs/torch-xla-1.7/lib/python3.6/threading.py\", line 864, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/parallel_loader.py\", line 141, in _loader_worker\r\n _, data = next(data_iter)\r\n File \"/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/utils/data/dataloader.py\", line 435, in __next__\r\n data = self._next_data()\r\n File \"/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/utils/data/dataloader.py\", line 475, in _next_data\r\n data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n File \"/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py\", line 44, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py\", line 44, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1130, in __getitem__\r\n format_kwargs=self._format_kwargs,\r\n File \"/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1117, in _getitem\r\n pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns\r\n File \"/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/datasets/formatting/formatting.py\", line 375, in format_table\r\n return formatter(pa_table, query_type=query_type)\r\n File \"/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/datasets/formatting/formatting.py\", line 173, in __call__\r\n return self.format_row(pa_table)\r\n File \"/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/datasets/formatting/formatting.py\", line 239, in format_row\r\n formatted_batch = self.format_batch(pa_table)\r\n File \"/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/datasets/formatting/formatting.py\", line 268, in format_batch\r\n return self.transform(batch)\r\n File \"/home/aida_delfan/pretrain/run_mlm.py\", line 363, in tokenize_function\r\n return_special_tokens_mask=True,\r\n File \"/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/tokenization_utils_base.py\", line 2266, 
in __call__\r\n **kwargs,\r\n File \"/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/tokenization_utils_base.py\", line 2451, in batch_encode_plus\r\n **kwargs,\r\n File \"/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/models/gpt2/tokenization_gpt2_fast.py\", line 163, in _batch_encode_plus\r\n return super()._batch_encode_plus(*args, **kwargs)\r\n File \"/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/tokenization_utils_fast.py\", line 411, in _batch_encode_plus\r\n for key in tokens_and_encodings[0][0].keys():\r\nIndexError: list index out of range\r\n\r\n```\r\nwondering if you saw a similar error and if there is a fix for it.\r\n\r\nI have transformers 4.4.0 and datasets 1.4.0.\r\n\r\nThanks!",
"@aidad This looks like a tokenizer issue. The script in the PR works for me. Try checking your tokenizer files or altering the script to use the Roberta tokenizer specifically instead of the included AutoTokenizer"
] | 1,613 | 1,615 | 1,613 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.0 dev0
- Platform: Google Colab
- Python version: 3.6
- PyTorch version (GPU?): 1.7
- Tensorflow version (GPU?): None
- Using GPU in script?: None. Colab TPU is used
- Using distributed or parallel set-up in script?: Using default ```run_mlm.py``` script
### Who can help
@sgugger
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): DistilBert
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
!python /content/transformers/examples/xla_spawn.py --num_cores 8 /content/transformers/examples/language-modeling/run_mlm.py \
--model_type distilbert --config_name /content/TokenizerFiles \
--tokenizer_name /content/drive/TokenizerFiles \
--train_file Corpus.txt \
--mlm_probability 0.15 \
--output_dir "/content/TrainingCheckpoints" \
--do_train \
--per_device_train_batch_size 32 \
--save_steps 500 --disable_tqdm False \
--line_by_line True \
--max_seq_length 128 \
--pad_to_max_length True \
--cache_dir /content/cache_dir --save_total_limit 2
```
The script ends up creating more than 107 GB of cache files with only 54% of the processing done, which crashes the Colab environment.
This means that 200+ GB of space is required to cache and preprocess a mere 1 GB file. Am I doing something wrong here? I ran the same script a few days ago and it didn't give me any such "Out of disk space" error. Because I wanted to use the TPU, I changed pad_to_max_length=True [(10192)](https://github.com/huggingface/transformers/issues/10192). That's all I changed, and now it does this. Let me know if anyone requires any more information to help me out with this.
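For anyone hitting this, a sketch of tokenizing on the fly with `datasets.Dataset.set_transform` (roughly what the on-the-fly tokenization change discussed in the comments does), so that no tokenized cache is written to disk. The tokenizer path and the `text` column name are assumptions, not the exact script, and a recent `datasets` release is required:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Paths are placeholders matching the command above; adjust as needed.
tokenizer = AutoTokenizer.from_pretrained("/content/drive/TokenizerFiles")
raw_datasets = load_dataset("text", data_files={"train": "Corpus.txt"})

def tokenize_function(examples):
    # Called lazily at access time, so nothing is cached to disk.
    return tokenizer(
        examples["text"],
        padding="max_length",
        truncation=True,
        max_length=128,
        return_special_tokens_mask=True,
    )

raw_datasets["train"].set_transform(tokenize_function)
```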
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The dataset should cache in a minimum amount of disk space. It currently occupies over 150-200x the space of the actual dataset | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10204/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
} | https://api.github.com/repos/huggingface/transformers/issues/10204/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10203 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10203/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10203/comments | https://api.github.com/repos/huggingface/transformers/issues/10203/events | https://github.com/huggingface/transformers/pull/10203 | 809,002,194 | MDExOlB1bGxSZXF1ZXN0NTczOTQwMTUy | 10,203 | [run_glue] Add MNLI compatible mode | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The CI failed but it seems irrelevant. Could you please give it a check? @LysandreJik ",
"The CI issue is with `test_hf_api`, which is fixed on master now, rebasing should make the CI green!",
"@sgugger Sorry but I can't really agree with you here. It is a problem introduced by mistake and we shouldn't just try to ignore or downplay that. It just adds a few lines of if-else and I don't think it'll add too much burden for users. I can add some comments if it helps explain the code.\n\nI am never a fan of trainer-style code since IMO in the context of ML, capsulation is not simplicity, instead, transparency is simplicity (I believe our user survey supports my claim here). Thus, I don't think our example here is meant for beginners at all. Besides, I do think back-compatibility matters especially when our success depends heavily on the prosperity of the model hub. This PR is meant to fix a bug though it's harsh. I want the users won't feel confused when they load a community model and wonder why it doesn't work. (Because I myself spent one whole day debugging this)\n\n\nHappy to have more discussion here.",
"So we discussed it a bit more internally. This particular example script will stay as is with the backward-compatibility problem (which isn't one IMO since the problem is in the model config not having the right labels). As I said before it's an example aimed at data scientists that shouldn't necessarily have all the functionality.\r\n\r\nThere will be another script for GLUE (probably by the end of the month) very soon that doesn't use `Trainer` and has the training loop exposed, where we can integrate your fix.",
"Looking forward to the new script you mentioned!"
] | 1,613 | 1,651 | 1,618 | CONTRIBUTOR | null | In this PR:
- Upgrade `datasets` to `1.3.0`
- Rename `datasets` variable to `task_datasets` in `run_glue.py` to avoid confusion with the library `datasets`
- Add a `--mnli_compat_mode` option to use the old label assignment for MNLI | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10203/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10203/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10203",
"html_url": "https://github.com/huggingface/transformers/pull/10203",
"diff_url": "https://github.com/huggingface/transformers/pull/10203.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10203.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10202 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10202/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10202/comments | https://api.github.com/repos/huggingface/transformers/issues/10202/events | https://github.com/huggingface/transformers/issues/10202 | 809,000,058 | MDU6SXNzdWU4MDkwMDAwNTg= | 10,202 | Fast Tokenizers instantiated via vocab/merge files do not respect skip_special_tokens=True | {
"login": "minimaxir",
"id": 2179708,
"node_id": "MDQ6VXNlcjIxNzk3MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2179708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minimaxir",
"html_url": "https://github.com/minimaxir",
"followers_url": "https://api.github.com/users/minimaxir/followers",
"following_url": "https://api.github.com/users/minimaxir/following{/other_user}",
"gists_url": "https://api.github.com/users/minimaxir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minimaxir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minimaxir/subscriptions",
"organizations_url": "https://api.github.com/users/minimaxir/orgs",
"repos_url": "https://api.github.com/users/minimaxir/repos",
"events_url": "https://api.github.com/users/minimaxir/events{/privacy}",
"received_events_url": "https://api.github.com/users/minimaxir/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, I can reproduce! Do you know what might be causing this @n1t0?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I think this is happening because when you load it from the vocab and merge files, it doesn't know `<|endoftext|>` is a special token. For the `skip_special_tokens` to work, I believe it would be necessary to add them to the tokenizer:\r\n```python\r\ntokenizer_fast.add_special_tokens({\r\n \"additional_special_tokens\": \"<|endoftext|>\"\r\n})\r\n```\r\n\r\nThe `tokenizer.json` file on the hub, available for `gpt2` does have this special token registered, that's why it works in this case.",
"That workaround is sufficient for my needs and appears to have done the trick. Thanks!"
] | 1,613 | 1,618 | 1,618 | NONE | null | ## Environment info
- `transformers` version: 4.3.2
- Platform: macOS-11.2.1-x86_64-i386-64bit
- Python version: 3.9.1
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
## Information
See title; this issue does not reproduce with slow tokenizers. It also does not reproduce with serialized tokenizers.
Found while investigating https://github.com/minimaxir/aitextgen/issues/88
## To reproduce
Using [gpt2_merges.txt](https://github.com/minimaxir/aitextgen/blob/master/aitextgen/static/gpt2_merges.txt) and [gpt2_vocab.json](https://github.com/minimaxir/aitextgen/blob/master/aitextgen/static/gpt2_vocab.json) as linked:
```py
from transformers import AutoModelForCausalLM, GPT2Tokenizer, GPT2TokenizerFast
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
outputs = model.generate(max_length=40)
# tensor([[50256, 383, 471, 13, 50, 13, 2732, 286, 4796, 468,
# 587, 10240, 262, 1918, 286, 257, 1966, 5349, 5797, 508,
# 373, 2823, 290, 2923, 416, 257, 23128, 287, 262, 471,
# 13, 50, 13, 13241, 319, 3583, 13, 198, 198, 198]])
tokenizer_fast = GPT2TokenizerFast(vocab_file="gpt2_vocab.json", merges_file="gpt2_merges.txt")
tokenizer_fast.decode(outputs[0], skip_special_tokens=True)
# '<|endoftext|> The U.S. Department of Justice has been investigating the death of a former FBI agent who was shot and killed by a gunman in the U.S. Capitol on Wednesday.\n\n\n'
tokenizer_slow = GPT2Tokenizer(vocab_file="gpt2_vocab.json", merges_file="gpt2_merges.txt")
tokenizer_slow.decode(outputs[0], skip_special_tokens=True)
# ' The U.S. Department of Justice has been investigating the death of a former FBI agent who was shot and killed by a gunman in the U.S. Capitol on Wednesday.\n\n\n'
```
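As noted in the comments above, a workaround is to register `<|endoftext|>` as a special token on the fast tokenizer built from raw vocab/merges files; `skip_special_tokens=True` then behaves as expected. A minimal sketch continuing the snippet above (assuming `<|endoftext|>` is the only special token involved):

```py
tokenizer_fast = GPT2TokenizerFast(vocab_file="gpt2_vocab.json", merges_file="gpt2_merges.txt")
# Register the token as "special" so the decoder knows to skip it.
tokenizer_fast.add_special_tokens({"additional_special_tokens": ["<|endoftext|>"]})
tokenizer_fast.decode(outputs[0], skip_special_tokens=True)
# expected: the leading '<|endoftext|>' is no longer present in the decoded text
```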
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10202/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10201 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10201/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10201/comments | https://api.github.com/repos/huggingface/transformers/issues/10201/events | https://github.com/huggingface/transformers/issues/10201 | 808,961,103 | MDU6SXNzdWU4MDg5NjExMDM= | 10,201 | Better Fine-Tuning by Reducing Representational Collapse | {
"login": "mingruimingrui",
"id": 18568364,
"node_id": "MDQ6VXNlcjE4NTY4MzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/18568364?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mingruimingrui",
"html_url": "https://github.com/mingruimingrui",
"followers_url": "https://api.github.com/users/mingruimingrui/followers",
"following_url": "https://api.github.com/users/mingruimingrui/following{/other_user}",
"gists_url": "https://api.github.com/users/mingruimingrui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mingruimingrui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mingruimingrui/subscriptions",
"organizations_url": "https://api.github.com/users/mingruimingrui/orgs",
"repos_url": "https://api.github.com/users/mingruimingrui/repos",
"events_url": "https://api.github.com/users/mingruimingrui/events{/privacy}",
"received_events_url": "https://api.github.com/users/mingruimingrui/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"Would be lovely to see it, seems promising!",
"Any progress? @LysandreJik "
] | 1,613 | 1,656 | null | CONTRIBUTOR | null | # 🚀 Feature request
Add R3F/R4F to some popular objective functions, as suggested by [Armen et al.](https://arxiv.org/abs/2008.03156).
## Motivation
Fine-tuning is a primary use case for many users of the transformers library.
We can use R3F/R4F to reduce the representational collapse caused by vocabulary inefficiencies.
This is also a relatively cheap feature to implement.
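For context, the core of the R3F objective is a symmetric KL term between the model's output distribution on clean input embeddings and on noise-perturbed ones, added to the usual task loss. A rough sketch of what the extra term could look like (the function, argument names, and noise scale are illustrative, not an existing transformers API):

```python
import torch
import torch.nn.functional as F

def task_loss_with_r3f(model, input_ids, attention_mask, labels, eps=1e-5, lam=1.0):
    embeds = model.get_input_embeddings()(input_ids)
    noise = torch.empty_like(embeds).uniform_(-eps, eps)  # uniform noise; the paper also uses normal noise

    clean = model(inputs_embeds=embeds, attention_mask=attention_mask, labels=labels)
    noisy = model(inputs_embeds=embeds + noise, attention_mask=attention_mask, labels=labels)

    p = F.log_softmax(clean.logits, dim=-1)
    q = F.log_softmax(noisy.logits, dim=-1)
    # Symmetric KL between clean and noised predictions (the R3F regularizer).
    sym_kl = (
        F.kl_div(p, q, reduction="batchmean", log_target=True)
        + F.kl_div(q, p, reduction="batchmean", log_target=True)
    )
    return clean.loss + lam * sym_kl
```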
## Your contribution
I understand that this is a relatively new paper without much benchmarking done yet. I will open a PR if requested.
"url": "https://api.github.com/repos/huggingface/transformers/issues/10201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10201/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10200 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10200/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10200/comments | https://api.github.com/repos/huggingface/transformers/issues/10200/events | https://github.com/huggingface/transformers/pull/10200 | 808,955,258 | MDExOlB1bGxSZXF1ZXN0NTczOTAxMTAw | 10,200 | Bugfix: Removal of padding_idx in BartLearnedPositionalEmbedding | {
"login": "mingruimingrui",
"id": 18568364,
"node_id": "MDQ6VXNlcjE4NTY4MzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/18568364?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mingruimingrui",
"html_url": "https://github.com/mingruimingrui",
"followers_url": "https://api.github.com/users/mingruimingrui/followers",
"following_url": "https://api.github.com/users/mingruimingrui/following{/other_user}",
"gists_url": "https://api.github.com/users/mingruimingrui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mingruimingrui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mingruimingrui/subscriptions",
"organizations_url": "https://api.github.com/users/mingruimingrui/orgs",
"repos_url": "https://api.github.com/users/mingruimingrui/repos",
"events_url": "https://api.github.com/users/mingruimingrui/events{/privacy}",
"received_events_url": "https://api.github.com/users/mingruimingrui/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for the PR @mingruimingrui !\r\nBut I'm not sure how filling the embeddings with 0 will avoid them being learnable, and `nn.Embedding` actually handles this itself, if pad index is specified then the output of the embedding layer at that index will be all zeros.\r\n\r\nPlus BART's offest is very specific to the padding index, so I'm not sure if it's a good idea to change the padding index of BART.",
"@patil-suraj\r\n\r\nZeroing of weights at positions 0 and 1 is merely code sugar, they do not carry any implication to model behavior.\r\nMy point is that the current implementation causes some weights to be untrainable.\r\n\r\nThe issue this PR is attempting to solve is `padding_idx`.\r\n\r\nHere's a demonstration of the effects of `padding_idx` on the training of `torch.nn.Embedding`.\r\n```python\r\nimport torch\r\n\r\nfor padding_idx in [None, 0, 1]:\r\n print(f'padding_idx: {padding_idx}')\r\n\r\n module = torch.nn.Embedding(2, 1, padding_idx=padding_idx)\r\n print(f'Starting weight: {module.weight.data.tolist()}')\r\n\r\n x = torch.LongTensor([0, 1])\r\n y = torch.FloatTensor([1, 1]).reshape(2, 1)\r\n\r\n optimizer = torch.optim.Adam(module.parameters(), lr=1.0)\r\n for _ in range(10):\r\n optimizer.zero_grad()\r\n pred = module(x)\r\n loss = torch.sum((pred - y) ** 2)\r\n loss.backward()\r\n optimizer.step()\r\n\r\n print(f'Ending weight: {module.weight.data.tolist()}', end='\\n\\n')\r\n```\r\n\r\nYou can expect output similar to the following.\r\n```txt\r\npadding_idx: None\r\nStarting weight: [[-0.696786642074585], [1.2698755264282227]]\r\nEnding weight: [[0.39832496643066406], [0.7758995294570923]]\r\n\r\npadding_idx: 0\r\nStarting weight: [[0.0], [0.7031512260437012]]\r\nEnding weight: [[0.0], [1.2048938274383545]]\r\n\r\npadding_idx: 1\r\nStarting weight: [[-0.8316971659660339], [0.0]]\r\nEnding weight: [[0.48533281683921814], [0.0]]\r\n```\r\n\r\nNotice the weight stays 0 at the respective `padding_idx` position.\r\n\r\n`padding_idx` can easily be 2 or greater as this is dependent on how the user train/config their tokenizer.",
"Hey @mingruimingrui, \r\n\r\nThe positional embedding for the padding token is not really relevant since padding tokens are only used in training for tokens that are discarded anyways during training/evaluation (all padding tokens by design cannot influence the attention mechanism). Could you give us an example/use case which would require your fix here?",
"@patrickvonplaten \r\n\r\nI am aware that the masks used for attention and loss computation allow model output to not get affected by the positions containing the padding token.\r\n\r\nBut positional embedding is not gathered using token id but rather sequence position.\r\n\r\nGiven padding_idx = 2, the positional embedding of the first token of every sequence will be untrainable regardless of what it is.",
"@patrickvonplaten try this out. This is tested on transformers 4.3.2 (latest release as time of writing)\r\n\r\n```python\r\nimport torch\r\nimport transformers\r\nfrom transformers.models.bart.modeling_bart import \\\r\n BartLearnedPositionalEmbedding\r\n\r\nprint(f'Running script on transformers=={transformers.__version__}')\r\n\r\n# Init positional embedding with padding_idx = 2\r\npe = BartLearnedPositionalEmbedding(128, 1, padding_idx=2)\r\n\r\n# Fix input embedding to a tensor of seq_len = 5\r\ninput_ids = torch.randint(0, 32000, (4, 5))\r\n\r\n# Print out pos_embs of input_ids\r\npos_embs = pe.forward(input_ids.shape)\r\nprint('Initial pos_embs:', pos_embs.tolist())\r\n\r\n# Backprop to make positional embeddings = 1.0\r\noptimizer = torch.optim.Adam(pe.parameters(), lr=1.0)\r\nfor _ in range(100):\r\n optimizer.zero_grad()\r\n\r\n pos_embs = pe.forward(input_ids.shape)\r\n target = torch.ones_like(pos_embs)\r\n loss = torch.sum((pos_embs - target) ** 2)\r\n\r\n loss.backward()\r\n optimizer.step()\r\n\r\n# Print out pos_embs of input_ids after optimization\r\n# Expectation: A tensor of arppox. 1.0\r\n# Result: A tensor arppox. 1.0 but first embedding has a value of 0.0\r\npos_embs = pe.forward(input_ids.shape)\r\nprint('Initial pos_embs:', pos_embs.tolist())\r\n```\r\n\r\nstdout\r\n```txt\r\nRunning script on transformers==4.3.2\r\nInitial pos_embs: [[0.0], [-0.2676548659801483], [-0.31950631737709045], [-0.9886524081230164], [0.6115532517433167]]\r\nInitial pos_embs: [[0.0], [1.0013480186462402], [0.9954316020011902], [1.0088629722595215], [0.9995125532150269]]\r\n```",
"A major problem with the current behavior is when a user uploads a model with padding_idx >= 2, the position embedding at the padding_idx will be zeroed out.\r\n\r\nMulti-head-attention-based encoder and decoders can be hugely affected (since the concept of the sequence is represented using this embedding).\r\n\r\nUsing an extreme but realistic example, an English to German translation model translates \"hello world\" to \"halo welt\" correctly.\r\nHowever, when exported to huggingface BART, the model can produce \"welt halo\" instead (due to the mix-up of position).\r\n\r\nWhen this happens, the root cause (this issue) can be extremely difficult to discover/debug.",
"Hey @mingruimingrui,\r\n\r\nThanks for clarifying! And yes, I agree with you now - sorry I missed your point the first time! This bug is then actually in multiple spots in the library...It would be awesome if you could fix the bug also for the following models:\r\n\r\n- `modeling_mbart.py`: https://github.com/huggingface/transformers/blob/e94d63f6cbf5efe288e41d9840a96a5857090617/src/transformers/models/mbart/modeling_mbart.py#L117\r\n- `modeling_blenderbot.py`: https://github.com/huggingface/transformers/blob/e94d63f6cbf5efe288e41d9840a96a5857090617/src/transformers/models/blenderbot/modeling_blenderbot.py#L115\r\n- `modeling_blenderbot_small.py`:\r\nhttps://github.com/huggingface/transformers/blob/e94d63f6cbf5efe288e41d9840a96a5857090617/src/transformers/models/blenderbot_small/modeling_blenderbot_small.py#L115\r\n- `modleing_led.py`:\r\nhttps://github.com/huggingface/transformers/blob/e94d63f6cbf5efe288e41d9840a96a5857090617/src/transformers/models/led/modeling_led.py#L115\r\n- The cookie-cutter:\r\nhttps://github.com/huggingface/transformers/blob/e94d63f6cbf5efe288e41d9840a96a5857090617/templates/adding_a_new_model/cookiecutter-template-%7B%7Bcookiecutter.modelname%7D%7D/modeling_%7B%7Bcookiecutter.lowercase_modelname%7D%7D.py#L1619\r\n\r\nGreat catch :-)",
"Thanks @patrickvonplaten\r\n\r\nBut before I make the changes, I thought it might be good to do something about backward compatibility when users use `XXLearnedPositionalEmbedding` directly.\r\nI suggest that we raise a warning when `padding_idx` is not `None` so that the function interface can be kept as is.\r\n\r\n```python\r\ndef __init__(self, num_embeddings: int, embedding_dim: int, padding_idx: Optional[int] = None):\r\n assert padding_idx is not None, \"`padding_idx` should not be None, but of type int\"\r\n if padding_idx is not None:\r\n warnings.warn(\r\n f'padding_idx should not be provided for {self.__class__.__name__}. '\r\n 'An exception will be raised in future versions of transformers.'\r\n )\r\n\r\n ...\r\n```\r\n\r\nLet me know if this is alright.",
"@patrickvonplaten Btw can I also ask how to quote code chunks from repos in github comments like above?\r\n> It is very cool and useful",
"> Thanks @patrickvonplaten\r\n> \r\n> But before I make the changes, I thought it might be good to do something about backward compatibility when users use `XXLearnedPositionalEmbedding` directly.\r\n> I suggest that we raise a warning when `padding_idx` is not `None` so that the function interface can be kept as is.\r\n> \r\n> ```python\r\n> def __init__(self, num_embeddings: int, embedding_dim: int, padding_idx: Optional[int] = None):\r\n> assert padding_idx is not None, \"`padding_idx` should not be None, but of type int\"\r\n> if padding_idx is not None:\r\n> warnings.warn(\r\n> f'padding_idx should not be provided for {self.__class__.__name__}. '\r\n> 'An exception will be raised in future versions of transformers.'\r\n> )\r\n> \r\n> ...\r\n> ```\r\n> \r\n> Let me know if this is alright.\r\n\r\n`XXLearnedPositionalEmbedding` is not available in init, *e.g.* one cannot do:\r\n\r\n```python\r\nfrom transformers import BartLearnedPositionalEmbedding\r\n```\r\n\r\ndoesn't work, so in this case we don't have to care about backwards compatibility. As you pointed out, it is simply wrong to pass a padding_idx to the position embeddings.\r\n",
"@patrickvonplaten That should be all the changes that are required.",
"Hey @mingruimingrui,\r\n\r\nThanks a lot for applying the changes also to the other models - it looks very nice already :-) Could you also completely remove the `padding_idx` from the call to the `PositionalEmbeddings`, *e.g.* `BartLearnedPositionalEmbedding`",
"> Hey @mingruimingrui,\r\n> \r\n> Thanks a lot for applying the changes also to the other models - it looks very nice already :-) Could you also completely remove the `padding_idx` from the call to the `PositionalEmbeddings`, _e.g._ `BartLearnedPositionalEmbedding`\r\n\r\nNoted, what do you say about leaving `TFLearnedPositionalEmbeddings` as is?\r\nThis way, padding_idx wouldn't sneak its way into `kwargs`.",
"Anything else I've missed out on?",
"Looks great to me! I'll run the slow tests of the respective models just to be sure and then I think we are good to go! ",
"Slow tests are all passing. \r\n\r\nPinging @LysandreJik @sgugger @patil-suraj for review. To be concise, this PR removes the dependency of the `LearnedPositionalEmbedding` on the `pad_token_id`. After discussion with @mingruimingrui , I think positional encodings should **not** be dependent on the padding idx and the padding idx should not get a special positional embedding. The reason is the following:\r\n- The padding idx can be at every position so it doesn't make sense to have one positional embedding be reserved for the padding idx\r\n- For a padded input - let's say `<pad><pad> Hello <pad> <pad>` my name, the position ids should be `0 0 0 1 1 1 2` IMO and not `pad_pos pad_pos 0 pad_pos pad_pos 1 2`\r\n- Forcing one token to be a padding token, let's say 3 means that the positional embedding for 3 doesn't exist anymore which can then be very awkward when fine-tuning a model or training it from scratch\r\n\r\n=> This PR fixes this behavior without any breaking changes."
] | 1,613 | 1,614 | 1,614 | CONTRIBUTOR | null | # What does this PR do?
This PR removes the unnecessary padding_idx argument from the learned positional embedding and instead relies on the predetermined offset.
In the event that padding_idx >= 2, the positional embedding at one sequence position is frozen to 0 instead of being learnable.
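For illustration, the intended behaviour is simply a learned table indexed by (offset) position ids, with no row pinned to zero. A simplified standalone sketch, not the exact class in the PR:

```python
import torch
import torch.nn as nn

class LearnedPositionalEmbedding(nn.Module):
    """Learned positions with BART's offset of 2; no padding_idx, so every row is trainable."""

    def __init__(self, num_embeddings: int, embedding_dim: int):
        super().__init__()
        self.offset = 2
        self.embed = nn.Embedding(num_embeddings + self.offset, embedding_dim)

    def forward(self, input_ids_shape: torch.Size, past_key_values_length: int = 0):
        bsz, seq_len = input_ids_shape[:2]
        # Position ids depend only on sequence position, never on the token (or pad) id.
        positions = torch.arange(
            past_key_values_length,
            past_key_values_length + seq_len,
            dtype=torch.long,
            device=self.embed.weight.device,
        )
        return self.embed(positions + self.offset)
```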
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Please help 😄 @patrickvonplaten @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10200/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10200",
"html_url": "https://github.com/huggingface/transformers/pull/10200",
"diff_url": "https://github.com/huggingface/transformers/pull/10200.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10200.patch",
"merged_at": 1614252793000
} |
https://api.github.com/repos/huggingface/transformers/issues/10199 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10199/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10199/comments | https://api.github.com/repos/huggingface/transformers/issues/10199/events | https://github.com/huggingface/transformers/issues/10199 | 808,750,705 | MDU6SXNzdWU4MDg3NTA3MDU= | 10,199 | StopIteration error happened | {
"login": "brightbsit",
"id": 75205503,
"node_id": "MDQ6VXNlcjc1MjA1NTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/75205503?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brightbsit",
"html_url": "https://github.com/brightbsit",
"followers_url": "https://api.github.com/users/brightbsit/followers",
"following_url": "https://api.github.com/users/brightbsit/following{/other_user}",
"gists_url": "https://api.github.com/users/brightbsit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brightbsit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brightbsit/subscriptions",
"organizations_url": "https://api.github.com/users/brightbsit/orgs",
"repos_url": "https://api.github.com/users/brightbsit/repos",
"events_url": "https://api.github.com/users/brightbsit/events{/privacy}",
"received_events_url": "https://api.github.com/users/brightbsit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! It seems that you're on a very old transformers library version. I would recommend you upgrade to a more recent versions, as this particular has been patched several months ago.",
"@LysandreJik Hi! thank you for answering. But I'm using version 4.3.2",
"Are you sure? The error happens on the following line:\r\n\r\n```py\r\n extended_attention_mask = extended_attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility\r\n```\r\n\r\nThis does not exist in recent Transformers versions.",
"It was changed in may: https://github.com/huggingface/transformers/pull/4300",
"@LysandreJik Oh.. sorry. i was confused. I installed and uninstalled because it interupted run the code, saying it can't import pytorch_transformers.\r\n\r\nThe code can run with transformers from \"https://github.com/huggingface/transformers/tree/067923d3267325f525f4e46f357360c191ba562e.\"\r\n\r\nIs there any way that i can run?\r\nthis is the full traceback when i install transformers\r\n```\r\nTraceback (most recent call last):\r\n File \"oscar/run_captioning.py\", line 20, in <module>\r\n from oscar.utils.cbs import ConstraintFilter, ConstraintBoxesReader\r\n File \"/home/u2/바탕화면/Oscar/oscar/utils/cbs.py\", line 13, in <module>\r\n from oscar.modeling.modeling_utils import BeamHypotheses\r\n File \"/home/u2/바탕화면/Oscar/oscar/modeling/modeling_utils.py\", line 8, in <module>\r\n from transformers.pytorch_transformers.modeling_bert import (BertConfig,\r\nModuleNotFoundError: No module named 'transformers.pytorch_transformers'\r\n```\r\n\r\n",
"Oh that's a very old version indeed! Unfrotunately, without knowing what's in your `oscar` folder it's a bit complicated to help you.\r\n\r\nFor example, the following line is erroneous:\r\n\r\n```\r\nfrom transformers.pytorch_transformers.modeling_bert import (BertConfig,\r\n```\r\n\r\nIt should simply be \r\n\r\n```\r\nfrom transformers import BertConfig\r\n```",
"@LysandreJik It imports many .py files from pytorch_transformers/modeling_bert.\r\n\r\nDo you think if i chage below code, I can fix it?\r\n` extended_attention_mask = extended_attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility`\r\n\r\n\r\n\r\n\r\n\r\n",
"This is the try/except we implemented to prevent that error from happening. You can add it to your file:\r\n\r\n```py\r\n try:\r\n dtype = next(self.parameters()).dtype\r\n except StopIteration:\r\n # For nn.DataParallel compatibility in PyTorch 1.5\r\n\r\n def find_tensor_attributes(module: nn.Module) -> List[Tuple[str, Tensor]]:\r\n tuples = [(k, v) for k, v in module.__dict__.items() if torch.is_tensor(v)]\r\n return tuples\r\n\r\n gen = self._named_members(get_members_fn=find_tensor_attributes)\r\n first_tuple = next(gen)\r\n dtype = first_tuple[1].dtype\r\n\r\n extended_attention_mask = extended_attention_mask.to(dtype)\r\n```",
"@LysandreJik I added it on my modeling_bert.py. \r\n\r\nI got this traceback.\r\nI guess it's hard to fix it just editing some code from original file.\r\n\r\nI really appreciate your help!\r\n\r\nif you have any idea to solve it, please share with me. \r\nThank you again.\r\n```\r\nTraceback (most recent call last):\r\n File \"oscar/run_captioning.py\", line 886, in <module>\r\n main()\r\n File \"oscar/run_captioning.py\", line 865, in main\r\n global_step, avg_loss = train(args, train_dataset, val_dataset, model, tokenizer)\r\n File \"oscar/run_captioning.py\", line 436, in train\r\n outputs = model(**inputs)\r\n File \"/home/u2/anaconda3/envs/oscar/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/u2/anaconda3/envs/oscar/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py\", line 161, in forward\r\n outputs = self.parallel_apply(replicas, inputs, kwargs)\r\n File \"/home/u2/anaconda3/envs/oscar/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py\", line 171, in parallel_apply\r\n return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])\r\n File \"/home/u2/anaconda3/envs/oscar/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py\", line 86, in parallel_apply\r\n output.reraise()\r\n File \"/home/u2/anaconda3/envs/oscar/lib/python3.7/site-packages/torch/_utils.py\", line 428, in reraise\r\n raise self.exc_type(msg)\r\nNameError: Caught NameError in replica 0 on device 0.\r\nOriginal Traceback (most recent call last):\r\n File \"/home/u2/바탕화면/Oscar/oscar/modeling/modeling_bert.py\", line 224, in forward\r\n dtype = next(self.parameters()).dtype\r\nStopIteration\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/u2/anaconda3/envs/oscar/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py\", line 61, in _worker\r\n output = module(*input, **kwargs)\r\n File \"/home/u2/anaconda3/envs/oscar/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/u2/바탕화면/Oscar/oscar/modeling/modeling_bert.py\", line 453, in forward\r\n return self.encode_forward(*args, **kwargs)\r\n File \"/home/u2/바탕화면/Oscar/oscar/modeling/modeling_bert.py\", line 461, in encode_forward\r\n encoder_history_states=encoder_history_states)\r\n File \"/home/u2/anaconda3/envs/oscar/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/u2/바탕화면/Oscar/oscar/modeling/modeling_bert.py\", line 227, in forward\r\n def find_tensor_attributes(module: nn.Module) -> List[Tuple[str, Tensor]]:\r\nNameError: name 'List' is not defined\r\n\r\n```\r\n",
"These are the type hints. Just remove them if you don't need them or don't want to import them:\r\n\r\n```py\r\n try:\r\n dtype = next(self.parameters()).dtype\r\n except StopIteration:\r\n # For nn.DataParallel compatibility in PyTorch 1.5\r\n\r\n def find_tensor_attributes(module):\r\n tuples = [(k, v) for k, v in module.__dict__.items() if torch.is_tensor(v)]\r\n return tuples\r\n\r\n gen = self._named_members(get_members_fn=find_tensor_attributes)\r\n first_tuple = next(gen)\r\n dtype = first_tuple[1].dtype\r\n\r\n extended_attention_mask = extended_attention_mask.to(dtype)\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,613 | 1,619 | 1,619 | NONE | null | I'm using cuda11.0 with RTX3090.(using ubuntu 18.04)
I don't know how can I solve this problem.
I saw some people solve this StopIteration error with downgrade torch.
But rtx3090 is only compatible with cuda 11.
Please help!
[pip list]
anytree 2.8.0
apex 0.1
boto3 1.16.63
botocore 1.19.63
certifi 2020.12.5
chardet 4.0.0
cycler 0.10.0
decorator 4.4.2
future 0.18.2
idna 2.10
imageio 2.9.0
jmespath 0.10.0
kiwisolver 1.3.1
matplotlib 3.3.4
mkl-fft 1.2.0
mkl-random 1.1.1
mkl-service 2.3.0
networkx 2.5
numpy 1.20.0
olefile 0.46
oscar 0.1.0 /home/u2/바탕화면/Kim_Project/Oscar
Pillow 8.1.0
pip 20.3.3
pyparsing 2.4.7
python-dateutil 2.8.1
PyWavelets 1.1.1
PyYAML 5.4.1
regex 2020.11.13
requests 2.25.1
s3transfer 0.3.4
scikit-image 0.18.1
scipy 1.6.0
setuptools 52.0.0.post20210125
six 1.15.0
tifffile 2021.1.14
torch 1.7.1+cu110
torchaudio 0.7.2
torchvision 0.8.2+cu110
tqdm 4.56.0
typing-extensions 3.7.4.3
urllib3 1.26.3
wheel 0.36.2
here is the full traceback.
```
2021-02-16 03:37:03,483 vlpretrain WARNING: Device: cuda, n_gpu: 2
2021-02-16 03:37:07,380 vlpretrain INFO: Training/evaluation parameters Namespace(adam_epsilon=1e-08, add_od_labels=True, config_name='', data_dir='', device=device(type='cuda'), do_eval=False, do_lower_case=True, do_test=False, do_train=True, drop_out=0.1, eval_model_dir='', evaluate_during_training=True, gradient_accumulation_steps=1, img_feature_dim=2054, img_feature_type='frcnn', learning_rate=3e-05, length_penalty=1, logging_steps=20, loss_type='sfmx', mask_prob=0.15, max_gen_length=20, max_grad_norm=1.0, max_img_seq_length=50, max_masked_tokens=3, max_seq_a_length=40, max_seq_length=70, max_steps=-1, min_constraints_to_satisfy=2, model_name_or_path='pre_trained/base-vg-labels/ep_67_588997', n_gpu=2, no_cuda=False, num_beams=5, num_keep_best=1, num_labels=2, num_return_sequences=1, num_train_epochs=30, num_workers=4, output_dir='output/', output_hidden_states=False, output_mode='classification', per_gpu_eval_batch_size=128, per_gpu_train_batch_size=64, repetition_penalty=1, save_steps=5000, scheduler='linear', scst=False, seed=88, temperature=1, test_yaml='oscar/coco_caption/test.yaml', tokenizer_name='', top_k=0, top_p=1, train_yaml='oscar/coco_caption/train.yaml', use_cbs=False, val_yaml='oscar/coco_caption/val.yaml', warmup_steps=0, weight_decay=0.05)
/home/u2/바탕화면/Kim_Project/Oscar/oscar/utils/misc.py:33: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
return yaml.load(fp)
2021-02-16 03:37:09,671 vlpretrain INFO: ***** Running training *****
INFO:vlpretrain:***** Running training *****
2021-02-16 03:37:09,672 vlpretrain INFO: Num examples = 566747
INFO:vlpretrain: Num examples = 566747
2021-02-16 03:37:09,672 vlpretrain INFO: Num Epochs = 30
INFO:vlpretrain: Num Epochs = 30
2021-02-16 03:37:09,672 vlpretrain INFO: Batch size per GPU = 64
INFO:vlpretrain: Batch size per GPU = 64
2021-02-16 03:37:09,672 vlpretrain INFO: Total train batch size (w. parallel, & accumulation) = 128
INFO:vlpretrain: Total train batch size (w. parallel, & accumulation) = 128
2021-02-16 03:37:09,672 vlpretrain INFO: Gradient Accumulation steps = 1
INFO:vlpretrain: Gradient Accumulation steps = 1
2021-02-16 03:37:09,672 vlpretrain INFO: Total optimization steps = 132840
INFO:vlpretrain: Total optimization steps = 132840
oscar/run_captioning.py:110: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)
return torch.Tensor(features)
oscar/run_captioning.py:110: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)
return torch.Tensor(features)
oscar/run_captioning.py:110: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)
return torch.Tensor(features)
oscar/run_captioning.py:110: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)
return torch.Tensor(features)
Traceback (most recent call last):
File "oscar/run_captioning.py", line 884, in <module>
main()
File "oscar/run_captioning.py", line 863, in main
global_step, avg_loss = train(args, train_dataset, val_dataset, model, tokenizer)
File "oscar/run_captioning.py", line 434, in train
outputs = model(**inputs)
File "/home/u2/anaconda3/envs/oscar/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/u2/anaconda3/envs/oscar/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 161, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/u2/anaconda3/envs/oscar/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 171, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/u2/anaconda3/envs/oscar/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
output.reraise()
File "/home/u2/anaconda3/envs/oscar/lib/python3.7/site-packages/torch/_utils.py", line 428, in reraise
raise self.exc_type(msg)
StopIteration: Caught StopIteration in replica 0 on device 0.
Original Traceback (most recent call last):
File "/home/u2/anaconda3/envs/oscar/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/home/u2/anaconda3/envs/oscar/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/u2/바탕화면/Kim_Project/Oscar/oscar/modeling/modeling_bert.py", line 440, in forward
return self.encode_forward(*args, **kwargs)
File "/home/u2/바탕화면/Kim_Project/Oscar/oscar/modeling/modeling_bert.py", line 448, in encode_forward
encoder_history_states=encoder_history_states)
File "/home/u2/anaconda3/envs/oscar/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/u2/바탕화면/Kim_Project/Oscar/oscar/modeling/modeling_bert.py", line 223, in forward
extended_attention_mask = extended_attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility
StopIteration
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10199/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10199/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10198 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10198/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10198/comments | https://api.github.com/repos/huggingface/transformers/issues/10198/events | https://github.com/huggingface/transformers/issues/10198 | 808,747,886 | MDU6SXNzdWU4MDg3NDc4ODY= | 10,198 | ONNX Export - cannot resolve operator 'Shape' with opsets: ai.onnx v11 | {
"login": "biro-mark",
"id": 58680214,
"node_id": "MDQ6VXNlcjU4NjgwMjE0",
"avatar_url": "https://avatars.githubusercontent.com/u/58680214?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/biro-mark",
"html_url": "https://github.com/biro-mark",
"followers_url": "https://api.github.com/users/biro-mark/followers",
"following_url": "https://api.github.com/users/biro-mark/following{/other_user}",
"gists_url": "https://api.github.com/users/biro-mark/gists{/gist_id}",
"starred_url": "https://api.github.com/users/biro-mark/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/biro-mark/subscriptions",
"organizations_url": "https://api.github.com/users/biro-mark/orgs",
"repos_url": "https://api.github.com/users/biro-mark/repos",
"events_url": "https://api.github.com/users/biro-mark/events{/privacy}",
"received_events_url": "https://api.github.com/users/biro-mark/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sorry I think I misinterpreted the operators table https://github.com/onnx/onnx/blob/master/docs/Operators.md and `Shape` should have been available since opset 1, getting an update in opset 13. \r\n\r\nThis seems to be an issue with `onnxjs` not implementing the full set of operators in opset 11."
] | 1,613 | 1,613 | 1,613 | NONE | null | ## Environment info
- `transformers` version: 4.3.2
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.8.7
- PyTorch version (GPU?): 1.7.1+cu110 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik
## Information
Model I am using: DistilBertForTokenClassification
The problem arises when using:
Exporting the model to ONNX and trying to load it with onnxjs in NodeJS.
The tasks I am working on is:
Classifying tokens with DistilBertForTokenClassification
## To reproduce
Try to convert DistilBertForTokenClassification to ONNX and load it with onnxjs. I have prepared this minimal repo https://github.com/biro-mark/transformers-onnx-shape-operator-issue
1. python save_model.py
2. python -m transformers.convert_graph_to_onnx --model ./model --framework pt --tokenizer distilbert-base-uncased onnx/out.onnx
3. node index.js
This outputs ```RuntimeError: abort(TypeError: cannot resolve operator 'Shape' with opsets: ai.onnx v11). Build with -s ASSERTIONS=1 for more info.
at process.abort (C:\Users\marki\node_modules\onnxjs\dist\onnx-wasm.js:9:13921)
at process.emit (events.js:314:20)
at processPromiseRejections (internal/process/promises.js:209:33)
at processTicksAndRejections (internal/process/task_queues.js:98:32)```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Expected no error to be thrown. The ONNX export should only use operators that are valid for the chosen opset. The `Shape` operator looks like it was added in opset v13, but I might also be misinterpreting this table: https://github.com/onnx/onnx/blob/master/docs/Operators.md. Adding the `--opset 13` flag to `convert_graph_to_onnx` gives:
```
====== Converting model to ONNX ======
ONNX opset version set to: 13
Loading pipeline (model: ./model, tokenizer: distilbert-base-uncased)
Creating folder C:\Users\marki\eg\onnx
Using framework PyTorch: 1.7.1+cu110
Found input input_ids with shape: {0: 'batch', 1: 'sequence'}
Found input attention_mask with shape: {0: 'batch', 1: 'sequence'}
Found output output_0 with shape: {0: 'batch', 1: 'sequence'}
Ensuring inputs are in correct order
head_mask is not present in the generated input list.
Generated inputs order: ['input_ids', 'attention_mask']
Error while converting the model: Unsupported ONNX opset version: 13
```
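To narrow this down, the exported graph can be sanity-checked outside onnxjs with the Python `onnxruntime` package; if the session loads and runs, the missing-operator error is an onnxjs limitation rather than an export problem. A small sketch (token ids and shapes are arbitrary; input names follow the converter output above):

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("onnx/out.onnx")

dummy = {
    "input_ids": np.array([[101, 7592, 2088, 102]], dtype=np.int64),
    "attention_mask": np.ones((1, 4), dtype=np.int64),
}
logits = session.run(None, dummy)[0]
print(logits.shape)  # expected: (batch, sequence, num_labels)
```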
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10198/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10197 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10197/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10197/comments | https://api.github.com/repos/huggingface/transformers/issues/10197/events | https://github.com/huggingface/transformers/issues/10197 | 808,733,108 | MDU6SXNzdWU4MDg3MzMxMDg= | 10,197 | Fine-tuning Seq2Seq models for Machine translation | {
"login": "MorenoLaQuatra",
"id": 10062811,
"node_id": "MDQ6VXNlcjEwMDYyODEx",
"avatar_url": "https://avatars.githubusercontent.com/u/10062811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MorenoLaQuatra",
"html_url": "https://github.com/MorenoLaQuatra",
"followers_url": "https://api.github.com/users/MorenoLaQuatra/followers",
"following_url": "https://api.github.com/users/MorenoLaQuatra/following{/other_user}",
"gists_url": "https://api.github.com/users/MorenoLaQuatra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MorenoLaQuatra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MorenoLaQuatra/subscriptions",
"organizations_url": "https://api.github.com/users/MorenoLaQuatra/orgs",
"repos_url": "https://api.github.com/users/MorenoLaQuatra/repos",
"events_url": "https://api.github.com/users/MorenoLaQuatra/events{/privacy}",
"received_events_url": "https://api.github.com/users/MorenoLaQuatra/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe @patrickvonplaten or @patil-suraj can chime in here!",
"Hey @MorenoLaQuatra \r\n\r\nThe `run_seq2seq.py` supports translation with custom dataset. And you can use T5, mT5, MarianMT, mBART, mBART-50 for fine-tuning.\r\nhttps://github.com/huggingface/transformers/tree/master/examples/seq2seq\r\n\r\nAnd the best place to ask this question is our forum https://discuss.huggingface.co/\r\n\r\nHope this helps :) ",
"Hi @patil-suraj, thank you for the feedback.\r\n\r\nActually we are trying to train it in a python script (the overall architecture is more complex than the single network). We were asking about the translation loss. If we do as reported in the above snippet, do we optimize the translation loss?\r\nThe finetuning for MarianMT use the same head and loss of the training phase?\r\n\r\nThank you for your time.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,613 | 1,619 | 1,619 | CONTRIBUTOR | null | Good morning,
@micheledaddetta1
We were experimenting with Seq2Seq models such as MarianMT or T5.
I was wondering if there is a common way to fine-tune those models with custom datasets for the machine translation task.
Specifically we did the following:
```python
embeddings = self.batch_encode_plus(sentences, padding=True, verbose=False)
embeddings = embeddings.to(self.model.device)
labels = self.batch_encode_plus(target_sentences, padding=True, verbose=False).input_ids.to(self.model.device) # expected output tokens
outputs = self.model(input_ids=embeddings.input_ids, labels=labels, return_dict=True)  # `labels` holds the target-language token ids
output_sentences = self.model.generate(**embeddings)
output_sentences = self.decode(output_sentences)
# we also compute sentence embeddings for embedding alignment purposes.
return output_sentences, outputs.loss
```
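For reference, a stripped-down version of the same loss computation with MarianMT, using `prepare_seq2seq_batch` to build source inputs and target labels together (the checkpoint and sentences below are placeholders):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"  # placeholder checkpoint
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer.prepare_seq2seq_batch(
    src_texts=["I love machine translation."],
    tgt_texts=["Ich liebe maschinelle Übersetzung."],
    return_tensors="pt",
)
outputs = model(
    input_ids=batch["input_ids"],
    attention_mask=batch["attention_mask"],
    labels=batch["labels"],
    return_dict=True,
)
outputs.loss.backward()  # token-level cross-entropy on the target side
```

As far as I understand, `outputs.loss` here is the standard cross-entropy over the target tokens, i.e. the translation loss.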
Is it correct to directly use `outputs.loss` to optimize the model for the machine translation task? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10197/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10197/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10196 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10196/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10196/comments | https://api.github.com/repos/huggingface/transformers/issues/10196/events | https://github.com/huggingface/transformers/pull/10196 | 808,729,550 | MDExOlB1bGxSZXF1ZXN0NTczNzE5NTIx | 10,196 | [CI] make the examples sub-group of tests run always | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,613 | 1,613 | 1,613 | CONTRIBUTOR | null | the examples tests weren't running if some previous job failed. this fixes it to always run.
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10196/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10196",
"html_url": "https://github.com/huggingface/transformers/pull/10196",
"diff_url": "https://github.com/huggingface/transformers/pull/10196.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10196.patch",
"merged_at": 1613412095000
} |
https://api.github.com/repos/huggingface/transformers/issues/10195 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10195/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10195/comments | https://api.github.com/repos/huggingface/transformers/issues/10195/events | https://github.com/huggingface/transformers/pull/10195 | 808,723,338 | MDExOlB1bGxSZXF1ZXN0NTczNzE0NDc5 | 10,195 | Specify dataset dtype | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,613 | 1,613 | 1,613 | MEMBER | null | There was an issue in `datasets` <1.3.0 where the datasets type wouldn't be kept. Since this bug was patched, we have to specify the correct type for the dataset.
Co-authored-by: Quentin Lhoest <[email protected]>
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10195/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10195",
"html_url": "https://github.com/huggingface/transformers/pull/10195",
"diff_url": "https://github.com/huggingface/transformers/pull/10195.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10195.patch",
"merged_at": 1613411837000
} |
https://api.github.com/repos/huggingface/transformers/issues/10194 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10194/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10194/comments | https://api.github.com/repos/huggingface/transformers/issues/10194/events | https://github.com/huggingface/transformers/issues/10194 | 808,691,866 | MDU6SXNzdWU4MDg2OTE4NjY= | 10,194 | Uploaded a new model but is not found on the hub. | {
"login": "zolekode",
"id": 25635679,
"node_id": "MDQ6VXNlcjI1NjM1Njc5",
"avatar_url": "https://avatars.githubusercontent.com/u/25635679?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zolekode",
"html_url": "https://github.com/zolekode",
"followers_url": "https://api.github.com/users/zolekode/followers",
"following_url": "https://api.github.com/users/zolekode/following{/other_user}",
"gists_url": "https://api.github.com/users/zolekode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zolekode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zolekode/subscriptions",
"organizations_url": "https://api.github.com/users/zolekode/orgs",
"repos_url": "https://api.github.com/users/zolekode/repos",
"events_url": "https://api.github.com/users/zolekode/events{/privacy}",
"received_events_url": "https://api.github.com/users/zolekode/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hi @zolekode ,\r\n\r\nthe folder structure is not quite correct:\r\n\r\nhttps://huggingface.co/flexudy/t5-small-wav2vec2-grammar-fixer/tree/main\r\n\r\nYou just need to move everything from the `t5-small-wav2vec2-grammar-fixer` folder to the root folder. Then it should work :hugs: ",
"Ah I see. awesome. Thanks alot @stefan-it "
] | 1,613 | 1,613 | 1,613 | CONTRIBUTOR | null | # 🌟 New model addition
I recently added this model: https://huggingface.co/flexudy/t5-small-wav2vec2-grammar-fixer
However, I get this error whilst trying to download it.
```
Can't load tokenizer for 'flexudy/t5-small-wav2vec2-grammar-fixer'
```
How can I fix it please?
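For reference, once the files are moved from the subfolder to the repository root (the fix suggested in the comments above), loading should work along these lines. This is only a sketch using the generic Auto classes; the exact head class may differ depending on the model.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "flexudy/t5-small-wav2vec2-grammar-fixer"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # this is the call that currently fails
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
```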
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10194/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10193 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10193/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10193/comments | https://api.github.com/repos/huggingface/transformers/issues/10193/events | https://github.com/huggingface/transformers/issues/10193 | 808,632,319 | MDU6SXNzdWU4MDg2MzIzMTk= | 10,193 | Make use of our copy-consistency script for task-specific models | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
},
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] | closed | false | null | [] | [
"Hi @sgugger I'm up for the mission. After looking though the code I think I have a basic understanding of what you mean. However, would you be able to provide an example on one model just for clarification? I'm a bit confused on the second step\r\n\r\n\r\n>modeling_bert.py\r\n\r\n```\r\n_CONFIG_FOR_DOC = \"BertConfig\"\r\n_TOKENIZER_FOR_DOC = \"BertTokenizer\"\r\n_CHECKPOINT_FOR_DOC = \"bert-base-uncased\"\r\n```\r\n...\r\n\r\n```\r\nclass BertForSequenceClassification(BertPreTrainedModel):\r\n...\r\n...\r\n @add_code_sample_docstrings(\r\n tokenizer_class=_TOKENIZER_FOR_DOC,\r\n checkpoint=_CHECKPOINT_FOR_DOC,\r\n output_type=SequenceClassifierOutput,\r\n config_class=_CONFIG_FOR_DOC,\r\n )\r\n\r\n\r\n```\r\n\r\nThank you",
"I've given an example of the first step in the issue above. Sadly the second step can't fully work with our utils just yet, I need to make some adjustments to our internal tooling. If you want to begin on step 1 though, don't hesitate!"
] | 1,613 | 1,614 | 1,614 | COLLABORATOR | null | This is an intermediate issue, which is why it gets both the good first issue and good second issue tags.
We have an automated script to check that copies of the same code stay consistent inside the library, which allows us to avoid subclassing and keep all the code for one model's forward pass inside one file (see our [philosophy]() for more details on this).
The XxxModelForYyy classes are very similar to one another and should be able to leverage that functionality, so we can easily change only one file when there is a bug or docstring to tweak and have all the others updated automatically. More precisely, models that have a pooler layer could probably base themselves on BERT and models that don't could be based on ELECTRA. The Seq2Seq models, which are a bit particular, could be based on BART.
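Concretely, the end state for one of these heads would look roughly like the sketch below (`Xxx` is a placeholder, not a real model; in the real file the class body is the full copied forward pass):

```python
# Step 1: one module-level constant at the top of modeling_xxx.py, shared by every XxxForYyy head
_CHECKPOINT_FOR_DOC = "xxx-base-checkpoint"  # placeholder checkpoint name

# Step 2: the marker below is what utils/check_copies.py scans for; the class must then stay
# identical to its source (with "Bert" renamed to "Xxx") or the consistency check fails.
# Copied from transformers.models.bert.modeling_bert.BertForSequenceClassification with Bert->Xxx
class XxxForSequenceClassification:
    """Placeholder body: in the real file this is the copied implementation, and the
    `@add_code_sample_docstrings` decorator consumes `_CHECKPOINT_FOR_DOC`."""
```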
To enable this, the checkpoint used in the decorator `@add_code_sample_docstrings` needs to be defined in a constant (otherwise it will end up being copied, which we don't want), so to tackle this issue, your mission, should you accept it, will have two steps:
1. Define in all modeling files a `_CHECKPOINT_FOR_DOC` at the beginning (with `_TOKENIZER_FOR_DOC` and `_CONFIG_FOR_DOC`) that should then be used in all the XxxModelForYyy.
2. Add the relevant `# Copied from xxx with Xxx -> Yyy` comment whenever possible. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10193/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10192 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10192/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10192/comments | https://api.github.com/repos/huggingface/transformers/issues/10192/events | https://github.com/huggingface/transformers/issues/10192 | 808,610,222 | MDU6SXNzdWU4MDg2MTAyMjI= | 10,192 | run_mlm.py not utilizing TPU | {
"login": "DarshanDeshpande",
"id": 39432636,
"node_id": "MDQ6VXNlcjM5NDMyNjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/39432636?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DarshanDeshpande",
"html_url": "https://github.com/DarshanDeshpande",
"followers_url": "https://api.github.com/users/DarshanDeshpande/followers",
"following_url": "https://api.github.com/users/DarshanDeshpande/following{/other_user}",
"gists_url": "https://api.github.com/users/DarshanDeshpande/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DarshanDeshpande/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DarshanDeshpande/subscriptions",
"organizations_url": "https://api.github.com/users/DarshanDeshpande/orgs",
"repos_url": "https://api.github.com/users/DarshanDeshpande/repos",
"events_url": "https://api.github.com/users/DarshanDeshpande/events{/privacy}",
"received_events_url": "https://api.github.com/users/DarshanDeshpande/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"`--pad_to_max_length False` is the reason you have a very slow training: this creates batches of different sequence lengths but TPUs need fixed shapes to be efficient.\r\n\r\nThere was a bug in our argument parser before that ignored bool setting like this, so it may be the reason you are seeing that slow down now instead of before (but it was applying `pad_to_max_length=True` before because of that bug, even if you said the opposite). If you remove that option, you should see a faster training.",
"Perfect! Thank you so much! Closing this issue"
] | 1,613 | 1,613 | 1,613 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.2 and Latest version forked from github
- Platform: Linux (Colab env)
- Python version: 3.6
- PyTorch version (GPU?): XLA 1.7
- Tensorflow version (GPU?): None
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Colab TPU with xla_spawn.py
### Who can help
@sgugger
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): DistilBert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
!python /content/transformers/examples/xla_spawn.py --num_cores 8 /content/transformers/examples/language-modeling/run_mlm.py \
--model_type distilbert \
--config_name /content/TokenizerFiles \
--tokenizer_name /content/TokenizerFiles \
--train_file Files/file_aa.txt \
--mlm_probability 0.15 \
--output_dir "/content/TrainingCheckpoints" \
--do_train --per_device_train_batch_size 32 \
--save_steps 500 --disable_tqdm False \
--line_by_line True --max_seq_length 150 \
--pad_to_max_length False \
--cache_dir /content/cache_dir \
--save_total_limit 2
```
My tokenizer and config files are both just {model_type: "distilbert"} and are present in the TokenizerFiles folder along with my vocab.txt
The output I get is
```
WARNING:root:TPU has started up successfully with version pytorch-1.7
2021-02-15 14:40:37.816883: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
WARNING:root:TPU has started up successfully with version pytorch-1.7
2021-02-15 14:40:57.239070: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
2021-02-15 14:40:57.283838: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
2021-02-15 14:40:57.446951: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
2021-02-15 14:40:57.470266: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
2021-02-15 14:40:57.473336: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
2021-02-15 14:40:57.686903: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
2021-02-15 14:40:57.863940: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
2021-02-15 14:40:58.555214: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
WARNING:run_mlm:Process rank: -1, device: xla:1, n_gpu: 0distributed training: False, 16-bits training: False
INFO:run_mlm:Training/evaluation parameters TrainingArguments(output_dir=/content/TrainingCheckpoints, overwrite_output_dir=False, do_train=True, do_eval=None, do_predict=False, evaluation_strategy=EvaluationStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=32, per_device_eval_batch_size=8, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_steps=0, logging_dir=runs/Feb15_14-41-21_34a4105ebd5a, logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=2, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, local_rank=-1, tpu_num_cores=8, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=/content/TrainingCheckpoints, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=False, deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, report_to=['tensorboard'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, _n_gpu=0)
Using custom data configuration default
Downloading and preparing dataset text/default-e939092a7eff14a8 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/text/default-e939092a7eff14a8/0.0.0/daf90a707a433ac193b369c8cc1772139bb6cca21a9c7fe83bdd16aad9b9b6ab...
02/15/2021 14:41:22 - WARNING - run_mlm - Process rank: -1, device: xla:0, n_gpu: 0distributed training: False, 16-bits training: False
Dataset text downloaded and prepared to /root/.cache/huggingface/datasets/text/default-e939092a7eff14a8/0.0.0/daf90a707a433ac193b369c8cc1772139bb6cca21a9c7fe83bdd16aad9b9b6ab. Subsequent calls will reuse this data.
[INFO|configuration_utils.py:447] 2021-02-15 14:41:22,465 >> loading configuration file /content/TokenizerFiles/config.json
[INFO|configuration_utils.py:485] 2021-02-15 14:41:22,466 >> Model config DistilBertConfig {
"activation": "gelu",
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"initializer_range": 0.02,
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"transformers_version": "4.3.2",
"vocab_size": 30522
}
[INFO|configuration_utils.py:447] 2021-02-15 14:41:22,467 >> loading configuration file /content/TokenizerFiles/config.json
[INFO|configuration_utils.py:485] 2021-02-15 14:41:22,476 >> Model config DistilBertConfig {
"activation": "gelu",
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"initializer_range": 0.02,
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"transformers_version": "4.3.2",
"vocab_size": 30522
}
[INFO|tokenization_utils_base.py:1688] 2021-02-15 14:41:22,476 >> Model name '/content/TokenizerFiles' not found in model shortcut name list (distilbert-base-uncased, distilbert-base-uncased-distilled-squad, distilbert-base-cased, distilbert-base-cased-distilled-squad, distilbert-base-german-cased, distilbert-base-multilingual-cased). Assuming '/content/TokenizerFiles' is a path, a model identifier, or url to a directory containing tokenizer files.
[INFO|tokenization_utils_base.py:1721] 2021-02-15 14:41:22,477 >> Didn't find file /content/TokenizerFiles/tokenizer.json. We won't load it.
[INFO|tokenization_utils_base.py:1721] 2021-02-15 14:41:22,478 >> Didn't find file /content/TokenizerFiles/added_tokens.json. We won't load it.
[INFO|tokenization_utils_base.py:1721] 2021-02-15 14:41:22,478 >> Didn't find file /content/special_tokens_map.json. We won't load it.
[INFO|tokenization_utils_base.py:1784] 2021-02-15 14:41:22,479 >> loading file /content/TokenizerFiles/vocab.txt
[INFO|tokenization_utils_base.py:1784] 2021-02-15 14:41:22,479 >> loading file None
[INFO|tokenization_utils_base.py:1784] 2021-02-15 14:41:22,480 >> loading file None
[INFO|tokenization_utils_base.py:1784] 2021-02-15 14:41:22,480 >> loading file None
[INFO|tokenization_utils_base.py:1784] 2021-02-15 14:41:22,480 >> loading file /content/TokenizerFiles/tokenizer_config.json
INFO:run_mlm:Training new model from scratch
Using custom data configuration default
Reusing dataset text (/root/.cache/huggingface/datasets/text/default-e939092a7eff14a8/0.0.0/daf90a707a433ac193b369c8cc1772139bb6cca21a9c7fe83bdd16aad9b9b6ab)
02/15/2021 14:41:22 - WARNING - run_mlm - Process rank: -1, device: xla:0, n_gpu: 0distributed training: False, 16-bits training: False
Using custom data configuration default
Reusing dataset text (/root/.cache/huggingface/datasets/text/default-e939092a7eff14a8/0.0.0/daf90a707a433ac193b369c8cc1772139bb6cca21a9c7fe83bdd16aad9b9b6ab)
02/15/2021 14:41:23 - WARNING - run_mlm - Process rank: -1, device: xla:0, n_gpu: 0distributed training: False, 16-bits training: False
Using custom data configuration default
Reusing dataset text (/root/.cache/huggingface/datasets/text/default-e939092a7eff14a8/0.0.0/daf90a707a433ac193b369c8cc1772139bb6cca21a9c7fe83bdd16aad9b9b6ab)
02/15/2021 14:41:23 - WARNING - run_mlm - Process rank: -1, device: xla:0, n_gpu: 0distributed training: False, 16-bits training: False
02/15/2021 14:41:23 - WARNING - run_mlm - Process rank: -1, device: xla:0, n_gpu: 0distributed training: False, 16-bits training: False
02/15/2021 14:41:23 - WARNING - run_mlm - Process rank: -1, device: xla:0, n_gpu: 0distributed training: False, 16-bits training: False
Using custom data configuration default
Reusing dataset text (/root/.cache/huggingface/datasets/text/default-e939092a7eff14a8/0.0.0/daf90a707a433ac193b369c8cc1772139bb6cca21a9c7fe83bdd16aad9b9b6ab)
Using custom data configuration default
Reusing dataset text (/root/.cache/huggingface/datasets/text/default-e939092a7eff14a8/0.0.0/daf90a707a433ac193b369c8cc1772139bb6cca21a9c7fe83bdd16aad9b9b6ab)
Using custom data configuration default
Reusing dataset text (/root/.cache/huggingface/datasets/text/default-e939092a7eff14a8/0.0.0/daf90a707a433ac193b369c8cc1772139bb6cca21a9c7fe83bdd16aad9b9b6ab)
02/15/2021 14:41:24 - WARNING - run_mlm - Process rank: -1, device: xla:0, n_gpu: 0distributed training: False, 16-bits training: False
Using custom data configuration default
Reusing dataset text (/root/.cache/huggingface/datasets/text/default-e939092a7eff14a8/0.0.0/daf90a707a433ac193b369c8cc1772139bb6cca21a9c7fe83bdd16aad9b9b6ab)
100% 2/2 [00:01<00:00, 1.72ba/s]
100% 2/2 [00:01<00:00, 1.65ba/s]
Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-e939092a7eff14a8/0.0.0/daf90a707a433ac193b369c8cc1772139bb6cca21a9c7fe83bdd16aad9b9b6ab/cache-0028d6bfc2eb6117.arrow
Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-e939092a7eff14a8/0.0.0/daf90a707a433ac193b369c8cc1772139bb6cca21a9c7fe83bdd16aad9b9b6ab/cache-0028d6bfc2eb6117.arrow
Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-e939092a7eff14a8/0.0.0/daf90a707a433ac193b369c8cc1772139bb6cca21a9c7fe83bdd16aad9b9b6ab/cache-0028d6bfc2eb6117.arrow
Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-e939092a7eff14a8/0.0.0/daf90a707a433ac193b369c8cc1772139bb6cca21a9c7fe83bdd16aad9b9b6ab/cache-0028d6bfc2eb6117.arrow
Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-e939092a7eff14a8/0.0.0/daf90a707a433ac193b369c8cc1772139bb6cca21a9c7fe83bdd16aad9b9b6ab/cache-0028d6bfc2eb6117.arrow
Loading cached processed dataset at /root/.cache/huggingface/datasets/text/default-e939092a7eff14a8/0.0.0/daf90a707a433ac193b369c8cc1772139bb6cca21a9c7fe83bdd16aad9b9b6ab/cache-0028d6bfc2eb6117.arrow
[INFO|trainer.py:432] 2021-02-15 14:41:59,875 >> The following columns in the training set don't have a corresponding argument in `DistilBertForMaskedLM.forward` and have been ignored: special_tokens_mask.
[INFO|trainer.py:837] 2021-02-15 14:41:59,879 >> ***** Running training *****
[INFO|trainer.py:838] 2021-02-15 14:41:59,879 >> Num examples = 2000
[INFO|trainer.py:839] 2021-02-15 14:41:59,879 >> Num Epochs = 3
[INFO|trainer.py:840] 2021-02-15 14:41:59,879 >> Instantaneous batch size per device = 32
[INFO|trainer.py:841] 2021-02-15 14:41:59,879 >> Total train batch size (w. parallel, distributed & accumulation) = 256
[INFO|trainer.py:842] 2021-02-15 14:41:59,879 >> Gradient Accumulation steps = 1
[INFO|trainer.py:843] 2021-02-15 14:41:59,879 >> Total optimization steps = 24
17% 4/24 [03:56<17:13, 51.67s/it] # <------------------- HERE ------------------------>
Traceback (most recent call last):
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/popen_fork.py", line 28, in poll
pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt
```
The file used here is only for testing and has a total of 2000 lines of text. It almost seems like the training is taking place on the CPU instead of the TPU.
The installation of xla was done using
```!pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.7-cp36-cp36m-linux_x86_64.whl```
I ran the same script a couple of days back and it worked fine so I don't know what is wrong now. At that time I had saved the tokenizer using ```.save()``` but due to some recent changes in the library, that doesn't work anymore. So I saved it using ```save_model()``` and it works fine now. Can this issue be because of that?
## Expected behavior
The training should be faster. The last time I ran run_mlm.py, I got almost 3 iterations per second | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10192/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10191 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10191/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10191/comments | https://api.github.com/repos/huggingface/transformers/issues/10191/events | https://github.com/huggingface/transformers/pull/10191 | 808,595,707 | MDExOlB1bGxSZXF1ZXN0NTczNjEwMzE0 | 10,191 | Making TF BART-like models XLA and AMP compliant | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I succeed to fix Marian and Pegasus, and my first guess was the good one. I basically reworked a bit how the embedding was created, and now it works in XLA_GPU. Of course, all the corresponding slow tests are passing, and the weights are properly loaded."
] | 1,613 | 1,613 | 1,613 | CONTRIBUTOR | null | # What does this PR do?
This PR makes the TF BART-like models compliant with AMP and XLA. The main issue for XLA was all the asserts; XLA is not compliant with them (see the [TF doc](https://www.tensorflow.org/xla/known_issues)), so I had to disable them whenever the model is run in any mode other than eager.
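For context, the kind of guard this boils down to looks roughly like the sketch below (this is not the actual diff of the PR, and the function name is made up); the assert only runs when the code executes eagerly, so XLA-compiled graphs never contain the assert ops:

```python
import tensorflow as tf


def assert_ids_within_vocab(input_ids: tf.Tensor, vocab_size: int) -> None:
    # Skip the check outside eager mode: XLA cannot compile these assert ops.
    if tf.executing_eagerly():
        tf.debugging.assert_less(
            input_ids,
            tf.cast(vocab_size, dtype=input_ids.dtype),
            message="input_ids must be strictly smaller than the embedding vocabulary size",
        )
```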
TF Marian and Pegasus still have their XLA tests locked because they are not working on XLA_GPU. I need to investigate more in order to better understand why. My first guess is that it is because of the `TFXSinusoidalPositionalEmbedding` class. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10191/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10191",
"html_url": "https://github.com/huggingface/transformers/pull/10191",
"diff_url": "https://github.com/huggingface/transformers/pull/10191.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10191.patch",
"merged_at": 1613580536000
} |
https://api.github.com/repos/huggingface/transformers/issues/10190 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10190/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10190/comments | https://api.github.com/repos/huggingface/transformers/issues/10190/events | https://github.com/huggingface/transformers/issues/10190 | 808,555,015 | MDU6SXNzdWU4MDg1NTUwMTU= | 10,190 | 0% GPU usage when using `hyperparameter_search` | {
"login": "neel04",
"id": 11617870,
"node_id": "MDQ6VXNlcjExNjE3ODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/11617870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neel04",
"html_url": "https://github.com/neel04",
"followers_url": "https://api.github.com/users/neel04/followers",
"following_url": "https://api.github.com/users/neel04/following{/other_user}",
"gists_url": "https://api.github.com/users/neel04/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neel04/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neel04/subscriptions",
"organizations_url": "https://api.github.com/users/neel04/orgs",
"repos_url": "https://api.github.com/users/neel04/repos",
"events_url": "https://api.github.com/users/neel04/events{/privacy}",
"received_events_url": "https://api.github.com/users/neel04/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I tried decorating a function that contains the tranier command like this:-\r\n```\r\n\r\[email protected](num_cpus=3, num_gpus=1, accelerator_type=ray.accelerators.NVIDIA_TESLA_V100)\r\ndef search():\r\n trainer.hyperparameter_search(n_trials=100, compute_objective='accuracy', direction=\"maximize\", backend='ray',\r\n scheduler=pbt)\r\n\r\nsearch.remote()\r\n```\r\nbut I am constantly getting:\r\n`AttributeError: module 'ray' has no attribute 'accelerators'`\r\nwhich I think is there because I may have written it the wrong way. can anyone shed any light on this?\r\n",
"Hi @neel04 can you add this arg to `trainer.hyperparameter_search`:\r\n\r\n```\r\nresources_per_trial={\"cpu\": 1, \"gpu\": 1}\r\n```\r\n\r\nThis will let Tune know to reserve 1 CPU and 1 GPU for each trial.\r\n\r\nAlso, after instantiating your training_args, but before passing it into the `Trainer` can you also add this: `training_args._n_gpu = 1`.\r\n\r\nHere is a more up to date example if you want to try it out and see if it works for you: https://github.com/amogkam/ray/blob/hf-pbt/python/ray/tune/examples/pbt_transformers/pbt_transformers.py\r\n",
"@amogkam Bless you sir! I am now getting mostly 100% GPU usage (but only around 6-7GB VRAM usage out of available 16Gb).\r\nHowever, each of my trial is failing with this error:-\r\n```\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/trial_runner.py\", line 586, in _process_trial\r\n results = self.trial_executor.fetch_result(trial)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/ray_trial_executor.py\", line 609, in fetch_result\r\n result = ray.get(trial_future[0], timeout=DEFAULT_GET_TIMEOUT)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/_private/client_mode_hook.py\", line 47, in wrapper\r\n return func(*args, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/worker.py\", line 1456, in get\r\n raise value.as_instanceof_cause()\r\nray.exceptions.RayTaskError(TuneError): ray::ImplicitFunc.train_buffered() (pid=415, ip=172.28.0.2)\r\n File \"python/ray/_raylet.pyx\", line 480, in ray._raylet.execute_task\r\n File \"python/ray/_raylet.pyx\", line 432, in ray._raylet.execute_task.function_executor\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/trainable.py\", line 167, in train_buffered\r\n result = self.train()\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/trainable.py\", line 226, in train\r\n result = self.step()\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 366, in step\r\n self._report_thread_runner_error(block=True)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 513, in _report_thread_runner_error\r\n (\"Trial raised an exception. Traceback:\\n{}\".format(err_tb_str)\r\nray.tune.error.TuneError: Trial raised an exception. Traceback:\r\nray::ImplicitFunc.train_buffered() (pid=415, ip=172.28.0.2)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 248, in run\r\n self._entrypoint()\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 316, in entrypoint\r\n self._status_reporter.get_checkpoint())\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 576, in _trainable_func\r\n output = fn()\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 651, in _inner\r\n inner(config, checkpoint_dir=None)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 645, in inner\r\n fn(config, **fn_kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/integrations.py\", line 160, in _objective\r\n local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 983, in train\r\n self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 1059, in _maybe_log_save_evaluate\r\n self._report_to_hp_search(trial, epoch, metrics)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 640, in _report_to_hp_search\r\n self.objective = self.compute_objective(metrics.copy())\r\nTypeError: 'str' object is not callable\r\n```\r\nIs it something related to `ray/tune` or is this a wrong argument on my part?\r\nBTW would you also happen to know how to set a fixed batch size for all trials? for some reason, it is overriding the `batch_size` provided in the Trainer_arguments and trying out it's own random ones",
"Ah this is because you are passing in `'accuracy'` as the `compute_objective` to `hyperparameter_search`. The `compute_objective` should actually be a function that computes the objective to minimize or maximize from the metrics returned by `evaluate`. You can also not pass one in, and it will default to `trainer_utils.default_compute_objective`. ",
"@amogkam MIssed that :( thanx a lot for taking the time out to help me!! :+1: :100: :1st_place_medal: \r\nThanx to amogkam's comment, the issuse described here has been resolved, so I am closing this. but it is still giving the error:-\r\n```\r\n\r\n2021-02-15 18:58:56,343\tERROR worker.py:1053 -- Possible unhandled error from worker: ray::ImplicitFunc.train_buffered() (pid=2319, ip=172.28.0.2)\r\n File \"python/ray/_raylet.pyx\", line 480, in ray._raylet.execute_task\r\n File \"python/ray/_raylet.pyx\", line 432, in ray._raylet.execute_task.function_executor\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/trainable.py\", line 167, in train_buffered\r\n result = self.train()\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/trainable.py\", line 226, in train\r\n result = self.step()\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 366, in step\r\n self._report_thread_runner_error(block=True)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 513, in _report_thread_runner_error\r\n (\"Trial raised an exception. Traceback:\\n{}\".format(err_tb_str)\r\nray.tune.error.TuneError: Trial raised an exception. Traceback:\r\nray::ImplicitFunc.train_buffered() (pid=2319, ip=172.28.0.2)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 248, in run\r\n self._entrypoint()\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 316, in entrypoint\r\n self._status_reporter.get_checkpoint())\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 576, in _trainable_func\r\n output = fn()\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 651, in _inner\r\n inner(config, checkpoint_dir=None)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 645, in inner\r\n fn(config, **fn_kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/integrations.py\", line 160, in _objective\r\n local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 983, in train\r\n self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 1059, in _maybe_log_save_evaluate\r\n self._report_to_hp_search(trial, epoch, metrics)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 640, in _report_to_hp_search\r\n self.objective = self.compute_objective(metrics.copy())\r\n File \"<ipython-input-20-4880cfa49052>\", line 5, in compute_metrics\r\nAttributeError: 'dict' object has no attribute 'label_ids'\r\n```\r\nIt's not clear to me why there is a whole string of errors on code copy/pasted from training models (which executes successfully). but there are 2 issues:-\r\n\r\n1. Firstly, the `train_bs` is randomly selected and almost always causes OOM's - arguments do not override this behavior\r\n2. The above \"dict\" error happens, if OOM does not get it.\r\n\r\ncan anyone also explain why the method in the official docs doesn't work even though it is the same task with the same libraries? Has the `ray` framework changed and the docs do not reflect it? Honestly, it is becoming difficult exactly where to post these errors - ray OR HuggingFace. They require a person intimately familiar with both.",
"@neel04 are you running this example- https://github.com/amogkam/ray/blob/hf-pbt/python/ray/tune/examples/pbt_transformers/pbt_transformers.py? This is the most up to date one and should work with transformers v4.",
"I think the example is a bit verbose and some of it goes over my head :) So I am having difficulty in identifying what steps I have configured wrong and what exactly needs to be corrected. Understanding things like passing the `tune_configs` are easy to get, but the errors are much more difficult to track",
"OK got it. The stack trace you just posted is coming from the `compute_metrics` that is passed into your `Trainer`. What does that look like?",
"`compute_metrics` is standard code from the official example:-\r\n```\r\nfrom sklearn.metrics import accuracy_score, precision_recall_fscore_support\r\n\r\ndef compute_metrics(pred):\r\n labels = pred.label_ids\r\n preds = pred.predictions.argmax(-1)\r\n precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='weighted') #none gives score for each class\r\n acc = accuracy_score(labels, preds)\r\n return {\r\n 'accuracy': acc,\r\n 'f1': f1,\r\n 'precision': precision,\r\n 'recall': recall\r\n }\r\n```\r\nIt worked perfectly when training the model without any tuning, so I doubt the true error originates from here.\r\nAlso, would you mind telling me how you tracked the problem to `compute_metrics` since I couldn't sniff any clues to there in the error?",
"The stack trace says `line 5, in compute_metrics` so I thought it was coming from there. Do you mind sharing your full code to reproduce this?",
"Sure. Sorry if the code is a bit verbose.\r\n```\r\n\r\n%%capture\r\n!pip install ray[tune]\r\n!pip install ray\r\n!pip install -q transformers\r\n\r\nfrom sklearn.metrics import accuracy_score, precision_recall_fscore_support\r\nfrom transformers import RobertaForSequenceClassification, Trainer, TrainingArguments\r\n\r\ndef compute_metrics(pred):\r\n labels = pred.label_ids\r\n preds = pred.predictions.argmax(-1)\r\n precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='weighted') #none gives score for each class\r\n acc = accuracy_score(labels, preds)\r\n return {\r\n 'accuracy': acc,\r\n 'f1': f1,\r\n 'precision': precision,\r\n 'recall': recall\r\n }\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir='./results', # output directory\r\n overwrite_output_dir = True,\r\n num_train_epochs=20, # total number of training epochs\r\n per_device_train_batch_size=8, # batch size per device during training\r\n per_device_eval_batch_size=8, # batch size for evaluation,\r\n warmup_steps=500, # number of warmup steps for learning rate scheduler\r\n weight_decay=0.01, # strength of weight decay\r\n logging_dir='./logs', # directory for storing logs\r\n logging_steps=10,\r\n evaluation_strategy='steps',\r\n learning_rate=2e-5,\r\n fp16 = True,\r\n load_best_model_at_end = True,\r\n metric_for_best_model = 'accuracy',\r\n greater_is_better = True,\r\n seed = 101,\r\n do_eval = True,\r\n do_train = True,\r\n adam_beta1=0.9,\r\n adam_beta2=0.999,\r\n adam_epsilon=1e-8,\r\n max_grad_norm=1.0,\r\n adafactor = False\r\n\r\n)\r\n\r\ntraining_args._n_gpu = 1\r\n\r\ndef model_init():\r\n return RobertaForSequenceClassification.from_pretrained('/content/drive/MyDrive/checkpoint-2700', num_labels=20)\r\ntrainer = Trainer(\r\n model_init=model_init,\r\n args=training_args,\r\n train_dataset=train_dataset, # Indexing\r\n eval_dataset=val_dataset,\r\n tokenizer=tokenizer,\r\n compute_metrics=compute_metrics)\r\n\r\nfrom ray.tune.suggest.hyperopt import HyperOptSearch\r\nfrom ray.tune.schedulers import PopulationBasedTraining\r\nfrom ray.tune import CLIReporter\r\nfrom ray import tune\r\nimport random\r\n\r\npbt = PopulationBasedTraining(\r\n time_attr=\"training_iteration\",\r\n metric=\"accuracy\",\r\n mode=\"max\",\r\n perturbation_interval=10, # every 10 `time_attr` units\r\n # (training_iterations in this case)\r\n hyperparam_mutations={\r\n\r\n \"weight_decay\": tune.uniform(1, 0.0001),\r\n \"seed\": tune.uniform(1,20000),\r\n \"learning_rate\": tune.choice([1e-5, 2e-5, 3e-5, 4e-5, 5e-5, 6e-5, 2e-7, 1e-7, 3e-7, 2e-8]),\r\n \"adafactor\": tune.choice(['True','False']),\r\n \"adam_beta1\": tune.uniform(1.0, 0.0),\r\n \"adam_beta2\": tune.uniform(1.0, 0),\r\n \"adam_epsilon\": tune.choice([1e-8, 2e-8, 3e-8, 1e-9, 2e-9, 3e-10]),\r\n \"max_grad_norm\": tune.uniform(1.0, 0),\r\n\r\n })\r\n\r\n\r\nreporter = CLIReporter(\r\n parameter_columns={\r\n \"weight_decay\": \"w_decay\",\r\n \"learning_rate\": \"lr\",\r\n \"per_device_train_batch_size\": \"train_bs/gpu\",\r\n \"num_train_epochs\": \"num_epochs\"},\r\n\r\n metric_columns=[\"eval_acc\", \"eval_loss\", \"epoch\", \"training_iteration\"])\r\n\r\ntune_config = {\r\n \"per_device_train_batch_size\": 8,\r\n \"per_device_eval_batch_size\": 16,\r\n \"num_train_epochs\": tune.choice([15,20,25])\r\n }\r\n \r\nbest = trainer.hyperparameter_search(hp_space = lambda _: tune_config, \r\n n_trials=100, compute_objective=compute_metrics, direction=\"maximize\", backend='ray', #search_alg=HyperOptSearch(metric='accuracy', mode='max', 
use_early_stopped_trials=True)\r\n scheduler=pbt, resources_per_trial={\"cpu\": 2, \"gpu\": 1}, keep_checkpoints_num=1,\r\n name = \"tune_transformer_pbt\", progress_reporter=reporter)\r\n```",
"Ah @neel04, the error message is happening because `compute_metrics` is being passed as the `compute_objective` arg in `trainer.hyperparameter_search`. If you remove this arg your code runs fine.\r\n\r\n`compute_objective` should be a function that takes in the output of `evaluate` (which is the dictionary returned `compute_metrics`) as an input and returns a single float value (see the docstring). It is not the same as `compute_metrics`. So here you should just be returning the \"accuracy\" value from the input dictionary. Something like this should work I believe:\r\n```\r\ndef compute_objective(metrics):\r\n return metrics[\"accuracy\"]\r\n```",
"So I tried that above, but apparently `evaluate` does not return \"accuracy\", so as a workaround I switched to `eval_accuracy`.\r\nBut this creates a new problem; this error comes in the first trial **but** it doesn't go on to the next trial. Could be that it is training? GPU usage seems to be 0, so I doubt it is training but it is not terminating the process or moving on. Strange.\r\n```\r\n\r\n2021-02-17 10:57:12,244\tERROR worker.py:1053 -- Possible unhandled error from worker: ray::ImplicitFunc.train_buffered() (pid=1340, ip=172.28.0.2)\r\n File \"python/ray/_raylet.pyx\", line 480, in ray._raylet.execute_task\r\n File \"python/ray/_raylet.pyx\", line 432, in ray._raylet.execute_task.function_executor\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/trainable.py\", line 167, in train_buffered\r\n result = self.train()\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/trainable.py\", line 226, in train\r\n result = self.step()\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 366, in step\r\n self._report_thread_runner_error(block=True)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 513, in _report_thread_runner_error\r\n (\"Trial raised an exception. Traceback:\\n{}\".format(err_tb_str)\r\nray.tune.error.TuneError: Trial raised an exception. Traceback:\r\nray::ImplicitFunc.train_buffered() (pid=1340, ip=172.28.0.2)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 248, in run\r\n self._entrypoint()\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 316, in entrypoint\r\n self._status_reporter.get_checkpoint())\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 576, in _trainable_func\r\n output = fn()\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 651, in _inner\r\n inner(config, checkpoint_dir=None)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/function_runner.py\", line 645, in inner\r\n fn(config, **fn_kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/integrations.py\", line 160, in _objective\r\n local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 925, in train\r\n for step, inputs in enumerate(epoch_iterator):\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py\", line 435, in __next__\r\n data = self._next_data()\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py\", line 475, in _next_data\r\n data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py\", line 44, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py\", line 44, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"<ipython-input-12-cd510628f360>\", line 10, in __getitem__\r\nTypeError: new(): invalid data type 'str'\r\n```\r\n\r\nIt looks like it Is pointing to 'objective', which is the same function you wrote above:-\r\n\r\n```\r\ndef compute_objective(metrics):\r\n return metrics[\"eval_accuracy\"] #does not return accuracy\r\n```\r\nInterestingly, removing the args `compute_objective`and `direction` does not yield anything, so I figured the problem must be 
elsewhere. \r\n\r\nPutting `eval_accuracy` in the PBT parameters and making the `compute_objective` solves the issue.\r\n\r\nThanks a lot @amogkam for your support!! we need more people like you :+1: :rocket: :partying_face: \r\n"
] | 1,613 | 1,613 | 1,613 | NONE | null | ## Environment info
- `transformers` version: 4.4.0.dev0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No (Single GPU) --> Colab
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
Models:
- ray/raytune: @richardliaw, @amogkam
- trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...): RoBERTa
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
This is a continuation of #10055, where the underlying code is the same, and it is more or less the same as the official example. The problem is that when I start `hyperparameter_search`, it just keeps running with 0% GPU usage (memory is occupied) and the CPU also remains relatively idle:
```
== Status ==
Memory usage on this node: 5.9/25.5 GiB
PopulationBasedTraining: 0 checkpoints, 0 perturbs
Resources requested: 1/4 CPUs, 1/1 GPUs, 0.0/14.99 GiB heap, 0.0/5.18 GiB objects (0/1.0 accelerator_type:P100)
Result logdir: /root/ray_results/_inner_2021-02-15_11-45-33
Number of trials: 1/100 (1 RUNNING)
+--------------------+----------+-------+-------------+--------------+--------------+----------------+-----------------+-----------------+--------------------+-------------------------------+---------+----------------+
| Trial name | status | loc | adafactor | adam_beta1 | adam_beta2 | adam_epsilon | learning_rate | max_grad_norm | num_train_epochs | per_device_train_batch_size | seed | weight_decay |
|--------------------+----------+-------+-------------+--------------+--------------+----------------+-----------------+-----------------+--------------------+-------------------------------+---------+----------------|
| _inner_4fd43_00000 | RUNNING | | True | 0.862131 | 0.813033 | 1e-09 | 2.34754e-05 | 0.0056821 | 2 | 16 | 21.1968 | 0.95152 |
+--------------------+----------+-------+-------------+--------------+--------------+----------------+-----------------+-----------------+--------------------+-------------------------------+---------+----------------+
```
Sometimes, there are also warnings that the single worker is pending due to lack of resources; however, my CPU usage is minimal, plenty of RAM is free (~24 GB), and the GPU also has about a gig of free memory.
```
2021-02-15 13:56:53,761 WARNING worker.py:1107 -- The actor or task with ID ffffffffffffffff44ed5e1383be630817647ecd01000000 cannot be scheduled right now. It requires {CPU: 1.000000}, {GPU: 1.000000} for placement, but this node only has remaining {3.000000/4.000000 CPU, 14.990234 GiB/14.990234 GiB memory, 0.000000/1.000000 GPU, 1.000000/1.000000 node:172.28.0.2, 5.126953 GiB/5.126953 GiB object_store_memory, 1.000000/1.000000 accelerator_type:V100}
. In total there are 0 pending tasks and 1 pending actors on this node. This is likely due to all cluster resources being claimed by actors. To resolve the issue, consider creating fewer actors or increase the resources available to this Ray cluster. You can ignore this message if this Ray cluster is expected to auto-scale.
```
This is what the tuner looks like:
```
from ray.tune.suggest.hyperopt import HyperOptSearch
from ray.tune.schedulers import PopulationBasedTraining
from ray import tune
import random
pbt = PopulationBasedTraining(
time_attr="training_iteration",
metric="accuracy",
mode="max",
perturbation_interval=10, # every 10 `time_attr` units
# (training_iterations in this case)
hyperparam_mutations={
"weight_decay": tune.uniform(1, 0.0001),
"seed": tune.uniform(1,20000),
"learning_rate": tune.choice([1e-5, 2e-5, 3e-5, 4e-5, 5e-5, 6e-5, 2e-7, 1e-7, 3e-7, 2e-8]),
"adafactor": tune.choice(['True','False']),
"adam_beta1": tune.uniform(1.0, 0.0),
"adam_beta2": tune.uniform(1.0, 0),
"adam_epsilon": tune.choice([1e-8, 2e-8, 3e-8, 1e-9, 2e-9, 3e-10]),
"max_grad_norm": tune.uniform(1.0, 0),
})
best_run = trainer.hyperparameter_search(n_trials=100, compute_objective='accuracy', direction="maximize", backend='ray',
scheduler=pbt)
```
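For reference, here is roughly what the call ends up looking like once the fixes discussed in the comments above are applied. This is only a sketch: it reuses `trainer`, `tune_config` and `pbt` from the snippets above, and the `eval_accuracy` key and the exact resource numbers are assumptions based on that discussion.

```python
def compute_objective(metrics):
    # `metrics` is the dict returned by Trainer.evaluate(); its keys are prefixed with "eval_"
    return metrics["eval_accuracy"]

best_run = trainer.hyperparameter_search(
    hp_space=lambda _: tune_config,
    n_trials=10,
    direction="maximize",
    backend="ray",
    scheduler=pbt,
    compute_objective=compute_objective,       # a callable, not a metric name
    resources_per_trial={"cpu": 2, "gpu": 1},  # reserve the GPU for each trial so it is not left idle
)
```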
Using `HyperOptSearch` causes OOMs | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10190/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10189 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10189/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10189/comments | https://api.github.com/repos/huggingface/transformers/issues/10189/events | https://github.com/huggingface/transformers/pull/10189 | 808,539,545 | MDExOlB1bGxSZXF1ZXN0NTczNTYzNzc1 | 10,189 | Fix TF template | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,613 | 1,613 | 1,613 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes the TF template for the tests by adding the missing onnx boolean. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10189/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10189",
"html_url": "https://github.com/huggingface/transformers/pull/10189",
"diff_url": "https://github.com/huggingface/transformers/pull/10189.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10189.patch",
"merged_at": 1613398917000
} |
https://api.github.com/repos/huggingface/transformers/issues/10188 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10188/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10188/comments | https://api.github.com/repos/huggingface/transformers/issues/10188/events | https://github.com/huggingface/transformers/issues/10188 | 808,496,626 | MDU6SXNzdWU4MDg0OTY2MjY= | 10,188 | Failing Multi-GPU trainer test | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I have tried on my machine and the test passes, so the bug is linked to the setup of the machine executing the multi-GPU tests. I have never seen that error before but I would guess there is something wrong with nccl/cuda?",
"Retrieved the backtrace:\r\n\r\nThe error is: `RuntimeError: Address already in use`\r\n\r\nWhich means that more than one of these was running at the same time or one from a previous run is zombie and is holding this port. Probably need to try to catch that case and try a different port. I will have a look\r\n\r\nFull trace:\r\n```\r\ntests/test_trainer_distributed.py:72: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\ncmd = ['/home/github_actions/actions-runner/_work/transformers/transformers/.env/bin/python', '-m', 'torch.distributed.launc.../github_actions/actions-runner/_work/transformers/transformers/tests/test_trainer_distributed.py', '--output_dir', ...]\r\nenv = {'CI': 'true', 'GITHUB_ACTION': 'run7', 'GITHUB_ACTIONS': 'true', 'GITHUB_ACTION_REF': '', ...}\r\nstdin = None, timeout = 180, quiet = False, echo = True\r\n\r\n def execute_subprocess_async(cmd, env=None, stdin=None, timeout=180, quiet=False, echo=True) -> _RunOutput:\r\n \r\n loop = asyncio.get_event_loop()\r\n result = loop.run_until_complete(\r\n _stream_subprocess(cmd, env=env, stdin=stdin, timeout=timeout, quiet=quiet, echo=echo)\r\n )\r\n \r\n cmd_str = \" \".join(cmd)\r\n if result.returncode > 0:\r\n stderr = \"\\n\".join(result.stderr)\r\n raise RuntimeError(\r\n> f\"'{cmd_str}' failed with returncode {result.returncode}\\n\\n\"\r\n f\"The combined stderr from workers follows:\\n{stderr}\"\r\n )\r\nE RuntimeError: '/home/github_actions/actions-runner/_work/transformers/transformers/.env/bin/python -m torch.distributed.launch --nproc_per_node=2 /home/github_actions/actions-runner/_work/transformers/transformers/tests/test_trainer_distributed.py --output_dir /tmp/tmpsdmi_ca2' failed with returncode 1\r\nE \r\nE The combined stderr from workers follows:\r\nE Traceback (most recent call last):\r\nE File \"/home/github_actions/actions-runner/_work/transformers/transformers/tests/test_trainer_distributed.py\", line 82, in <module>\r\nE training_args = parser.parse_args_into_dataclasses()[0]\r\nE File \"/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/hf_argparser.py\", line 180, in parse_args_into_dataclasses\r\nE obj = dtype(**inputs)\r\nE File \"<string>\", line 59, in __init__\r\nE File \"/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/training_args.py\", line 479, in __post_init__\r\nE if is_torch_available() and self.device.type != \"cuda\" and self.fp16:\r\nE File \"/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/file_utils.py\", line 1346, in wrapper\r\nE return func(*args, **kwargs)\r\nE File \"/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/training_args.py\", line 601, in device\r\nE return self._setup_devices\r\nE File \"/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/file_utils.py\", line 1336, in __get__\r\nE cached = self.fget(obj)\r\nE File \"/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/file_utils.py\", line 1346, in wrapper\r\nE return func(*args, **kwargs)\r\nE File \"/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/training_args.py\", line 586, in _setup_devices\r\nE torch.distributed.init_process_group(backend=\"nccl\")\r\nE File \"/home/github_actions/actions-runner/_work/transformers/transformers/.env/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py\", line 436, in 
init_process_group\r\nE store, rank, world_size = next(rendezvous_iterator)\r\nE File \"/home/github_actions/actions-runner/_work/transformers/transformers/.env/lib/python3.7/site-packages/torch/distributed/rendezvous.py\", line 179, in _env_rendezvous_handler\r\nE store = TCPStore(master_addr, master_port, world_size, start_daemon, timeout)\r\nE RuntimeError: Address already in use\r\nE Traceback (most recent call last):\r\nE File \"/usr/lib/python3.7/runpy.py\", line 193, in _run_module_as_main\r\nE \"__main__\", mod_spec)\r\nE File \"/usr/lib/python3.7/runpy.py\", line 85, in _run_code\r\nE exec(code, run_globals)\r\nE File \"/home/github_actions/actions-runner/_work/transformers/transformers/.env/lib/python3.7/site-packages/torch/distributed/launch.py\", line 260, in <module>\r\nE main()\r\nE File \"/home/github_actions/actions-runner/_work/transformers/transformers/.env/lib/python3.7/site-packages/torch/distributed/launch.py\", line 256, in main\r\nE cmd=cmd)\r\nE subprocess.CalledProcessError: Command '['/home/github_actions/actions-runner/_work/transformers/transformers/.env/bin/python', '-u', '/home/github_actions/actions-runner/_work/transformers/transformers/tests/test_trainer_distributed.py', '--local_rank=1', '--output_dir', '/tmp/tmpsdmi_ca2']' returned non-zero exit status 1.\r\nE Traceback (most recent call last):\r\nE File \"/home/github_actions/actions-runner/_work/transformers/transformers/tests/test_trainer_distributed.py\", line 82, in <module>\r\nE training_args = parser.parse_args_into_dataclasses()[0]\r\nE File \"/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/hf_argparser.py\", line 180, in parse_args_into_dataclasses\r\nE obj = dtype(**inputs)\r\nE File \"<string>\", line 59, in __init__\r\nE File \"/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/training_args.py\", line 479, in __post_init__\r\nE if is_torch_available() and self.device.type != \"cuda\" and self.fp16:\r\nE File \"/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/file_utils.py\", line 1346, in wrapper\r\nE return func(*args, **kwargs)\r\nE File \"/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/training_args.py\", line 601, in device\r\nE return self._setup_devices\r\nE File \"/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/file_utils.py\", line 1336, in __get__\r\nE cached = self.fget(obj)\r\nE File \"/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/file_utils.py\", line 1346, in wrapper\r\nE return func(*args, **kwargs)\r\nE File \"/home/github_actions/actions-runner/_work/transformers/transformers/src/transformers/training_args.py\", line 586, in _setup_devices\r\nE torch.distributed.init_process_group(backend=\"nccl\")\r\nE File \"/home/github_actions/actions-runner/_work/transformers/transformers/.env/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py\", line 455, in init_process_group\r\nE barrier()\r\nE File \"/home/github_actions/actions-runner/_work/transformers/transformers/.env/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py\", line 1960, in barrier\r\nE work = _default_pg.barrier()\r\nE RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:784, unhandled system error, NCCL version 2.7.8\r\n\r\nsrc/transformers/testing_utils.py:1062: RuntimeError\r\n\r\n```",
"We can close this one, right, @LysandreJik?\r\n\r\nIt was happening because some zombies from previous jobs, which will now kill before the job starts. i.e. this test wasn't at fault.",
"Yes, we can close! Thanks."
] | 1,613 | 1,615 | 1,615 | MEMBER | null | This test is currently [failing in a multi-GPU setup](https://github.com/huggingface/transformers/runs/1902689616?check_suite_focus=true):
```
FAILED tests/test_trainer_distributed.py::TestTrainerDistributed::test_trainer
```
The error is the following:
```
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:784, unhandled system error, NCCL version 2.7.8
```
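The worker trace in the comments above also shows `Address already in use`, i.e. a leftover worker from a previous run still holding the rendezvous port. A hedged way to check on the worker machine, assuming the `torch.distributed.launch` default master port:
```bash
# Check whether a stale worker still holds the default torch.distributed master port
# (29500 unless MASTER_PORT / --master_port was changed).
lsof -i :29500
# List any leftover launcher processes from previous runs.
pgrep -af "torch.distributed.launch"
```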
cc @sgugger @stas00 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10188/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10187 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10187/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10187/comments | https://api.github.com/repos/huggingface/transformers/issues/10187/events | https://github.com/huggingface/transformers/pull/10187 | 808,455,983 | MDExOlB1bGxSZXF1ZXN0NTczNDk0ODE2 | 10,187 | Add new model to labels that should not stale | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,613 | 1,613 | 1,613 | MEMBER | null | The `New model` label gets added to the labels that should not become stale. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10187/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10187",
"html_url": "https://github.com/huggingface/transformers/pull/10187",
"diff_url": "https://github.com/huggingface/transformers/pull/10187.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10187.patch",
"merged_at": 1613388689000
} |
https://api.github.com/repos/huggingface/transformers/issues/10186 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10186/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10186/comments | https://api.github.com/repos/huggingface/transformers/issues/10186/events | https://github.com/huggingface/transformers/issues/10186 | 808,450,576 | MDU6SXNzdWU4MDg0NTA1NzY= | 10,186 | Support for DeBERTa V2 models | {
"login": "saichandrapandraju",
"id": 41769919,
"node_id": "MDQ6VXNlcjQxNzY5OTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/41769919?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saichandrapandraju",
"html_url": "https://github.com/saichandrapandraju",
"followers_url": "https://api.github.com/users/saichandrapandraju/followers",
"following_url": "https://api.github.com/users/saichandrapandraju/following{/other_user}",
"gists_url": "https://api.github.com/users/saichandrapandraju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saichandrapandraju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saichandrapandraju/subscriptions",
"organizations_url": "https://api.github.com/users/saichandrapandraju/orgs",
"repos_url": "https://api.github.com/users/saichandrapandraju/repos",
"events_url": "https://api.github.com/users/saichandrapandraju/events{/privacy}",
"received_events_url": "https://api.github.com/users/saichandrapandraju/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, no workaround, we're working on the implementation now (https://github.com/huggingface/transformers/pull/10018). It should be available in a few days.",
"Thanks @LysandreJik ",
"@saichandrapandraju the PR was merged, so I think this issue can be closed now?",
"ok @yaysummeriscoming ,\r\n\r\nMay I know when will be the stable release( I think 4.4.0) for these merges?",
"Yes, this issue can be closed! v4.4.0 should be released in the next two weeks.",
"Has this been fixed? I'm downloading the model direct from huggingface [here](https://huggingface.co/microsoft/deberta-v2-xlarge) and i still get this error thrown",
"Are you using the `DebertaV2Model` or the `DebertaModel` ?",
"@LysandreJik I am using the simpletransformers library, i'm not sure if you're familiar with it but i believe by default it uses the DebertaModel, not sure how and if i can change it to DebertaV2Model"
] | 1,613 | 1,619 | 1,614 | NONE | null | Hi,
I downloaded [DeBERTa V2-XLarge](https://github.com/microsoft/DeBERTa) from [here](https://huggingface.co/microsoft/deberta-v2-xlarge) and am trying to load the V2-XLarge model, but I'm getting this error:
**RuntimeError: Error(s) in loading state_dict for DebertaForSequenceClassification:
size mismatch for deberta.encoder.rel_embeddings.weight: copying a param with shape torch.Size([512, 1536]) from checkpoint, the shape in current model is torch.Size([1024, 1536]).**
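For context, a hedged reconstruction of the failing load behind the error above (the local checkpoint path is a placeholder; the exact call depends on the training script in use):
```python
from transformers import DebertaForSequenceClassification

# Loading the downloaded V2-XLarge checkpoint into the v1 Deberta classes is what
# produces the rel_embeddings size mismatch reported above.
model = DebertaForSequenceClassification.from_pretrained("path/to/deberta-v2-xlarge")
```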
I saw that the vocab changed for V2 models. If that is the reason for the above issue, is there any workaround to use V2 models with HF? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10186/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10185 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10185/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10185/comments | https://api.github.com/repos/huggingface/transformers/issues/10185/events | https://github.com/huggingface/transformers/issues/10185 | 808,423,710 | MDU6SXNzdWU4MDg0MjM3MTA= | 10,185 | Saving HF wrapped in Keras | {
"login": "saboof",
"id": 59536094,
"node_id": "MDQ6VXNlcjU5NTM2MDk0",
"avatar_url": "https://avatars.githubusercontent.com/u/59536094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saboof",
"html_url": "https://github.com/saboof",
"followers_url": "https://api.github.com/users/saboof/followers",
"following_url": "https://api.github.com/users/saboof/following{/other_user}",
"gists_url": "https://api.github.com/users/saboof/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saboof/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saboof/subscriptions",
"organizations_url": "https://api.github.com/users/saboof/orgs",
"repos_url": "https://api.github.com/users/saboof/repos",
"events_url": "https://api.github.com/users/saboof/events{/privacy}",
"received_events_url": "https://api.github.com/users/saboof/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hello @saboof!\r\n\r\nFirst of all, which version of Transformers and TF are you using? And can you share with us a Colab from which we can reproduce the issue. Thanks!!",
"version of transformers: transformers==4.2.2\r\nversion of TF2: tensorflow==2.3.1\r\n\r\nafter editing, the code is attached to the main issue ",
"Ok, Thanks for sharing! I now see better what you are trying to do. The results you get are different mostly because how the models are implemented don't support what you are trying to do and then some unexpected behavior might occur.\r\n\r\nUsually, a model should not belong to another model, and using a tokenizer inside a model is currently not recommended because not stable. Sorry, for your use case you will have to use `TFBertModel` and tokenize/pad your documents outside the model.\r\n\r\nIt is in our plans to give the possibility to integrate the tokenization process directly inside the model, and then be also part of a SavedModel, but we don't have an ETA yet. Sorry :(",
"hi @jplu \r\n\r\nThanks for the quick reply :), I don't think your comment answers my question.\r\nEven when I take the tokenizer outside the bert class model, we still get the same error: \r\n\r\nAttributeError: Must set `config_class` to use @keras_serializable",
"Sorry I should have been much clearrer. More specifically, you cannot use `TFBertMainLayer` publicly. You have to directly use `TFBertModel`.\r\n\r\nAlso, in case you don't know, TF doesn't recommend to use a model inside another model because some unstable behaviour might occur.",
"Thanks,\r\n\r\nBut I don't use `TFBertMainLayer` at all, the opposite, I use `TFBertModel`. \r\n\r\nRegarding your second point I'll try to take the BERT model outside the general model and only pass it as an argument. \r\nGenerally - how would you go about creating additional linear layer on top of BERT CLS embedding? \r\n\r\n\r\n",
"> Regarding your second point I'll try to take the BERT model outside the general model and only pass it as an argument.\r\n\r\nThis might bring unexpected behaviour as well.\r\n\r\n> Generally - how would you go about creating additional linear layer on top of BERT CLS embedding?\r\n\r\nI suggest you to take a look at how the `TFBertForSequenceClassification` is built :)",
"Great, I'll take a look at this, thanks @jplu ",
"Looking at the code of `TFBertForSequenceClassification` - \r\nhttps://github.com/huggingface/transformers/blob/d1b14c9b548de34b6606946482946008622967db/src/transformers/models/bert/modeling_bert.py#L1496 \r\nIt has a linear layer and the BERT model within the same model, (although written in Pytorch). This doesn't give me insights as to how to combine these two models/layers in the same Keras model so we would be able to save them both in the same command. \r\n\r\nThe only option that I see now is to completely adopt the `TFBertForSequenceClassification` as my full model. Unfortunately, This doesn't give the dynamic architecture generation which I would like to achieve. ",
"What do you mean by dynamic architecture? Can you detail a bit more about what you would like to do?",
"For example, having two types of outputs based on the same CLS embedding. \r\n\r\nThe first type would be based on a single linear layer and the second type would have two layers with some activation function between the layers.\r\n\r\nsketch of the model.\r\n\r\nInput -> BERT CLS embedded vector -> output based on a single layer -> another output based on a second layer\r\n\r\nMore generally, we would like to get the CLS embedded vector and to add on top of that whatever other layers. \r\nThese layers should be part of the Keras model so they all will be saved as part of the model.\r\n\r\nI hope this is somehow clearer now. \r\n",
"Ok this is much clearer now, thanks!!\r\n\r\nThe best way to do that would be to build your model as we do for ours, it means something like:\r\n\r\n```python\r\nclass MyModel(TFBertPreTrainedModel):\r\n def __init__(self, config, *inputs, **kwargs):\r\n super().__init__(config, *inputs, **kwargs)\r\n\r\n self.bert = TFBertMainLayer(config, name=\"bert\")\r\n self.my_first_layer = .....\r\n self.my_second_layer = ....\r\n\r\n def call(inputs, training=None):\r\n bert_output = bert(inputs)\r\n # below all the processes that you have to implement for your models\r\n ....\r\n```\r\n\r\nOnce you have your model you can instantiate it like our models with ```model = MyModel.from_pretrained(\"model_path\")```",
"Hi @jplu \r\n\r\nAgain, thanks for the help. \r\n\r\nSorry, I think we are back in square one. \r\nMy initial issue was how to save a Keras model which includes a HF model within it?\r\n\r\nAttached is an example of a code that generates a Keras model which incorporates a BERT model. \r\n[example.txt](https://github.com/huggingface/transformers/files/6015722/example.txt)\r\n\r\nHow would you go about saving this model in such a way that I'd be able to load the model with the additional layer on top of it? can I save the tokenizer in the same place? \r\n\r\n\r\n\r\n\r\n\r\n",
"> Sorry, I think we are back in square one. My initial issue was how to save a Keras model which includes a HF model within it?\r\n\r\nOnce again, you should not integrate a model inside another model, What I proposed you to do is exactly what you would like to do because `TFBertMainLayer == TFBertModel`.\r\n\r\n> How would you go about saving this model in such a way that I'd be able to load the model with the additional layer on top of it? can I save the tokenizer in the same place?\r\n\r\nI would go to what I proposed :)",
"Thanks @jplu \r\n\r\nFor the best of my understanding, your proposed code doesn't instantiate a Keres model. right?\r\nIf I get your suggestions correctly what you say is that I can't wrap HF model inside a Keras model since integrating a model inside another model, is discouraged by Keras. \r\n\r\n",
"Yes! I proposed you to use such a way to do that is 100% equivalent to what you would like.",
"Thanks! ",
"Keras discouraging nested models is new information for me. \r\nAnd this is weird as in their Keras official examples they do exactly that.\r\nThey have a nested HF model inside a Keras model.\r\n\r\nhttps://keras.io/examples/nlp/text_extraction_with_bert/#preprocess-the-data\r\nPlease look under the function create_model(). \r\n\r\nDo you have any pointers in which Keras discouraging nested models is mentioned? ",
"What you point out is a totally different way to what you are doing. In the link you proposed, the model is built in a functional manner which is totally different of building a model in a subclassing manner.\r\n\r\nPlease use the TFBertMainLayer, this is the exact same thing.",
"Sorry, I can't find the difference between the Keras example and my previous example under the create_model function.\r\nAttached is the function I used in my example file.\r\n\r\n```\r\ndef create_model(bert_variant, max_len):\r\n ## BERT encoder\r\n encoder = TFBertModel.from_pretrained(bert_variant)\r\n\r\n ## findings classifer Model\r\n input_word_ids = tf.keras.layers.Input(shape=(max_len,), dtype=tf.int32,\r\n name=\"input_word_ids\")\r\n embedding = encoder([input_word_ids]).last_hidden_state\r\n cls_layer = embedding[:,0,:]\r\n\r\n logits = layers.Dense(4, name=\"prediction\", use_bias=True)(cls_layer)\r\n logits = layers.Reshape((1, 4))(logits)\r\n\r\n pred_probs = layers.Activation(tf.keras.activations.softmax)(logits)\r\n\r\n model = tf.keras.Model(\r\n inputs=[input_word_ids],\r\n outputs=[pred_probs],\r\n )\r\n\r\n loss = tf.keras.losses.CategoricalCrossentropy(from_logits=False)\r\n optimizer = tf.keras.optimizers.Adam(lr=5e-5)\r\n model.compile(optimizer=optimizer, loss=loss)\r\n return model\r\n\r\n```\r\n \r\n",
"Sorry, this way of creating a model is fine, I was referencing to your first example script called `compile_model.txt`.\r\n\r\nNow to properly save the model you can do something like this:\r\n```python\r\ndef create_model(bert_variant, max_len):\r\n ## BERT encoder\r\n encoder = TFBertModel.from_pretrained(bert_variant).bert # ====> look at this change here :)\r\n\r\n ## findings classifer Model\r\n input_word_ids = tf.keras.layers.Input(shape=(max_len,), dtype=tf.int32,\r\n name=\"input_word_ids\")\r\n embedding = encoder([input_word_ids]).last_hidden_state\r\n cls_layer = embedding[:,0,:]\r\n\r\n logits = layers.Dense(4, name=\"prediction\", use_bias=True)(cls_layer)\r\n logits = layers.Reshape((1, 4))(logits)\r\n\r\n pred_probs = layers.Activation(tf.keras.activations.softmax)(logits)\r\n\r\n model = tf.keras.Model(\r\n inputs=[input_word_ids],\r\n outputs=[pred_probs],\r\n )\r\n\r\n loss = tf.keras.losses.CategoricalCrossentropy(from_logits=False)\r\n optimizer = tf.keras.optimizers.Adam(lr=5e-5)\r\n model.compile(optimizer=optimizer, loss=loss)\r\n return model\r\n\r\nmodel = create_model(\"bert-base-cased\", 128)\r\nmodel.save(\"model_path\")\r\n\r\nloaded_model = tf.keras.models.load_model(\"model_path\")\r\n```",
"Or a much nicer version IMO:\r\n```python\r\ndef create_model(bert_variant, max_len):\r\n config = BertConfig.from_pretrained(bert_variant)\r\n input_word_ids = tf.keras.layers.Input(shape=(max_len,), dtype=tf.int32,\r\n name=\"input_word_ids\")\r\n embedding = TFBertMainLayer(config)(input_word_ids)\r\n cls_layer = embedding.last_hidden_state[:,0,:]\r\n logits = tf.keras.layers.Dense(4, name=\"prediction\", use_bias=True)(cls_layer)\r\n logits = tf.keras.layers.Reshape((1, 4))(logits)\r\n pred_probs = tf.keras.layers.Activation(tf.keras.activations.softmax)(logits)\r\n model = tf.keras.Model(inputs=[input_word_ids], outputs=[pred_probs])\r\n loss = tf.keras.losses.CategoricalCrossentropy(from_logits=False)\r\n optimizer = tf.keras.optimizers.Adam(lr=5e-5)\r\n model.compile(optimizer=optimizer, loss=loss)\r\n\r\n return model\r\n\r\nmodel = create_model(\"bert-base-cased\", 128)\r\nmodel.save(\"model_path\") \r\n```\r\n\r\nOnce saved you can load and use it like this:\r\n```python\r\nmodel = tf.keras.models.load_model(\"model_path\")\r\nl = [1]*128 \r\ninp = tf.constant([l])\r\nmodel(inp)\r\n```",
"Thanks, @jplu ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,613 | 1,619 | 1,619 | NONE | null | Hi,
I'm trying to save a Keras model which has an HF model and a linear layer (Dense layer) on top of it.
To save a model, Keras requires that every layer have a serialize_layer_fn implemented.
However, it seems that HF models don't include this function.
After spending some time understanding and googling this issue, I came across the @keras_serializable decorator,
which I assume is supposed to allow an HF layer to be serialized.
However, when I wrap my class with this, I get this error:
AttributeError: Must set `config_class` to use @keras_serializable
This happens even though the attached code, which reproduces the issue, does have a config_class member.
To test this behaviour, I wrote code that creates a model based on BERT or some BioNLP-BERT variant.
The code then trains the model briefly.
After the short training period, we try to save the model, which includes the BERT model, the BERT tokenizer (using save_pretrained) and the linear layer.
For me, the best way to save this model would be the to_json function, which converts the Keras model into a serialized form. However, I couldn't get this working. Any idea how this can be done? Or is there any objection to saving an HF model using the "to_json" method?
A possible workaround would be to use the save_pretrained method for the BERT model and tokenizer, but then how would I also save the linear layer?
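For reference, a stripped-down sketch of the kind of wrapping described above (illustrative only — the real code is in the attached compile_model.txt, and the checkpoint name is a placeholder):
```python
import tensorflow as tf
from transformers import TFBertModel

max_len = 128
encoder = TFBertModel.from_pretrained("bert-base-uncased")  # placeholder BERT variant

input_ids = tf.keras.layers.Input(shape=(max_len,), dtype=tf.int32, name="input_word_ids")
cls_embedding = encoder(input_ids).last_hidden_state[:, 0, :]
logits = tf.keras.layers.Dense(4, name="prediction")(cls_embedding)
model = tf.keras.Model(inputs=input_ids, outputs=logits)

# Saving the HF weights alone works, but it does not capture the Dense head:
encoder.save_pretrained("bert_dir")

# Serializing the full wrapper is the step that could not be made to work, as described above:
json_config = model.to_json()
```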
This relates to issue https://github.com/huggingface/transformers/issues/2733.
[compile_model.txt](https://github.com/huggingface/transformers/files/5981406/compile_model.txt)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10185/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10185/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10184 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10184/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10184/comments | https://api.github.com/repos/huggingface/transformers/issues/10184/events | https://github.com/huggingface/transformers/pull/10184 | 808,403,145 | MDExOlB1bGxSZXF1ZXN0NTczNDUyMjQ0 | 10,184 | Fixing NER pipeline for list inputs. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,613 | 1,613 | 1,613 | CONTRIBUTOR | null | # What does this PR do?
- Changes the TokenArgumentHandler signature from (*args) to (inputs) to follow the `__call__` signature.
- Fixes the bug.
- Backward compatible for single sentences.
- Not backward compatible for multiple sentences, but that path only "worked" for sentences of equal token length (the result was bogus as it contained only the first sentence); a usage sketch for list inputs follows below.
- This makes NER *not* pass any batching to the model, which is not in line with other pipelines; however, this is what was done beforehand. Not all pipelines support batching anyway (and batching is often counterproductive because the user cannot control the number of tokens from raw strings).
- All slow tests now pass; the argparser test was updated.
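A minimal usage sketch of the list-input behaviour this PR targets (the sentences are placeholders and the pipeline's default NER model is assumed):
```python
from transformers import pipeline

ner = pipeline("ner")  # default English NER model

# Single sentence: unchanged behaviour, a flat list of entity dicts.
print(ner("Hugging Face is based in New York City."))

# List of sentences with different token lengths: the case fixed here,
# one result is returned per input sentence instead of only the first one.
print(ner([
    "Hugging Face is based in New York City.",
    "Sylvain works at Hugging Face.",
]))
```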
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #10168
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10184/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10184",
"html_url": "https://github.com/huggingface/transformers/pull/10184",
"diff_url": "https://github.com/huggingface/transformers/pull/10184.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10184.patch",
"merged_at": 1613388165000
} |
https://api.github.com/repos/huggingface/transformers/issues/10183 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10183/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10183/comments | https://api.github.com/repos/huggingface/transformers/issues/10183/events | https://github.com/huggingface/transformers/pull/10183 | 808,355,105 | MDExOlB1bGxSZXF1ZXN0NTczNDEzMDY1 | 10,183 | BigBird | {
"login": "thevasudevgupta",
"id": 53136577,
"node_id": "MDQ6VXNlcjUzMTM2NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thevasudevgupta",
"html_url": "https://github.com/thevasudevgupta",
"followers_url": "https://api.github.com/users/thevasudevgupta/followers",
"following_url": "https://api.github.com/users/thevasudevgupta/following{/other_user}",
"gists_url": "https://api.github.com/users/thevasudevgupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thevasudevgupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thevasudevgupta/subscriptions",
"organizations_url": "https://api.github.com/users/thevasudevgupta/orgs",
"repos_url": "https://api.github.com/users/thevasudevgupta/repos",
"events_url": "https://api.github.com/users/thevasudevgupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/thevasudevgupta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Will BigBird-Pegasus be added, and then `BigBirdForConditionalGeneration` so that summarization will be possible?",
"Yes, we will be adding that soon.\r\n\r\n> Will BigBird-Pegasus be added, and then `BigBirdForConditionalGeneration` so that summarization will be possible?\r\n\r\n",
"Once pre-trained checkpoints are uploaded to `huggingface_hub`, model & tokenizer can be accessed this way:\r\n\r\n```python\r\nfrom transformers import BigBirdForMaskedLM, BigBirdForPreTraining, BigBirdTokenizer\r\n\r\ntokenizer = BigBirdTokenizer.from_pretrained(\"google/bigbird-roberta-base\")\r\n\r\n# model with LM head\r\nmodel_with_lm = BigBirdForMaskedLM.from_pretrained(\"google/bigbird-roberta-base\")\r\n\r\n# model with pertaining heads\r\nmodel_for_pretraining = BigBirdForPreTraining.from_pretrained(\"google/bigbird-roberta-base\")\r\n```",
"```python\r\nfrom transformers import BigBirdConfig\r\n\r\n# config for bigbird base\r\nconfig = BigBirdConfig(hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072)\r\n# or simply\r\nconfig = BigBirdConfig()\r\n\r\n# config for bigbird trivia ckpts (both ITC & ETC)\r\nconfig = BigBirdConfig(type_vocab_size=16)\r\n\r\n# config for bigbird large\r\nconfig = BigBirdConfig(hidden_size=1024, num_hidden_layers=24, num_attention_heads=16, intermediate_size=4096)\r\n```\r\n\r\nRunning this script will enable checkpoints conversion:\r\n\r\n```shell\r\npython src/transformers/models/big_bird/convert_bigbird_original_tf_checkpoint_to_pytorch.py --tf_checkpoint_path ./tf_checkpoint/ckpt/model.ckpt-0 --big_bird_config_file ./tf_checkpoint/config.json --pytorch_dump_path ./hf_ckpt\r\n```",
"I will fix everything up & add tests for auto padding.",
"Failing tests are unrelated to this PR.",
"@sgugger, @LysandreJik I updated the code based on your suggestions. Please let me know if I have missed something.",
"Thank you for taking care of the comments @vasudevgupta7 and for this PR altogether!",
"@vasudevgupta7 great work, when are you planning to add the BigBirdForConditionalGeneration? And any plans on adding the pubmed pre-trained models?",
"@sayakmisra I am currently working on it. You can track PR #10991.",
"@vasudevgupta7 currently loading `vasudevgupta/bigbird-pegasus-large-bigpatent` into `BigBirdForConditionalGeneration` leads to some weights of the checkpoint not being used for initializing the model. Is there a workaround for this?\r\n\r\nCan we have separate pretrained checkpoints for BigBird and Pegasus without the finetuning, so that we can use the Pegasus decoder along with the BigBird encoder in our code?",
"Hey @jigsaw2212, \r\n\r\nwe are still working on integrating `BigBirdPegasus` -> for now only the `google/bigbird-...` are fully supported. `BigBirdPegasus` will be merged in 1,2 weeks "
] | 1,613 | 1,619 | 1,617 | CONTRIBUTOR | null | # What does this PR do?
This PR will add Google's BigBird "Roberta".
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #6113.
This PR adds three checkpoints of BigBird:
- [bigbird-roberta-base](https://huggingface.co/google/bigbird-roberta-base)
- [bigbird-roberta-large](https://huggingface.co/google/bigbird-roberta-large)
- [bigbird-base-trivia-itc](https://huggingface.co/google/bigbird-base-trivia-itc)
Here is a notebook showing how well BigBird works on long-document question answering: https://colab.research.google.com/drive/1DVOm1VHjW0eKCayFq1N2GpY6GR9M4tJP?usp=sharing
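A minimal loading sketch for the released checkpoints, mirroring the snippet in the comments above (illustrative only; the long input below is a placeholder):
```python
from transformers import BigBirdTokenizer, BigBirdForMaskedLM

tokenizer = BigBirdTokenizer.from_pretrained("google/bigbird-roberta-base")
model = BigBirdForMaskedLM.from_pretrained("google/bigbird-roberta-base")

# BigBird is aimed at long inputs (up to 4096 tokens with block sparse attention).
long_text = "BigBird scales attention to long documents. " * 200
inputs = tokenizer(long_text, return_tensors="pt", truncation=True, max_length=4096)
outputs = model(**inputs)
```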
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
@patrickvonplaten
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10183/reactions",
"total_count": 22,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 10,
"rocket": 12,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10183/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10183",
"html_url": "https://github.com/huggingface/transformers/pull/10183",
"diff_url": "https://github.com/huggingface/transformers/pull/10183.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10183.patch",
"merged_at": 1617083494000
} |
https://api.github.com/repos/huggingface/transformers/issues/10182 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10182/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10182/comments | https://api.github.com/repos/huggingface/transformers/issues/10182/events | https://github.com/huggingface/transformers/issues/10182 | 808,234,454 | MDU6SXNzdWU4MDgyMzQ0NTQ= | 10,182 | `super()` does not have `prepare_seq2seq_batch()` in `transformers/models/rag/tokenization_rag.py` | {
"login": "moyapchen",
"id": 72097364,
"node_id": "MDQ6VXNlcjcyMDk3MzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/72097364?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moyapchen",
"html_url": "https://github.com/moyapchen",
"followers_url": "https://api.github.com/users/moyapchen/followers",
"following_url": "https://api.github.com/users/moyapchen/following{/other_user}",
"gists_url": "https://api.github.com/users/moyapchen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moyapchen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moyapchen/subscriptions",
"organizations_url": "https://api.github.com/users/moyapchen/orgs",
"repos_url": "https://api.github.com/users/moyapchen/repos",
"events_url": "https://api.github.com/users/moyapchen/events{/privacy}",
"received_events_url": "https://api.github.com/users/moyapchen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi ! Thanks for reporting\r\n#10167 should fix this issue",
"Convenient to see that the fix was already in the pipeline. Thanks!"
] | 1,613 | 1,613 | 1,613 | NONE | null | ## Environment info
- `transformers` version: 4.3.2
- Platform: Linux-5.4.0-52-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.6.12
- PyTorch version (GPU?): 1.7.1+cu110 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: N/A
- Using distributed or parallel set-up in script?: nope
### Who can help
Models:
- rag: @patrickvonplaten, @lhoestq
## Information
Model I am using (Bert, XLNet ...): RAG
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run any of the scripts in the examples on https://huggingface.co/transformers/model_doc/rag.html#overview , ex.
```
from transformers import RagTokenizer, RagRetriever, RagModel
import torch
tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-base")
retriever = RagRetriever.from_pretrained("facebook/rag-token-base", index_name="exact", use_dummy_dataset=True)
# initialize with RagRetriever to do everything in one forward call
model = RagModel.from_pretrained("facebook/rag-token-base", retriever=retriever)
input_dict = tokenizer.prepare_seq2seq_batch("How many people live in Paris?", "In Paris, there are 10 million people.", return_tensors="pt")
input_ids = input_dict["input_ids"]
outputs = model(input_ids=input_ids)
```
2. Get an error on https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/tokenization_rag.py#L77 about how `super()` does not have `prepare_seq2seq_batch()`
* Indeed, looking at the relevant file, RagTokenizer does not inherit from any other class.
## Expected behavior
RAG works properly.
Note that if I copy/paste the code in the file prior to https://github.com/huggingface/transformers/pull/9524 , it works fine. CC: @sgugger of that change.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10182/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10181 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10181/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10181/comments | https://api.github.com/repos/huggingface/transformers/issues/10181/events | https://github.com/huggingface/transformers/issues/10181 | 808,100,525 | MDU6SXNzdWU4MDgxMDA1MjU= | 10,181 | Inconsistent loss computation? | {
"login": "LuCeHe",
"id": 9610770,
"node_id": "MDQ6VXNlcjk2MTA3NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9610770?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LuCeHe",
"html_url": "https://github.com/LuCeHe",
"followers_url": "https://api.github.com/users/LuCeHe/followers",
"following_url": "https://api.github.com/users/LuCeHe/following{/other_user}",
"gists_url": "https://api.github.com/users/LuCeHe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LuCeHe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LuCeHe/subscriptions",
"organizations_url": "https://api.github.com/users/LuCeHe/orgs",
"repos_url": "https://api.github.com/users/LuCeHe/repos",
"events_url": "https://api.github.com/users/LuCeHe/events{/privacy}",
"received_events_url": "https://api.github.com/users/LuCeHe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! If you look inside the `TFGPT2LMHeadModel`, you'll see it automatically shifts the labels for you.\r\n\r\nThe model generates tokens given a past. It then compares the generated token to the \"true\" token contained in the labels you passed to it. If you shift the tokens as it is done in the model, you should get identical results:\r\n\r\n```py\r\nimport tensorflow as tf\r\nfrom transformers import TFGPT2LMHeadModel\r\nfrom transformers.modeling_tf_utils import TFCausalLanguageModelingLoss\r\nimport numpy as np\r\n\r\nmodel = TFGPT2LMHeadModel.from_pretrained('gpt2')\r\n\r\n\r\none_line_dset = (np.random.rand(1, 1024)>.5)*1\r\ninput_ids = one_line_dset\r\ntarget_ids = one_line_dset\r\n\r\n# explicit loss calculation\r\nprediction = model.predict(input_ids).logits\r\nprediction = tf.convert_to_tensor(prediction)\r\n\r\n# internal loss calculation\r\noutputs = model(input_ids, labels=target_ids, training=False)\r\n\r\ntarget_ids = target_ids[:, 1:]\r\nprediction = prediction[:, :-1]\r\nl = TFCausalLanguageModelingLoss().compute_loss(target_ids, prediction)\r\n\r\nprint(tf.math.reduce_mean(l), tf.math.reduce_mean(outputs[0]))\r\n```\r\n\r\nYou'll see that the two results are very close to being equal. Note the very close, as they are not exactly equal. @jplu can chime in here but I believe this has to do with the switch to graph mode happening with the call to `.predict`.",
"I entirely second what @LysandreJik said! Nice and clear explanation 👍 ",
"@LysandreJik @jplu you are super fast guys! Thanks! a lot!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,613 | 1,619 | 1,619 | NONE | null | `transformers` version: 4.3.2, Python version: 3.7, Tensorflow version (GPU?): 2.3.1, Using GPU in script?: No, Using distributed or parallel set-up in script?: No
## To reproduce
Steps to reproduce the behavior:
``` python
import tensorflow as tf
from transformers import TFGPT2LMHeadModel
from transformers.modeling_tf_utils import TFCausalLanguageModelingLoss
import copy
import numpy as np
model = TFGPT2LMHeadModel.from_pretrained('gpt2')
one_line_dset = (np.random.rand(1, 1024)>.5)*1
input_ids = one_line_dset
target_ids = one_line_dset
# explicit loss calculation
prediction = model.predict(input_ids).logits
prediction = tf.convert_to_tensor(prediction)
l = TFCausalLanguageModelingLoss().compute_loss(target_ids, prediction)
# internal loss calculation
outputs = model(input_ids, labels=target_ids)
print(tf.math.reduce_mean(l), tf.math.reduce_mean(outputs[0]))
print(l.shape, outputs[0].shape)
print('How many are the same? ', np.mean(outputs.loss==l[:-1]))
print('Are they equal? ', tf.math.reduce_mean(l) == tf.math.reduce_mean(outputs[0]))
```
I'm trying to understand how the loss is computed inside the model when the `labels` argument is provided, but what I am trying above doesn't seem to work. What am I doing wrong?
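As the resolution comment above spells out, the LM head shifts the labels internally (the prediction at position t is scored against the token at t+1), so the explicit computation only matches after applying the same shift. A minimal sketch reusing the names from the snippet above:
```python
shifted_labels = target_ids[:, 1:]
shifted_logits = prediction[:, :-1]
l_shifted = TFCausalLanguageModelingLoss().compute_loss(shifted_labels, shifted_logits)
# tf.math.reduce_mean(l_shifted) now (almost exactly) matches tf.math.reduce_mean(outputs[0])
```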
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I would expect `tf.math.reduce_mean(l) == tf.math.reduce_mean(outputs[0])` to be True. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10181/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10180 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10180/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10180/comments | https://api.github.com/repos/huggingface/transformers/issues/10180/events | https://github.com/huggingface/transformers/issues/10180 | 808,013,284 | MDU6SXNzdWU4MDgwMTMyODQ= | 10,180 | ONNX Export for Fine-Tuned DistilBertForTokenClassification | {
"login": "biro-mark",
"id": 58680214,
"node_id": "MDQ6VXNlcjU4NjgwMjE0",
"avatar_url": "https://avatars.githubusercontent.com/u/58680214?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/biro-mark",
"html_url": "https://github.com/biro-mark",
"followers_url": "https://api.github.com/users/biro-mark/followers",
"following_url": "https://api.github.com/users/biro-mark/following{/other_user}",
"gists_url": "https://api.github.com/users/biro-mark/gists{/gist_id}",
"starred_url": "https://api.github.com/users/biro-mark/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/biro-mark/subscriptions",
"organizations_url": "https://api.github.com/users/biro-mark/orgs",
"repos_url": "https://api.github.com/users/biro-mark/repos",
"events_url": "https://api.github.com/users/biro-mark/events{/privacy}",
"received_events_url": "https://api.github.com/users/biro-mark/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I figured out how to do it. Within python, you have to use `model.save_pretrained(\"path/to/output_dir\")` where `model` is the fine-tuned `model = DistilBertForTokenClassification.from_pretrained('distilbert-base-uncased', num_labels=...)`\r\n\r\nThen inside an empty directory, run\r\n`python -m transformers.convert_graph_to_onnx --model path/to/output_dir --framework pt --tokenizer distilbert-base-uncased out.onnx`\r\n"
] | 1,613 | 1,613 | 1,613 | NONE | null | # 🚀 Feature request
I'd like to export a fine-tuned DistilBertForTokenClassification model to ONNX. Right now the conversion script convert_graph_to_onnx.py looks like it only takes a model string such as "--model bert-base-cased", but I'd like to pass in the model object that I've fine-tuned on my dataset.
## Motivation
I've trained a custom token classifier that I want to run on the ONNX runtime.
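A sketch of the workflow described in the resolution comment above (paths and num_labels are placeholders):
```python
from transformers import DistilBertForTokenClassification

model = DistilBertForTokenClassification.from_pretrained("distilbert-base-uncased", num_labels=9)
# ... fine-tune on the custom token-classification dataset ...
model.save_pretrained("path/to/output_dir")
```
Then, from an empty directory:
```bash
python -m transformers.convert_graph_to_onnx --model path/to/output_dir \
    --framework pt --tokenizer distilbert-base-uncased out.onnx
```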
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10180/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10180/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10179 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10179/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10179/comments | https://api.github.com/repos/huggingface/transformers/issues/10179/events | https://github.com/huggingface/transformers/issues/10179 | 807,993,885 | MDU6SXNzdWU4MDc5OTM4ODU= | 10,179 | Why is the attention_mask added to the attn_weights instead of multiplying/masking? | {
"login": "fomalhautb",
"id": 14837467,
"node_id": "MDQ6VXNlcjE0ODM3NDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/14837467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fomalhautb",
"html_url": "https://github.com/fomalhautb",
"followers_url": "https://api.github.com/users/fomalhautb/followers",
"following_url": "https://api.github.com/users/fomalhautb/following{/other_user}",
"gists_url": "https://api.github.com/users/fomalhautb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fomalhautb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fomalhautb/subscriptions",
"organizations_url": "https://api.github.com/users/fomalhautb/orgs",
"repos_url": "https://api.github.com/users/fomalhautb/repos",
"events_url": "https://api.github.com/users/fomalhautb/events{/privacy}",
"received_events_url": "https://api.github.com/users/fomalhautb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, Maybe this comment can help you out https://github.com/huggingface/transformers/issues/1935#issuecomment-561305086!",
"@LysandreJik Oh, I got it now, thank you!"
] | 1,613 | 1,613 | 1,613 | NONE | null | https://github.com/huggingface/transformers/blob/8fae93ca1972c39d19c8cf3d3c6a3dd2530cc59a/src/transformers/models/bart/modeling_bart.py#L219-L227
As far as I understand, the attention_mask is there to prevent the model from peeking into the future or attending to padded positions, so shouldn't the weights in these positions be masked out? What does this addition do? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10179/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10179/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10178 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10178/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10178/comments | https://api.github.com/repos/huggingface/transformers/issues/10178/events | https://github.com/huggingface/transformers/pull/10178 | 807,983,671 | MDExOlB1bGxSZXF1ZXN0NTczMTA3NjYw | 10,178 | Fix datasets set_format | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,613 | 1,613 | 1,613 | COLLABORATOR | null | # What does this PR do?
This PR fixes a problem in `Trainer` when a user provides a dataset using the new functionality in the upcoming v2 of `datasets`, `set_transform` (see [here](https://github.com/huggingface/datasets/issues/1867) for more details). This is a hotfix that is not perfect, and we will need to take some time to make this column ignoring more general (probably after batch creation), but I will look deeper into this at the end of next week. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10178/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10178",
"html_url": "https://github.com/huggingface/transformers/pull/10178",
"diff_url": "https://github.com/huggingface/transformers/pull/10178.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10178.patch",
"merged_at": 1613386147000
} |
https://api.github.com/repos/huggingface/transformers/issues/10177 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10177/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10177/comments | https://api.github.com/repos/huggingface/transformers/issues/10177/events | https://github.com/huggingface/transformers/issues/10177 | 807,970,091 | MDU6SXNzdWU4MDc5NzAwOTE= | 10,177 | Loading a model from local files achieves way too lower accuracy in comparison to model downloading | {
"login": "sstojanoska",
"id": 17052700,
"node_id": "MDQ6VXNlcjE3MDUyNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/17052700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sstojanoska",
"html_url": "https://github.com/sstojanoska",
"followers_url": "https://api.github.com/users/sstojanoska/followers",
"following_url": "https://api.github.com/users/sstojanoska/following{/other_user}",
"gists_url": "https://api.github.com/users/sstojanoska/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sstojanoska/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sstojanoska/subscriptions",
"organizations_url": "https://api.github.com/users/sstojanoska/orgs",
"repos_url": "https://api.github.com/users/sstojanoska/repos",
"events_url": "https://api.github.com/users/sstojanoska/events{/privacy}",
"received_events_url": "https://api.github.com/users/sstojanoska/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> The problem arises when using the model from local files. I have noticed that the accuracy of the local model for the exact same configuration (same data, number of epochs, lr, etc.) is around 20% lower than downloading the model for each experiment.\r\n\r\nThis is extremely vague and we can't help you solve your bug if you don't give us something more tangible than this. What code are you then running? What's the difference in accuracy?\r\n",
"Thanks. It was my mistake. \r\nThe issue was that I was trying to dynamically change the model config file and reuse it for models with different number of output labels.\r\n"
] | 1,613 | 1,613 | 1,613 | NONE | null | ## Environment info
- `transformers` version: 4.3.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik, @sgugger, @patrickvonplaten
## Information
I am using a few models from the 'Models hub' ( https://huggingface.co/models ) and
I am working on a token classification task. Since I have to repeat some of the experiments several times, I have downloaded the models locally to avoid downloading them on each run.
Downloading:
```
model_name = "X"
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=len(tag2idx))
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
Saving the model and tokenizer locally (and renaming tokenizer_config.json to config.json)
```
model.save_pretrained("/content/drive/MyDrive/model/model_name/")
tokenizer.save_pretrained("/content/drive/MyDrive/tokenizer/model_name/")
```
Loading the model and tokenizer from local directories
```
config = AutoConfig.from_pretrained("/content/drive/MyDrive/model/model_name/")
config.num_labels = len(tag2idx)
model = AutoModelForTokenClassification.from_config(config)
tokenizer = AutoTokenizer.from_pretrained("/content/drive/MyDrive/tokenizer/model_name/")
```
The problem arises when using the model from local files. I have noticed that the accuracy of the local model for the exact same configuration (same data, number of epochs, lr, etc.) is around 20% lower than when downloading the model for each experiment.
I would like to know why this happens.
## Expected behavior
The model should achieve similar results whether being loaded from local files or downloaded from the models hub.
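For reference, one likely source of a gap this large: `AutoModelForTokenClassification.from_config(config)` only builds the architecture with freshly initialized weights and does not load the saved checkpoint. A minimal sketch of loading the locally saved weights instead (paths and `tag2idx` as in the snippets above):
```
from transformers import AutoModelForTokenClassification, AutoTokenizer

# from_pretrained() on the local directory loads the saved weights;
# from_config() builds the architecture with a fresh random initialization.
model = AutoModelForTokenClassification.from_pretrained(
    "/content/drive/MyDrive/model/model_name/", num_labels=len(tag2idx)
)
tokenizer = AutoTokenizer.from_pretrained("/content/drive/MyDrive/tokenizer/model_name/")
```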
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10177/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10176 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10176/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10176/comments | https://api.github.com/repos/huggingface/transformers/issues/10176/events | https://github.com/huggingface/transformers/issues/10176 | 807,926,442 | MDU6SXNzdWU4MDc5MjY0NDI= | 10,176 | Conditional generation with T5 | {
"login": "ShivanshuPurohit",
"id": 42869065,
"node_id": "MDQ6VXNlcjQyODY5MDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/42869065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShivanshuPurohit",
"html_url": "https://github.com/ShivanshuPurohit",
"followers_url": "https://api.github.com/users/ShivanshuPurohit/followers",
"following_url": "https://api.github.com/users/ShivanshuPurohit/following{/other_user}",
"gists_url": "https://api.github.com/users/ShivanshuPurohit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShivanshuPurohit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShivanshuPurohit/subscriptions",
"organizations_url": "https://api.github.com/users/ShivanshuPurohit/orgs",
"repos_url": "https://api.github.com/users/ShivanshuPurohit/repos",
"events_url": "https://api.github.com/users/ShivanshuPurohit/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShivanshuPurohit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can decode them back to a string using `T5Tokenizer`, like so:\r\n\r\n`tokenizer.decode(outputs.squeeze().tolist(), skip_special_tokens=True)`\r\n\r\nBtw, for a really good guide on the different generation strategies of models like T5, see this blog post: https://huggingface.co/blog/how-to-generate",
"This post was really helpful, thanks!"
] | 1,613 | 1,613 | 1,613 | NONE | null | ## Environment info
- `transformers` version: 4.3.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- t5: @patrickvonplaten, @patil-suraj
Library:
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
-->
## Information
Model I am using (Bert, XLNet ...): T5
The problem arises when using:
* [ ] the official example scripts: (give details below)
* Generating conditional text from T5
```
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('t5-3b')
model = T5ForConditionalGeneration.from_pretrained('t5-3b')
input_ids = tokenizer('The <extra_id_0> walks in <extra_id_1> park', return_tensors='pt').input_ids
labels = tokenizer('<extra_id_0> cute dog <extra_id_1> the <extra_id_2> </s>', return_tensors='pt').input_ids
outputs = model(input_ids=input_ids, labels=labels)
loss = outputs.loss
logits = outputs.logits
input_ids = tokenizer("summarize: studies have shown that owning a dog is good for you ", return_tensors="pt").input_ids # Batch size 1
outputs = model.generate(input_ids)
```
## To reproduce
Steps to reproduce the behavior:
1. Run the code above
## Expected behavior
I would expect to see the generated text. Instead, the model outputs a torch tensor like so:
`tensor([[ 0, 363, 19, 8, 1784, 13, 1473, 58, 1]])`
How do I get words out of it rather than a tensor? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10176/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10176/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10175 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10175/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10175/comments | https://api.github.com/repos/huggingface/transformers/issues/10175/events | https://github.com/huggingface/transformers/pull/10175 | 807,915,511 | MDExOlB1bGxSZXF1ZXN0NTczMDU3ODQy | 10,175 | Speech2TextTransformer | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patrickvonplaten, @sgugger , @LysandreJik The PR is now finalized and ready for your review :) ",
"I've added proper instructions to install the extra dependencies and addressed Patrick and Sylvain's comments regarding the docs and imports. All slow/non-slow tests are passing!\r\n\r\nMerging!",
"Edit: please see this issue: #10631 ",
"hi @xjdeng\r\n\r\nThanks for reporting this. This PR is now merged. So could you please open an issue with this error, we will discuss it there.\r\n"
] | 1,613 | 1,615 | 1,615 | MEMBER | null | # What does this PR do?
This PR adds the S2T model from [fairseq](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text) for end-to-end ASR and Speech-Translation (ST).
The model architecture is somewhat similar to the mBART model, except
- the encoder contains the convolutional subsampling module to downsample the speech features.
- no token embeddings in encoder.
This PR also adds the `Speech2TextFeatureExtractor` and `Speech2TextProcessor` classes, analogous to the `Wav2Vec2` extractor and processor.
The `Speech2TextFeatureExtractor` here has an extra dependency on `torchaudio`, which is required for extracting the fbank features.
The `generate` method works out-of-the-box for S2T! Usage example:
```python
import torch
from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor
from datasets import load_dataset
import soundfile as sf
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
samples = ds.map(map_to_array)[5:8]
model = Speech2TextForConditionalGeneration.from_pretrained("valhalla/s2t_librispeech_small")
processor = Speech2TextProcessor.from_pretrained("valhalla/s2t_librispeech_small")
features = processor(samples["speech"], sampling_rate=16_000, padding=True, return_tensors="pt")
gen_tokens = model.generate(
input_ids=features["input_features"],
attention_mask=features["attention_mask"],
)
generated = processor.batch_decode(gen_tokens, skip_special_tokens=True)
```
TODOs:
- [x] add tests
- [x] implement `Speech2TextProcessor` after #10324 is merged
- [x] finish docs
- [ ] port and eval the CoVoST2 and MuSTc checkpoints
- [ ] add training/fine-tuning script in a follow-up PR
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10175/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10175/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10175",
"html_url": "https://github.com/huggingface/transformers/pull/10175",
"diff_url": "https://github.com/huggingface/transformers/pull/10175.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10175.patch",
"merged_at": 1615392725000
} |
https://api.github.com/repos/huggingface/transformers/issues/10174 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10174/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10174/comments | https://api.github.com/repos/huggingface/transformers/issues/10174/events | https://github.com/huggingface/transformers/issues/10174 | 807,882,608 | MDU6SXNzdWU4MDc4ODI2MDg= | 10,174 | How to train an MBart model from scratch for a new language pair? | {
"login": "vineetha-thomas",
"id": 60310418,
"node_id": "MDQ6VXNlcjYwMzEwNDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/60310418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vineetha-thomas",
"html_url": "https://github.com/vineetha-thomas",
"followers_url": "https://api.github.com/users/vineetha-thomas/followers",
"following_url": "https://api.github.com/users/vineetha-thomas/following{/other_user}",
"gists_url": "https://api.github.com/users/vineetha-thomas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vineetha-thomas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vineetha-thomas/subscriptions",
"organizations_url": "https://api.github.com/users/vineetha-thomas/orgs",
"repos_url": "https://api.github.com/users/vineetha-thomas/repos",
"events_url": "https://api.github.com/users/vineetha-thomas/events{/privacy}",
"received_events_url": "https://api.github.com/users/vineetha-thomas/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,613 | 1,613 | 1,613 | NONE | null | I want to train an MBART model from scratch, for a new language pair, unsupervised translation. I have monolingual data from both languages. Specifically, how do I prepare the data for the same?
Currently I start with code like the following:
```python
# My own tokenizer trained with Google SentencePiece
tokenizer = MBartTokenizer.from_pretrained('./tokenizer_de_hsb.model')

# The src and tgt language codes are dummy here.
batch = tokenizer.prepare_seq2seq_batch(src_texts=src_txts, src_lang="en_XX",
                                        tgt_texts=tgt_txts, tgt_lang="ro_RO",
                                        return_tensors="pt")

config = MBartConfig()
model = MBartModel(config)
model(input_ids=batch['input_ids'], decoder_input_ids=batch['labels'])  # forward pass
model.save_pretrained('./trained_model')
```
These are the questions I have:
- For pre-training mbart, what should input_ids and decoder_input_id in the forward pass be? Is there a function that generates the input with the masked tokens?
- Is the approach to combine src and tgt language data and train once on the combined data?
- Is there a sample code for this?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10174/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10174/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10173 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10173/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10173/comments | https://api.github.com/repos/huggingface/transformers/issues/10173/events | https://github.com/huggingface/transformers/issues/10173 | 807,853,292 | MDU6SXNzdWU4MDc4NTMyOTI= | 10,173 | What does the "<s> token" mean in Longformer's global_attention_mask? | {
"login": "Anthonyive",
"id": 8257285,
"node_id": "MDQ6VXNlcjgyNTcyODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8257285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Anthonyive",
"html_url": "https://github.com/Anthonyive",
"followers_url": "https://api.github.com/users/Anthonyive/followers",
"following_url": "https://api.github.com/users/Anthonyive/following{/other_user}",
"gists_url": "https://api.github.com/users/Anthonyive/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Anthonyive/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Anthonyive/subscriptions",
"organizations_url": "https://api.github.com/users/Anthonyive/orgs",
"repos_url": "https://api.github.com/users/Anthonyive/repos",
"events_url": "https://api.github.com/users/Anthonyive/events{/privacy}",
"received_events_url": "https://api.github.com/users/Anthonyive/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ok, I got it. That means [CLS] token."
] | 1,613 | 1,613 | 1,613 | NONE | null | This might be a stupid question, but I couldn't find an answer. The documentation says "For example, for classification, the \<s\> token should be given global attention." I've also checked the original [longformer paper](https://arxiv.org/pdf/2004.05150.pdf), but the "\<s\> token" was only mentioned once. Can someone tell me what it means? Thanks for any help! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10173/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10173/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10172 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10172/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10172/comments | https://api.github.com/repos/huggingface/transformers/issues/10172/events | https://github.com/huggingface/transformers/issues/10172 | 807,791,750 | MDU6SXNzdWU4MDc3OTE3NTA= | 10,172 | Saving PruneBERT notebook fails to run on torch > 1.5 | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for reporting that @lewtun!\r\nFeel free to open a PR when you have a working solution for higher versions of PyTorch!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,613 | 1,619 | 1,619 | MEMBER | null | ## Environment info
- `transformers` version: 4.4.0.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.3.1 (False)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help
@VictorSanh
## Information
The Saving PruneBERT [notebook](https://github.com/huggingface/transformers/blob/b11386e158e86e62d4041eabd86d044cd1695737/examples/movement-pruning/Saving_PruneBERT.ipynb) from the _examples/movement-pruning/_ directory is not compatible with PyTorch > 1.5 because Torchbind is used for `_packed_params` in v1.6 and higher (see PR [here](https://github.com/pytorch/pytorch/pull/34140)).
In particular, cell 4 of the notebook
```python
# Elementary representation: we decompose the quantized tensors into (scale, zero_point, int_repr).
# See https://pytorch.org/docs/stable/quantization.html
# We further leverage the fact that int_repr is sparse matrix to optimize the storage: we decompose int_repr into
# its CSR representation (data, indptr, indices).
elementary_qtz_st = {}
for name, param in qtz_st.items():
if param.is_quantized:
print("Decompose quantization for", name)
# We need to extract the scale, the zero_point and the int_repr for the quantized tensor and modules
scale = param.q_scale() # torch.tensor(1,) - float32
zero_point = param.q_zero_point() # torch.tensor(1,) - int32
elementary_qtz_st[f"{name}.scale"] = scale
elementary_qtz_st[f"{name}.zero_point"] = zero_point
# We assume the int_repr is sparse and compute its CSR representation
# Only the FCs in the encoder are actually sparse
int_repr = param.int_repr() # torch.tensor(nb_rows, nb_columns) - int8
int_repr_cs = sparse.csr_matrix(int_repr) # scipy.sparse.csr.csr_matrix
elementary_qtz_st[f"{name}.int_repr.data"] = int_repr_cs.data # np.array int8
elementary_qtz_st[f"{name}.int_repr.indptr"] = int_repr_cs.indptr # np.array int32
assert max(int_repr_cs.indices) < 65535 # If not, we shall fall back to int32
elementary_qtz_st[f"{name}.int_repr.indices"] = np.uint16(int_repr_cs.indices) # np.array uint16
elementary_qtz_st[f"{name}.int_repr.shape"] = int_repr_cs.shape # tuple(int, int)
else:
elementary_qtz_st[name] = param
```
fails with the following error
```
AttributeError Traceback (most recent call last)
<ipython-input-14-1266eb0d5085> in <module>
9 # if isinstance(param, tuple):
10 # param = param[0]
---> 11 if "dtype" not in name and param.is_quantized:
12 print("Decompose quantization for", name)
13 # We need to extract the scale, the zero_point and the int_repr for the quantized tensor and modules
AttributeError: 'tuple' object has no attribute 'is_quantized'
```
This is because in torch >= 1.6, the `layer_name.weight` and `layer_name.bias` tensors have been bundled as a tuple of the form `(weight, bias)` in `param`.
A simple fix I tried was to pick out the weight tensor directly by checking for a tuple in the for loop:
```
elementary_qtz_st = {}
for name, param in qtz_st.items():
    if isinstance(param, tuple):
param = param[0]
if param.is_quantized:
print("Decompose quantization for", name)
```
but this produces a mismatch between the keys of `qtz_st` and `elementary_qtz_st`, because we append the `.scale` and `.zero_point` attributes to `_packed_params` and lose the bias term.
I'm currently trying to find a proper fix, but thought I should report this in the meantime.
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Clone `transformers` and follow the steps to install the `movement-pruning` example
2. Upgrade torch to v1.6 with `pip install torch==1.6`
3. Try to run the `Saving_PruneBERT.ipynb` notebook
## Expected behavior
The `Saving_PruneBERT.ipynb` notebook runs end-to-end without errors.
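For completeness, a sketch (untested) of one possible direction for the torch >= 1.6 tuples, based on the `(weight, bias)` structure described above: unpack the tuple, decompose the quantized weight exactly as in cell 4, and store the bias under its own key so it is not lost. Key names here are illustrative, not a verified fix.
```python
# Sketch only (untested): handle torch >= 1.6 packed (weight, bias) tuples.
elementary_qtz_st = {}
for name, param in qtz_st.items():
    if isinstance(param, tuple):  # torch >= 1.6: (quantized weight, float bias)
        weight, bias = param
        if bias is not None:
            elementary_qtz_st[f"{name}.bias"] = bias  # keep the bias under its own key
        param = weight
    if hasattr(param, "is_quantized") and param.is_quantized:
        # ... same scale / zero_point / CSR decomposition as in cell 4 above ...
        pass
    else:
        elementary_qtz_st[name] = param
```
The loading side of the notebook would need the mirror-image change to rebuild the packed params.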
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10172/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10171 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10171/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10171/comments | https://api.github.com/repos/huggingface/transformers/issues/10171/events | https://github.com/huggingface/transformers/pull/10171 | 807,748,323 | MDExOlB1bGxSZXF1ZXN0NTcyOTM2NDYw | 10,171 | Revert propagation | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,613 | 1,613 | 1,613 | MEMBER | null | The proposition offered in https://github.com/huggingface/transformers/pull/10092 unfortunately can't be applied as having a default handler and propagation across handlers results in several logged items.
Reverting that PR here as seen offline with @lhoestq and leaving the docs regarding the default handler introduced in #10092. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10171/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10171",
"html_url": "https://github.com/huggingface/transformers/pull/10171",
"diff_url": "https://github.com/huggingface/transformers/pull/10171.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10171.patch",
"merged_at": 1613222396000
} |
https://api.github.com/repos/huggingface/transformers/issues/10170 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10170/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10170/comments | https://api.github.com/repos/huggingface/transformers/issues/10170/events | https://github.com/huggingface/transformers/issues/10170 | 807,744,975 | MDU6SXNzdWU4MDc3NDQ5NzU= | 10,170 | T5 training with Keras: InvalidArgumentError: logits and labels must have the same first dimension | {
"login": "marton-avrios",
"id": 59836119,
"node_id": "MDQ6VXNlcjU5ODM2MTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/59836119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marton-avrios",
"html_url": "https://github.com/marton-avrios",
"followers_url": "https://api.github.com/users/marton-avrios/followers",
"following_url": "https://api.github.com/users/marton-avrios/following{/other_user}",
"gists_url": "https://api.github.com/users/marton-avrios/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marton-avrios/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marton-avrios/subscriptions",
"organizations_url": "https://api.github.com/users/marton-avrios/orgs",
"repos_url": "https://api.github.com/users/marton-avrios/repos",
"events_url": "https://api.github.com/users/marton-avrios/events{/privacy}",
"received_events_url": "https://api.github.com/users/marton-avrios/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Would like to help you here, I've created a [Colab notebook](https://colab.research.google.com/drive/1PtRxbK4oNUsm4lrsOvWoNYzA-BhOWwf2?usp=sharing) that illustrates how to fine-tune `TFT5ForConditionalGeneration` using Keras. However, I'm having the same issue as posted in #6817, namely:\r\n\r\n`ValueError: No gradients provided for any variable: ['shared/shared/weight:0', 'tf_t5for_conditional_generation_2/encoder/block_._0/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation_2/encoder/block_._0/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation_2/encoder/block_._0/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation_2/encoder/block_._0/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation_2/encoder/block_._0/layer_._0/SelfAttention/relative_attention_bias/embeddings:0', 'tf_t5for_conditional_generation_2/encoder/block_._0/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation_2/encoder/block_._0/layer_._1/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation_2/encoder/block_._0/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation_2/encoder/block_._0/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation_2/encoder/block_._1/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation_2/encoder/block_._1/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation_2/encoder/block_._1/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation_2/encoder/block_._1/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation_2/encoder/block_._1/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation_2/encoder/block_._1/layer_._1/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation_2/encoder/bloc...`\r\n\r\nUPDATE: this issue was resolved by providing the data in the correct format, namely a tuple of `(inputs, outputs)`. A forward pass on a random batch is now working. 
However, having the following error when calling `model.fit()`:\r\n\r\n```\r\nValueError: in user code:\r\n\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:805 train_function *\r\n return step_function(self, iterator)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:795 step_function **\r\n outputs = model.distribute_strategy.run(run_step, args=(data,))\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:1259 run\r\n return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica\r\n return self._call_for_each_replica(fn, args, kwargs)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica\r\n return fn(*args, **kwargs)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:788 run_step **\r\n outputs = model.train_step(data)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:758 train_step\r\n self.compiled_metrics.update_state(y, y_pred, sample_weight)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:387 update_state\r\n self.build(y_pred, y_true)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:318 build\r\n self._metrics, y_true, y_pred)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/util/nest.py:1163 map_structure_up_to\r\n **kwargs)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/util/nest.py:1245 map_structure_with_tuple_paths_up_to\r\n expand_composites=expand_composites)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/util/nest.py:878 assert_shallow_structure\r\n input_length=len(input_tree), shallow_length=len(shallow_tree)))\r\n\r\n ValueError: The two structures don't have the same sequence length. Input structure has length 4, while shallow structure has length 3.\r\n```",
"Hello!\r\n\r\nFor now T5 cannot be trained with usual `.compile()` and `.fit()` methods (such as multiple other models but we are currently working on this). You have to either use the TFTrainer or to update yourself the behavior of the internal training loop of Keras. An example of how to deal with T5 and properly training it, is showed in this nice [Colab](https://colab.research.google.com/github/snapthat/TF-T5-text-to-text/blob/master/snapthatT5/notebooks/TF-T5-Datasets%20Training.ipynb).",
"Ok, I saw that Colab but now understand why the author defined his own `train_step` (this was failing for me). Will update my notebook to add this.\r\n\r\nThank you!",
"@marton-avrios Here's an updated version of my Colab notebook, illustrating how to fine-tune `TFT5ForConditionalGeneration` on your data: https://colab.research.google.com/drive/1PtRxbK4oNUsm4lrsOvWoNYzA-BhOWwf2?usp=sharing",
"@NielsRogge Your colab looks much better but will be buggy for a few cases. Your train step must look something like this:\r\n\r\n```python\r\ndef train_step(self, data):\r\n x, y = data\r\n\r\n with tf.GradientTape() as tape:\r\n y_pred = self(x, training=True)\r\n loss = self.compiled_loss(y, y_pred.logits, regularization_losses=self.losses)\r\n\r\n gradients = tape.gradient(loss, self.trainable_variables)\r\n\r\n self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))\r\n\r\n self.compiled_metrics.update_state(y, y_pred.logits)\r\n\r\n return {m.name: m.result() for m in self.metrics}\r\n```\r\n\r\nNice adaptation in your Colab BTW :)",
"Thank you guys, very useful resource! Will it work on TPU or with other distribution strategies? Keras handles that when I stick to using ```compile()``` and ```fit()``` even if I redefine ```train_step()```, right? I mean dividing loss with global batch size, etc.",
"I doubt you will be able to train a T5 on TPU because T5 is not entirely XLA compliant so that you might encounter some unexpected issues. Sorry for that, it is also something on which we are currently working :)",
"So PyTorch version won't work either on TPU? Any hints as to which parts? I might be able to look into it. I get this error:\r\n```\r\nInvalid argument: {{function_node __inference_distributed_training_steps_51234}} Compilation failure: Detected unsupported operations when trying to compile graph cluster_distributed_training_steps_2864851955408122598[] on XLA_TPU_JIT: StringFormat (No registered 'StringFormat' OpKernel for XLA_TPU_JIT devices compatible with node {{node tf_t5for_conditional_generation/encoder/StringFormat}}){{node tf_t5for_conditional_generation/encoder/StringFormat}}\r\n```",
"There are still a lot of work to make TFT5 XLA compliant, so I don't suggest you to use it for this case. The Pytorch version is fully TPU compliant yes.",
"@marton-avrios turns out that the Tensorflow implementation of T5 already creates the `decoder_input_ids` for you as seen [here](https://github.com/huggingface/transformers/blob/587197dcd2b50ad9e96aedbfa389bf4fcc294c3c/src/transformers/models/t5/modeling_tf_t5.py#L1376), you don't need to prepare them yourself (I thought this was only supported in the PyTorch version for now). So I've updated my notebook, it's simpler now \r\n\r\n",
"Thank you @NielsRogge ! I think you need to provide either label input ids as ```labels``` or shifted label input ids as ```decoder_input_ids``` in input dictionary. At least it failed for me with only ```input_ids``` and ```attention_mask```.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,613 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 4.3.0.dev0
- Platform: Linux version 4.19.0-14-cloud-amd64 ([email protected]) (gcc version 8.3.0 (Debian 8.3.0-6)) #1 SMP Debian 4.19.171-2 (2021-01-30)
- Python version: 3.7
- PyTorch version (GPU?): 1.7.1, No
- Tensorflow version (GPU?): 2.3.1, No
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): T5
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
See code below:
```
import numpy as np
import tensorflow as tf
from transformers import T5TokenizerFast, TFT5ForConditionalGeneration
MODEL_NAME = "t5-small"
INPUT_TEXTS = [
"When Liana Barrientos was 23 years old, she got married in Westchester County, New York.",
"Only 18 days after that marriage, she got hitched yet again.",
"Then, Barrientos declared 'I do' five more times, sometimes only within two weeks of each other.",
"In 2010, she married once more, this time in the Bronx.",
"In an application for a marriage license, she stated it was her 'first and only' marriage.",
"Prosecutors said the marriages were part of an immigration scam.",
"In total, Barrientos has been married 10 times, with nine of her marriages occurring between 1999 and 2002.",
"All occurred either in Westchester County, Long Island, New Jersey or the Bronx.",
"Any divorces happened only after such filings were approved.",
"It was unclear whether any of the men will be prosecuted.",
]
LABEL_TEXTS = ["Yes", "No", "Yes", "Yes", "No", "Yes", "No", "No", "Well, you never know, right?", "Yes"]
tokenizer = T5TokenizerFast.from_pretrained(MODEL_NAME)
tokenized_inputs = tokenizer(INPUT_TEXTS, padding="max_length", truncation=True, return_tensors="tf")
tokenized_labels = tokenizer(LABEL_TEXTS, padding="max_length", truncation=True, return_tensors="tf")
decoder_input_texts = ["<pad> " + _txt for _txt in LABEL_TEXTS]
tokenized_decoder_inputs = tokenizer(decoder_input_texts, padding="max_length", truncation=True, return_tensors="tf")
def add_dec_inp_ids(_features, _labels, _dec_inp_ids):
_features["decoder_input_ids"] = _dec_inp_ids
return (_features, _labels)
ds = tf.data.Dataset.from_tensor_slices(
(tokenized_inputs.data, tokenized_decoder_inputs.input_ids, tokenized_labels.input_ids))\
.map(add_dec_inp_ids)
batch_size = 2
steps_per_epoch = np.ceil(len(INPUT_TEXTS) / batch_size)
train_ds = ds.repeat().prefetch(tf.data.experimental.AUTOTUNE).batch(batch_size)
model = TFT5ForConditionalGeneration.from_pretrained(MODEL_NAME)
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=optimizer, loss=loss)
model.fit(train_ds, epochs=2, steps_per_epoch=steps_per_epoch)
```
And what I get:
```
tensorflow.python.framework.errors_impl.InvalidArgumentError: logits and labels must have the same first dimension, got logits shape [8192,64] and labels shape [1024]
[[node sparse_categorical_crossentropy_4/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits (defined at <ipython-input-2-0152c3165ef3>:51) ]] [Op:__inference_train_function_27365]
```
## Expected behavior
Training starts and then finishes without error.
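For reference, the direction suggested later in this thread: at the time, `TFT5ForConditionalGeneration` could not be trained with a plain `compile()`/`fit()` loop, and the workaround was to override Keras' `train_step` so the loss is computed on `outputs.logits`. A minimal sketch (the subclass name is illustrative):
```
import tensorflow as tf
from transformers import TFT5ForConditionalGeneration

class TrainableT5(TFT5ForConditionalGeneration):
    # Custom train step: compute the compiled loss against the logits of the
    # model output object instead of the full output.
    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(y, y_pred.logits, regularization_losses=self.losses)
        gradients = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
        self.compiled_metrics.update_state(y, y_pred.logits)
        return {m.name: m.result() for m in self.metrics}
```
With such a subclass (instantiated via `TrainableT5.from_pretrained(MODEL_NAME)`), the rest of the script above (`compile`, `fit`) can stay roughly the same.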
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10170/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10169 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10169/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10169/comments | https://api.github.com/repos/huggingface/transformers/issues/10169/events | https://github.com/huggingface/transformers/issues/10169 | 807,735,522 | MDU6SXNzdWU4MDc3MzU1MjI= | 10,169 | run_langauge_modeling for T5 | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi\r\n\r\nSeems to me this script is the repetition of this other script: transformers/examples/language-modeling/run_mlm.py \r\nDo you mind adding T5 also to this script? thanks ",
"Actually, we can not simply add T5 to this script, because `run_mlm.py` is for encoder-only models (such as BERT, RoBERTa, DeBERTa, etc.). T5 is an encoder-decoder (seq2seq) model, so this would require a new script. The [seq2seq scripts](https://github.com/huggingface/transformers/tree/master/examples/seq2seq) currently only support fine-tuning, not pre-training.\r\n\r\ncc @patil-suraj @sgugger ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,613 | 1,619 | 1,619 | NONE | null | Hi
Based on the readme at [1], run_language_modeling.py does not support the T5 model so far; it would be really nice to include this model as well.
There is also this line, "data_args.block_size = tokenizer.max_len": max_len does not exist anymore. I searched the PreTrainedTokenizer class and did not find an equivalent variable to substitute. Do you mind telling me how I can update this line to make this example work?
Thank you.
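If helpful: in recent versions the closest equivalent appears to be `tokenizer.model_max_length`, so the line above would become something like the following (worth double-checking against the current tokenizer docs):
```python
data_args.block_size = tokenizer.model_max_length
```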
[1] https://github.com/huggingface/transformers/blob/master/examples/legacy/run_language_modeling.py | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10169/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10168 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10168/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10168/comments | https://api.github.com/repos/huggingface/transformers/issues/10168/events | https://github.com/huggingface/transformers/issues/10168 | 807,734,670 | MDU6SXNzdWU4MDc3MzQ2NzA= | 10,168 | NER pipeline doesn't work for a list of sequences | {
"login": "elk-cloner",
"id": 5828101,
"node_id": "MDQ6VXNlcjU4MjgxMDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5828101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elk-cloner",
"html_url": "https://github.com/elk-cloner",
"followers_url": "https://api.github.com/users/elk-cloner/followers",
"following_url": "https://api.github.com/users/elk-cloner/following{/other_user}",
"gists_url": "https://api.github.com/users/elk-cloner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elk-cloner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elk-cloner/subscriptions",
"organizations_url": "https://api.github.com/users/elk-cloner/orgs",
"repos_url": "https://api.github.com/users/elk-cloner/repos",
"events_url": "https://api.github.com/users/elk-cloner/events{/privacy}",
"received_events_url": "https://api.github.com/users/elk-cloner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@Narsil, do you want to take a look at this?",
"Took a look, it seems the issue was not padding, but argument handling.\r\n\r\n"
] | 1,613 | 1,613 | 1,613 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: transformers==4.3.2
- Platform: Linux Ubuntu 20.04
- Python version: 3.6
- PyTorch version (GPU?): torch==1.7.0+cu101
- Tensorflow version (GPU?): tensorflow==2.4.1
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Library:
- pipelines: @LysandreJik
Documentation: @sgugger
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. I used the steps [here](https://huggingface.co/transformers/task_summary.html#named-entity-recognition) to use pipelines for the NER task with a small change, so my script is as follows:
```
from transformers import pipeline
nlp = pipeline("ner")
sequence = [
"Hugging Face Inc. is a company based in New York City.",
"Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very close to the Manhattan Bridge which is visible from the window."
]
print(nlp(sequence))
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I expected to get a list like this:
```
[
[
{'word': 'Hu', 'score': 0.999578595161438, 'entity': 'I-ORG', 'index': 1, 'start': 0, 'end': 2}
{'word': '##gging', 'score': 0.9909763932228088, 'entity': 'I-ORG', 'index': 2, 'start': 2, 'end': 7}
{'word': 'Face', 'score': 0.9982224702835083, 'entity': 'I-ORG', 'index': 3, 'start': 8, 'end': 12}
{'word': 'Inc', 'score': 0.9994880557060242, 'entity': 'I-ORG', 'index': 4, 'start': 13, 'end': 16}
{'word': 'New', 'score': 0.9994344711303711, 'entity': 'I-LOC', 'index': 11, 'start': 40, 'end': 43}
{'word': 'York', 'score': 0.9993196129798889, 'entity': 'I-LOC', 'index': 12, 'start': 44, 'end': 48}
{'word': 'City', 'score': 0.9993793964385986, 'entity': 'I-LOC', 'index': 13, 'start': 49, 'end': 53}
],
[
{'word': 'Hu', 'score': 0.9995632767677307, 'entity': 'I-ORG'},
{'word': '##gging', 'score': 0.9915938973426819, 'entity': 'I-ORG'},
{'word': 'Face', 'score': 0.9982671737670898, 'entity': 'I-ORG'},
{'word': 'Inc', 'score': 0.9994403719902039, 'entity': 'I-ORG'},
{'word': 'New', 'score': 0.9994346499443054, 'entity': 'I-LOC'},
{'word': 'York', 'score': 0.9993270635604858, 'entity': 'I-LOC'},
{'word': 'City', 'score': 0.9993864893913269, 'entity': 'I-LOC'},
{'word': 'D', 'score': 0.9825621843338013, 'entity': 'I-LOC'},
{'word': '##UM', 'score': 0.936983048915863, 'entity': 'I-LOC'},
{'word': '##BO', 'score': 0.8987102508544922, 'entity': 'I-LOC'},
{'word': 'Manhattan', 'score': 0.9758241176605225, 'entity': 'I-LOC'},
{'word': 'Bridge', 'score': 0.990249514579773, 'entity': 'I-LOC'}
]
]
```
but I got this error:
```
ValueError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py in convert_to_tensors(self, tensor_type, prepend_batch_axis)
770 if not is_tensor(value):
--> 771 tensor = as_tensor(value)
772
ValueError: expected sequence of length 16 at dim 1 (got 38)
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
6 frames
/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py in convert_to_tensors(self, tensor_type, prepend_batch_axis)
786 )
787 raise ValueError(
--> 788 "Unable to create tensor, you should probably activate truncation and/or padding "
789 "with 'padding=True' 'truncation=True' to have batched tensors with the same length."
790 )
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.
```
I know the problem is from the tokenizer and that I should call the tokenizer with some arguments like this:
```
tokenizer(
sequence,
return_tensors="pt",
truncation=True,
padding=True,
max_length=512,
)
```
but it's not clear from the documentation how we can pass these arguments ("truncation=True", "padding=True", "max_length=512") when using pipelines for the NER task.
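As a stop-gap, one workaround is to feed the sequences to the pipeline one at a time, which sidesteps the padding problem entirely. A minimal sketch (reusing the default `ner` pipeline from above; this is only an illustration, not the library's official fix):
```python
# Workaround sketch: run the pipeline once per sequence so the tokenizer
# never has to pad a batch of different lengths.
from transformers import pipeline

nlp = pipeline("ner")

sequences = [
    "Hugging Face Inc. is a company based in New York City.",
    "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, "
    "therefore very close to the Manhattan Bridge which is visible from the window.",
]

# One list of entity dicts per input sequence.
results = [nlp(seq) for seq in sequences]
print(results)
```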
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10168/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10168/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10167 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10167/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10167/comments | https://api.github.com/repos/huggingface/transformers/issues/10167/events | https://github.com/huggingface/transformers/pull/10167 | 807,693,819 | MDExOlB1bGxSZXF1ZXN0NTcyODk2OTA2 | 10,167 | [RAG] fix tokenizer | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,613 | 1,613 | 1,613 | MEMBER | null | # What does this PR do?
- Introduce an `as_target_tokenizer` context manager in `RagTokenizer`, so the docs can later be updated when `prepare_seq2seq_batch` is deprecated.
- `RagTokenizer.prepare_seq2seq_batch` calls `super().prepare_seq2seq_batch`, but it does not inherit from `PreTrainedTokenizer`. Fix the method temporarily using the context manager. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10167/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10167",
"html_url": "https://github.com/huggingface/transformers/pull/10167",
"diff_url": "https://github.com/huggingface/transformers/pull/10167.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10167.patch",
"merged_at": 1613398692000
} |
https://api.github.com/repos/huggingface/transformers/issues/10166 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10166/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10166/comments | https://api.github.com/repos/huggingface/transformers/issues/10166/events | https://github.com/huggingface/transformers/issues/10166 | 807,683,843 | MDU6SXNzdWU4MDc2ODM4NDM= | 10,166 | [tests] failing test only when run in a group | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
},
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] | closed | false | null | [] | [
"Hi @stas00,\r\n\r\nI could not understand how to use `RUN_SLOW` in the windows command line, When I run it I was getting\r\n```\r\n'RUN_SLOW' is not recognized as an internal or external command,\r\noperable program or batch file.\r\n```\r\n\r\nIt was mentioned in the contribution guidelines but I don't know how to enable it",
"Sorry, I don't know much about windows. \r\n\r\nAren't you using some unixy shell on windows to run this? In which case it should support it?\r\n\r\nOtherwise look up how to setup env vars in your windows shell.\r\n\r\nAnd of course the simplest hack is for the duration of your test to simply comment out the `@slow` decorator inside the test file.\r\n",
"oh, but I actually just fixed it here https://github.com/huggingface/transformers/pull/10584 - thank you for unearthing this one. I can close it now."
] | 1,613 | 1,615 | 1,615 | CONTRIBUTOR | null | If someone wants to solve a puzzle, this test:
```
RUN_SLOW=1 pytest examples/seq2seq/test_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_slow
```
works on its own, but fails if it's run in the group with other tests:
```
RUN_SLOW=1 pytest examples/seq2seq/test_finetune_trainer.py
```
it doesn't learn anything - eval_bleu remains 0.0
The only small issue is that the test is being renamed and moved to use `run_seq2seq.py`, so if you're reading this in a few days, most likely it will be the following case instead - which has the exact same problem:
```
RUN_SLOW=1 pytest examples/tests/trainer/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_slow
```
works on its own, but fails if it's run in the group with other tests:
```
RUN_SLOW=1 pytest examples/tests/trainer/test_trainer_ext.py
```
it doesn't learn anything - eval_bleu remains 0.0
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10166/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10165 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10165/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10165/comments | https://api.github.com/repos/huggingface/transformers/issues/10165/events | https://github.com/huggingface/transformers/issues/10165 | 807,669,826 | MDU6SXNzdWU4MDc2Njk4MjY= | 10,165 | [example scripts] inconsistency around eval vs val | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
},
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"While what you say make sense, I'm unsure it warrants a new change of argument names on all example scripts as it seems more cosmetic to me.\r\n\r\nThe `TrainingArguments` have the proper mode already (`--do_train`, `--do_eval`, `--do_predict`) so it's only the examples. We're less attached to no breaking changes there for now but we will soon need them to be production ready, so if we want to do this, it should be done by end of next week if possible.\r\n\r\nAlso cc @LysandreJik ",
"> we will soon need them to be production ready\r\n\r\nThis examples-are-not-production and examples-are-production back and forth depending on the context is incredibly difficult to sustain.\r\n\r\nI'm proposing to improve clarity and consistency so that the user can have an easier understanding, and if these are honest examples then the goal is to improve that - exemplification. If you make a slide presentation and people find typos in it, you don't say, but I already showed it to a group so the typos have to remain. Examples are that slide presentation that try to do the best exemplification of the core code. At least based on much feedback I received on my PRs.\r\n\r\nAnd if we want to have programs that are production quality see my rfc https://github.com/huggingface/transformers/issues/10155. I really hope this project will make a clear cut decision wrt this back and forth and stand by it. \r\n\r\nMy fantasy is that there will be:\r\n1. examples - change and improve those any time to make things easier - these are living and working tutorials. Nothing is fixed in stone here and which are a subject to evolve at any moment. No fixed API, but really easy to understand the code.\r\n2. apps - production quality programs that are part of the core, with well thought out API, clean refactored code, thorough testing and tests relying on these apps to do testing.\r\n",
"Not really my area of expertise here, but I do agree with @stas00 -> I think we should keep the liberty of quickly adapting the examples",
"> Are metrics reporting stats on a split or a mode?\r\n> A. split - rename all metrics keys to be `train|val|test`\r\n> B. mode - rename all metrics keys to be `train|eval|predict`\r\n\r\nSo what it should be? - either A or B - the current `train|eval|test` is a very odd amalgamation of A and B. Unless we just say that the `validation` set is really an `evaluation` set and let it be.\r\n\r\nThis impacts the results json files too in example scripts.\r\n",
"This really is broken. I was just trying to write some code that was using the splits and again run into this some args are \"*val*\" others \"*eval*\" :( so can't even write automated attribute retrieval by split and have to write the code as:\r\n\r\n```\r\ndef get_actual_samples(self, split):\r\n\r\n if not split in [\"train\", \"eval\", \"test\"]:\r\n raise ValueError(f\"Unknown split {split}\")\r\n\r\n dataset = getattr(self, f\"{split}_dataset\")\r\n split_fixed = split if split != \"eval\" else \"val\"\r\n max_samples_arg = getattr(self.args, f\"max_{split_fixed}_samples\")\r\n max_samples = max_samples_arg if max_samples_arg is not None else len(dataset)\r\n return min(max, len(dataset))\r\n```\r\n\r\nas you can see this is so strange.\r\n\r\nPlease, please, please - make your vote and let's make the examples use either the splits or the modes and not the mix of both:\r\n\r\nA. split - rename all metrics keys to be `train|val|test`\r\nB. mode - rename all metrics keys to be `train|eval|predict`\r\n\r\nTo remind currently it's:\r\n- `train|val-eval|test`\r\n\r\nIf option B is chosen we rename all cl arg keys + metrics: \"val\" => \"eval\" and \"test\" => \"predict\"\r\nIf option A is chosen we rename all cl arg keys + metrics: \"eval\" => \"val\" \r\n\r\nMy vote is B: `train|eval|predict` because we are reporting on and configuring a specific Trainer mode and not the split.\r\n\r\n\r\n",
"I vote for B, for consistency with `do_train`, `do_eval`, `do_predict`.\r\n\r\nFor examples: switching an arg name can be done without taking precautions for BC as long as the README is updated at the same time, but for `TrainingArguments `(if any is concerned), a proper deprecation cycle has to be made.",
"@bhadreshpsavani, would this be something you'd like to work on by chance? If you haven't tired of examples yet.",
"Hi @stas00,\r\nYa i will be happy to work more.\r\nActually I was looking for some issues to work on!",
"Awesome! Thank you, @bhadreshpsavani!\r\n\r\nSo the changes we need are:\r\n\r\n1. use `eval` instead of `val`\r\n2. use `predict` instead of `test`\r\n\r\nin cl args and variable names in example scripts (only the active ones, please ignore legacy/research subdirs).\r\n\r\nI hope this will be a last rename in awhile.\r\n",
"Hi @stas00, \r\nIn the dataset we have `validation` as a key for proper conversion shall we also need to change it to `evaluation`?\r\nFor `validation_file` we can either change it to `evalution_file` or `eval_file` or keep it as it is.",
"No the key in the dataset dictionary is \"validation\", so it should be `validation_file`.",
"While testing my changes I come to know that few example scripts are not working fine before my changes!\r\nHere is the List:\r\n```\r\n/language-modeling/run_clm.py \r\n/language-modeling/run_plm.py\r\n/question-answering/run_qa_beam_search.py\r\n```\r\nPlease check/run this [colab](https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/TestingAllHuggingfaceScripts_.ipynb) for instant testing.\r\nWhen I run the above script locally in ubuntu my system got freeze.",
"I have no luck using colab today, it doesn't connect at all, so I can't test.\r\n\r\nI run the clm test as you posted it on my own machine and it worked just fine.\r\n\r\nIs it only colab that it's failing on? ",
"Hi @stas00,\r\nIt actually not giving any error but after a few epochs of training, it gives something like this ^C, and it's not doing further stages like eval and predict. ",
"Does it silently abort the run w/o any traceback? That often means the system run out of RAM and the kernel killed the process - often you get no response. colab is notorious for giving a tiny amount of RAM.\r\n\r\nI hope colab will start working for me again and I will see if I can reproduce this.\r\n\r\nfor hanging there is this trick, add this to the beginning of the program:\r\n\r\n```\r\nimport faulthandler\r\nfaulthandler.dump_traceback_later(20, repeat=True)\r\n```\r\nnow every 20 secs it will print out where each thread is (traceback). super handy!\r\n",
"There is also the on demand version:\r\n```\r\n# register and then kill the process w/ stack trace\r\nimport faulthandler, signal\r\nfaulthandler.register(signal.SIGUSR1)\r\n# kill the stuck process\r\nkill -USR1 PID\r\n```\r\nbut it often doesn't work.\r\n\r\n`py-spy` is another handy one but it requires `sudo` so won't work on colab. Unless you start the program with it:\r\n```\r\n# trace a running python application - e.g. when it's hanging or very slow and you want to see the backtrace - one way is using a sighandler - but that requires killing it and already having it installed\r\npip install py-spy\r\nsudo py-spy top --pid PID\r\n# if one has no sudo, start the program via\r\npy-spy -- python myprogram.py\r\n# and then it will attached without sudo\r\n# https://github.com/benfred/py-spy#when-do-you-need-to-run-as-sudo\r\n```\r\n",
"Ya, I think it silently aborted the run w/o any traceback. Might be because it is occupying the entire ram somehow.\r\nSimilar behavior I observed when I run a really big docker image locally.\r\n\r\nI will definitely try this command and dig more!\r\n\r\nThanks a lot for your input. This is really insightful! I will note down this as well :)",
"Yes, this is almost always the case in colab. It's too bad they don't have a simple widget that shows real time memory usage.\r\n\r\nFor your personal machine, always have a huge swap file if you do ML dev. Like 100GB. It will save the day.\r\n\r\nAlso there is a way to protect your desktop from your RAM-hungry commands - you can look into `cgroups`. If you machine crashes a lot because of run away training or jupyter, this is another super useful addition. But there are other way to manage resources.",
"ok, so I had to disable a `Privacy Badger` firefox extension and colab started working.\r\n\r\nFirst, make a habit to start colab with:\r\n```\r\n!free -h\r\n```\r\nsometimes I get 12GB RAM, other times 25GB, 12GB is typically too low for much.\r\n\r\nSo `run_clm` works just fine even on 12GB. I had to use a small bs so edited your cmd lines to limit bs:\r\n\r\n```\r\n!python examples/pytorch/language-modeling/run_clm.py \\\r\n--model_name_or_path gpt2 \\\r\n--dataset_name wikitext \\\r\n--max_train_samples 5 \\\r\n--max_val_samples 5 \\\r\n--dataset_config_name wikitext-2-raw-v1 \\\r\n--do_train \\\r\n--do_eval \\\r\n--output_dir /tmp/test-clm \\\r\n--per_device_eval_batch_size 2 \\\r\n--per_device_train_batch_size 2 \\\r\n--overwrite_output_dir\r\n```\r\nthis worked too:\r\n```\r\n!python examples/pytorch/language-modeling/run_plm.py \\\r\n--model_name_or_path xlnet-base-cased \\\r\n--dataset_name wikitext \\\r\n--max_train_samples 5 \\\r\n--max_val_samples 5 \\\r\n--dataset_config_name wikitext-2-raw-v1 \\\r\n--do_train \\\r\n--do_eval \\\r\n--output_dir /tmp/test-clm \\\r\n--per_device_eval_batch_size 2 \\\r\n--per_device_train_batch_size 2 \\\r\n--overwrite_output_dir\r\n```\r\nand so did:\r\n```\r\n!python examples/pytorch/question-answering/run_qa.py \\\r\n--model_name_or_path distilbert-base-uncased \\\r\n--train_file tests/fixtures/tests_samples/SQUAD/sample.json \\\r\n--validation_file tests/fixtures/tests_samples/SQUAD/sample.json \\\r\n--test_file tests/fixtures/tests_samples/SQUAD/sample.json \\\r\n--do_train \\\r\n--do_eval \\\r\n--do_predict \\\r\n--max_train_samples 5 \\\r\n--max_val_samples 5 \\\r\n--max_test_samples 5 \\\r\n--learning_rate 3e-5 \\\r\n--max_seq_length 384 \\\r\n--doc_stride 128 \\\r\n--version_2_with_negative \\\r\n--output_dir /tmp/debug_squad/ \\\r\n--per_device_eval_batch_size 2 \\\r\n--per_device_train_batch_size 2 \\\r\n--overwrite_output\r\n```\r\n"
] | 1,613 | 1,619 | 1,619 | CONTRIBUTOR | null | * `val` == validation set (split)
* `eval` == evaluation (mode)
those two are orthogonal to each other - one is a split, another is a model's run mode.
the trainer args and the scripts are inconsistent around when it's `val` and when it's `eval` in variable names and metrics.
examples:
* `eval_dataset` but `--validation_file`
* `eval_*` metric keys for the validation dataset - why are prediction metrics then keyed `test_*`?
* `data_args.max_val_samples` vs `eval_dataset` in the same line
the 3 parallels:
- `train` is easy - it's both the process and the split
- `prediction` is almost never used in the scripts; it's all `test` - in var names, metrics and cl args
- `eval` vs `val` vs `validation` is very inconsistent. When writing tests I'm never sure whether I'm looking up an `eval_*` or a `val_*` key. And one could run evaluation on the test dataset.
Perhaps asking a question would help and then a consistent answer is obvious:
Are metrics reporting stats on a split or a mode?
A. split - rename all metrics keys to be `train|val|test`
B. mode - rename all metrics keys to be `train|eval|predict`
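To make option B concrete, here is a purely illustrative sketch (a hypothetical helper, not existing code) of the kind of per-mode lookup it would allow, with no `val`/`eval` special-casing:
```python
# Hypothetical sketch of option B ("train|eval|predict"): with one consistent
# prefix per mode, per-mode attribute lookup needs no "val" vs "eval" fix-up.
def get_actual_samples(self, mode):
    if mode not in ("train", "eval", "predict"):
        raise ValueError(f"Unknown mode {mode}")
    dataset = getattr(self, f"{mode}_dataset")                # e.g. eval_dataset
    max_samples = getattr(self.args, f"max_{mode}_samples")   # e.g. max_eval_samples
    if max_samples is None:
        max_samples = len(dataset)
    return min(max_samples, len(dataset))
```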
Thank you.
@sgugger, @patil-suraj, @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10165/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10164 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10164/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10164/comments | https://api.github.com/repos/huggingface/transformers/issues/10164/events | https://github.com/huggingface/transformers/issues/10164 | 807,658,996 | MDU6SXNzdWU4MDc2NTg5OTY= | 10,164 | [example scripts] disambiguate language specification API | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Regarding \"case 1\", only the \"old\" T5 models: `t5-small`, `t5-base`, `t5-large`, `t5-3b` and `t5-11b` were trained with the `source_prefix` and not the new T5 models. Also IMO, there is a very legitimate case that people might want to fine-tune `t5-small`, `t5-base`, ... on translation, but don't want to condition the model on the prefix. In the second case, I agree more that we should probably raise if `--target_lang` and/or `--source_lang` are not given. For the first case, I'm fine with adding a warning",
"So if it is one of the 4 models that you listed print a warning to set `--source_prefix` if it's an exact match for model name and the flag wasn't passed, right?\r\n\r\nIs it just `run_seq2seq.py` or are there any other scripts that need these 2 special supports?",
"Yeah, I think this would be a good idea! Think it's only T5 so only `run_seq2seq.py`",
"I can take this on #10611, what I'd do is remove `source_prefix` and map `(src,tgt)` pairs to matching `source_prefix` values when the `model_name` matches the \"older\" T5 models.\r\nBut I'd need said mapping, do you have pointers?",
"Skimming through the T5 paper it seems the mapping is quite small, `en->{fr,de,ro}`? \r\nIf so no need to build an exhaustive mapping of 2 letter ISO codes to capitalized language names, and I can issue a warning when `{src,tgt}_lang` is out of \"supported\" language-pairs.",
"This is for the pre-trained models, but if a user provides their own model it could be any language.\r\n\r\nPlus you have https://github.com/google-research/multilingual-t5.\r\n\r\nI wonder if there is a python module that comes with such a map.",
"https://gist.github.com/carlopires/1262033/c52ef0f7ce4f58108619508308372edd8d0bd518",
"Language mapping:\r\nHere you go: https://github.com/LuminosoInsight/langcodes\r\nor another alternative: https://github.com/janpipek/iso639-python",
"Thanks, I had found a couple of mappings in json as well, should we hardcode them or use an external dependency?",
"we require running `pip install -r examples/seq2seq/requirements.txt` already, so why not follow suite.",
"This issue hasn't been resolved. @theo-m solved it initially and linked to it, but then the group didn't like the solution and it was reverted. So a user is still required to enter the language pair twice. Not a great example.",
"well, it looks like the overall agreement is that examples don't have to be perfect, they are just examples."
] | 1,613 | 1,616 | 1,616 | CONTRIBUTOR | null | Currently in example scripts like `run_seq2seq.py` we have:
1. for t5
```
--task translation_en_to_ro
--source_prefix "translate English to Romanian: "
```
2. Also these 2:
```
--target_lang ro_RO
--source_lang en_XX
```
are used only for MBart and are ignored for other models. Which means that people will unknowingly try to use these two as well when they aren't need.
The problem in both situations is that we provide an error-prone API: a user who wants to change the language may forget that it is specified in more than one place and update only one of the settings, which leads to a broken outcome.
If such an error is made the specification supplied by the user becomes ambiguous, because one can't tell which of the multiple inputs takes precedence.
Proposal: There should be only one way to input a set of languages and not multiple ways.
Specifically:
- in case 1, probably the easiest is to leave `--task translation_en_to_ro` and auto-generate `--source_prefix "translate English to Romanian: "`
- in case 2, assert if `--target_lang` or `--source_lang` are passed and the model is not MBart.
Thinking more about it, case 1 is a must to solve, because if a user misses `--source_prefix` or makes a typo in it, the train/eval won't fail but will mysteriously produce a really bad outcome. This is not user-friendly.
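To illustrate the case 1 proposal, here is a rough sketch of how the prefix could be derived from `--task` (the language-name table below is a made-up stub for illustration, not an existing utility):
```python
# Rough illustration only: derive T5's text prefix from the --task value so the
# user never has to spell out the language pair twice. The name table is a stub.
LANG_NAMES = {"en": "English", "ro": "Romanian", "de": "German", "fr": "French"}

def prefix_from_task(task: str) -> str:
    # e.g. "translation_en_to_ro" -> "translate English to Romanian: "
    _, src, _, tgt = task.split("_")
    return f"translate {LANG_NAMES[src]} to {LANG_NAMES[tgt]}: "

assert prefix_from_task("translation_en_to_ro") == "translate English to Romanian: "
```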
@sgugger, @patrickvonplaten, @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10164/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10163 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10163/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10163/comments | https://api.github.com/repos/huggingface/transformers/issues/10163/events | https://github.com/huggingface/transformers/issues/10163 | 807,648,782 | MDU6SXNzdWU4MDc2NDg3ODI= | 10,163 | Increasing gradient accummulation steps significantly slows down training | {
"login": "keleog",
"id": 11840053,
"node_id": "MDQ6VXNlcjExODQwMDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/11840053?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/keleog",
"html_url": "https://github.com/keleog",
"followers_url": "https://api.github.com/users/keleog/followers",
"following_url": "https://api.github.com/users/keleog/following{/other_user}",
"gists_url": "https://api.github.com/users/keleog/gists{/gist_id}",
"starred_url": "https://api.github.com/users/keleog/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keleog/subscriptions",
"organizations_url": "https://api.github.com/users/keleog/orgs",
"repos_url": "https://api.github.com/users/keleog/repos",
"events_url": "https://api.github.com/users/keleog/events{/privacy}",
"received_events_url": "https://api.github.com/users/keleog/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger \r\n@LysandreJik \r\n\r\npls help",
"A reported step is a training step (with an optimizer pass). When you increase gradient accumulation, you take more input batches to do one step, so it's normal to have less training steps per second.\r\n\r\nPlease note that the issues are for bugs and feature requests only, general questions like this one should go on the [forums](https://discuss.huggingface.co/), which is why I'm closing this.",
"Yeah, I am aware that you take more input batches to do one step, so it's normal to have less training steps per second. However, the actual training time is much longer. Is this normal? Shouldn't it be faster or at least equals to a gradient accumulation step of 1. ",
"You did not report total training it. Since there are 4 (or 8) times less batches it should stay the same even if you have a slower iteration per second total."
] | 1,613 | 1,613 | 1,613 | NONE | null | When training with a batch size of 32 (gradient accumulation steps = 1), training speed is approximately 6 it/s; however, when I increase gradient accumulation steps to 4 or 8 (equivalent to a batch size of 128 or 256), speed drops to 1.03 it/s.
Is this expected behaviour?
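For reference, one way to sanity-check such numbers is to compare samples per second rather than iterations per second, since each reported iteration consumes `gradient_accumulation_steps` micro-batches. A small sketch that just plugs in the figures quoted above:
```python
# Convert the reported it/s into samples/s so runs with different gradient
# accumulation settings can be compared directly (values taken from this issue).
per_device_bs = 32
for accum_steps, its_per_sec in [(1, 6.0), (4, 1.03)]:
    samples_per_sec = its_per_sec * per_device_bs * accum_steps
    print(f"accumulation={accum_steps}: ~{samples_per_sec:.0f} samples/s")
```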
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.1
- Platform: Linux
- Python version: 3.7.4
- PyTorch version (GPU?): 1.7.1+cu101
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Distributed
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
@sgugger
@patrickvonplaten
@LysandreJik
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): XLMR
The problem arises when using:
* [ ] the official example scripts: (give details below) Trainer script
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below) masked language model training
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10163/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10162 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10162/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10162/comments | https://api.github.com/repos/huggingface/transformers/issues/10162/events | https://github.com/huggingface/transformers/pull/10162 | 807,646,281 | MDExOlB1bGxSZXF1ZXN0NTcyODYwMzk1 | 10,162 | fix run_seq2seq.py; porting trainer tests to it | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"OK, I decided to go ahead and port the other scripts instead of waiting for merging of the first set. Had to make some more fixes in the script while at it.\r\n\r\n\r\n"
] | 1,613 | 1,613 | 1,613 | CONTRIBUTOR | null | This PR:
- restores some of the essential functionality that was dropped from `finetune_trainer.py` - I'm almost sure this is still far from complete since so much was dropped
- ports the wmt_en_ro test data to `jsonlines` (a minimal conversion sketch is shown after this list) - I moved the test dataset into the root of examples so that it can be accessed by a variety of sub-projects.
- ports DeepSpeed tests to use `run_seq2seq.py`
- ports the other trainer script to use `run_seq2seq.py`
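For reference, the `.source`/`.target` pairs can be converted to the `jsonlines` layout `run_seq2seq.py` reads along these lines (a simplified sketch; file names are illustrative and the exact script used for this PR may differ):
```python
# Simplified sketch (not necessarily the exact script used for this PR):
# turn parallel .source/.target files into the jsonlines format with a
# {"translation": {src_lang: ..., tgt_lang: ...}} record per line.
import json

def to_jsonlines(source_path, target_path, out_path, src_lang="en", tgt_lang="ro"):
    with open(source_path) as src, open(target_path) as tgt, open(out_path, "w") as out:
        for s, t in zip(src, tgt):
            record = {"translation": {src_lang: s.strip(), tgt_lang: t.strip()}}
            out.write(json.dumps(record, ensure_ascii=False) + "\n")

# to_jsonlines("wmt_en_ro/train.source", "wmt_en_ro/train.target", "wmt_en_ro/train.json")
```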
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10162/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10162",
"html_url": "https://github.com/huggingface/transformers/pull/10162",
"diff_url": "https://github.com/huggingface/transformers/pull/10162.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10162.patch",
"merged_at": 1613409137000
} |
https://api.github.com/repos/huggingface/transformers/issues/10161 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10161/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10161/comments | https://api.github.com/repos/huggingface/transformers/issues/10161/events | https://github.com/huggingface/transformers/issues/10161 | 807,568,069 | MDU6SXNzdWU4MDc1NjgwNjk= | 10,161 | Seq2seq now has larger memory requirements, OOM w/Deepspeed on previously runnable models | {
"login": "PeterAJansen",
"id": 3813268,
"node_id": "MDQ6VXNlcjM4MTMyNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3813268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PeterAJansen",
"html_url": "https://github.com/PeterAJansen",
"followers_url": "https://api.github.com/users/PeterAJansen/followers",
"following_url": "https://api.github.com/users/PeterAJansen/following{/other_user}",
"gists_url": "https://api.github.com/users/PeterAJansen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PeterAJansen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PeterAJansen/subscriptions",
"organizations_url": "https://api.github.com/users/PeterAJansen/orgs",
"repos_url": "https://api.github.com/users/PeterAJansen/repos",
"events_url": "https://api.github.com/users/PeterAJansen/events{/privacy}",
"received_events_url": "https://api.github.com/users/PeterAJansen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"it's there:\r\n```\r\n./run_seq2seq.py -h | grep deepspeed\r\n [--sharded_ddp [SHARDED_DDP]] [--deepspeed DEEPSPEED]\r\n --deepspeed DEEPSPEED\r\n Enable deepspeed and pass the path to deepspeed json\r\n```\r\n\r\nof course, it would OOM w/o `--deepspeed` in your situation.\r\n\r\nand you could just \r\n\r\n```\r\npip install deepspeed==0.3.10\r\n```\r\ntoo ;)\r\n\r\nAnd I don't know if `xsum` dataset is the same. The one we used with `finetune_trainer.py` was hand-cured, see: https://github.com/huggingface/transformers/issues/10044 I'm trying to figure out how to make these available through the dataset hub.\r\n",
"> it's there:\r\n> \r\n> ```\r\n> ./run_seq2seq.py -h | grep deepspeed\r\n> [--sharded_ddp [SHARDED_DDP]] [--deepspeed DEEPSPEED]\r\n> --deepspeed DEEPSPEED\r\n> Enable deepspeed and pass the path to deepspeed json\r\n> ```\r\n> \r\n> of course, it would OOM w/o `--deepspeed` in your situation.\r\n> \r\n\r\nUgh. Sorry, my toddler didn't sleep well last night. Maybe I should just hang up my compiler for the day. Of course I just looked with my eyeballs instead of grep, and it's one of like three lines in the enormous parameter listing with a second parameter on the same line. :) \r\n\r\n> and you could just\r\n> \r\n> ```\r\n> pip install deepspeed==0.3.10\r\n> ```\r\n> \r\n> too ;)\r\n> \r\n\r\nI use the ./install.sh script because of that issue with the A100 architecture (80) seemingly not included by default. I haven't followed up to check if that's fixed in the last few weeks.\r\n\r\n> And I don't know if `xsum` dataset is the same. The one we used with `finetune_trainer.py` was hand-cured, see: #10044 I'm trying to figure out how to make these available through the dataset hub.\r\n\r\nThe behavior when running is a bit different -- I put xsum in the examples/seq2seq folder, but it downloaded a fresh copy from the dataset hub and used it, so that should be okay.\r\n\r\n\r\n\r\nWhen running with the deepspeed option:\r\n```\r\nexport OUTPUTDIR=tst-summarization\r\nexport BS=1; rm -rf $OUTPUTDIR; PYTHONPATH=../../src USE_TF=0 /usr/bin/time -v deepspeed --num_gpus=4 ./run_seq2seq.py \\\r\n --model_name_or_path allenai/unifiedqa-t5-11b \\\r\n --do_train \\\r\n --do_eval \\\r\n --do_predict \\\r\n --task summarization \\\r\n --dataset_name xsum \\\r\n --output_dir $OUTPUTDIR \\\r\n --per_device_train_batch_size=$BS \\\r\n --per_device_eval_batch_size=$BS \\\r\n --overwrite_output_dir \\\r\n --predict_with_generate \\\r\n --max_train_samples 500 \\\r\n --max_val_samples 100 \\\r\n --max_test_samples 100 \\\r\n --deepspeed ../tests/deepspeed/ds_config.json \\\r\n```\r\n\r\nIt gets a little further, but then still OOMs:\r\n\r\n```\r\nRuntimeError: CUDA out of memory. Tried to allocate 18.00 MiB (GPU 2; 39.59 GiB total capacity; 36.92 GiB already allocated; 4.69 MiB free; 37.30 GiB reserved in total by PyTorch)\r\nTraceback (most recent call last):\r\n File \"./run_seq2seq.py\", line 629, in <module>\r\n main()\r\n File \"./run_seq2seq.py\", line 561, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/home/pajansen/github/transformers-feb12-2021/transformers/src/transformers/trainer.py\", line 960, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"/home/pajansen/github/transformers-feb12-2021/transformers/src/transformers/trainer.py\", line 1346, in training_step\r\n self.deepspeed.backward(loss)\r\n File \"/home/pajansen/anaconda3/envs/transformers-feb12-2021/lib/python3.8/site-packages/deepspeed/runtime/engine.py\", line 845, in backward\r\n self.optimizer.backward(loss)\r\n File \"/home/pajansen/anaconda3/envs/transformers-feb12-2021/lib/python3.8/site-packages/deepspeed/runtime/zero/stage2.py\", line 1603, in backward\r\n buf_1 = torch.empty(int(self.reduce_bucket_size * 4.5),\r\nRuntimeError: CUDA out of memory. Tried to allocate 1.68 GiB (GPU 1; 39.59 GiB total capacity; 35.88 GiB already allocated; 840.69 MiB free; 36.48 GiB reserved in total by PyTorch)\r\n 0%|▍ | 1/375 [00:09<58:33, 9.39s/it]\r\n```\r\n\r\nThe ds_config.json bucket sizes are 2e8. 
I'm not sure I've run xsum before, so it's not clear to me if that just needs to be tinkered with (I'll try a few more values, and report back if that solves it). \r\n\r\n\r\n\r\n",
"(FYI It does look like training works on: \r\n\r\nhttps://github.com/huggingface/transformers/commit/c130e67dce56a092604949a8df6384a17f762189\r\n\r\nConfirming your suggestion that the change probably happened in #10114 )",
"Thank your validating that, @PeterAJansen. I will research and get back to you hopefully with a better solution.",
"Just an update on the new script - I finally managed to get it to produce an equivalent bleu score:\r\n\r\nNeeded to convert the dataset into `jsonlines` see https://github.com/huggingface/transformers/issues/10036 and multiple other changes, the most easy to miss (as it won't fail but produce abysmal results) is the one at the end of this comment.\r\n\r\nand then the script is:\r\n```\r\nexport BS=16; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 python ./run_seq2seq.py \\\r\n--model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 \\\r\n--train_file /hf/transformers-master/examples/seq2seq/wmt_en_ro/train.json \\\r\n--validation_file /hf/transformers-master/examples/seq2seq/wmt_en_ro/val.json \\\r\n--do_eval --do_train --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step \\\r\n--logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir \\\r\n--per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 \\\r\n --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 \\\r\n--max_train_samples 2000 --max_val_samples 500 --source_prefix \"translate English to Romanian: \"\r\n```\r\n\r\nNote the important new addition `--source_prefix \"translate English to Romanian: \"` - w/o it the score is close to 0 as the new script doesn't translate for t5 automatically - I advocate to change that, but time will show.\r\n\r\nI'm not sure if `xsum` dataset is the same - didn't get to it yet.\r\n\r\nSo with summarization you most likely need to add --source_prefix \"summarize: \"",
"Further update: I ported the wmt pre-processed data to HF `datasets`, so now the dataset fetching is automated:\r\n```\r\nexport BS=16; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 python ./run_seq2seq.py \\\r\n--model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 \\\r\n--do_eval --do_train --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step \\\r\n--logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir \\\r\n--per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 \\\r\n --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 \\\r\n--max_train_samples 2000 --max_val_samples 500 --source_prefix \"translate English to Romanian: \" \\\r\n--dataset_name wmt16-en-ro-pre-processed\r\n```",
"@PeterAJansen, so I have been thinking about that change that I introduced that you discovered made it impossible to eval the 45GB model on 40GB card. But the thing is, before the change, you were using an fp16 version remaining from train - during eval, which from what I understand may not give good accuracy - have you run evaluation and received good results?\r\n\r\nI'm trying to see whether the Trainer should support fp16 in eval.\r\n\r\nThe tricky issue is that currently we switch `.to(device)` in trainer's init, so this will have to be re-worked somehow. But first I would love to hear if that work on t5-11b quality-wise. `model.half()` will require only 22GB\r\n \r\nAs a quick test if you're doing `eval` only and no training it could be hacked by putting it before switching to gpu:\r\n\r\nhttps://github.com/huggingface/transformers/blob/1c8c2d9ab34b8c8d326db9e0608f8e54cfccb885/src/transformers/trainer.py#L271-L276\r\n ",
"Hmmm, that's a good question. I've been doing exploration on new data, and the generations looked okay by eye, but I don't have a solid metric to automatically evaluate them right now -- so I can't immediately answer the question of whether the results look good. \r\n\r\nI've had a long run going for about 5 days that should be done in about 10 hours. Is there a test run that one of us could try then to verify that things look good before I stick the next 5-day batch on? :) (perhaps one of the standard t5 evaluation datasets with known performance?). ",
"What task and language are you training/finetuning for, so that we can find a way to compare apples to apples, and might be indicative.\r\n\r\nAnd of course the ultimate test is to compare the scores for the same model before and after the finetuning/training on the same test data.",
"Mine is a big can of worms (a complex inference task, with the data currently being generated by annotators, with no current automated metrics for evaluation) so we should use something different. \r\n\r\nMaybe the WMT task, since it's one of the examples shown in the huggingface seq2seq readme (and the one I used for the example script above to show the bug)? There are published expected results on Table 14 (page 39) in the T5 paper we can use as a guide:\r\n\r\nhttps://arxiv.org/pdf/1910.10683.pdf",
"So if you're running many days of training and you have no way of evaluating the quality improvement what is then the point of this exercise? Just to first know that it can be trained? Which is a totally valid exercise.\r\n\r\nSurely you could establish at least some baseline, to know even roughly if there is an improvement.\r\n\r\nIf the data/task is similar to WMT then yes, it'd be useful. \r\n\r\ne.g. eval en2ro translation:\r\n```\r\nexport BS=16; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python ./run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 --max_val_samples 500 --dataset_name wmt16 --dataset_config \"ro-en\" --source_prefix \"translate English to Romanian: \"\r\n...\r\n02/16/2021 10:45:50 - INFO - __main__ - ***** val metrics *****\r\n02/16/2021 10:45:50 - INFO - __main__ - val_bleu = 24.1257\r\n02/16/2021 10:45:50 - INFO - __main__ - val_gen_len = 39.554\r\n02/16/2021 10:45:50 - INFO - __main__ - val_loss = 3.7917\r\n02/16/2021 10:45:50 - INFO - __main__ - val_runtime = 18.2931\r\n02/16/2021 10:45:50 - INFO - __main__ - val_samples = 500\r\n02/16/2021 10:45:50 - INFO - __main__ - val_samples_per_second = 27.333\r\n```\r\n\r\nnote that the eval scores are very language pair-specific - the variations between various pairs can be huge.",
"The short answer is, I work in an area that doesn't yet have good automated metrics for evaluating generation quality, and so we typically evaluate them manually (which takes a lot of time, typically from research assistants -- part of what we're working on right now is figuring out reasonable automated metrics). But we still know from other earlier work and analyses that we've done that pre-training on related data helps, so that's what I'm doing now (the long early tail of pre-training). While I know that pre-training helps from past work, I can't easily evaluate it online -- I have to run the set, then evaluate it manually. \r\n\r\nBut all that is unrelated to the original question, whether T5-11B fp16 evaluation (in general, not paired to a specific dataset) has an issue or works okay relative to fp32:\r\n\r\n> @PeterAJansen, so I have been thinking about that change that I introduced that you discovered made it impossible to eval the 45GB model on 40GB card. But the thing is, before the change, you were using an fp16 version remaining from train - during eval, which from what I understand may not give good accuracy - have you run evaluation and received good results?\r\n> \r\n> I'm trying to see whether the Trainer should support fp16 in eval.\r\n\r\nTo figure that out, we won't be able to use my lab's dataset for various technical reasons, so if there's some minimum benchmarking dataset that helps measure this that works well with automated evaluation, then that would be best to use. :) \r\n",
"Thank you for elucidating your particular situation, @PeterAJansen \r\n\r\nI'm going to run some experiments on fp16 eval against fp32 for t5 w/ wmt and we shall see. If it works well, then we can make fp16-eval available in the Trainer for those who want to try it.",
"Interesting and possibly related bug (on c130e67): \r\n\r\n1) Fune-tuning T5-11B from the model hub (and saving it as. e.g. Model2) works\r\n2) Subsequently further fine-tuning Model 2 (loaded from disk) on different data appears to OOM. \r\n",
"Yes, there are a few places where `model.to(self.args.device)` is called, does the OOM go away if you disable them all - I think there 2 more that aren't conditioned on `deepspeed`.\r\n\r\nMost likely I need to go over and replicated each place where it's done for `self.is_model_parallel` since it's the same circumstances where we don't want the model to be on device right away.\r\n\r\nAlso what was the specific 2nd command line? so that I can add a test\r\n\r\nThank you.",
"This:\r\n```\r\ndiff --git a/src/transformers/trainer.py b/src/transformers/trainer.py\r\nindex 8afae0720..cda1a2822 100755\r\n--- a/src/transformers/trainer.py\r\n+++ b/src/transformers/trainer.py\r\n@@ -792,7 +792,7 @@ class Trainer:\r\n\r\n # If model was re-initialized, put it on the right device and update self.model_wrapped\r\n if model_reloaded:\r\n- if not self.is_model_parallel and self.args.place_model_on_device:\r\n+ if not (self.is_model_parallel or (args.deepspeed and args.do_train)) and self.args.place_model_on_device:\r\n self.model = self.model.to(self.args.device)\r\n self.model_wrapped = self.model\r\n\r\n@@ -1045,7 +1045,7 @@ class Trainer:\r\n )\r\n if isinstance(self.model, PreTrainedModel):\r\n self.model = self.model.from_pretrained(self.state.best_model_checkpoint)\r\n- if not self.is_model_parallel and self.args.place_model_on_device:\r\n+ if not (self.is_model_parallel or (args.deepspeed and args.do_train)) and self.args.place_model_on_device:\r\n self.model = self.model.to(self.args.device)\r\n else:\r\n state_dict = torch.load(os.path.join(self.state.best_model_checkpoint, WEIGHTS_NAME))\r\n```",
"Thanks! I hope to be able to give this diff a test tonight when the current run is done (about 10h left). \r\n\r\n> Also what was the specific 2nd command line? so that I can add a test\r\n\r\nHere are two cases (my exact script, but a distilled version that matches the WMT example at the top of this issue from the readme):\r\n\r\n1. Here is my exact script that I'm using for my experment (the two MODELDIR exports at the top being the critical difference between it working or not working -- the one currently selected is just the output of a past run of this script pointing to different training data): \r\n```\r\n#!/bin/bash\r\nexport DATADIR=/home/pajansen/github/compositional-expl/pretrain/min-6-max-8/ \\\r\nexport MODELDIR=allenai/unifiedqa-t5-11b\r\n#export MODELDIR=output_dir_compexpl-feb8-epoch3-uqa-11b-pretrain-teacher-min4-max5\r\nexport SEQLEN=256 \\\r\nexport EPOCHS=3 \\\r\nexport OUTPUTDIR=output_dir_compexpl-feb16-epoch$EPOCHSS-uqa-11b-pretrain-teacher-min6-max8 \\\r\n\r\nexport BS=1; rm -rf $OUTPUTDIR; PYTHONPATH=../../src USE_TF=0 /usr/bin/time -v deepspeed --num_gpus=4 ./finetune_trainer.py --model_name_or_path $MODELDIR --output_dir $OUTPUTDIR --adam_eps 1e-06 --data_dir $DATADIR \\\r\n--do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 \\\r\n--logging_first_step --logging_steps 5000 --max_source_length $SEQLEN --max_target_length $SEQLEN --num_train_epochs $EPOCHS \\\r\n--overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS \\\r\n--predict_with_generate --sortish_sampler \\\r\n--test_max_target_length $SEQLEN --val_max_target_length $SEQLEN \\\r\n--warmup_steps 5 \\\r\n--deepspeed ../tests/deepspeed/ds_config.json --fp16 \\\r\n--save_total_limit 2 \\\r\n--save_steps 5000 \\\r\n```\r\n\r\n2. But, here's a distilled version, using the WMT example, that should illustrate the issue (but I haven't run this one). 
The call is identical here, it's just the OUTPUTDIRx and MODELDIRx environment variables that change (though in practice, like above, you'd want to change the data you're fine tuning with, too):\r\n```\r\n# Step 1: Fine-tune base model with dataset 1\r\nexport OUTPUTDIR1=tst-summarization-step1\r\nexport MODELDIR1=allenai/unifiedqa-t5-11b\r\nexport BS=1; rm -rf $OUTPUTDIR1; PYTHONPATH=../../src USE_TF=0 /usr/bin/time -v deepspeed --num_gpus=4 ./run_seq2seq.py \\\r\n --model_name_or_path $MODELDIR1 \\\r\n --do_train \\\r\n --do_eval \\\r\n --do_predict \\\r\n --task summarization \\\r\n --dataset_name xsum \\\r\n --output_dir $OUTPUTDIR \\\r\n --per_device_train_batch_size=$BS \\\r\n --per_device_eval_batch_size=$BS \\\r\n --overwrite_output_dir \\\r\n --predict_with_generate \\\r\n --max_train_samples 500 \\\r\n --max_val_samples 100 \\\r\n --max_test_samples 100 \\\r\n\r\n# Step 2: Further fine-tune model saved in Step 1 with new data\r\n# Also pretend that the dataset_name is different here (suggesting fine-tuning the model from Step 1 using a different dataset -- but just for the test, fine-tuning twice on the same dataset should illustrate the OOM issue)\r\nexport OUTPUTDIR2=tst-summarization-step2\r\nexport MODELDIR2=$OUTPUTDIR1\r\nexport BS=1; rm -rf $OUTPUTDIR2; PYTHONPATH=../../src USE_TF=0 /usr/bin/time -v deepspeed --num_gpus=4 ./run_seq2seq.py \\\r\n --model_name_or_path $MODELDIR2 \\\r\n --do_train \\\r\n --do_eval \\\r\n --do_predict \\\r\n --task summarization \\\r\n --dataset_name xsum \\\r\n --output_dir $OUTPUTDIR \\\r\n --per_device_train_batch_size=$BS \\\r\n --per_device_eval_batch_size=$BS \\\r\n --overwrite_output_dir \\\r\n --predict_with_generate \\\r\n --max_train_samples 500 \\\r\n --max_val_samples 100 \\\r\n --max_test_samples 100 \\\r\n```\r\n",
"Thank you for the details, @PeterAJansen - hoping to validate later in the day, but meanwhile this PR should solve it https://github.com/huggingface/transformers/pull/10243 (i.e. instead of the patch I sent last night).\r\n\r\n**edit** PR merged, so master should be OK.\r\n",
"Questions:\r\n1. This is with non-master version but then one before the fateful PR of mine, correct? since `eval` currently won't fit 45GB onto 22GB - I'm working on a solution.\r\n2. can you check if the saved model is bigger than the original? my feeling is that something else gets tacked onto the model that wasn't there in the original.\r\n\r\n I developed a new memory usage metrics feature: https://github.com/huggingface/transformers/pull/10225 so that should make it possible to identify and debug such problems on a much smaller model. You will probably find it useful too.\r\n \r\n So I should be well equipped to run your failing scenario now.",
"FYI, master has a new Trainer flag `--fp16_full_eval` https://github.com/huggingface/transformers/pull/10268 so now you should be able to eval at fp16 and be able to fit t5-11b onto 40gb gpu. It may or may not do what you want quality-wise, since `model.half()` doesn't always produce the desired results. But it does restore the original deepspeed/trainer non-deepspeed eval ability to fit in fp16.\r\n\r\nStill need to check on your 2 step scenario OOM report, @PeterAJansen ",
"another update: DS currently locks one in if one wants to be able to access the fp32 model, see https://github.com/microsoft/DeepSpeed/issues/797\r\nonce they add a method to extract the fp32 model https://github.com/microsoft/DeepSpeed/issues/800 then we can sort this out.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,613 | 1,619 | 1,619 | NONE | null | (A continuation of #10149 , since it looks like it's a broader issue:)
It looks like seq2seq has changed in the past week, and now gives out-of-memory errors for @stas00 's impressive recent DeepSpeed work that allowed training/predicting e.g. T5-11B on a single 40GB card.
Here's a simple repeatable example using the newer scripts:
### Run script:
```
export OUTPUTDIR=tst-summarization
export BS=1; rm -rf $OUTPUTDIR; PYTHONPATH=../../src USE_TF=0 /usr/bin/time -v deepspeed --num_gpus=4 ./run_seq2seq.py \
--model_name_or_path allenai/unifiedqa-t5-11b \
--do_train \
--do_eval \
--do_predict \
--task summarization \
--dataset_name xsum \
--output_dir $OUTPUTDIR \
--per_device_train_batch_size=$BS \
--per_device_eval_batch_size=$BS \
--overwrite_output_dir \
--predict_with_generate \
--max_train_samples 500 \
--max_val_samples 100 \
--max_test_samples 100 \
```
(One note: Should I be adding a --deepspeed option as with the old finetune_trainer.py (I am not seeing it in the list of options)? And if so, should it be pointing to the new location for the config file ( ../tests/deepspeed/ds_config.json ), or does it use this location by default?)
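(If the `--deepspeed` argument is in fact exposed via `Seq2SeqTrainingArguments` the way it was for the old `finetune_trainer.py`, the command above would presumably just gain the two extra flags used previously. This is an assumption rather than something verified against the new script, and the config path below is only the location used in the old examples:)

```bash
# Sketch only: assumes ./run_seq2seq.py accepts the same Trainer-level
# --deepspeed / --fp16 flags that ./finetune_trainer.py did.
export OUTPUTDIR=tst-summarization
export BS=1
deepspeed --num_gpus=4 ./run_seq2seq.py \
    --model_name_or_path allenai/unifiedqa-t5-11b \
    --do_train --do_eval --do_predict \
    --task summarization \
    --dataset_name xsum \
    --output_dir $OUTPUTDIR \
    --per_device_train_batch_size=$BS \
    --per_device_eval_batch_size=$BS \
    --overwrite_output_dir \
    --predict_with_generate \
    --max_train_samples 500 --max_val_samples 100 --max_test_samples 100 \
    --deepspeed ../tests/deepspeed/ds_config.json \
    --fp16
```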
### Conda Environment:
```
# Make new environment
conda create --name transformers-feb12-2021 python=3.8
conda activate transformers-feb12-2021
# Clone transformers
git clone https://github.com/huggingface/transformers.git
cd transformers
# Install nightly build of Pytorch
pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html -U
# Install seq2seq transformers requirements
pip install -r examples/seq2seq/requirements.txt
# Install transformers
pip install -e .
# Install DeepSpeed from source for the A100 support
cd ..
git clone https://github.com/microsoft/DeepSpeed.git
cd DeepSpeed/
# Checkout release for DeepSpeed 0.3.10 (to avoid AMD bug in latest)
git checkout c14b839d9
./install.sh
pip install .
```
### Error:
```
...
RuntimeError: CUDA out of memory. Tried to allocate 256.00 MiB (GPU 2; 39.59 GiB total capacity; 37.87 GiB already allocated; 40.69 MiB free; 37.88 GiB reserved in total by PyTorch)
Traceback (most recent call last):
File "./run_seq2seq.py", line 629, in <module>
main()
File "./run_seq2seq.py", line 543, in main
trainer = Seq2SeqTrainer(
File "/home/pajansen/github/transformers-feb12-2021/transformers/src/transformers/trainer.py", line 276, in __init__
model = model.to(args.device)
File "/home/pajansen/anaconda3/envs/transformers-feb12-2021/lib/python3.8/site-packages/torch/nn/modules/module.py", line 673, in to
return self._apply(convert)
File "/home/pajansen/anaconda3/envs/transformers-feb12-2021/lib/python3.8/site-packages/torch/nn/modules/module.py", line 387, in _apply
module._apply(fn)
File "/home/pajansen/anaconda3/envs/transformers-feb12-2021/lib/python3.8/site-packages/torch/nn/modules/module.py", line 387, in _apply
module._apply(fn)
File "/home/pajansen/anaconda3/envs/transformers-feb12-2021/lib/python3.8/site-packages/torch/nn/modules/module.py", line 387, in _apply
module._apply(fn)
[Previous line repeated 4 more times]
File "/home/pajansen/anaconda3/envs/transformers-feb12-2021/lib/python3.8/site-packages/torch/nn/modules/module.py", line 409, in _apply
param_applied = fn(param)
File "/home/pajansen/anaconda3/envs/transformers-feb12-2021/lib/python3.8/site-packages/torch/nn/modules/module.py", line 671, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA out of memory. Tried to allocate 256.00 MiB (GPU 3; 39.59 GiB total capacity; 37.87 GiB already allocated; 40.69 MiB free; 37.88 GiB reserved in total by PyTorch)
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10161/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10161/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10160 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10160/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10160/comments | https://api.github.com/repos/huggingface/transformers/issues/10160/events | https://github.com/huggingface/transformers/issues/10160 | 807,557,119 | MDU6SXNzdWU4MDc1NTcxMTk= | 10,160 | past_key_values tuple index out of range error when using text2text-generation pipeline with encoder-decoder model | {
"login": "thominj",
"id": 3819908,
"node_id": "MDQ6VXNlcjM4MTk5MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3819908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thominj",
"html_url": "https://github.com/thominj",
"followers_url": "https://api.github.com/users/thominj/followers",
"following_url": "https://api.github.com/users/thominj/following{/other_user}",
"gists_url": "https://api.github.com/users/thominj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thominj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thominj/subscriptions",
"organizations_url": "https://api.github.com/users/thominj/orgs",
"repos_url": "https://api.github.com/users/thominj/repos",
"events_url": "https://api.github.com/users/thominj/events{/privacy}",
"received_events_url": "https://api.github.com/users/thominj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"I have been digging into this a little bit more and found some information that might be helpful. It looks like the underlying problem is in the EncoderDecoderModel or one of its dependencies, not the pipeline. \r\n\r\n- When I replaced the pipeline call with a manual tokenization and call to the model's generate method, I got the same `tuple index out of range` error for past_key_values.\r\n- When I created the encoder_decoder_model using `transformers.EncoderDecoderModel.from_encoder_decoder_pretrained('roberta-base', 'roberta-base')`, the pipeline prediction worked. \r\n- If I use the AutoModel and AutoModelForCausulLM `from_pretrained` methods to create the encoder and decoder (mirroring the way that `from_encoder_decoder_pretrained` works) and then pass them to the EncoderDecoderModel constructor, I still get the `index out of range` error.\r\n- If I use the `AutoModel.from_pretrained()` methods to create the encoder and decoder, then call `save_pretrained()` on them to save in a local directory, then load them using `EncoderDecoderModel.from_encoder_decoder_pretrained()`, the pipeline prediction works.\r\n\r\nI believe there is some difference between the ways that EncoderDecoderModel's `init()` and `from_encoder_decoder_pretrained()` functions work that is leading to this error, but I haven't been able to figure out what the difference is, or why it is happening.",
"@thominj can you try with `decoder = transformers.RobertaForCausalLM.from_pretrained(pretrained_model_name_or_path='roberta-base', add_cross_attention=True, is_decoder=True, bos_token_id=<bos-id>, eos_token_id=<eos-id>)`?",
"> @thominj can you try with `decoder = transformers.RobertaForCausalLM.from_pretrained(pretrained_model_name_or_path='roberta-base', add_cross_attention=True, is_decoder=True, bos_token_id=<bos-id>, eos_token_id=<eos-id>)`?\r\n\r\nThat worked!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Is this considered to be expected behavior? If so, are add_cross_attention, is_decoder, bos_token_id, and eos_token_id all required for every decoder that can be used in EncoderDecoderModel?",
"HI @thominj \r\n\r\nYes, `add_cross_attention` and `is_decoder` is required if you are initializing the model as a decoder yourself.\r\n\r\nBut if you do \r\n```python\r\nmodel = EncoderDecoderModel. from_encoder_decoder_pretrained(\"roberta-base\", \"roberta-base\")\r\n```\r\n\r\nthen it'll happen automatically, the `from_encoder_decoder_pretrained` method takes care of this.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"The same exception may also be raised when model is in train mode, call model.eval() before may solve this problem. It happened when I use model `BartForConditionalGeneration`."
] | 1,613 | 1,645 | 1,621 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.3.0
- Platform: Linux-5.4.0-65-generic-x86_64-with-Ubuntu-20.04-focal
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
## Information
Model I am using (Bert, XLNet ...): I am using the encoder-decoder model with a Roberta encoder and RobertaForCausalLM decoder.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
In my real code I am using custom pre-trained models and tokenizers, but the error and behavior is the same as that produced by the demo script below.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I am trying to use a pipeline to generate results from an encoder-decoder model that was trained on a custom text2text dataset.
## To reproduce
Steps to reproduce the behavior:
You can just run the script below, or:
1. Load an encoder-decoder model with RoBERTa encoder and decoder
2. Create a text2text-generation pipeline with an appropriate tokenizer
3. Use the pipeline to generate a result
```python
import transformers
encoder = transformers.RobertaModel.from_pretrained(pretrained_model_name_or_path='roberta-base')
decoder = transformers.RobertaForCausalLM.from_pretrained(pretrained_model_name_or_path='roberta-base')
encoder_decoder_model = transformers.EncoderDecoderModel(encoder=encoder, decoder=decoder)
tokenizer = transformers.AutoTokenizer.from_pretrained('google/roberta2roberta_L-24_bbc')
text2text = transformers.pipeline('text2text-generation', model=encoder_decoder_model, tokenizer=tokenizer)
output = text2text('This is a test sentence.')
print(output)
```
Output:
```
If you want to use `RobertaLMHeadModel` as a standalone, add `is_decoder=True.`
normalizer.cc(51) LOG(INFO) precompiled_charsmap is empty. use identity normalization.
Traceback (most recent call last):
File "demo.py", line 12, in <module>
output = text2text('This is a test sentence.')
File "/home/james/Code/demo/.env/lib/python3.7/site-packages/transformers/pipelines/text2text_generation.py", line 125, in __call__
**generate_kwargs,
File "/home/james/Code/demo/.env/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
return func(*args, **kwargs)
File "/home/james/Code/demo/.env/lib/python3.7/site-packages/transformers/generation_utils.py", line 913, in generate
**model_kwargs,
File "/home/james/Code/demo/.env/lib/python3.7/site-packages/transformers/generation_utils.py", line 1177, in greedy_search
output_hidden_states=output_hidden_states,
File "/home/james/Code/demo/.env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/james/Code/demo/.env/lib/python3.7/site-packages/transformers/models/encoder_decoder/modeling_encoder_decoder.py", line 430, in forward
**kwargs_decoder,
File "/home/james/Code/demo/.env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/james/Code/demo/.env/lib/python3.7/site-packages/transformers/models/roberta/modeling_roberta.py", line 937, in forward
return_dict=return_dict,
File "/home/james/Code/demo/.env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/james/Code/demo/.env/lib/python3.7/site-packages/transformers/models/roberta/modeling_roberta.py", line 771, in forward
past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0
IndexError: tuple index out of range
```
## Expected behavior
I expect the pipeline to generate an output string.
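For reference, a minimal sketch (following the fix suggested in the comments above, not re-verified here) of a decoder initialization that avoids the error, i.e. loading the decoder explicitly as a decoder with cross-attention instead of with the default encoder-style settings:

```python
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained('roberta-base')
encoder = transformers.RobertaModel.from_pretrained('roberta-base')
# Initialize the decoder explicitly as a decoder with cross-attention enabled,
# taking the special token ids from the tokenizer rather than hard-coding them.
decoder = transformers.RobertaForCausalLM.from_pretrained(
    'roberta-base',
    is_decoder=True,
    add_cross_attention=True,
    bos_token_id=tokenizer.bos_token_id,
    eos_token_id=tokenizer.eos_token_id,
)
encoder_decoder_model = transformers.EncoderDecoderModel(encoder=encoder, decoder=decoder)
```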
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10160/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10159 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10159/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10159/comments | https://api.github.com/repos/huggingface/transformers/issues/10159/events | https://github.com/huggingface/transformers/pull/10159 | 807,528,062 | MDExOlB1bGxSZXF1ZXN0NTcyNzYyNjY4 | 10,159 | [hf_api] delete deprecated methods and tests | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,613 | 1,613 | 1,613 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10159/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10159/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10159",
"html_url": "https://github.com/huggingface/transformers/pull/10159",
"diff_url": "https://github.com/huggingface/transformers/pull/10159.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10159.patch",
"merged_at": 1613162106000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/10158 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10158/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10158/comments | https://api.github.com/repos/huggingface/transformers/issues/10158/events | https://github.com/huggingface/transformers/issues/10158 | 807,489,698 | MDU6SXNzdWU4MDc0ODk2OTg= | 10,158 | Multiple Mask support in Pipeline | {
"login": "naveenjafer",
"id": 7025448,
"node_id": "MDQ6VXNlcjcwMjU0NDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7025448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/naveenjafer",
"html_url": "https://github.com/naveenjafer",
"followers_url": "https://api.github.com/users/naveenjafer/followers",
"following_url": "https://api.github.com/users/naveenjafer/following{/other_user}",
"gists_url": "https://api.github.com/users/naveenjafer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/naveenjafer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/naveenjafer/subscriptions",
"organizations_url": "https://api.github.com/users/naveenjafer/orgs",
"repos_url": "https://api.github.com/users/naveenjafer/repos",
"events_url": "https://api.github.com/users/naveenjafer/events{/privacy}",
"received_events_url": "https://api.github.com/users/naveenjafer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"@LysandreJik \r\nThe current implementation for a single mask returns the data as a list of \r\n\r\n```\r\n{ \r\n \"sequence\" : \"the final sequence with the mask added\", \r\n \"score\" : \"the softmax score\", \r\n \"token\" : \"the token ID used in filling the MASK\", \r\n \"token_str\" : \"the token string used in filling the MASK\" \r\n} \r\n```\r\n\r\nWhen returning the results for sentences with multiple masks, it is not possible to maintain the same return format of the JSON. I propose to have a different pipeline call for this 'fill-mask-multiple' or something along those lines. The return format I have proceeded with is\r\n\r\n```\r\n{ \r\n \"sequence\" : \"the final sequence with all the masks filled by the model, \r\n \"scores\" : [\"the softmax score of mask 1\", \"the softmax score of mask 2\", ...]\r\n \"tokens\" : [\"the token ID used in filling mask 1\", \"the token ID used in filling mask 2\", ...]\r\n \"token_strs\" : [\"the token string used in filling mask 1\", \"the token string used in filling mask 2\", ...]\r\n} \r\n```\r\nSome minor changes will be made to the input param \"targets\" to support optional targets for each of the mask. \r\n\r\nIf having 2 separate pipelines does not seem a great idea, we could just club them both right now into one single pipeline call irrespective of whether it is a single mask or multiple mask. The return json type would change, I am not sure about the impact/how feasible it would be to bring that across in minor version updates. \r\n\r\nWould really benefit from some expert advice since I am sort of new here.\r\n\r\nPS: I have currently implemented the functionality for the pytorch framework, getting the same done in tf too.",
"This change seems okay to me. Since you have already some functionality for PyTorch, do you mind opening a PR (even a draft PR), so that we may play around with it and talk about the potential improvements? Thanks! Pinging @Narsil too"
] | 1,613 | 1,613 | null | NONE | null | # 🚀 Feature request
The [fill mask](https://huggingface.co/bert-base-uncased?text=Paris+is+the+capital+of+%5BMASK%5D+%3F) feature as a part of the pipeline currently only supports a single mask for the inputs. It could be expanded to predict and return the results for multiple masks in the same sentence too.
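For context, a minimal sketch of today's behaviour (the model name is chosen only for illustration): each input may contain exactly one mask token, and the pipeline returns one list of candidates for it.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Supported today: exactly one [MASK] per input.
print(fill_mask("Paris is the capital of [MASK].")[0])
# -> {'sequence': ..., 'score': ..., 'token': ..., 'token_str': ...}

# The request: also accept inputs such as
#   "The [MASK] is the capital of [MASK]."
# and return predictions for every masked position.
```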
## Motivation
There are use cases where one would ideally have more than just a single mask where they would need a prediction from the model. For example, smarter template filling in outputs returned to users etc. Could also be used in better study of the implicit knowledge that BERT models have accumulated during pre-training.
## Your contribution
I should be able to raise a PR for the same. The output JSON schema would have to be slightly modified, but I can go ahead and complete the same if there is no other obvious issue that slipped my mind as to why only a single [MASK] token needs to be supported. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10158/reactions",
"total_count": 9,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10158/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10157 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10157/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10157/comments | https://api.github.com/repos/huggingface/transformers/issues/10157/events | https://github.com/huggingface/transformers/pull/10157 | 807,429,947 | MDExOlB1bGxSZXF1ZXN0NTcyNjgyMTIx | 10,157 | Fix typo in comments | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,613 | 1,613 | 1,613 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10157/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10157",
"html_url": "https://github.com/huggingface/transformers/pull/10157",
"diff_url": "https://github.com/huggingface/transformers/pull/10157.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10157.patch",
"merged_at": 1613222761000
} |
https://api.github.com/repos/huggingface/transformers/issues/10156 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10156/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10156/comments | https://api.github.com/repos/huggingface/transformers/issues/10156/events | https://github.com/huggingface/transformers/pull/10156 | 807,428,818 | MDExOlB1bGxSZXF1ZXN0NTcyNjgxMjA0 | 10,156 | Fix typo in comment | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,613 | 1,613 | 1,613 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10156/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10156",
"html_url": "https://github.com/huggingface/transformers/pull/10156",
"diff_url": "https://github.com/huggingface/transformers/pull/10156.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10156.patch",
"merged_at": 1613222786000
} |
https://api.github.com/repos/huggingface/transformers/issues/10155 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10155/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10155/comments | https://api.github.com/repos/huggingface/transformers/issues/10155/events | https://github.com/huggingface/transformers/issues/10155 | 807,427,367 | MDU6SXNzdWU4MDc0MjczNjc= | 10,155 | rfc: integration tests need non-example application for testing | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
},
{
"id": 2604155188,
"node_id": "MDU6TGFiZWwyNjA0MTU1MTg4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Benchmarks",
"name": "Benchmarks",
"color": "2DF372",
"default": false,
"description": "Issues related to Memory regressions in tests and scripts"
}
] | closed | false | null | [] | [
"I'm all for having core integration tests to do regression testing. As you have said, these tests should not be under `examples/` as that is a dedicated `examples/` folder, but should be under `tests/`.\r\n\r\nI'm not 100% sure whether we would want that in existing testing files (for example a BART regression test in `test_modeling_bart.py`), or if we would want to create new files.\r\n\r\nWe could also create a new file `test_modeling_common_integration.py` that would serve a similar purpose to `test_modeling_common.py`, but for integration tests, if these can be shared among models simply.\r\n\r\n---\r\n\r\nRegarding how to approach this, these would be quite heavy tests to run, and take a long time. Do we want to run them daily, like other slow tests, or should we create a weekly suite? We'll need to create a weekly suite eventually, as some TensorFlow tests take 3+ hours *per* model to test `saved_models` (cc @jplu)\r\n\r\n---\r\n\r\nI believe you've proposed a similar approach to the registry system you've built for fastai, I think this would be a good approach to tackle the issue. Happy to help set this up/work with you on that front to keep the current performance regression tests you have created.",
"Thank you for your feedback, @LysandreJik \r\n\r\nI think one thing that was missing is that this need is not only for performance regression testing, but also for normal testing of deepspeed/apex/fairscale which are part of the core. So I think I wasn't clear at communicating that for that I need a real program, such as the ones we have under examples. So the tests run this program. As compared to a normal test that has all the logic contained within. This is for functionality testing. So we have 2 unrelated things:\r\n\r\n1. performance+quality regression testing - is our core getting slower? is it delivering worse quality?\r\n2. 3rd party component integration functionality testing - can we run HF Trainer w/ DeepSpeed on a gpu with only 3 legs?\r\n\r\nThey are common only as such that they should be placed somewhere under the core tests.\r\n\r\nWrt number 1 - yes, we started discussing at how to implement that practically (there was an idea of reusing the registry), but again I'm returning to the main need which is not that, but perhaps adapting one of the example scripts to be a testing tool and not an example:\r\n- not facing users - clean refactored code\r\n- probably needs to have several different functionalities - so that different aspects can be tested - probably it needs to cover all the main NLP tasks (not exhaustively, but say one translation, one summarization, etc.) So that the different main logic paths can be tested.\r\n\r\nOnce we have the tool then we can see how to start recording and validating results. Of course, it can be an organically need-based grown tool, and my first question is where such tool would live.\r\n\r\nIt'll also be used for posting public benchmarks - so users should be able to use it too to reproduce reported results, but not try to read its code as they would with an example, just as an opaque tool.\r\n\r\nI won't worry at the moment at how often we run those things, the schedule will evolve once we have something in place and then we can see what the requirements are.\r\n\r\nWe don't need to do one hour training to detect a quality or performance regression, while we can - we should instead design optimized scenarios where bad things are detected within a much quicker time span. \r\n\r\nAt the moment these are just feeler notes, I'd be happy to start compiling a detailed proposal once others get a chance to voice their inspirations and of course we need to see if there is a group's desire to go in that direction.",
"we agreed to copy what's needed for the benchmarking/testing which may happen down the road."
] | 1,613 | 1,616 | 1,616 | CONTRIBUTOR | null | # 🚀 Feature request
We have an ongoing conflict with some of the core integration tests needing a serious program to be tested with. The only place these can be found is under `examples/` - and so the tests - e.g. deepspeed/apex/fairscale reside under `examples/` because of that.
The problem is that because they are under `examples/` they are being treated as such, but they are not examples.
I propose we turn at least one complex, representative example into a serious program that is supported like any other core function. Such a program (or programs) can then be used for integration testing and for what we keep discussing but never getting to: performance regression testing. You can't do performance regression testing on mock-ups.
The few attempts to measure BLEU scores are being killed as well with the current `seq2seq` wipeout. How can one do regression testing if there is nothing to measure? The regressions can be subtle and not detected by general common tests. It's way easier to know that this input should give this BLEU score on this model, and if it doesn't then something is wrong.
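For illustration only, the kind of check this implies is little more than a pinned expected score with a tolerance; `run_eval` below is a hypothetical helper, and the model, dataset and numbers are placeholders:

```python
# Sketch only: run_eval is a hypothetical helper; the expected score and
# tolerance are placeholders, not agreed-upon reference values.
def test_t5_small_wmt_en_ro_bleu_regression():
    bleu = run_eval(model="t5-small", task="translation_en_to_ro", max_val_samples=500)
    assert bleu >= 24.0 - 0.5, f"BLEU regressed: got {bleu}, expected ~24"
```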
@patrickvonplaten, @sgugger, @LysandreJik, @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10155/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10154 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10154/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10154/comments | https://api.github.com/repos/huggingface/transformers/issues/10154/events | https://github.com/huggingface/transformers/pull/10154 | 807,327,286 | MDExOlB1bGxSZXF1ZXN0NTcyNTk1NzA2 | 10,154 | Add mBART-50 | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> As a follow-up, is mBART aligned with mBART-50? We should have the same setter there. It would make a nice first issue I believe, once this PR is merged and provides a good model.\r\n\r\nYes. Here the setter was necessary because of the many to many models. But yes mBART can also be used for multilingual fine-tuning so the tokenizers should also be aligned.",
"Hi, I want to use MBart-Large-50 for finetuning, but I get the error:\r\n`File \"/home/michael/anaconda3/envs/paraphrases/lib/python3.9/site-packages/transformers/models/mbart/tokenization_mbart.py\", line 199, in set_src_lang_special_tokens\r\n self.cur_lang_code = self.lang_code_to_id[src_lang]\r\nKeyError: None\r\n`\r\nIt was working with the previous version of the model. Is it correct, that this PR adresses this issue that MBartTokenizer is used and not MBart50Tokenizer?",
"Hey @MichaelJanz \r\n\r\nFor mBART-50 you should use the `MBart50Tokenizer`. Also when fine-tuning make sure that you either pass or set the `src_lang` and `tgt_lang` attributes",
"Hi @patil-suraj and thanks for answering!\r\nI am using the script under examples/seq2seq run_seq2seq.py, which has no reference to the `MBart50Tokenizer`, but I think it should have. `src_lang` and `tgt_lang` are set. I suspect that the base class of `MBart50Tokenizer` is used, which is simply `MBartTokenizer`. Will the script work when this commit is merged, or are there further changes neccesary, or am I executing the script wrong?",
"`MBart50Tokenizer` does not inherit from `MBartTokenizer` and for now, the script does not support mBART-50, but you could easily modify the script for mBART-50. I think the only necessary change is to use the correct tokenizer class. This PR will be merged today, please open an issue if you have more questions after the PR is merged. Happy to answer :)",
"Thanks for your help!\r\nIf I can get it to work, I will create a PR. Thanks for your great work :)",
"Thanks a lot, everyone! Merging!"
] | 1,613 | 1,613 | 1,613 | MEMBER | null | # What does this PR do?
This is the second part of splitting #9811
This PR adds the mBART-50 models.
- Add `MBart50Tokenizer` and `MBart50TokenizerFast`. A new tokenizer is needed because it adds extra languages and the encoding format is different than `MBartTokenizer`. The difference is that for `mbart-50` both source and target language text begin with the `<language token>`, whereas for `mbart-cc25` `<language_token>` is used as suffix token.
- The new tokenizers use `src_lang` as a `getter` and `setter` property. This is needed because for many-to-many translation models, whenever we change the `src_lang` we need to set the special tokens for that language. The `src_lang` setter calls the `set_src_lang_special_tokens` method whenever we set a new `src_lang` to handle this (see the usage sketch after this list).
- A new model class is not necessary as mBART-50 is similar to our existing mBART-25 model, the only difference being `relu` activation instead of `gelu` and emb size of 250054 instead of 250027
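A minimal usage sketch of the new tokenizer (the checkpoint name below is an assumption; any of the mBART-50 checkpoints on the hub should behave the same way):

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

tokenizer = MBart50TokenizerFast.from_pretrained(
    "facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO"
)
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")

# Source text is prefixed with its language token, per the mBART-50 format.
batch = tokenizer("UN Chief Says There Is No Plan to Stop War", return_tensors="pt")

# Switching the source language later goes through the src_lang setter,
# which re-applies the language-specific special tokens described above.
tokenizer.src_lang = "fr_XX"
```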
All model checkpoints are uploaded on hub https://huggingface.co/models?filter=mbart-50 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10154/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10154",
"html_url": "https://github.com/huggingface/transformers/pull/10154",
"diff_url": "https://github.com/huggingface/transformers/pull/10154.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10154.patch",
"merged_at": 1613402935000
} |
https://api.github.com/repos/huggingface/transformers/issues/10153 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10153/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10153/comments | https://api.github.com/repos/huggingface/transformers/issues/10153/events | https://github.com/huggingface/transformers/pull/10153 | 807,323,341 | MDExOlB1bGxSZXF1ZXN0NTcyNTkyNDkw | 10,153 | I-BERT model support | {
"login": "kssteven418",
"id": 50283958,
"node_id": "MDQ6VXNlcjUwMjgzOTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/50283958?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kssteven418",
"html_url": "https://github.com/kssteven418",
"followers_url": "https://api.github.com/users/kssteven418/followers",
"following_url": "https://api.github.com/users/kssteven418/following{/other_user}",
"gists_url": "https://api.github.com/users/kssteven418/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kssteven418/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kssteven418/subscriptions",
"organizations_url": "https://api.github.com/users/kssteven418/orgs",
"repos_url": "https://api.github.com/users/kssteven418/repos",
"events_url": "https://api.github.com/users/kssteven418/events{/privacy}",
"received_events_url": "https://api.github.com/users/kssteven418/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Actually as @patrickvonplaten correctly mentioned, we really need some test files before we can merge this.",
"@kssteven418, \r\n\r\nThanks a mille for your PR - that's an amazing contribution!\r\n\r\nI think before merging we still do need to do a couple of things:\r\n\r\n1) **Tests** - it seems that currently no tests were added to the PR. It would be nice to add tests here. Besides the standard model tests, that are usually directly generated by the cookie-cutter, we should definitely also add some tests for the new quantization functionality\r\n\r\n2) **Remove the Encoder-Decoder logic** I don't think that this model is ready to be used in an Encoder-Decoder setting yet -> so it would be better to remove all things related to Encoder-Decoder I think. This corresponds to *fully* removing the logic of `encoder_hidden_states`, `encoder_attention_mask`, `past_key_values`, `cross_attention`, ...\r\n\r\n3) **CPU - compatible** - To me it seems that the model is only compatible on GPU at the moment - there are some `cuda()` call hardcoded in the utils functions. I think it would be nice to remove those",
"It seems that some failures appear in the automatic tests. Could you help me out resolving them?",
"@LysandreJik I let you merge when you think it's ready"
] | 1,613 | 1,614 | 1,614 | CONTRIBUTOR | null | # What does this PR do?
This PR implements [I-BERT](https://arxiv.org/abs/2101.01321), an integer-only quantization scheme for Transformer architectures. I-BERT is based on the model architecture and the pre-trained parameters of RoBERTa (this can be extended to other architectures as a future task), except that it calls custom integer-only operations instead of the normal ones. (The custom kernels are implemented in `ibert/quant_modules.py`.) Therefore, under the current implementation, I-BERT inherits its tokenizer and configuration from the RoBERTa’s, and pulls the model parameter from the `roberta-base/large` repo.
The model can be finetuned on a specific task in two passes:
1) Finetune the model on a given task with the normal mode (`config.quant_mode = False`) before quantizing it. The model will then take the normal non-quantized pass.
2) Once the model achieves the best accuracy, do another finetuning with the quantization mode (`config.quant_mode = True`). The model will then take the integer-only quantized pass to recover the accuracy degradation through quantization-aware training.
You can skip the first pass and do task-specific finetuning and quantization-aware training at the same time, but it normally results in lower accuracy.
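A rough sketch of the intended two-pass usage (a minimal illustration only — the `IBertForSequenceClassification` class and the `kssteven/ibert-roberta-base` checkpoint name are assumptions based on the merged version, and the training loops are elided):
```python
from transformers import AutoConfig, IBertForSequenceClassification

ckpt = "kssteven/ibert-roberta-base"  # assumed checkpoint name

# Pass 1: full-precision finetuning on the downstream task (quantization disabled).
config = AutoConfig.from_pretrained(ckpt, num_labels=2, quant_mode=False)
model = IBertForSequenceClassification.from_pretrained(ckpt, config=config)
# ... run standard task finetuning here and save the best checkpoint to "./fp32-best" ...

# Pass 2: quantization-aware finetuning, starting from the best full-precision checkpoint.
config = AutoConfig.from_pretrained("./fp32-best", quant_mode=True)  # integer-only kernels
model = IBertForSequenceClassification.from_pretrained("./fp32-best", config=config)
# ... finetune again on the same task to recover the accuracy lost to quantization ...
```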
Here are some missing features and TODOs:
- [x] Static quantization: activation ranges (min/max) must be fixed at evaluation time.
- [x] `ibert-roberta-large` support
- [ ] Test on different types of tasks
- [ ] More intuitive APIs?
## Results on the GLUE tasks
* RTE, MRPC, SST2, and QNLI with `ibert-roberta-base`
* Without extensive hyperparameter tuning (the results, both the baseline and I-BERT, could be improved)
Task | RTE | MRPC | SST2 | QNLI
--- | --- | --- | --- |---
Baseline(FP32) | 74.37 | 90.75 | 92.15 | 92.89
I-BERT(INT8) | 79.78 | 91.18 | 93.81 | 91.83
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
<!--
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10153/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10153/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10153",
"html_url": "https://github.com/huggingface/transformers/pull/10153",
"diff_url": "https://github.com/huggingface/transformers/pull/10153.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10153.patch",
"merged_at": 1614265602000
} |
https://api.github.com/repos/huggingface/transformers/issues/10152 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10152/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10152/comments | https://api.github.com/repos/huggingface/transformers/issues/10152/events | https://github.com/huggingface/transformers/pull/10152 | 807,273,980 | MDExOlB1bGxSZXF1ZXN0NTcyNTUxMDg4 | 10,152 | Reduce the time spent for the TF slow tests | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger Yes, this is exactly that. There was an important overlap across these three tests (all based on creating a saved model and two on testing the output), so merging them was IMO the best way to keep the coverage and reduce the time.\r\n\r\n@patrickvonplaten feel free to merge if the PR looks ok for you!"
] | 1,613 | 1,613 | 1,613 | CONTRIBUTOR | null | # What does this PR do?
This PR reduces by half the time spent on running all the tests (including the slow tests). Here is the time comparison (times recorded on my machine with the models already downloaded):
- albert: from 13mins to 6mins
- bart: from 19mins to 9mins
- bert: from 17mins to 9mins
- blenderbot_small: from 19mins to 9mins
- blenderbot: from 19mins to 9mins
- convbert: from 21mins to 11mins
- ctrl: from 10mins to 7mins
- distilbert: from 13mins to 7mins
- dpr: from 6mins to 3mins
- electra: from 15mins to 7mins
- flaubert: from 13mins to 7mins
- funnel: from 28mins to 13mins
- gpt2: from 8mins to 4mins
- led: from 44mins to 20mins
- longformer: from 1h30mins to 40mins
- lxmert: from 6mins to 3mins
- marian: from 19mins to 9mins
- mbart: from 19mins to 9mins
- mobilebert: from 33mins to 16mins
- mpnet: from 13mins to 7mins
- openai gpt: from 8mins to 4mins
- pegasus: from 19mins to 9mins
- roberta: from 10mins to 6mins
- t5: from 12mins to 7mins
- transfo_xl: from 8mins to 5mins
- xlm: from 13mins to 7mins
- xlnet: from 9mins to 5mins
Total: from 8h5mins to 4h8mins
The total time spent on running the entire test suite has been reduced by half by merging the three SavedModel-related tests into a single one. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10152/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10152/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10152",
"html_url": "https://github.com/huggingface/transformers/pull/10152",
"diff_url": "https://github.com/huggingface/transformers/pull/10152.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10152.patch",
"merged_at": 1613659977000
} |
https://api.github.com/repos/huggingface/transformers/issues/10151 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10151/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10151/comments | https://api.github.com/repos/huggingface/transformers/issues/10151/events | https://github.com/huggingface/transformers/issues/10151 | 807,193,248 | MDU6SXNzdWU4MDcxOTMyNDg= | 10,151 | Model Parallelism for Bert Models | {
"login": "saichandrapandraju",
"id": 41769919,
"node_id": "MDQ6VXNlcjQxNzY5OTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/41769919?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saichandrapandraju",
"html_url": "https://github.com/saichandrapandraju",
"followers_url": "https://api.github.com/users/saichandrapandraju/followers",
"following_url": "https://api.github.com/users/saichandrapandraju/following{/other_user}",
"gists_url": "https://api.github.com/users/saichandrapandraju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saichandrapandraju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saichandrapandraju/subscriptions",
"organizations_url": "https://api.github.com/users/saichandrapandraju/orgs",
"repos_url": "https://api.github.com/users/saichandrapandraju/repos",
"events_url": "https://api.github.com/users/saichandrapandraju/events{/privacy}",
"received_events_url": "https://api.github.com/users/saichandrapandraju/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2627272588,
"node_id": "MDU6TGFiZWwyNjI3MjcyNTg4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Model%20Parallel",
"name": "Model Parallel",
"color": "8B66A5",
"default": false,
"description": "Model Parallelilsm Implementations"
}
] | closed | false | null | [] | [
"We already have naive vertical MP implemented in t5 and gpt, and there is a much easier version of Bart MP - but it's not merged (https://github.com/huggingface/transformers/pull/9384).\r\n\r\nThe problem with naive MP is that it's very inefficient. That's why at the moment the rest of transformers isn't being ported.\r\n\r\nUntil then try HF Trainer DeepSpeed integration: https://huggingface.co/blog/zero-deepspeed-fairscale\r\n\r\nPipeline is the next in line, but it's very complicated.\r\n\r\nNaive vertical MP is Pipeline with chunks=1.\r\n\r\nSee my work in progress notes on Parallelism: https://github.com/huggingface/transformers/issues/9766\r\n",
"Thanks @stas00 for sharing your work. I'll implement DeepSpeed with HF..",
"Hi @stas00 ,\r\n\r\nAs mentioned above, I installed deepspeed and used HF Trainer to train instead of native pytorch. Without DeepSpeed, I'm able to complete the training but with DeepSpeed, execution is stuck at -\r\n**[2021-02-17 15:05:24,441] [INFO] [distributed.py:40:init_distributed] Initializing torch distributed with backend: nccl** . \r\n\r\ncomplete log is - \r\n\r\n```\r\n[2021-02-17 15:05:06,621] [WARNING] [runner.py:117:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.\r\n[2021-02-17 15:05:06,736] [INFO] [runner.py:355:main] cmd = /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMF19 --master_addr=127.0.0.1 --master_port=29500 ./Deepspeed.py --output_dir test1 --overwrite_output_dir --do_train --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --learning_rate 3e-5 --weight_decay 0.01 --num_train_epochs 1 --load_best_model_at_end --deepspeed ds_config.json\r\n[2021-02-17 15:05:08,344] [INFO] [launch.py:78:main] WORLD INFO DICT: {'localhost': [0]}\r\n[2021-02-17 15:05:08,344] [INFO] [launch.py:87:main] nnodes=1, num_local_procs=1, node_rank=0\r\n[2021-02-17 15:05:08,345] [INFO] [launch.py:99:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0]})\r\n[2021-02-17 15:05:08,345] [INFO] [launch.py:100:main] dist_world_size=1\r\n[2021-02-17 15:05:08,345] [INFO] [launch.py:103:main] Setting CUDA_VISIBLE_DEVICES=0\r\n2021-02-17 15:05:10.792753: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\nSome weights of the model checkpoint at /home/jovyan/models/roberta-large/ were not used when initializing RobertaForSequenceClassification: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']\r\n- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of RobertaForSequenceClassification were not initialized from the model checkpoint at /home/jovyan/models/roberta-large/ and are newly initialized: ['classifier.dense.weight', 'classifier.dense.bias', 'classifier.out_proj.weight', 'classifier.out_proj.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nloaded df\r\nEncoding done\r\nparser created\r\n[2021-02-17 15:05:24,441] [INFO] [distributed.py:40:init_distributed] Initializing torch distributed with backend: nccl\r\n```\r\n\r\nI'm passing below in cmd - \r\n```\r\n!deepspeed ./Deepspeed.py --output_dir test1 --overwrite_output_dir --do_train \\\r\n--per_device_train_batch_size 8 --per_device_eval_batch_size 8 --learning_rate 3e-5 --weight_decay 0.01 --num_train_epochs 1 \\\r\n--load_best_model_at_end --deepspeed ds_config.json\r\n```\r\n\r\nHere's my simple script - \r\n\r\n```\r\nfrom transformers import RobertaForSequenceClassification, RobertaTokenizerFast, Trainer, TrainingArguments, HfArgumentParser\r\nimport pandas as pd\r\nimport numpy as np\r\nimport torch\r\nimport os\r\nos.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\"\r\n\r\n\r\ntok = RobertaTokenizerFast.from_pretrained('/home/jovyan/models/roberta-large/')\r\nmodel = RobertaForSequenceClassification.from_pretrained('/home/jovyan/models/roberta-large/', num_labels=2)\r\n\r\ndf_full = pd.read_csv('IMDB_Dataset.csv')\r\nprint(\"loaded df\")\r\ndf_full = df_full.sample(frac=1).reset_index(drop=True)\r\ndf_req = df_full.head(1000)\r\ndf_train = df_req.head(800)\r\ndf_eval = df_req.tail(200)\r\n\r\ntrain_text, train_labels_raw, val_text, val_labels_raw = df_train.review.values.tolist(), df_train.sentiment.values.tolist(), df_eval.review.values.tolist(), df_eval.sentiment.values.tolist()\r\n\r\n\r\ntrain_encodings = tok(train_text, padding=True, truncation=True, max_length=512)\r\nval_encodings = tok(val_text, padding=True, truncation=True, max_length=512)\r\ntrain_labels = [1 if i=='positive' else 0 for i in train_labels_raw]\r\nval_labels = [1 if i=='positive' else 0 for i in val_labels_raw]\r\n\r\nclass IMDbDataset(torch.utils.data.Dataset):\r\n def __init__(self, encodings, labels):\r\n self.encodings = encodings\r\n self.labels = labels\r\n\r\n def __getitem__(self, idx):\r\n item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}\r\n item['labels'] = torch.tensor(self.labels[idx])\r\n return item\r\n\r\n def __len__(self):\r\n return len(self.labels)\r\n\r\ntrain_dataset = IMDbDataset(train_encodings, train_labels)\r\nval_dataset = IMDbDataset(val_encodings, val_labels)\r\nprint(\"Encoding done\")\r\n\r\nparser = HfArgumentParser(TrainingArguments)\r\nprint('parser created')\r\ntrain_args = parser.parse_args_into_dataclasses()\r\n\r\n\r\nprint('got training')\r\nprint(train_args[0])\r\n \r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=train_args[0],\r\n train_dataset=train_dataset,\r\n eval_dataset=val_dataset\r\n )\r\n\r\nprint('------------TRAINING-------------')\r\ntrainer.train()\r\n```\r\n\r\nPlz let me know if I missed anything..",
"This looks like a pytorch distributed issue, can you launch your script as following?\r\n```\r\npython -m torch.distributed.launch --nproc_per_node=1 ./Deepspeed.py --output_dir test1 --overwrite_output_dir --do_train \\\r\n--per_device_train_batch_size 8 --per_device_eval_batch_size 8 --learning_rate 3e-5 --weight_decay 0.01 --num_train_epochs 1 \\\r\n--load_best_model_at_end\r\n```\r\n\r\nDeespeed requires a distributed env even with one gpu. so in this experiment we remove DeepSpeed completely but launch a similar distributed environment for a single process.\r\n\r\nWhat's the output of: `python -m torch.utils.collect_env` on that system? Are you running on a recent pytorch version? I'm noticing that I have a different `distributed.py`, since the logger reports a different line number on my side:\r\n```\r\n[2021-02-17 09:36:01,176] [INFO] [distributed.py:46:init_distributed] Initializing torch distributed with backend: nccl\r\n```\r\n\r\nAlso, I'm noticing your trying to run it from a notebook. This could be related as well. Any reason why you're not using a normal console? Are you on colab or some restricted environment?\r\n\r\nThough I checked I can launch deepspeed just fine from the notebook. via `!deepspeed` or `%%bash cell`.\r\n\r\nAlternatively you can launch your script via the native notebook, i.e. no script, using this:\r\nhttps://huggingface.co/transformers/master/main_classes/trainer.html#deployment-in-notebooks\r\n\r\nBut let's see if we can resolve the distributed hanging, by first ensuring your are on a recent pytorch. I see bug reports for this in older pytorch versions (from 2018-2019)",
"Hi @stas00 ,\r\nThanks for reverting. Here are the results for above experiment - \r\n\r\n1. \r\n```\r\n!python -m torch.distributed.launch --nproc_per_node=1 ./Deepspeed.py --output_dir test1 --overwrite_output_dir --do_train \\\r\n--per_device_train_batch_size 8 --per_device_eval_batch_size 8 --learning_rate 3e-5 --weight_decay 0.01 --num_train_epochs 1 \\\r\n--load_best_model_at_end\r\n```\r\nwith the above command, execution got hanged and below is the output - \r\n```\r\n2021-02-18 01:29:23.513697: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\nSome weights of the model checkpoint at /home/jovyan/models/roberta-large/ were not used when initializing RobertaForSequenceClassification: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']\r\n- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of RobertaForSequenceClassification were not initialized from the model checkpoint at /home/jovyan/models/roberta-large/ and are newly initialized: ['classifier.dense.weight', 'classifier.dense.bias', 'classifier.out_proj.weight', 'classifier.out_proj.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nloaded df\r\nEncoding done\r\nparser created\r\n```\r\n2. \r\nI'm using transformers-4.3.0 and below is the detailed output for `!python -m torch.utils.collect_env` - \r\n```\r\nCollecting environment information...\r\nPyTorch version: 1.7.1\r\nIs debug build: False\r\nCUDA used to build PyTorch: 10.2\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 18.04.5 LTS (x86_64)\r\nGCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0\r\nClang version: Could not collect\r\nCMake version: Could not collect\r\n\r\nPython version: 3.6 (64-bit runtime)\r\nIs CUDA available: True\r\nCUDA runtime version: 10.1.243\r\nGPU models and configuration: GPU 0: Tesla V100-SXM2-32GB\r\nNvidia driver version: 450.51.06\r\ncuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.4\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\n\r\nVersions of relevant libraries:\r\n[pip3] kubeflow-pytorchjob==0.1.3\r\n[pip3] numpy==1.18.5\r\n[pip3] torch==1.7.1\r\n[pip3] torchvision==0.8.2\r\n[conda] Could not collect\r\n```\r\n3.\r\nI am using kubeflow notebook servers provided by my company. So that's why I'm running commands in notebook itself..\r\n\r\n4.\r\n\r\nI tried by setting env variables as mentioned in https://huggingface.co/transformers/master/main_classes/trainer.html#deployment-in-notebooks and execution got hanged in below cell - \r\n\r\n",
"Thank you for your detailed answers, @saichandrapandraju \r\n\r\nIt feels like your environment can't run pytorch distributed. Here is a very simple test to check that the launcher + dist init works:\r\n```\r\n%%bash\r\necho 'import os, torch; print(os.environ[\"LOCAL_RANK\"]); torch.distributed.init_process_group(\"nccl\")' > test.py\r\npython -m torch.distributed.launch --nproc_per_node=1 test.py\r\n```\r\nyou can copy-n-paste it as is into a new cell including bash magic and then run it.\r\n\r\nIt should print `0` and not fail.\r\n\r\nAnd if it fails, perhaps trying a different backend instead of `nccl`? what if you try `gloo`? But I don't think it'd do any good if it does work with `gloo`, as it doesn't support the same ops as `nccl` https://pytorch.org/docs/stable/distributed.html#backends\r\n\r\nIf this test fails let me know and I will ask if Deepspeed can support any other way. Normally distributed isn't needed for 1 gpu, but since the cpu acts as a sort of another gpu, they use the distributed environment to communicate between the two units.\r\n",
"This looks like a potential thread to explore for the hanging \" Initializing torch distributed with backend: nccl \":\r\n\r\nhttps://discuss.pytorch.org/t/unexpected-hang-up-when-using-distributeddataparallel-on-two-machines/92262\r\n\r\nSee if you have any luck identifying the problem with the suggestions in that thread.",
"Hi @stas00 ,\r\n\r\nwith below command it got hanged again\r\n```\r\n%%bash\r\necho 'import os, torch; print(os.environ[\"LOCAL_RANK\"]); torch.distributed.init_process_group(\"nccl\")' > test.py\r\npython -m torch.distributed.launch --nproc_per_node=1 test.py\r\n```\r\n\r\nBut returned `0` with `gloo`\r\n\r\nsame after trying https://discuss.pytorch.org/t/unexpected-hang-up-when-using-distributeddataparallel-on-two-machines/92262\r\n\r\nBelow versions are different. Is it fine?\r\n```\r\nCUDA runtime version: 10.1.243\r\nCUDA used to build PyTorch: 10.2\r\n```",
"So this is a pure pytorch issue, you may want to file an Issue with pytorch: https://github.com/pytorch/pytorch/issues\r\n\r\nIf you can't launch distributed then DeepSpeed won't work for you.\r\n\r\nAlso I'd try pytorch-nightly - I read in one thread they have been tweaking this functionality since the last release. https://pytorch.org/get-started/locally/ - you should be able to install that locally.\r\n\r\n\r\n> Below versions are different. Is it fine?\r\n> ```\r\n> CUDA runtime version: 10.1.243\r\n> CUDA used to build PyTorch: 10.2\r\n> ```\r\n\r\nShouldn't be a problem. Pytorch comes with its own toolkit. \r\n\r\nThis system-wide entry is useful for when building pytorch CPP extensions (which incidentally Deepspeed is). There ideally you want to have the same version for both, but sometimes minor version difference is not a problem.\r\n\r\n",
"Thanks @stas00 ,\r\n\r\nRaised an issue https://github.com/pytorch/pytorch/issues/52433 and https://discuss.pytorch.org/t/hanging-torch-distributed-init-process-group/112223\r\n\r\nEven I'm thinking of nightly. Will give it a try...",
"If this is sorted out, I hope HFTrainer and deepspeed will work with single and multi gpu setting..",
"I'd help for you to augment your pytorch Issue with the information they request - at the very least the output of `python -m torch.utils.collect_env` and probably mention that you're running from a notebook and in a kubeflow container. Because as you presented it now, they won't know what to do with it, as such code works just fine on a normal setup.",
"Thanks @stas00 ,\r\n\r\nI installed `1.7.1+cu101` and below returned `0`\r\n```\r\n%%bash\r\necho 'import os, torch; print(os.environ[\"LOCAL_RANK\"]); torch.distributed.init_process_group(\"nccl\")' > test.py\r\npython -m torch.distributed.launch --nproc_per_node=1 test.py\r\n```\r\nBut it got hanged again with script and below are the logs - \r\n```\r\n2021-02-18 19:00:28.946359: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\nSome weights of the model checkpoint at /home/jovyan/models/roberta-large/ were not used when initializing RobertaForSequenceClassification: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']\r\n- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of RobertaForSequenceClassification were not initialized from the model checkpoint at /home/jovyan/models/roberta-large/ and are newly initialized: ['classifier.dense.weight', 'classifier.dense.bias', 'classifier.out_proj.weight', 'classifier.out_proj.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nloaded df\r\nEncoding done\r\nfastai-c2-0:13993:13993 [0] NCCL INFO NCCL_SOCKET_IFNAME set by environment to eth0\r\nfastai-c2-0:13993:13993 [0] NCCL INFO NCCL_SOCKET_IFNAME set to eth0\r\nfastai-c2-0:13993:13993 [0] NCCL INFO Bootstrap : Using [0]eth0:10.244.2.134<0>\r\nfastai-c2-0:13993:13993 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation\r\n\r\nfastai-c2-0:13993:13993 [0] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]\r\nfastai-c2-0:13993:13993 [0] NCCL INFO NCCL_SOCKET_IFNAME set by environment to eth0\r\nfastai-c2-0:13993:13993 [0] NCCL INFO NCCL_SOCKET_IFNAME set to eth0\r\nfastai-c2-0:13993:13993 [0] NCCL INFO NET/Socket : Using [0]eth0:10.244.2.134<0>\r\nfastai-c2-0:13993:13993 [0] NCCL INFO Using network Socket\r\nNCCL version 2.7.8+cuda10.1\r\n```\r\n\r\nAlso tried with nightly build(`1.9.0.dev20210218+cu101`) and got `0` for that bash command, but now it hanged at trainer.train() and below are the logs - \r\n\r\n```\r\n2021-02-18 19:28:13.170701: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\nSome weights of the model checkpoint at /home/jovyan/models/roberta-large/ were not used when initializing RobertaForSequenceClassification: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']\r\n- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of RobertaForSequenceClassification were not initialized from the model checkpoint at /home/jovyan/models/roberta-large/ and are newly initialized: ['classifier.dense.weight', 'classifier.dense.bias', 'classifier.out_proj.weight', 'classifier.out_proj.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nloaded df\r\nEncoding done\r\nparser and args created\r\n------------TRAINING-------------\r\nfastai-c2-0:14431:14431 [0] NCCL INFO NCCL_SOCKET_IFNAME set by environment to eth0\r\nfastai-c2-0:14431:14431 [0] NCCL INFO NCCL_SOCKET_IFNAME set to eth0\r\nfastai-c2-0:14431:14431 [0] NCCL INFO Bootstrap : Using [0]eth0:10.244.2.134<0>\r\nfastai-c2-0:14431:14431 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation\r\n\r\nfastai-c2-0:14431:14431 [0] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]\r\nfastai-c2-0:14431:14431 [0] NCCL INFO NCCL_SOCKET_IFNAME set by environment to eth0\r\nfastai-c2-0:14431:14431 [0] NCCL INFO NCCL_SOCKET_IFNAME set to eth0\r\nfastai-c2-0:14431:14431 [0] NCCL INFO NET/Socket : Using [0]eth0:10.244.2.134<0>\r\nfastai-c2-0:14431:14431 [0] NCCL INFO Using network Socket\r\nNCCL version 2.7.8+cuda10.1\r\n```\r\n\r\nused the same script for both - \r\n```\r\nfrom transformers import RobertaForSequenceClassification, RobertaTokenizerFast, Trainer, TrainingArguments, HfArgumentParser\r\nimport pandas as pd\r\nimport numpy as np\r\nimport torch\r\nimport os\r\n\r\nos.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\"\r\nos.environ['NCCL_DEBUG']='INFO'\r\nos.environ['NCCL_DEBUG_SUBSYS']='ALL'\r\nos.environ['NCCL_IB_DISABLE']='1'\r\nos.environ['NCCL_SOCKET_IFNAME']='eth0'\r\n\r\ntok = RobertaTokenizerFast.from_pretrained('/home/jovyan/models/roberta-large/')\r\nmodel = RobertaForSequenceClassification.from_pretrained('/home/jovyan/models/roberta-large/', num_labels=2)\r\n\r\ndf_full = pd.read_csv('IMDB_Dataset.csv')\r\nprint(\"loaded df\")\r\ndf_full = df_full.sample(frac=1).reset_index(drop=True)\r\ndf_req = df_full.head(1000)\r\ndf_train = df_req.head(800)\r\ndf_eval = df_req.tail(200)\r\ntrain_text, train_labels_raw, val_text, val_labels_raw = df_train.review.values.tolist(), df_train.sentiment.values.tolist(), df_eval.review.values.tolist(), df_eval.sentiment.values.tolist(),\r\n\r\n\r\ntrain_encodings = tok(train_text, padding=True, truncation=True, max_length=512)\r\nval_encodings = tok(val_text, padding=True, truncation=True, max_length=512)\r\ntrain_labels = [1 if i=='positive' else 0 for i in train_labels_raw]\r\nval_labels = [1 if i=='positive' else 0 for i in val_labels_raw]\r\n\r\n\r\nclass IMDbDataset(torch.utils.data.Dataset):\r\n def __init__(self, encodings, labels):\r\n self.encodings = encodings\r\n self.labels = labels\r\n\r\n def __getitem__(self, idx):\r\n item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}\r\n item['labels'] = torch.tensor(self.labels[idx])\r\n return item\r\n\r\n def __len__(self):\r\n return len(self.labels)\r\n\r\ntrain_dataset = IMDbDataset(train_encodings, train_labels)\r\nval_dataset = IMDbDataset(val_encodings, val_labels)\r\n\r\nprint(\"Encoding 
done\")\r\n\r\n\r\nparser = HfArgumentParser(TrainingArguments)\r\ntrain_args = parser.parse_args_into_dataclasses()\r\nprint('parser and args created')\r\n\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=train_args[0],\r\n train_dataset=train_dataset,\r\n eval_dataset=val_dataset\r\n )\r\nif train_args[0].do_train:\r\n print('------------TRAINING-------------')\r\n trainer.train() \r\nif train_args[0].do_eval:\r\n print('------------EVALUATING-------------')\r\n trainer.evaluate()\r\n```\r\n\r\nUpdated same in pytorch issues and forums as well ...\r\nWanted to let you know about the progress.\r\n",
"> I installed `1.7.1+cu101` and below returned `0`\r\n> \r\n> ```\r\n> %%bash\r\n> echo 'import os, torch; print(os.environ[\"LOCAL_RANK\"]); torch.distributed.init_process_group(\"nccl\")' > test.py\r\n> python -m torch.distributed.launch --nproc_per_node=1 test.py\r\n> ```\r\n\r\nThat's a good step forward, I'm glad it worked. From what I understand system-wide cuda shouldn't have impact on whether distributed works or not, but clearly in your case it did.\r\n\r\nHow can I reproduce your setup? I don't know where you got your dataset from. As suggested earlier if you want to save my time, please setup a public google colab notebook (free) and then me and others can easily look at the situation without needing to figure out how to set up our own.",
"Hi @stas00 ,\r\n\r\n[Here](https://colab.research.google.com/drive/1u0QHP8kdjlEqv85IyB98KVlVLBcddhMi?usp=sharing) is the colab version of my script. I used [IMDB from kaggle](https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews) in local but in colab I gave a download and extractable version. Also, I included torch and transformers versions that I'm using.",
"Thank you, but have you tried running it? It fails in many cells, perhaps I wasn't clear but the idea was to give us a working notebook and then it's easier to spend the time trying to understand the problem, rather than trying to figure out how to make it run - does it make sense?",
"Hmm, you're running on a system with multi-gpus, correct? In one threads I found out that if a vm is used and NVLink they may not work unless properly configured, and that person solved the problem with:\r\n```\r\nexport NCCL_P2P_DISABLE=1\r\n```\r\nwhich disables NVLink between the 2 cards and switches to the slower PCIe bridge connection.\r\n\r\nCould you try and check that this is not your case?\r\n",
"So sorry for that..\r\nBut in colab everything works just fine with same library versions that I'm using. [Here](https://colab.research.google.com/drive/1u0QHP8kdjlEqv85IyB98KVlVLBcddhMi?usp=sharing) is the updated one along with outputs.\r\n\r\nI have 3 VM's where 1 is having 2 GPU's and rest with single GPU. Currently I'm trying in one of the VM with single GPU and if everything is fine we'll replicate this to 2 GPU VM or combine all 4 V100-32GB GPU's for bigger models. This is the higher level roadmap.\r\n\r\n1. with deepspeed : \r\n\r\nI tried exact colab that I shared in my notebook server and it is hanging here - \r\n\r\n\r\n\r\n2. normal torch.distributed :\r\nSame with script using torch.distributed.launch and it also hangs at trainer.train() with below log - \r\n\r\n```\r\nparser and args created\r\nfastai-c2-0:22177:22177 [0] NCCL INFO Bootstrap : Using [0]eth0:10.244.2.134<0>\r\nfastai-c2-0:22177:22177 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation\r\n\r\nfastai-c2-0:22177:22177 [0] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]\r\nfastai-c2-0:22177:22177 [0] NCCL INFO NET/Socket : Using [0]eth0:10.244.2.134<0>\r\nfastai-c2-0:22177:22177 [0] NCCL INFO Using network Socket\r\nNCCL version 2.7.8+cuda10.1\r\n```\r\nsame with `export NCCL_P2P_DISABLE=1`\r\n\r\nBut now it's not hanging at ' Initializing torch distributed with backend: nccl ' anymore - \r\n\r\n\r\n",
"Will there be any potential configuration issue..?\r\nBut I think everything should work with 1 GPU. Correct me if I'm wrong.",
"Hi @stas00 ,\r\n\r\nIt's working with `NCCL_SOCKET_IFNAME=lo` from [this](https://github.com/NVIDIA/nccl/issues/352) thread.\r\n\r\nboth of the below were working now - \r\n```\r\n!NCCL_SOCKET_IFNAME=lo python -m torch.distributed.launch --nproc_per_node=1 ./Seq2Seq.py --output_dir ./out_dir/results --overwrite_output_dir --do_train \\\r\n--do_eval --per_device_train_batch_size 4 --per_device_eval_batch_size 4 --learning_rate 3e-5 --weight_decay 0.01 \\\r\n--num_train_epochs 1 --load_best_model_at_end --local_rank 0\r\n```\r\nand \r\n\r\n```\r\n!NCCL_SOCKET_IFNAME=lo deepspeed ./Seq2Seq.py --output_dir ./out_dir/results --overwrite_output_dir --do_train \\\r\n--do_eval --per_device_train_batch_size 12 --per_device_eval_batch_size 12 --learning_rate 3e-5 --weight_decay 0.01 \\\r\n--num_train_epochs 1 --load_best_model_at_end --local_rank 0 --deepspeed ds_config.json\r\n``` \r\nNot sure exactly what it's doing internally. I will check in other scenarios like multi-GPU and let you know...",
"Yay, so glad to hear you found a solution, @saichandrapandraju! \r\n\r\nThank you for updating the notebook too!\r\n\r\nIf the issue has been fully resolved for you please don't hesitate to close this Issue.\r\n\r\nIf some new problem occurs, please open a new dedicated issue. Thank you.",
"Tested DeepSpeed on multi-GPU as well and it worked !!\r\n\r\nBy setting `NCCL_SOCKET_IFNAME=lo`, everything worked as expected. \r\n\r\nThanks a lot @stas00 "
] | 1,613 | 1,694 | 1,613 | NONE | null | Hi,
I'm trying to implement model parallelism for BERT models by splitting the layers and assigning them across GPUs. I took DeBERTa as an example for this.
For DeBERTa, I'm able to split the entire model into 'embedding', 'encoder', 'pooler', 'classifier' and 'dropout' layers, as shown in the picture below.

With this approach, I trained on the IMDB classification task by assigning the 'encoder' to the second GPU and everything else to the first GPU. At the end of training, the second GPU had consumed a lot more memory than the first, which resulted in a 20-80 split of the entire model across the two GPUs.
So I tried splitting the encoder layers as well, as shown below, but I am getting this error - **"TypeError: forward() takes 1 positional argument but 2 were given"**
```
embed = dberta.deberta.embeddings.to('cuda:0')
f6e = dberta.deberta.encoder.layer[:6].to('cuda:0')
l6e = dberta.deberta.encoder.layer[6:].to('cuda:1')
pooler = dberta.pooler.to('cuda:0')
classifier = dberta.classifier.to('cuda:0')
dropout = dberta.dropout.to('cuda:0')
test = "this is to test deberta"
inp_ids = tok_dberta(test, return_tensors='pt').input_ids
att_mask = tok_dberta(test, return_tensors='pt').attention_mask
emb_out = embed(inp_ids.to('cuda:0'))
first_6_enc_lay_out = f6e(emb_out)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-15-379d948e5ba5> in <module>
----> 1 first_6_enc_lay_out = f6e(emb_out)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
TypeError: forward() takes 1 positional argument but 2 were given
```
Please suggest how to proceed further. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10151/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10150 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10150/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10150/comments | https://api.github.com/repos/huggingface/transformers/issues/10150/events | https://github.com/huggingface/transformers/issues/10150 | 807,130,110 | MDU6SXNzdWU4MDcxMzAxMTA= | 10,150 | Problem with evaluation_strategy | {
"login": "mtortoli",
"id": 29463872,
"node_id": "MDQ6VXNlcjI5NDYzODcy",
"avatar_url": "https://avatars.githubusercontent.com/u/29463872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mtortoli",
"html_url": "https://github.com/mtortoli",
"followers_url": "https://api.github.com/users/mtortoli/followers",
"following_url": "https://api.github.com/users/mtortoli/following{/other_user}",
"gists_url": "https://api.github.com/users/mtortoli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mtortoli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mtortoli/subscriptions",
"organizations_url": "https://api.github.com/users/mtortoli/orgs",
"repos_url": "https://api.github.com/users/mtortoli/repos",
"events_url": "https://api.github.com/users/mtortoli/events{/privacy}",
"received_events_url": "https://api.github.com/users/mtortoli/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [
"`evaluation_strategy` is not an argument fully implemented in `TFTrainer`, it only supports \"steps\". (The PyTorch counterpart supports all the possibilities.)\r\nTo evaluate every epoch, the best is to use the native Keras fit method.",
"Thanks a lot!!"
] | 1,613 | 1,613 | 1,613 | NONE | null | Hi everyone!
I have a problem (I think it is a bug, but I'm not sure) with the parameter "evaluation_strategy" in TFTrainingArguments.
I created a script for finetuning a transformers model, based on the example "run_tf_text_classification.py" file.
In "TFTrainingArguments" I set the parameter evaluation_strategy="epoch", to see how the eval_loss changes after each epoch.
Unfortunately, the eval_loss is not printed after each epoch, but if I change from "epoch" to "steps", the eval_loss is indeed printed after each step. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10150/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10149 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10149/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10149/comments | https://api.github.com/repos/huggingface/transformers/issues/10149/events | https://github.com/huggingface/transformers/issues/10149 | 807,039,226 | MDU6SXNzdWU4MDcwMzkyMjY= | 10,149 | Issue using num_beams parameter for T5 / DeepSpeed | {
"login": "PeterAJansen",
"id": 3813268,
"node_id": "MDQ6VXNlcjM4MTMyNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3813268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PeterAJansen",
"html_url": "https://github.com/PeterAJansen",
"followers_url": "https://api.github.com/users/PeterAJansen/followers",
"following_url": "https://api.github.com/users/PeterAJansen/following{/other_user}",
"gists_url": "https://api.github.com/users/PeterAJansen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PeterAJansen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PeterAJansen/subscriptions",
"organizations_url": "https://api.github.com/users/PeterAJansen/orgs",
"repos_url": "https://api.github.com/users/PeterAJansen/repos",
"events_url": "https://api.github.com/users/PeterAJansen/events{/privacy}",
"received_events_url": "https://api.github.com/users/PeterAJansen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It's `--eval_beams` in that particular script:\r\n```\r\n./finetune_trainer.py -h | grep beams\r\n [--tgt_lang TGT_LANG] [--eval_beams EVAL_BEAMS]\r\n --eval_beams EVAL_BEAMS\r\n # num_beams to use for evaluation.\r\n```\r\n\r\nThis script is going to be retired soon and `run_seq2seq.py` is the replacement, and there by my suggestions we switched to `num_beams` to match the `model.config.num_beams`",
"Thanks -- I was doing it the complex way and looking through the seqtrainer to verify the num_beams was being passed, when really I should have started with funetune_trainer.py to verify the name was the same. :)\r\n\r\nThat did get rid of the argument error. But I am now seeing different errors:\r\n\r\n1) I received the \"RuntimeError: Input, output and indices must be on the current device\" error, but then realized that was fixed in #10039 , so I did a pull of master.\r\n\r\n2) Then I was getting OOM errors when calling trainer with just --do_predict. I tried reducing eval_beams to 1, then excluding the argument all together, and the OOM is still thrown. \r\n\r\n3) To figure out if this was a broader issue from the pull, I've went back to rerunning my fine tuning script, but it's also now throwing OOM on T5-11B (but worked okay on my pull from ~Feb 4th). I'm running a few more tests to try to rule out if it's something I accidentally changed (so far nothing). I should probably start a fresh issue. \r\n\r\n",
"You probably need to start transitioning to `run_seq2seq.py` as `funetune_trainer.py` is about to be demoted into the legacy underworld. \r\n\r\nI haven't full figured out how to do it as not everything was ported, but I'm updating notes here: https://github.com/huggingface/transformers/issues/10036 as I learn new nuances - one of the main changes is that datasets are now done in a complete different way. \r\n\r\n\r\n\r\n----\r\n\r\n> To figure out if this was a broader issue from the pull, I've went back to rerunning my fine tuning script, but it's also now throwing OOM on T5-11B\r\n\r\nYes, I remember I had encountered that too - I went back to the original scripts that I know worked (https://github.com/huggingface/transformers/issues/9996) and then started comparing what changes I have done and then discovered which differences I made that led to more GPU usage.\r\n\r\nAlso note that since the merge of https://github.com/huggingface/transformers/pull/10114 the DeepSpeed process is completely contained in the `train()` stage (since it doesn't have anything to offer during eval at the moment). I think this then would impact the ability to load t5-11b 45GB model onto 40GB gpu, because DeepSpeed was loading it in fp16 (22GB), but HF trainer can't do that. But this is a very recent change. I started looking at doing fp16 during eval in HF Trainer, but it looks like this is a wildcard and many models fail to deliver when `.half`ed.\r\n\r\nBefore this PR was merged, if you were to train and then eval then the smaller model would avail itself to eval. Not yet sure how to best to proceed - surely if one can train a model, they should be able to eval it too.\r\n\r\n**edit**: looking closer, `self.model` will remain as it were in `train` anyway, so actually this PR shouldn't have affected the eval stage - i.e. should remain in fp16 if the trainer set the model. But if `train` wasn't run it surely won't be able to load in fp32 (45GB>40GB).",
"Thanks -- I migrated to ```run_seq2seq.py``` and I'm now able to replicate the OOM error on the README examples (assuming I have DeepSpeed configured correctly). So it does seem like it's a broader issue, and we may back to not being able to train T5-11B on the 40gb cards on the current master (though I can always go back and try to see if there's a commit from the past week that's post-eval-issue fix and pre-new issue). \r\n\r\nSince this is unrelated to the ```--num_beams`` argument, I put it in a new issue: #10161 and we can probably close this one. "
] | 1,613 | 1,613 | 1,613 | NONE | null | Using a fine-tuned seq2seq model, I'd like to generate a number of different candidate generations for a given input. One way of typically doing this is using beam search.
Using @stas00 's amazing DeepSpeed additions so that T5-11B will fit in my GPUs, I'm calling the trainer (finetune_trainer.py) with only the --do_predict (no train/eval) and (critically) the --num_beams parameter, but this is throwing an error.
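For context, outside the trainer this is the kind of generation I am after (a minimal sketch; the checkpoint and parameter values are illustrative):
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")               # illustrative checkpoint
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("summarize: some input text", return_tensors="pt")
outputs = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    num_beams=8,               # beam search width
    num_return_sequences=4,    # several distinct candidates per input
    max_length=256,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```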
I think the issue is likely one of the following:
1) That this is an unexpected bug/error
2) That this is normal/expected, and that beam search isn't supported on trainer prediction but is instead normally accomplished using run_distributed_eval.py (as described in https://github.com/huggingface/transformers/blob/master/examples/seq2seq/README.md). But if I remember correctly, I don't think run_distributed_eval.py currently works with DeepSpeed (though I could be wrong?).
I am using a pull from around Feb 4th, so if things have changed in the past week, it's possible that's my issue, too.
### Run Script
```
export BS=1; rm -rf $OUTPUTDIR; PYTHONPATH=../../src USE_TF=0 /usr/bin/time -v deepspeed --num_gpus=4 ./finetune_trainer.py --model_name_or_path allenai/unifiedqa-t5-11b --output_dir $OUTPUTDIR --adam_eps 1e-06 --data_dir $DATADIR \
--do_predict \
--num_beams 8 \
--evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 \
--logging_first_step --logging_steps 1000 --max_source_length $SEQLEN --max_target_length $SEQLEN --num_train_epochs $EPOCHS \
--overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS \
--predict_with_generate --sortish_sampler \
--test_max_target_length $SEQLEN --val_max_target_length $SEQLEN \
--warmup_steps 5 \
--deepspeed ds_config.json --fp16 \
```
### Error
```
[2021-02-12 01:02:55,207] [WARNING] [runner.py:117:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2021-02-12 01:02:55,861] [INFO] [runner.py:355:main] cmd = /home/pajansen/anaconda3/envs/transformers-feb4-2020/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMSwgMiwgM119 --master_addr=127.0.0.1 --master_port=29500 ./finetune_trainer.py --model_name_or_path allenai/unifiedqa-t5-11b --output_dir output_dir_compexpl-feb10-epoch1-uqa-11b-pretrain-teacher-min6-max8-step2-beam --adam_eps 1e-06 --data_dir /home/pajansen/github/compositional-expl/pretrain/min-6-max-8-noduptest/ --do_predict --num_beams 8 --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 256 --max_target_length 256 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 1 --per_device_train_batch_size 1 --predict_with_generate --sortish_sampler --test_max_target_length 256 --val_max_target_length 256 --warmup_steps 5 --deepspeed ds_config.json --fp16
[2021-02-12 01:02:56,753] [INFO] [launch.py:78:main] WORLD INFO DICT: {'localhost': [0, 1, 2, 3]}
[2021-02-12 01:02:56,753] [INFO] [launch.py:84:main] nnodes=1, num_local_procs=4, node_rank=0
[2021-02-12 01:02:56,753] [INFO] [launch.py:99:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1, 2, 3]})
[2021-02-12 01:02:56,753] [INFO] [launch.py:100:main] dist_world_size=4
[2021-02-12 01:02:56,753] [INFO] [launch.py:102:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3
[2021-02-12 01:02:59,580] [INFO] [distributed.py:39:init_distributed] Initializing torch distributed with backend: nccl
[2021-02-12 01:02:59,723] [INFO] [distributed.py:39:init_distributed] Initializing torch distributed with backend: nccl
[2021-02-12 01:02:59,828] [INFO] [distributed.py:39:init_distributed] Initializing torch distributed with backend: nccl
[2021-02-12 01:02:59,976] [INFO] [distributed.py:39:init_distributed] Initializing torch distributed with backend: nccl
Traceback (most recent call last):
File "./finetune_trainer.py", line 367, in <module>
main()
File "./finetune_trainer.py", line 160, in main
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
File "/home/pajansen/github/transformers-feb4-2021/transformers/src/transformers/hf_argparser.py", line 189, in parse_args_into_dataclasses
raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}")
ValueError: Some specified arguments are not used by the HfArgumentParser: ['--num_beams', '8']
Traceback (most recent call last):
File "./finetune_trainer.py", line 367, in <module>
main()
File "./finetune_trainer.py", line 160, in main
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
File "/home/pajansen/github/transformers-feb4-2021/transformers/src/transformers/hf_argparser.py", line 189, in parse_args_into_dataclasses
raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}")
ValueError: Some specified arguments are not used by the HfArgumentParser: ['--num_beams', '8']
Traceback (most recent call last):
File "./finetune_trainer.py", line 367, in <module>
Traceback (most recent call last):
File "./finetune_trainer.py", line 367, in <module>
main()
File "./finetune_trainer.py", line 160, in main
main()
File "./finetune_trainer.py", line 160, in main
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
model_args, data_args, training_args = parser.parse_args_into_dataclasses() File "/home/pajansen/github/transformers-feb4-2021/transformers/src/transformers/hf_argparser.py", line 189, in parse_args_into_dataclasses
File "/home/pajansen/github/transformers-feb4-2021/transformers/src/transformers/hf_argparser.py", line 189, in parse_args_into_dataclasses
raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}")raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}")
ValueErrorValueError: : Some specified arguments are not used by the HfArgumentParser: ['--num_beams', '8']Some specified arguments are not used by the HfArgumentParser: ['--num_beams', '8']
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10149/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10148 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10148/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10148/comments | https://api.github.com/repos/huggingface/transformers/issues/10148/events | https://github.com/huggingface/transformers/pull/10148 | 806,957,026 | MDExOlB1bGxSZXF1ZXN0NTcyMjg1MDU1 | 10,148 | Fix typo in GPT2DoubleHeadsModel docs | {
"login": "M-Salti",
"id": 9285264,
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/M-Salti",
"html_url": "https://github.com/M-Salti",
"followers_url": "https://api.github.com/users/M-Salti/followers",
"following_url": "https://api.github.com/users/M-Salti/following{/other_user}",
"gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions",
"organizations_url": "https://api.github.com/users/M-Salti/orgs",
"repos_url": "https://api.github.com/users/M-Salti/repos",
"events_url": "https://api.github.com/users/M-Salti/events{/privacy}",
"received_events_url": "https://api.github.com/users/M-Salti/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,613 | 1,613 | 1,613 | CONTRIBUTOR | null | If I'm not mistaken, masked label ids should be set to `-100` not `-1`
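(As a small illustration of the convention this fix refers to: `-100` is the default `ignore_index` of `torch.nn.CrossEntropyLoss`, so label positions set to `-100` are simply skipped when computing the LM loss. A minimal, self-contained sketch with made-up ids:)
```
import torch

logits = torch.randn(1, 4, 10)                # (batch, sequence, vocab) scores
labels = torch.tensor([[-100, -100, 3, 7]])   # the first two positions are ignored

loss = torch.nn.CrossEntropyLoss()(logits.view(-1, 10), labels.view(-1))
print(loss)
```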
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10148/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10148",
"html_url": "https://github.com/huggingface/transformers/pull/10148",
"diff_url": "https://github.com/huggingface/transformers/pull/10148.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10148.patch",
"merged_at": 1613150319000
} |
https://api.github.com/repos/huggingface/transformers/issues/10147 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10147/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10147/comments | https://api.github.com/repos/huggingface/transformers/issues/10147/events | https://github.com/huggingface/transformers/issues/10147 | 806,702,135 | MDU6SXNzdWU4MDY3MDIxMzU= | 10,147 | BERT with regression head cannot fit one datapoint | {
"login": "lucky-bai",
"id": 123435,
"node_id": "MDQ6VXNlcjEyMzQzNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/123435?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucky-bai",
"html_url": "https://github.com/lucky-bai",
"followers_url": "https://api.github.com/users/lucky-bai/followers",
"following_url": "https://api.github.com/users/lucky-bai/following{/other_user}",
"gists_url": "https://api.github.com/users/lucky-bai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucky-bai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucky-bai/subscriptions",
"organizations_url": "https://api.github.com/users/lucky-bai/orgs",
"repos_url": "https://api.github.com/users/lucky-bai/repos",
"events_url": "https://api.github.com/users/lucky-bai/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucky-bai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,613 | 1,613 | 1,613 | NONE | null | Hi, I am trying to use BERT for a token-level regression task (predict a continuous value for each token), and I'm having trouble getting my model to train. As a debugging strategy, I'm trying to get it to overfit one datapoint, which should be easy, but it's failing that also.
Here is a minimal reproducing source code. The model is BERT which feeds into a ``nn.Linear(768, 1)``. To keep things simple, I am feeding it a sequence of length 1, and training it to output 0.5.
```
import torch
import transformers
class RegressionModel(torch.nn.Module):
    def __init__(self):
        super(RegressionModel, self).__init__()
        self.bert = transformers.BertModel.from_pretrained('bert-base-uncased')
        self.linear = torch.nn.Linear(768, 1)

    def forward(self, X_ids):
        return self.linear(self.bert(X_ids).last_hidden_state)

model = RegressionModel().cuda()
model.train()
opt = torch.optim.Adam(model.parameters())

X_ids = torch.LongTensor([[12345]]).cuda()
Y_true = torch.Tensor([[0.5]]).cuda()

steps = 0
while True:
    opt.zero_grad()
    Y_pred = model(X_ids)
    loss = (Y_true - Y_pred)**2
    loss.backward()
    print(steps, Y_pred, float(loss))
    steps += 1
    opt.step()
```
After a few thousand iterations, it predicts around 0.5 but not exactly:
```
2315 tensor([[[0.4669]]], device='cuda:0', grad_fn=<AddBackward0>) 0.0010972624877467752
2316 tensor([[[0.5115]]], device='cuda:0', grad_fn=<AddBackward0>) 0.00013136999041307718
2317 tensor([[[0.4788]]], device='cuda:0', grad_fn=<AddBackward0>) 0.00045129822683520615
2318 tensor([[[0.4658]]], device='cuda:0', grad_fn=<AddBackward0>) 0.0011675604619085789
```
Note that if I set `model.eval()` instead of `model.train()`, then the model is able to fit as expected (predicts 0.5000 after about 200 iterations). The problem exists in the RoBERTa model as well.
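(A note for readers of this report: the main behavioural difference between `model.train()` and `model.eval()` for `BertModel` is that dropout is active in train mode, which injects noise into every forward pass. A quick sanity check — not part of the original report — is to load the encoder with its dropout probabilities zeroed and see whether the gap disappears:)
```
import transformers

# Same encoder as above, but with dropout disabled, so train() and eval()
# give identical forward passes for a fixed input.
bert = transformers.BertModel.from_pretrained(
    'bert-base-uncased',
    hidden_dropout_prob=0.0,
    attention_probs_dropout_prob=0.0,
)
```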
## Version information
- `transformers` version: 4.3.2
- Platform: Linux-4.15.0-112-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.5
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10147/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10146 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10146/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10146/comments | https://api.github.com/repos/huggingface/transformers/issues/10146/events | https://github.com/huggingface/transformers/issues/10146 | 806,670,345 | MDU6SXNzdWU4MDY2NzAzNDU= | 10,146 | Model not training beyond 1st epoch | {
"login": "neel04",
"id": 11617870,
"node_id": "MDQ6VXNlcjExNjE3ODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/11617870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neel04",
"html_url": "https://github.com/neel04",
"followers_url": "https://api.github.com/users/neel04/followers",
"following_url": "https://api.github.com/users/neel04/following{/other_user}",
"gists_url": "https://api.github.com/users/neel04/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neel04/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neel04/subscriptions",
"organizations_url": "https://api.github.com/users/neel04/orgs",
"repos_url": "https://api.github.com/users/neel04/repos",
"events_url": "https://api.github.com/users/neel04/events{/privacy}",
"received_events_url": "https://api.github.com/users/neel04/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you please post this on the [forum](https://discuss.huggingface.co/), rather than here? The authors of HuggingFace like to keep this place for bugs or feature requests, and they're more than happy to help you on the forum.\r\n\r\nLooking at your code, this seems more like an issue with preparing the data correctly for the model.\r\n\r\nTake a look at [this example in the docs](https://huggingface.co/transformers/custom_datasets.html#sequence-classification-with-imdb-reviews) on how to perform text classification with the Trainer.\r\n\r\n",
"@NielsRogge Not very pleased with your reply, please ask someone a question if you are unclear about something rather than trying to just close an issue. \r\n\r\nAs regards the data, I can assure you it is in the format specified by your guide - It is in NumPy arrays converted to list and then made into a TFDataset object and has all the correct parts. The conversion was made to list because an error clearly specified that lists are to be passed.\r\n\r\nThis **is** a bug because the model does appear to be training, just having extremely low accuracy (Which may be because of the activation function, but I am not sure) and it won't train any further than the 1st epoch, where subsequent epochs don't pick up where the previous epoch left.",
"I've created a Google Colab that will hopefully resolve your issue:\r\n\r\nhttps://colab.research.google.com/drive/1azTvNc0AZeN5JMyzPnOGic53jddIS-QK?usp=sharing\r\n\r\nWhat I did was create some dummy data based on the format of your data, and then see if the model is able to overfit them (as this is [one of the most common things to do first when debugging a neural network](http://karpathy.github.io/2019/04/25/recipe/)). As you can see in the notebook, it appears to do, so everything seems to be working fine. Let me know if this helps.\r\n\r\nUPDATE: looking at your code, it appears that the learning rate is way too low in your case. A typical value for Transformers is 5e-5. ",
"@NielsRogge Thanx a lot for the advice, I will surely update you regarding any solution.\r\n\r\nI have been trying to apply this to my own code, but I am still reproducing the bug - the warnings are there (unlike yours) I am using the latest version of `transformers`. The problem is that it doesn't learn - whatever progress it has made in 1st epoch is replicated in the rest of them. As an example, using this dummy dataset:-\r\n\r\n```\r\ntrain_text = ['a', 'b']\r\ntrain_label = [0,1]\r\nval_text = ['b']\r\nval_label = [1]\r\n```\r\n\r\neven after 35 epochs, the model does not overfit. the same accuracy/loss is maintained irrespective of the loss function.\r\n```\r\n\r\nfrom transformers import TFRobertaForSequenceClassification\r\nimport tensorflow as tf\r\n\r\nmodel = TFRobertaForSequenceClassification.from_pretrained('roberta-base', num_labels=1)\r\n\r\noptimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)\r\n\r\nloss_fn = tf.keras.losses.CategoricalCrossentropy(from_logits=True)\r\n\r\nmodel.compile(optimizer=optimizer, loss=loss_fn, metrics=['accuracy']) # can also use any keras loss fn\r\nmodel.fit(train_dataset.batch(16), validation_data = val_dataset.batch(64), epochs=5, batch_size=1)\r\n```\r\n\r\n**UPDATE:** You might have missed this line @NeilsRogge about using the Keras loss function rather than the default one\r\n`loss_fn = tf.keras.losses.CategoricalCrossentropy(from_logits=True)`\r\ncan you try reproduce the issue with that?",
"> Not very pleased with your reply, please ask someone a question if you are unclear about something rather than trying to just close an issue.\r\n\r\nI want to jump in here and let you know that this kind of behavior is inappropriate. @NielsRogge is doing his best to help you here and he is doing this on his own free time. \"My model is not training\" is very vague and doesn't seem like a bug, so suggesting to take this on the forums is very appropriate: more people will be able to help you there.\r\n\r\nPlease respect that this is an open-source project. No one has to help you solve your bug so staying open-mined and kind will go a long way into getting the help you need.",
"@sgugger with all due respect, My model was training; just that it lost all progress it had made in an epoch for the next one - starting and ending with the exact number. And this is very much a bug.\r\n\r\nAnd about the open-source project, I do understand that this is voluntary **but**, someday if you need help and someone else tells you without reading your question that whatever you have done (without any prior proof) and suggests you to ask your question somewhere else that I know for a fact is not that active, I would like to see your response. \r\n\r\nWe have many projects that are not backed by a company - look at `TPOT` for instance. its maintainer (weixuanfu) does this mostly as a hobby and for learning but if there is something he does not know, he wouldn't say \"ask your question somewhere else\" and not fully try to solve the problem.\r\n\r\nIf you don't want to spend time solving my problem, that's fine. I have no issue with that. But if you do not want to solve my problem just to close down the list of issues **then**, it feels pretty bad. I do know that I don't understand ML very deeply and certainly not enough to make a project of mine, but I do know the difference between someone actually trying to help me versus just trying to reduce the number of open GIthub issues.",
"I do think there's a bit of a misunderstanding with what we mean by a _bug_.\r\n\r\nOf course, since your model isn't training properly, there's a bug in your code. But in this case, it's a bug probably caused by the user (these bugs include setting hyperparameters like learning rate too low, not setting your model in training mode, improper use of the Trainer, etc.). These things are bugs, but they are caused by the user. And for such cases, the forum is the ideal place to seek help. \r\n\r\nGithub issues are mostly for bugs caused by the Transformers library itself, i.e. caused by the authors (these bugs include implementations of models which are incorrect, a bug in the implementation of the Trainer, etc.). \r\n\r\nSo the issue you're posting here is a perfect use case for the forum! It's not that we want to close issues as soon as possible, and it's also not the case that we don't want to help you. It's just a difference between bugs due to the user/bugs due to the library itself, and there are 2 different places for this.",
"What said @NielsRogge is correct, your way of training your model is not correct (and your data might also be malformed). As far as I can see, if your data really looks like:\r\n\r\n```\r\nID,Text,Label\r\n......................\r\nId_1, \"Lorem Ipsum\", 14\r\n```\r\n\r\nI guess that if you have label id up to at least 14, it certainly means that you have more than one label, then the line\r\n`model = TFRobertaForSequenceClassification.from_pretrained('roberta-base', num_labels=1)` is wrong and `1` should be replaced by the proper number.\r\n\r\nNevertheless, if you really have only one label, your loss must be `tf.keras.losses.MeanSquaredError` and not `tf.keras.losses.CategoricalCrossentropy`. But, if you have more than one label your loss must be `tf.keras.losses.SparseCategoricalCrossentropy`.\r\n\r\nSo as far as I can say, I second what has been said before and this post should be on the forum, not here.",
" @jplu Hmm.. I had thought that num_labels was the number labels to be predicted by the model (Like if it is multi-label classification) and about the data, I am importing it in NumPy arrays after preprocessing so I don't see why the structure of the data frame might be a problem. \r\n\r\n@NielsRogge You may be right that the bug may be hyperparameter (I tried using all sorts of LR but it didn't work) but the reason why I think it is a bug in `transformers` is that if the loss starts from `100` and ends at `70` in 1st epoch, it is the exact same story in the rest of the epochs (They start and end with the same numbers):\r\n\r\n```\r\n.................\r\naccuracy: 0.0025 - val_loss: 87.4479 - val_accuracy: 0.0077\r\naccuracy: 0.0047 - val_loss: 87.4479 - val_accuracy: 0.0077\r\naccuracy: 0.0049 - val_loss: 87.4479 - val_accuracy: 0.0077\r\naccuracy: 0.0043 - val_loss: 87.4479 - val_accuracy: 0.0077\r\naccuracy: 0.0052 - val_loss: 87.4479 - val_accuracy: 0.0077\r\n.................\r\n```\r\n\r\nAnother reason was that trying to train the model using `Trainer()` did not work (the cell executes successfully) but does not start training nor report an error. Can you tell me whether this is a bug or not? I had put it in the list above, and this is the output of the cell:- [just normal warnings, but does not start training]\r\n\r\n```\r\nAll model checkpoint layers were used when initializing TFRobertaForSequenceClassification.\r\n\r\nSome layers of TFRobertaForSequenceClassification were not initialized from the model checkpoint at roberta-base and are newly initialized: ['classifier']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n\r\nWARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nWARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nWARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nWARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\n\r\n```\r\nUPDATE: After quite some fixing, the model is now training and seems to be learning (I am still confused about what exactly `num_labels` is supposed to mean - number of total labels present in data OR labels that the model has to predict [multi-label classification]). Anyways, It still **doesn't** train with `Trainer()` which means I can't do Hyperparameter tuning :(",
"> Anyways, It still doesn't train with Trainer() which means I can't do Hyperparameter tuning :(\r\n\r\nAs mentioned before `TFTrainer` does not have hyper-parameter tuning. You should try the Keras one.",
"@sgugger I don't get what you mean - I should use PyTorch trainer? because I can't find any trainer for Keras in docs, only for native Tensorflow. In the example, [here](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/text_classification.ipynb#scrollTo=ttfT0CqaIrJm) they just use `Trainer`. Is there any way to do Htuning with keras/TF only, and not use pytorch?",
"This example is using PyTorch, not TensorFlow. There is no hyper-parameter tuning implemented in Transformers in TensorFlow, which is why I was recommending [Keras Tuner](https://blog.tensorflow.org/2020/01/hyperparameter-tuning-with-keras-tuner.html).",
"Alright. Thanx a ton!",
"> > Anyways, It still doesn't train with Trainer() which means I can't do Hyperparameter tuning :(\r\n> \r\n> As mentioned before `TFTrainer` does not have hyper-parameter tuning. You should try the Keras one.\r\n\r\nDo you plan to add this support for TFTrainer?",
"@liaocs2008 the `TFTrainer` is not deprecated in favor of `Keras` which is now the default in all of our examples.",
"> After quite some fixing, the model is now training and seems to be learning\r\n\r\n@neel04 I am facing the same issue, the model seems to be resetting after each epoch. Could you please share what fixes you implemented?"
] | 1,613 | 1,651 | 1,613 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.0.dev0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No (Single GPU) --> **COLAB**
### Who can help
Models:
- albert, bert, xlm: @LysandreJik
- tensorflow: @jplu
- trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...): RoBERTa
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
First off, this issue is basically a continuation of #10055, but since that error was mostly resolved, I have opened another issue. I am using a private dataset, so I am not at liberty to share it. However, I can give an idea of what the `csv` looks like:-
```
,ID,Text,Label
......................
Id_1, "Lorem Ipsum", 14
```
This is the code:-
```
!git clone https://github.com/huggingface/transformers.git
!cd transformers
!pip install -e .
train_text = list(train['Text'].values)
train_label = list(train['Label'].values)
val_text = list(val['Text'].values)
val_label = list(val['Label'].values)
from transformers import RobertaTokenizer, TFRobertaForSequenceClassification
import tensorflow as tf
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = TFRobertaForSequenceClassification.from_pretrained('roberta-base')
train_encodings = tokenizer(train_text, truncation=True, padding=True)
val_encodings = tokenizer(val_text, truncation=True, padding=True)
train_dataset = tf.data.Dataset.from_tensor_slices((
dict(train_encodings),
train_label
))
val_dataset = tf.data.Dataset.from_tensor_slices((
dict(val_encodings),
val_label
))
#----------------------------------------------------------------------------------------------------------------------
#Since The trainer does not work, I will use the native one
from transformers import TFTrainingArguments, TFTrainer
training_args = TFTrainingArguments(
output_dir='./results', # output directory
num_train_epochs=3, # total number of training epochs
per_device_train_batch_size=16, # batch size per device during training
per_device_eval_batch_size=64, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=10,
)
with training_args.strategy.scope():
    model = TFRobertaForSequenceClassification.from_pretrained("roberta-base")

trainer = TFTrainer(
    model=model,                  # the instantiated Transformers model to be trained
    args=training_args,           # training arguments, defined above
    train_dataset=train_dataset,  # training dataset
    eval_dataset=val_dataset      # evaluation dataset
)
trainer.train()
#----------------------------------------------------------------------------------------------------------------------
#Using Native Tensorflow
from transformers import TFRobertaForSequenceClassification
import tensorflow as tf
model = TFRobertaForSequenceClassification.from_pretrained('roberta-base', num_labels=1)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-18)
loss_fn = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
model.compile(optimizer=optimizer, loss=loss_fn, metrics=['accuracy']) # can also use any keras loss fn
model.fit(train_dataset.batch(8), validation_data = val_dataset.batch(64), epochs=15, batch_size=8)
```
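(For reference, the later comments in this thread point out that with integer class labels the Keras loss should be `SparseCategoricalCrossentropy`, and that `num_labels` must match the number of classes in the data. A minimal sketch of that combination — the 15-class value is a hypothetical placeholder, not taken from the dataset:)
```
from transformers import TFRobertaForSequenceClassification
import tensorflow as tf

num_classes = 15  # hypothetical: must equal the number of distinct labels in the data
model = TFRobertaForSequenceClassification.from_pretrained('roberta-base', num_labels=num_classes)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'],
)
```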
**The Problems:**
- [ ] Cannot train using the `Trainer()` method. The cell successfully executes, but it does nothing - does not start training at all. This is not much of a major issue but it may be a factor in this problem.
- [x] Model does not train beyond the 1st epoch :---> I have shared this log, where you can clearly see that the model makes no progress past the 1st epoch; the remaining epochs just repeat what the first accomplished:-
```
All model checkpoint layers were used when initializing TFRobertaForSequenceClassification.
Some layers of TFRobertaForSequenceClassification were not initialized from the model checkpoint at roberta-base and are newly initialized: ['classifier']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Epoch 1/5
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).WARNING:tensorflow:AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7f5b14f1b6c8>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: <cyfunction Socket.send at 0x7f5b323fb2a0> is not a module, class, method, function, traceback, frame, or code object
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7f5b14f1b6c8>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: <cyfunction Socket.send at 0x7f5b323fb2a0> is not a module, class, method, function, traceback, frame, or code object
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <function wrap at 0x7f5b301d3c80> and will run it as-is.
Cause: while/else statement not yet supported
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <function wrap at 0x7f5b301d3c80> and will run it as-is.
Cause: while/else statement not yet supported
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
180/180 [==============================] - ETA: 0s - loss: 0.0000e+00 - accuracy: 0.0022WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
180/180 [==============================] - 150s 589ms/step - loss: 0.0000e+00 - accuracy: 0.0022 - val_loss: 0.0000e+00 - val_accuracy: 0.0077
Epoch 2/5
180/180 [==============================] - 105s 582ms/step - loss: 0.0000e+00 - accuracy: 0.0022 - val_loss: 0.0000e+00 - val_accuracy: 0.0077
Epoch 3/5
180/180 [==============================] - 105s 582ms/step - loss: 0.0000e+00 - accuracy: 0.0022 - val_loss: 0.0000e+00 - val_accuracy: 0.0077
```
> I think the problem may be that the `activation function` is wrong. For `CategoricalCrossentropy` we need a `Sigmoid` activation, but maybe the activation used in my code is not that.
Can anyone tell me how exactly to change the activation function, or maybe other thoughts on the potential problem? I have tried changing the learning rate with no effect. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10146/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10145 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10145/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10145/comments | https://api.github.com/repos/huggingface/transformers/issues/10145/events | https://github.com/huggingface/transformers/pull/10145 | 806,601,099 | MDExOlB1bGxSZXF1ZXN0NTcxOTkyMTUw | 10,145 | Add Fine-Tuning for Wav2Vec2 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is really nice, and the piece that will make the wav2vec 2.0 stuff awesome and more readily available! let me know if I can assist in testing/whatnot :)"
] | 1,613 | 1,614 | 1,614 | MEMBER | null | # What does this PR do?
This PR adds the possibility to finetune Wav2Vec2 on a downstream task. I ran a couple of experiments and I think the training is pretty stable now, see this training run *e.g.*:
https://wandb.ai/patrickvonplaten/huggingface/reports/Project-Dashboard--Vmlldzo0OTI0OTc?accessToken=8azw8iyxnbiqd4ytxcgm4hbnfh3x1b2c9l2eyfqfzdqw7l0icreljc9qpx0rkl6f
Once this is merged, I will make a nice forum post and link 1,2 notebooks.
## Who can review?
Would be great if @sgugger @LysandreJik and @patil-suraj could review. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10145/reactions",
"total_count": 19,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 8,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10145/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10145",
"html_url": "https://github.com/huggingface/transformers/pull/10145",
"diff_url": "https://github.com/huggingface/transformers/pull/10145.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10145.patch",
"merged_at": 1614589998000
} |
https://api.github.com/repos/huggingface/transformers/issues/10144 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10144/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10144/comments | https://api.github.com/repos/huggingface/transformers/issues/10144/events | https://github.com/huggingface/transformers/issues/10144 | 806,553,933 | MDU6SXNzdWU4MDY1NTM5MzM= | 10,144 | T5 Base length of Tokenizer not equal config vocab_size | {
"login": "ari9dam",
"id": 14134882,
"node_id": "MDQ6VXNlcjE0MTM0ODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/14134882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ari9dam",
"html_url": "https://github.com/ari9dam",
"followers_url": "https://api.github.com/users/ari9dam/followers",
"following_url": "https://api.github.com/users/ari9dam/following{/other_user}",
"gists_url": "https://api.github.com/users/ari9dam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ari9dam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ari9dam/subscriptions",
"organizations_url": "https://api.github.com/users/ari9dam/orgs",
"repos_url": "https://api.github.com/users/ari9dam/repos",
"events_url": "https://api.github.com/users/ari9dam/events{/privacy}",
"received_events_url": "https://api.github.com/users/ari9dam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"duplicate of https://github.com/huggingface/transformers/issues/4875 I think",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,613 | 1,619 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: Installed from git
## Issue
The `len(AutoTokenizer.from_pretrained("t5-base"))` is `32100` but the `T5ForConditionalGeneration.from_pretrained("t5-base").config.vocab_size` is `32128`. Seems to be a similar issue to that of : https://github.com/huggingface/transformers/issues/2020
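A minimal snippet to reproduce the mismatch (the values in the comments are the ones quoted above):
```
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

print(len(tokenizer))            # 32100
print(model.config.vocab_size)   # 32128
```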
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10144/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10143 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10143/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10143/comments | https://api.github.com/repos/huggingface/transformers/issues/10143/events | https://github.com/huggingface/transformers/issues/10143 | 806,533,214 | MDU6SXNzdWU4MDY1MzMyMTQ= | 10,143 | context manager for seeding, or generating fixed random tensor. | {
"login": "sadakmed",
"id": 18331629,
"node_id": "MDQ6VXNlcjE4MzMxNjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/18331629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sadakmed",
"html_url": "https://github.com/sadakmed",
"followers_url": "https://api.github.com/users/sadakmed/followers",
"following_url": "https://api.github.com/users/sadakmed/following{/other_user}",
"gists_url": "https://api.github.com/users/sadakmed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sadakmed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sadakmed/subscriptions",
"organizations_url": "https://api.github.com/users/sadakmed/orgs",
"repos_url": "https://api.github.com/users/sadakmed/repos",
"events_url": "https://api.github.com/users/sadakmed/events{/privacy}",
"received_events_url": "https://api.github.com/users/sadakmed/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,613 | 1,619 | 1,619 | CONTRIBUTOR | null | # 🚀 Feature request
A context manager for the torch random seed, where the seed is fixed only inside the block.
## Motivation
In some integration tests, the required input is too large to be hardcoded, and `ids_tensor` only provides int32 examples.
To fix such an input we could set a global NumPy/torch seed, but that would probably compromise the randomness of other parts of the test.
I think a context manager inside which the seed is fixed would be a real benefit.
Examples of issues needing large fixed tensors as input: #9951, #9954.
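A rough sketch of what such a helper could look like — this is only an illustration of the request, built on `torch.random.fork_rng`, not an existing utility in the library:
```
import contextlib
import torch

@contextlib.contextmanager
def fixed_torch_seed(seed: int):
    # fork_rng saves the global RNG state and restores it on exit, so seeding
    # inside the block does not affect randomness elsewhere in the test.
    with torch.random.fork_rng():
        torch.manual_seed(seed)
        yield

# Example: a large, reproducible random tensor for an integration test.
with fixed_torch_seed(42):
    fixed_input = torch.randn(2, 128, 768)
```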
if there's any alternative please suggest? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10143/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10142 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10142/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10142/comments | https://api.github.com/repos/huggingface/transformers/issues/10142/events | https://github.com/huggingface/transformers/issues/10142 | 806,406,834 | MDU6SXNzdWU4MDY0MDY4MzQ= | 10,142 | T5 GPU Runtime Degradation | {
"login": "dsgissin",
"id": 20739375,
"node_id": "MDQ6VXNlcjIwNzM5Mzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/20739375?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dsgissin",
"html_url": "https://github.com/dsgissin",
"followers_url": "https://api.github.com/users/dsgissin/followers",
"following_url": "https://api.github.com/users/dsgissin/following{/other_user}",
"gists_url": "https://api.github.com/users/dsgissin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dsgissin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dsgissin/subscriptions",
"organizations_url": "https://api.github.com/users/dsgissin/orgs",
"repos_url": "https://api.github.com/users/dsgissin/repos",
"events_url": "https://api.github.com/users/dsgissin/events{/privacy}",
"received_events_url": "https://api.github.com/users/dsgissin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks a lot for this issue @dsgissin! Will take a look this week!",
"Hey! \r\nDid you get a chance to look into the runtime degradation?\r\n\r\nThanks",
"Looking now! Sorry for the delay",
"Okey, I can reproduce the degradation! Will try to fix it today",
"I think this PR should fix it: https://github.com/huggingface/transformers/pull/10496\r\n\r\nLet me know if you still encounter a degradation!\r\n\r\nThanks a mille for spotting this degradation - you probably now made T5 faster for the whole community :-)",
"Great, thanks a lot for the quick fix!"
] | 1,613 | 1,614 | 1,614 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.1 VS 3.4.0
- Platform: Colab (K80 GPU)
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101
- Tensorflow version (GPU?): N.A.
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@patrickvonplaten, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): T5
The problem arises when using:
* [x] the official example scripts: (give details below)
* [] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Hello,
I’ve noticed that the running time of T5 on a GPU has increased between v3.4.0 and the current version (v4.2.1). When running inference on a single example on a K80 GPU (Google Colab), the average runtime of a generate() call for a single example (the one in the transformers documentation) with t5-base in v3.4.0 is 539 ± 13 ms, while the runtime for v4.2.1 is 627 ± 13 ms.
On t5-large, the difference is 1004 ± 22 ms, compared to 1242 ± 15 ms.
I made two colab notebooks that compare the two versions:
https://colab.research.google.com/drive/1Rm9RFdfLUFFHOvjAOg816-6oXw8zm_tE?usp=sharing#scrollTo=eeJ0sS_g7-X2
https://colab.research.google.com/drive/1U2QPA4MR48xPCpn4XiG5KBk3qZGYeoIJ?usp=sharing
I’m aware of at least one bug fix that was made to the attention mechanism of T5 in v4.0.0 (#8158), but I don’t think this change should have caused such a degradation.
Any idea why such a degradation occurred?
Thanks!
## To reproduce
See Colab notebooks attached. See the following code snippet as well:
```
import torch
from torch import __version__ as torch_version
from transformers import T5TokenizerFast, T5ForConditionalGeneration
from transformers import __version__ as transformers_version

device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu')
print(f"Using device: {device}")

t5_tokenizer = T5TokenizerFast.from_pretrained('t5-base')
t5_model = T5ForConditionalGeneration.from_pretrained('t5-base')
t5_model = t5_model.to(device)

t5_input_ids = t5_tokenizer("summarize: studies have shown that owning a dog is good for you ", return_tensors="pt").input_ids  # Batch size 1
t5_input_ids = t5_input_ids.to(device)

import time
import numpy as np

N = 100
times = []
for _ in range(N):
    start = time.time()
    t5_outputs = t5_model.generate(t5_input_ids)
    end = time.time()
    times.append(end - start)

print(f"transformers version: {transformers_version}")
print(f"torch version: {torch_version}")
print(f"{1000*np.mean(times):.0f} ms \u00B1 {1000*np.std(times):.2f} ms per loop (mean \u00B1 std of {N} runs)")
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10142/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10141 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10141/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10141/comments | https://api.github.com/repos/huggingface/transformers/issues/10141/events | https://github.com/huggingface/transformers/pull/10141 | 806,366,601 | MDExOlB1bGxSZXF1ZXN0NTcxNzk5NzA2 | 10,141 | Add AMP for TF Albert | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I can split this PR into two different ones, but the one on AMP will be very short (only two single line to update, see the review above). Are you agree with a that tiny PR? If it is still ok, I will split this one^^",
"It's ok, thanks for showing me the changes!",
"@patrickvonplaten feel free to merge if it looks ok for you!"
] | 1,613 | 1,613 | 1,613 | CONTRIBUTOR | null | # What does this PR do?
This PR adds the following features to TF Albert:
- AMP compliancy
- Loss computation for TFAlbertForPreTraining
- Cleaning source code | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10141/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10141",
"html_url": "https://github.com/huggingface/transformers/pull/10141",
"diff_url": "https://github.com/huggingface/transformers/pull/10141.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10141.patch",
"merged_at": 1613405914000
} |
https://api.github.com/repos/huggingface/transformers/issues/10140 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10140/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10140/comments | https://api.github.com/repos/huggingface/transformers/issues/10140/events | https://github.com/huggingface/transformers/issues/10140 | 806,339,706 | MDU6SXNzdWU4MDYzMzk3MDY= | 10,140 | Direct way to apply different learning rate for different group of parameters in Trainer. | {
"login": "liyucheng09",
"id": 27999909,
"node_id": "MDQ6VXNlcjI3OTk5OTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/27999909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liyucheng09",
"html_url": "https://github.com/liyucheng09",
"followers_url": "https://api.github.com/users/liyucheng09/followers",
"following_url": "https://api.github.com/users/liyucheng09/following{/other_user}",
"gists_url": "https://api.github.com/users/liyucheng09/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liyucheng09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liyucheng09/subscriptions",
"organizations_url": "https://api.github.com/users/liyucheng09/orgs",
"repos_url": "https://api.github.com/users/liyucheng09/repos",
"events_url": "https://api.github.com/users/liyucheng09/events{/privacy}",
"received_events_url": "https://api.github.com/users/liyucheng09/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"What if instead you derive from `Trainer` and override `create_optimizer_and_scheduler()` and have that function set your different learning rates?"
] | 1,613 | 1,613 | 1,613 | NONE | null | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
For now, if I want to specify a different learning rate for each group of parameters, I need to define an AdamW optimizer in my main function like the following:
```
optimizer = AdamW([
    {'params': model.classifier.parameters(), 'lr': 0.03},
    {'params': model.bert.parameters(), 'lr': 5e-5},
])
```
and create an lr_scheduler like the following:
```
lr_scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=warmup_steps, num_training_steps=num_training_steps
)
```
I believe that adding the ability to specify different learning rates for parameter groups directly in `Trainer` would be quite convenient for fine-tuning.
Something like the following:
```
trainer = Trainer(
    ...,
    grouped_parameters=[
        {"params": ..., "lr": ...},
        {"params": ..., "lr": ...},
    ],
)
```
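(For reference, the comment on this issue suggests that the same effect can already be obtained by subclassing `Trainer`. A minimal sketch of that workaround — assuming a model with `bert` and `classifier` sub-modules as in the example above:)
```
from transformers import Trainer, AdamW, get_linear_schedule_with_warmup

class GroupedLRTrainer(Trainer):
    def create_optimizer_and_scheduler(self, num_training_steps):
        if self.optimizer is None:
            # Two parameter groups, each with its own learning rate.
            self.optimizer = AdamW([
                {"params": self.model.classifier.parameters(), "lr": 3e-2},
                {"params": self.model.bert.parameters(), "lr": 5e-5},
            ])
        if self.lr_scheduler is None:
            self.lr_scheduler = get_linear_schedule_with_warmup(
                self.optimizer,
                num_warmup_steps=self.args.warmup_steps,
                num_training_steps=num_training_steps,
            )
```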
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
I am not a professional GitHub user, but I think I can make a PR if necessary.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10140/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10139 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10139/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10139/comments | https://api.github.com/repos/huggingface/transformers/issues/10139/events | https://github.com/huggingface/transformers/issues/10139 | 806,289,744 | MDU6SXNzdWU4MDYyODk3NDQ= | 10,139 | ValueError: `Checkpoint` was expecting a trackable object (an object derived from `TrackableBase`), got GPT2LMHeadModel | {
"login": "George-Ogden",
"id": 38294960,
"node_id": "MDQ6VXNlcjM4Mjk0OTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/38294960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/George-Ogden",
"html_url": "https://github.com/George-Ogden",
"followers_url": "https://api.github.com/users/George-Ogden/followers",
"following_url": "https://api.github.com/users/George-Ogden/following{/other_user}",
"gists_url": "https://api.github.com/users/George-Ogden/gists{/gist_id}",
"starred_url": "https://api.github.com/users/George-Ogden/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/George-Ogden/subscriptions",
"organizations_url": "https://api.github.com/users/George-Ogden/orgs",
"repos_url": "https://api.github.com/users/George-Ogden/repos",
"events_url": "https://api.github.com/users/George-Ogden/events{/privacy}",
"received_events_url": "https://api.github.com/users/George-Ogden/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Oh no! I was using the PyTorch model with the TF trainer! I have fixed it now."
] | 1,613 | 1,613 | 1,613 | NONE | null | I'm having this issue, and I think it's my fault, but can someone, please, advise me in case this is a bug rather than a mistake:
```
from transformers import TFTrainer, TFTrainingArguments, GPT2Tokenizer, GPT2LMHeadModel
training_args = TFTrainingArguments(
do_train=True,
output_dir="results",
overwrite_output_dir=True,
num_train_epochs=4,
per_device_train_batch_size=16,
per_device_eval_batch_size=64,
logging_dir="logs",
)
model = GPT2LMHeadModel.from_pretrained("distilgpt2")
trainer = TFTrainer(model=model,args=training_args,train_dataset=data)
trainer.train()
```
which returns the error:
```
Traceback (most recent call last):
File ".\train-tf.py", line 52, in <module>
trainer.train()
File "C:\Anaconda3\lib\site-packages\transformers\trainer_tf.py", line 492, in train
ckpt = tf.train.Checkpoint(optimizer=self.optimizer, model=self.model)
File "C:\Anaconda3\lib\site-packages\tensorflow\python\training\tracking\util.py", line 1929, in __init__
_assert_trackable(converted_v)
File "C:\Anaconda3\lib\site-packages\tensorflow\python\training\tracking\util.py", line 1410, in _assert_trackable
raise ValueError(
ValueError: `Checkpoint` was expecting a trackable object (an object derived from `TrackableBase`), got GPT2LMHeadModel(
```
and then it displays the model data.
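(The follow-up comment above gives the cause: a PyTorch model class was passed to the TensorFlow trainer. A sketch of the TensorFlow-side fix, assuming the TF counterpart `TFGPT2LMHeadModel` and reusing `training_args` and `data` from the snippet above:)
```
from transformers import TFGPT2LMHeadModel, TFTrainer

model = TFGPT2LMHeadModel.from_pretrained("distilgpt2")
trainer = TFTrainer(model=model, args=training_args, train_dataset=data)
trainer.train()
```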
Thanks for anything that you can add. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10139/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10138 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10138/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10138/comments | https://api.github.com/repos/huggingface/transformers/issues/10138/events | https://github.com/huggingface/transformers/issues/10138 | 806,279,502 | MDU6SXNzdWU4MDYyNzk1MDI= | 10,138 | Back Translation | {
"login": "chaituValKanO",
"id": 8213640,
"node_id": "MDQ6VXNlcjgyMTM2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8213640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chaituValKanO",
"html_url": "https://github.com/chaituValKanO",
"followers_url": "https://api.github.com/users/chaituValKanO/followers",
"following_url": "https://api.github.com/users/chaituValKanO/following{/other_user}",
"gists_url": "https://api.github.com/users/chaituValKanO/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chaituValKanO/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chaituValKanO/subscriptions",
"organizations_url": "https://api.github.com/users/chaituValKanO/orgs",
"repos_url": "https://api.github.com/users/chaituValKanO/repos",
"events_url": "https://api.github.com/users/chaituValKanO/events{/privacy}",
"received_events_url": "https://api.github.com/users/chaituValKanO/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @chaituValKanO \r\n\r\nPlease use the [forum](https://discuss.huggingface.co/) to ask such questions. Issues are for bugs, feature requests etc. \r\n\r\nAnd to answer your question you could use the `MarianMT` models for this purpose. Here's a nice blog-post about that\r\nhttps://amitness.com/back-translation/.\r\n\r\nWill close this issue. Thanks!",
"Hello Suraj,\n\nBut the issue is that there are no Pre trained models for MarianMT under\nTensorflow framework where target is English language.\n\nNevertheless, I will check with people in forum.\nThanks and regards,\nChaitanya Kanth.\n\nOn Fri, 12 Feb 2021 at 5:46 PM, Suraj Patil <[email protected]>\nwrote:\n\n> Closed #10138 <https://github.com/huggingface/transformers/issues/10138>.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/10138#event-4324522055>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AB6VJCAIPJNM36CXVKH37QDS6UL2XANCNFSM4XOTUU4A>\n> .\n>\n-- \nSent from my iphone\n",
"Marian is available in TF as well, you'll just need to pass `from_pt=True` to `from_pretrained` when loading the TF modle\r\n",
"Thanks Suraj 😁\n\nOn Fri, Feb 12, 2021 at 6:22 PM Suraj Patil <[email protected]>\nwrote:\n\n> Marian is available in TF as well, you'll just need to pass from_pt=True\n> to from_pretrained when loading the TF modle\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/10138#issuecomment-778177918>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AB6VJCAC53RGMVY5OCVVYRLS6UQCXANCNFSM4XOTUU4A>\n> .\n>\n"
] | 1,613 | 1,613 | 1,613 | NONE | null | # 🚀 Feature request
I want to perform back translation as a text data augmentation technique using TensorFlow.
I want to augment data using translation techniques, performing the round trip English ---> French ---> English so that the resulting English sentence may be a new paraphrase of the original. These sentences can become part of the test data and will help with behavioral testing of NLP models.
## Motivation
I want to perform the above operation using TensorFlow. Currently there is no implementation of the OPUS models for <any source language> to English in TensorFlow; all the models are available in PyTorch only, which is really frustrating.
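To make the request concrete, here is a rough sketch of the round trip I have in mind, assuming the MarianMT checkpoints (`Helsinki-NLP/opus-mt-en-fr` / `Helsinki-NLP/opus-mt-fr-en`) can be loaded into `TFMarianMTModel` with `from_pt=True` since no native TF weights are published:
```python
# Rough back-translation sketch (EN -> FR -> EN) in TensorFlow.
# Assumes the PyTorch MarianMT checkpoints can be loaded with from_pt=True.
from transformers import MarianTokenizer, TFMarianMTModel

def translate(texts, model_name):
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = TFMarianMTModel.from_pretrained(model_name, from_pt=True)
    batch = tokenizer.prepare_seq2seq_batch(texts, return_tensors="tf")
    generated = model.generate(**batch)
    return [tokenizer.decode(t, skip_special_tokens=True) for t in generated]

english = ["The quick brown fox jumps over the lazy dog."]
french = translate(english, "Helsinki-NLP/opus-mt-en-fr")
augmented = translate(french, "Helsinki-NLP/opus-mt-fr-en")  # paraphrased English sentences
```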
## Your contribution
I am not sure if I can, but you can assign me simple tasks.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10138/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10137 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10137/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10137/comments | https://api.github.com/repos/huggingface/transformers/issues/10137/events | https://github.com/huggingface/transformers/issues/10137 | 806,261,648 | MDU6SXNzdWU4MDYyNjE2NDg= | 10,137 | Text to Speech Generalized End-To-End Loss for Speaker Verification, Real Time Voice Cloning | {
"login": "BirgerMoell",
"id": 1704131,
"node_id": "MDQ6VXNlcjE3MDQxMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1704131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BirgerMoell",
"html_url": "https://github.com/BirgerMoell",
"followers_url": "https://api.github.com/users/BirgerMoell/followers",
"following_url": "https://api.github.com/users/BirgerMoell/following{/other_user}",
"gists_url": "https://api.github.com/users/BirgerMoell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BirgerMoell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BirgerMoell/subscriptions",
"organizations_url": "https://api.github.com/users/BirgerMoell/orgs",
"repos_url": "https://api.github.com/users/BirgerMoell/repos",
"events_url": "https://api.github.com/users/BirgerMoell/events{/privacy}",
"received_events_url": "https://api.github.com/users/BirgerMoell/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"@patrickvonplaten This is a suggestion but there are several models available and I think the best first step would be to look into getting a Text-To-Speech model working.\r\n\r\nI explored the Real-Time-Voice-Cloning the other day and noticed it had several issues (since the project is no longer maintained) so it might be good to look into other speech models.\r\n\r\nHere are some examples of repos that might be useful.\r\n\r\nhttps://github.com/mozilla/TTS\r\n\r\nhttps://github.com/as-ideas/ForwardTacotron\r\n\r\n\r\n\r\n",
"Hey @BirgerMoell - thanks a lot for the links I will take a look soon :-)",
"@BirgerMoell \r\nThank you for resource sharing. I also want to add [TransformerTTS](https://github.com/as-ideas/TransformerTTS) to the list since it makes more sense to me to have transformers involved :P\r\n\r\nI'd love to see this addition to huggingface though",
"I think it'd make a lot of sense to add FastSpeech2 to the library - happy to help with a PR if someone is interested. See: https://github.com/huggingface/transformers/pull/11135",
"Also, we started integrating https://github.com/as-ideas/TransformerTTS to the model hub so that people have easier access to TensorflowTTS models :-) \r\n\r\nhttps://huggingface.co/tensorspeech/tts-fastspeech2-baker-ch",
"Hello\r\nTo avoid duplication, I just wanted to check if anyone is working on this or if this is still relevant. If someone is still needed for this, I will be interested to take this up."
] | 1,613 | 1,650 | null | NONE | null | # 🌟 New model addition
## Model description
The Generalized End-To-End Loss for Speaker Verification is used to implement real-time voice cloning, a way to generate a text-to-speech model adapted to a particular speaker from a short audio sample. The model implements the following paper:
https://arxiv.org/pdf/1806.04558.pdf and the code is available on GitHub:
https://github.com/CorentinJ/Real-Time-Voice-Cloning
## Open source status
* [ ] the model implementation is available: (give details)
https://colab.research.google.com/drive/1SUq5RLOI0TIMkrBzMHMms01aaVNgkO7c?usp=sharing
The model can be run through Colaboratory. Here is an example of a generated voice.
https://soundcloud.com/birger-mo-ll/generated-voice
* [ ] the model weights are available: (give details)
Here are the model weights that are used.
```python
encoder.load_model(project_name / Path("encoder/saved_models/pretrained.pt"))
synthesizer = Synthesizer(project_name / Path("synthesizer/saved_models/logs-pretrained/taco_pretrained"))
vocoder.load_model(project_name / Path("vocoder/saved_models/pretrained/pretrained.pt"))
```
* [ ] who are the authors: @CorentinJ
The author is not currently working on the repo, but since it is a fairly popular repo (25,000 stars) it might be reasonable to take the time to explore how to recreate / adapt the model to work with Hugging Face Transformers.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10137/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10137/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10136 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10136/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10136/comments | https://api.github.com/repos/huggingface/transformers/issues/10136/events | https://github.com/huggingface/transformers/pull/10136 | 806,208,572 | MDExOlB1bGxSZXF1ZXN0NTcxNjc0NTgz | 10,136 | [WIP][examples/seq2seq] move old s2s scripts to legacy | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks a lot Stas and Sylvain :)"
] | 1,613 | 1,613 | 1,613 | MEMBER | null | # What does this PR do?
Move the `finetune_trainer.py` and related utils, tests, bash scripts to `examples/legacy/seq2seq` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10136/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10136",
"html_url": "https://github.com/huggingface/transformers/pull/10136",
"diff_url": "https://github.com/huggingface/transformers/pull/10136.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10136.patch",
"merged_at": 1613414882000
} |
https://api.github.com/repos/huggingface/transformers/issues/10135 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10135/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10135/comments | https://api.github.com/repos/huggingface/transformers/issues/10135/events | https://github.com/huggingface/transformers/issues/10135 | 806,190,544 | MDU6SXNzdWU4MDYxOTA1NDQ= | 10,135 | Adding end-to-end retriever training to RAG with RAY implementation. | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,613 | 1,619 | 1,619 | CONTRIBUTOR | null | # 🚀 Feature request
Use Ray to run separate processes for retrieving document indexes, training the system, and re-initializing the indexes with an updated context encoder.
## Motivation
Recent [papers](https://arxiv.org/abs/2101.00408) have shown that fine-tuning the entire retriever gives huge gains on QA tasks. Also, being able to fine-tune in an end-to-end manner can give better results in different domains.
The idea is to train RAG as is, but also keep updating the context encoder with gradients from the supervised loss function (the doc score mentioned in RAG). Then, every n steps, we re-initialize the embeddings and indexes with the updated context encoder weights.
REALM [does this with a back-and-forth process](https://github.com/google-research/language/tree/master/language/realm#running-the-code) which is only run on a single GPU.
As discussed in this [issue](https://github.com/huggingface/transformers/issues/9646#issuecomment-775309123), @lhoestq suggested it would be easier to implement this with Ray since we can have separate actors.
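To sketch the orchestration I am imagining (purely illustrative — none of these names come from `transformers` or the RAG code, and the index work is stubbed out):
```python
# Minimal Ray-actor sketch: a training loop that periodically asks a separate
# process to rebuild the index from the updated context-encoder weights.
import ray

@ray.remote
class IndexActor:
    def rebuild(self, encoder_weights):
        # Placeholder: in the real thing, re-embed all passages with the
        # updated context encoder and rebuild the FAISS index here.
        return {"version": hash(tuple(encoder_weights))}

ray.init()
actor = IndexActor.remote()

encoder_weights = [0.0]  # stand-in for the real context-encoder parameters
for step in range(1, 101):
    encoder_weights = [w + 0.01 for w in encoder_weights]  # stand-in for a training step
    if step % 25 == 0:  # every n steps, refresh the index outside the training process
        new_index = ray.get(actor.rebuild.remote(encoder_weights))
        print(f"step {step}: swapped in index {new_index['version']}")
```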
@richardliaw
@amogkam | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10135/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10134 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10134/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10134/comments | https://api.github.com/repos/huggingface/transformers/issues/10134/events | https://github.com/huggingface/transformers/issues/10134 | 806,188,442 | MDU6SXNzdWU4MDYxODg0NDI= | 10,134 | cant install from source | {
"login": "prathameshk",
"id": 4078857,
"node_id": "MDQ6VXNlcjQwNzg4NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4078857?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prathameshk",
"html_url": "https://github.com/prathameshk",
"followers_url": "https://api.github.com/users/prathameshk/followers",
"following_url": "https://api.github.com/users/prathameshk/following{/other_user}",
"gists_url": "https://api.github.com/users/prathameshk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prathameshk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prathameshk/subscriptions",
"organizations_url": "https://api.github.com/users/prathameshk/orgs",
"repos_url": "https://api.github.com/users/prathameshk/repos",
"events_url": "https://api.github.com/users/prathameshk/events{/privacy}",
"received_events_url": "https://api.github.com/users/prathameshk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\nmaybe `pip` is not connected to a proper Python version (f-strings work in >= 3.6). To make sure that the right version gets called, you can execute the following command:\r\n```\r\npython -m pip install git+https://github.com/huggingface/transformers\r\n```\r\n\r\nThis is why it's a good idea to use a virtual env.\r\n\r\nLet me know if this helps.",
"this was issue with the wrong pip version"
] | 1,613 | 1,613 | 1,613 | NONE | null | ## Environment info
transformers-cli env
- `transformers` version: 3.3.1
- Platform: Linux-4.19.0-13-cloud-amd64-x86_64-with-debian-10.7
- Python version: 3.7.8
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
## To reproduce
```
pip install git+https://github.com/huggingface/transformers
Collecting git+https://github.com/huggingface/transformers
Cloning https://github.com/huggingface/transformers to /tmp/pip-req-build-BEpAl9
Installing build dependencies ... done
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-req-build-BEpAl9/setup.py", line 192
entries = "\n".join([f' "{k}": "{v}",' for k, v in deps.items()])
^
SyntaxError: invalid syntax
```
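For what it is worth, my current guess is that `pip` here is tied to an older interpreter than the Python 3.7.8 reported above (the f-strings in `setup.py` need >= 3.6), so I will try something like:
```
pip --version      # shows which Python version pip runs under
python --version   # should be >= 3.6 for the f-strings in setup.py
python -m pip install git+https://github.com/huggingface/transformers
```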
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10134/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10133 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10133/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10133/comments | https://api.github.com/repos/huggingface/transformers/issues/10133/events | https://github.com/huggingface/transformers/pull/10133 | 806,171,948 | MDExOlB1bGxSZXF1ZXN0NTcxNjQ1ODE0 | 10,133 | [examples/run_s2s] remove task_specific_params and update rouge computation | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"**Context:**\r\n\r\nHere some context on the `task_specific_params` config param. In the beginning, we had T5 as the only model that was used for both the translation and summarization pipeline. The problem was that we had **one** model that we used as a default for both pipelines. At that time @thomwolf and I thought about a nice general design that - depending on the specific task (e.g. summarization, translation) - automatically sets the correct parameter set, so we started adding a `task_specific_params` parameter to the config that depending on the task sets the correct parameters. This is why the config of T5 is so long and looks like this:\r\n\r\n```\r\n{\r\n...\r\n \"task_specific_params\": {\r\n \"summarization\": {\r\n \"early_stopping\": true,\r\n \"length_penalty\": 2.0,\r\n \"max_length\": 200,\r\n \"min_length\": 30,\r\n \"no_repeat_ngram_size\": 3,\r\n \"num_beams\": 4,\r\n \"prefix\": \"summarize: \"\r\n },\r\n \"translation_en_to_de\": {\r\n \"early_stopping\": true,\r\n \"max_length\": 300,\r\n \"num_beams\": 4,\r\n \"prefix\": \"translate English to German: \"\r\n },\r\n \"translation_en_to_fr\": {\r\n \"early_stopping\": true,\r\n \"max_length\": 300,\r\n \"num_beams\": 4,\r\n \"prefix\": \"translate English to French: \"\r\n },\r\n \"translation_en_to_ro\": {\r\n \"early_stopping\": true,\r\n \"max_length\": 300,\r\n \"num_beams\": 4,\r\n \"prefix\": \"translate English to Romanian: \"\r\n }\r\n },\r\n ...\r\n}\r\n```\r\n\r\n=> So this design was chosen only for the pipelines and essentially only for T5 version 1 since T5 version 1, is the only model we have that needs task-specific params (especially due to the different required prefixes depending on the task). Up until now, there were too many problems with this mechanism IMO so that the benefit of having it is IMO outweighed by its disadvantages, which are:\r\n\r\n**1)** It blows up the config a lot and is not scalable (what do you do with many-to-many translation models? you can have each combination of `translation_..._to_...`)\r\n\r\n**2)** No one understood anymore what was happening under the hood. IMO, having such a mechanism is a bit too \"magical\" because it creates a whole other logical layer to the already complicated mechanism that we have for the config params. In short, we currently have the following logic in pipelines:\r\n\r\ni) The function argument is used (such as `max_length`), if not given, then\r\nii) the config's `task_specific_params` (such as `config.task_specific_params[\"summarization\"][\"max_length\"]` is used, if not set, then\r\niii) the normal config's param is used such as `config.max_length`, if not set, then\r\niiii) the default `PretrainedConfig` param is used.\r\n\r\n=> It is obvious that this a very complicated and somewhat \"magical\" logic and lot of people internally didn't even really understand it. This is why I really would like to remove the second step. It's confusing to see multiple `max_length` parameters in the config IMO and it's just not worth it.\r\n\r\n**3)** So far `T5` is the only model that really requires this \"magical\" mechanism and that's mostly because it has a very special constraint in the sense that it was primed during training on cues such as `translation from X to Y: ...` which is definitely not something general that we would expect future models to have as well. 
We might very well have models in the future that have task-specific params like `max_length` and `beam_search` (It can very well be that a GPT3-like model that can do everything wants to adapt those params depending on the task), but those params are usually things that people are aware of and adjust themselves during evaluation IMO. E.g. if one is evaluating a model on `summarization`, setting the correct `max_length`, `num_beams` and maybe `repetition_penalty` is IMO something people should do themselves and not expect to be set correctly automatically. \r\n\r\n**4)** It makes the pipelines in general very inflexible. E.g. when importing the pipeline classes directly, say the `TranslationPipeline` (which is what we did for a long time for the inference API - and maybe still do - not so sure anymore @julien-c @Narsil), there is no way of knowing that we should pass a `task=\"summary\"` arg to the init to correctly load the `task_specific_parms`. To be more precise, imagine you want to directly import the `TranslationPipeline` here: https://github.com/huggingface/transformers/blob/31245775e5772fbded1ac07ed89fbba3b5af0cb9/src/transformers/pipelines/text2text_generation.py#L215 where you don't see any `task` param. But in order to correctly load T5 translation params for `TranslationPipeline`, you actually manually have to pass `task=\"translation_en_to_de\"` to the init (also note here that it's not as easy as just saying - let's just add a class attribute `self.task = \"translation_en_to_de\"` because the same pipeline is also used for EN->RO translation in which case one could not use the class attribute... => this created a lot of problems leading to @julien-c eventually hard-coding (I think) the correct task name for T5 into the inference API code, which then kind of defeated the purpose of having this mechanism.\r\n\r\n**Conclusion**\r\n\r\nThat being said, I see two solutions in general:\r\n\r\n1. Eventually completely remove this mechanism (which I prefer)\r\n2. Keep this mechanism for the `pipelines` only. Since things like the `pipelines` or `AutoNLP` are not meant to be built for researchers I'm ok with having some \"under-the-hood\" magic / very abstracted logic there, but I definitely don't want to have it anywhere else.\r\n\r\n=> This means that I really don't think that should use this param in `run_seq2seq.py`. It creates more confusion than it really helps and is not in line with our motivation to have the `examples` be \"easy to tweak and to understand\" by the user. I think as @sgugger already said multiple times the example scripts should not follow the *\"one-command-fits-all-cases\"* approach, but rather should be easy to understand and to tweak for the specific task. This is why I'm quite strongly against using the `task_specific_params` here. However, @patil-suraj @stas00 I think you are completely correct that we should try to not have a regression in performance here. So I would then actually prefer to hard code T5's prefixes in the script. Something like:\r\n\r\n```\r\nT5_PREFIX = {\r\n \"summary\": ...\r\n \"translation_en_to_de\": ...\r\n}\r\n```\r\n\r\nSorry for the long text, but I think this is actually an important mechanism not too many people are aware of and we should think about a more general solution for how to continue with `task_specific_params`. Actually also pinging @LysandreJik on this one to hear his opinion.\r\n\r\nHappy to hear your opinions on what I wrote above :-) ",
"Thanks a lot for the context @patrickvonplaten \r\n\r\nRegarding the script, to follow the examples philosophy, let's just remove it completely. If a model requires `prefix` it should be passed explicitly and related params should be copied to the `config` manually in case one wants to reproduce some metrics. ",
"Thank you for the detailed explanation, @patrickvonplaten - that was very awesome of you to write it all out in such clarity.\r\n\r\nI'm totally fine with your proposal, yet I think it'd be important to document how does one reproduce the same behavior with the new script and new t5 config then.\r\n\r\nI already started an issue that documents the nuances of porting from `./finetune_trainer.py` https://github.com/huggingface/transformers/issues/10036 so perhaps it can belong there and once the notes have been compiled we can put them into the `seq2seq/README.md` to help users transition before `./finetune_trainer.py` is moved into the unmaintained territory.\r\n\r\nShould you decide to remove this mechanism completely, the t5 models on the hub should probably be updated to reflect that at some future point, so that there is no baggage to carry forward. Perhaps in a few release cycles after the cut is done? Surely, users who use older `transformers` version should still be able to run their scripts normally for quite some time. I'd imagine that's where the model files versioning could come in.",
"@stas00 \r\n\r\nTo reproduce the same behavior with the new script\r\n\r\n1. Use the same dataset\r\n2. if using T5 manually pass the `prefix` argument,\r\n3. manually copy the `task_specific_parms` to `config`\r\n\r\nAgain, this is just for T5, the rest of the models should give similar results. So I'm going to merge this PR and let's update the readme in the clean-up PR #10136."
] | 1,613 | 1,613 | 1,613 | MEMBER | null | # What does this PR do?
- correctly handle `task_specific_params` and `prefix`
The current script tries to access the `prefix` from `config.task_specific_params.prefix`, which is always going to be `None` as `task_specific_params` is a nested `dict` with each key being a task name. This PR retrieves the `task_specific_params` from `config` using the task name (`data_args.task`), updates the `config` with the retrieved params (this is needed for `T5`), and access `prefix` using `config.prefix`
@stas00 as you reported offline, the bleu score for the new script was different from the old script for `T5` on the `en-ro` task. This was because the old script was using the `task_specific_params` and the new script wasn't. This update should resolve that issue.
- Update `rouge` score computation.
The `rougeLsum` metric expects newlines between sentences; this is usually the score reported in papers. This PR
1. adds newlines to each sentence in `preds` and `labels` using `nltk` to correctly compute `rougeLsum`
2. passes `use_stemmer=True` to `metric.compute` to match the metrics of the old script.
- Add `test_file` argument to `DataTrainingArguments` to load custom test dataset. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10133/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10133/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10133",
"html_url": "https://github.com/huggingface/transformers/pull/10133",
"diff_url": "https://github.com/huggingface/transformers/pull/10133.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10133.patch",
"merged_at": 1613130501000
} |
https://api.github.com/repos/huggingface/transformers/issues/10132 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10132/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10132/comments | https://api.github.com/repos/huggingface/transformers/issues/10132/events | https://github.com/huggingface/transformers/issues/10132 | 806,147,244 | MDU6SXNzdWU4MDYxNDcyNDQ= | 10,132 | Where the helsinki models downloaded to? when using the pretrained models | {
"login": "vishnu3741",
"id": 37154661,
"node_id": "MDQ6VXNlcjM3MTU0NjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/37154661?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vishnu3741",
"html_url": "https://github.com/vishnu3741",
"followers_url": "https://api.github.com/users/vishnu3741/followers",
"following_url": "https://api.github.com/users/vishnu3741/following{/other_user}",
"gists_url": "https://api.github.com/users/vishnu3741/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vishnu3741/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vishnu3741/subscriptions",
"organizations_url": "https://api.github.com/users/vishnu3741/orgs",
"repos_url": "https://api.github.com/users/vishnu3741/repos",
"events_url": "https://api.github.com/users/vishnu3741/events{/privacy}",
"received_events_url": "https://api.github.com/users/vishnu3741/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can also just clone the repo: \r\n\r\n```\r\ngit clone https://huggingface.co/Helsinki-NLP/opus-mt-es-en\r\n```\r\n\r\nand then load the model and tokenizer locally from the cloned repo:\r\n\r\n```python\r\nmodel = MarianMTModel.from_pretrained(\"/path/to/cloned/repo\")\r\ntokenizer = MarianTokenizer.from_pretrained(\"/path/to/cloned/repo\")\r\n```",
"OSError: Can't load tokenizer for './model/opus-mt-en-es/'. Make sure that:\r\n\r\n- './model/opus-mt-en-es/' is a correct model identifier listed on 'https://huggingface.co/models'\r\n\r\n- or './model/opus-mt-en-es/' is the correct path to a directory containing relevant tokenizer files\r\n\r\nI am getting this error, even though I am passing the correct path.",
"Thanks, it is working."
] | 1,613 | 1,613 | 1,613 | NONE | null | src_text=['No, los préstamos existentes continuarán por debajo de la tasa de referencia existente.']
model_name='Helsinki-NLP/opus-mt-es-en'
tokenizer=MarianTokenizer.from_pretrained(model_name)
model=MarianMTModel.from_pretrained(model_name)
translated=model.generate(**tokenizer.prepare_seq2seq_batch(src_text, return_tensors="pt"))
tgt_text=[tokenizer.decode(t, skip_special_tokens=True) for t in translated]
When calling `MarianTokenizer` and `MarianMTModel`, the package automatically downloads the pretrained model. Is there a way to download it manually from https://huggingface.co/Helsinki-NLP/opus-mt-es-en and integrate it?
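In case loading from a local directory is the intended way, here is roughly what I expect it to look like after something like `git clone https://huggingface.co/Helsinki-NLP/opus-mt-es-en` (the local path below is just a placeholder):
```python
# Sketch: load a manually downloaded checkpoint from a local directory.
from transformers import MarianMTModel, MarianTokenizer

local_path = "/path/to/opus-mt-es-en"  # placeholder: folder with config.json, weights and tokenizer files
tokenizer = MarianTokenizer.from_pretrained(local_path)
model = MarianMTModel.from_pretrained(local_path)

# src_text as defined above
translated = model.generate(**tokenizer.prepare_seq2seq_batch(src_text, return_tensors="pt"))
tgt_text = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
```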
Thank you. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10132/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10131 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10131/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10131/comments | https://api.github.com/repos/huggingface/transformers/issues/10131/events | https://github.com/huggingface/transformers/issues/10131 | 806,104,611 | MDU6SXNzdWU4MDYxMDQ2MTE= | 10,131 | Trainer Evaluates at every step | {
"login": "Megh-Thakkar",
"id": 22455368,
"node_id": "MDQ6VXNlcjIyNDU1MzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/22455368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Megh-Thakkar",
"html_url": "https://github.com/Megh-Thakkar",
"followers_url": "https://api.github.com/users/Megh-Thakkar/followers",
"following_url": "https://api.github.com/users/Megh-Thakkar/following{/other_user}",
"gists_url": "https://api.github.com/users/Megh-Thakkar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Megh-Thakkar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Megh-Thakkar/subscriptions",
"organizations_url": "https://api.github.com/users/Megh-Thakkar/orgs",
"repos_url": "https://api.github.com/users/Megh-Thakkar/repos",
"events_url": "https://api.github.com/users/Megh-Thakkar/events{/privacy}",
"received_events_url": "https://api.github.com/users/Megh-Thakkar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [
"It's hard to know without seeing your code. This combination of arguments should evaluate every 6 steps.\r\nAlso how do you know it's evaluation every step instead of every 6 steps?",
"Hi, thank you for the reply. I have actually overridden the 'evaluate' function of the trainer, and have certain print statements inside the function plus a tqdm progress bar as well. This function is executed after every training step.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,613 | 1,619 | 1,619 | NONE | null | Hi, thanks for the amazing and easy to use library. While using the Trainer with Training Arguments, the trainer is evaluating at every step, instead of eval_steps.
Version: 4.3.0
```python
training_args = TrainingArguments(output_dir='outputs', per_device_train_batch_size=1,per_device_eval_batch_size=2,
evaluation_strategy='steps', do_eval=True,do_train=True, eval_steps=6)
```
Am I passing a wrong combination of arguments? None of the `training_args` are manually changed anywhere in the code.
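In case it is useful for debugging, here is a small sketch of how I am checking when evaluation actually fires, assuming a `TrainerCallback` with `on_evaluate` is the right hook (model and dataset names are placeholders):
```python
# Sketch: log the global step every time evaluate() runs, to confirm the cadence.
from transformers import Trainer, TrainerCallback

class EvalLogger(TrainerCallback):
    def on_evaluate(self, args, state, control, **kwargs):
        print(f"evaluate() called at global_step={state.global_step}")

trainer = Trainer(
    model=model,                  # placeholder: the model being fine-tuned
    args=training_args,
    train_dataset=train_dataset,  # placeholder datasets
    eval_dataset=eval_dataset,
    callbacks=[EvalLogger()],
)
trainer.train()
```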
Thank you very much. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10131/timeline | completed | null | null |